Citizen Kane is a 1941 American drama film directed by, produced by, and starring Orson Welles. Welles and Herman J. Mankiewicz wrote the screenplay. The picture was Welles' first feature film. Citizen Kane is frequently cited as the greatest film ever made. For 50 consecutive years, it stood at number 1 in the British Film Institute's Sight & Sound decennial poll of critics, and it topped the American Film Institute's 100 Years ... 100 Movies list in 1998, as well as its 2007 update. The film was nominated for Academy Awards in nine categories and it won for Best Writing (Original Screenplay) by Mankiewicz and Welles. Citizen Kane is praised for Gregg Toland's cinematography, Robert Wise's editing, Bernard Herrmann's music, and its narrative structure, all of which have been considered innovative and precedent-setting.
The quasi-biographical film examines the life and legacy of Charles Foster Kane, played by Welles, a composite character based on American media barons William Randolph Hearst and Joseph Pulitzer, Chicago tycoons Samuel Insull and Harold McCormick, as well as aspects of the screenwriters' own lives. Upon its release, Hearst prohibited any mention of the film in his newspapers.
After the Broadway success of Welles's Mercury Theatre and the controversial 1938 radio broadcast "The War of the Worlds" on The Mercury Theatre on the Air, Welles was courted by Hollywood. He signed a contract with RKO Pictures in 1939. Although it was unusual for an untried director, he was given freedom to develop his own story, to use his own cast and crew, and to have final cut privilege. Following two abortive attempts to get a project off the ground, he wrote the screenplay for Citizen Kane, collaborating with Herman J. Mankiewicz. Principal photography took place in 1940, the same year its innovative trailer was shown, and the film was released in 1941.
Although it was a critical success, Citizen Kane failed to recoup its costs at the box office. The film faded from view after its release, but it returned to public attention when it was praised by French critics such as André Bazin and re-released in 1956. The film was voted number 9 on the prestigious Brussels 12 list at the 1958 World Expo. Citizen Kane was selected by the Library of Congress as an inductee of the 1989 inaugural group of 25 films for preservation in the United States National Film Registry for being "culturally, historically, or aesthetically significant". Roger Ebert wrote of it: "Its surface is as much fun as any movie ever made. Its depths surpass understanding. I have analyzed it a shot at a time with more than 30 groups, and together we have seen, I believe, pretty much everything that is there on the screen. The more clearly I can see its physical manifestation, the more I am stirred by its mystery."
In a mansion called Xanadu, part of a vast palatial estate in Florida, the elderly Charles Foster Kane is on his deathbed. Holding a snow globe, he utters his last word, "Rosebud", and dies. A newsreel obituary tells the life story of Kane, an enormously wealthy newspaper publisher and industrial magnate. Kane's death becomes sensational news around the world, and the newsreel's producer tasks reporter Jerry Thompson with discovering the meaning of "Rosebud".
Thompson sets out to interview Kane's friends and associates. He tries to approach his second wife, Susan Alexander Kane, now an alcoholic who runs her own nightclub, but she refuses to talk to him. Thompson goes to the private archive of the late banker Walter Parks Thatcher. Through Thatcher's written memoirs, Thompson learns about Kane's rise from a Colorado boarding house and the decline of his fortune.
In 1871, gold was discovered on a mining property deeded to Kane's mother, Mary Kane. She hired Thatcher to establish a trust that would provide for Kane's education and assume guardianship of him. While Kane's parents and Thatcher discussed the arrangements inside the boarding house, the young Kane played happily with a sled in the snow outside. When Kane's parents introduced him to Thatcher, the boy struck Thatcher with his sled and attempted to run away.
By the time Kane gained control of his trust at the age of 25, the mine's productivity and Thatcher's prudent investing had made Kane one of the richest men in the world. Kane took control of the New York Inquirer newspaper and embarked on a career of yellow journalism, publishing scandalous articles that attacked Thatcher's (and his own) business interests. Kane sold his newspaper empire to Thatcher after the 1929 stock market crash left Kane short of cash.
Thompson interviews Kane's personal business manager, Mr. Bernstein. Bernstein recalls that Kane hired the best journalists available to build the Inquirer's circulation. Kane rose to power by successfully manipulating public opinion regarding the Spanish–American War and marrying Emily Norton, the niece of the President of the United States.
Thompson interviews Kane's estranged best friend, Jedediah Leland, in a retirement home. Leland says that Kane's marriage to Emily disintegrated over the years, and he began an affair with amateur singer Susan Alexander while running for Governor of New York. Both his wife and his political opponent discovered the affair, and the public scandal ended his political career. Kane married Susan and forced her into a humiliating operatic career for which she had neither the talent nor the ambition, even building a large opera house for her. When Leland began to write a negative review of Susan's disastrous opera debut, Kane fired him but finished the review himself and printed it. Susan protested that she had never wanted the opera career anyway, but Kane forced her to continue the season.
Susan consents to an interview with Thompson and describes the aftermath of her opera career. She attempted suicide, and Kane finally allowed her to abandon singing. After many unhappy years and after being hit by Kane, she finally decided to leave him. Kane's butler Raymond recounts that, after Susan left, Kane violently destroyed the contents of her bedroom. When he happened upon a snow globe, he grew calm and said "Rosebud". Thompson concludes that he cannot solve the mystery and that the meaning of Kane's last word will remain unknown.
Back at Xanadu, Kane's belongings are cataloged or discarded by the staff. They find the sled on which eight-year-old Kane was playing on the day that he was taken from his home in Colorado and throw it into a furnace with other items. Unseen by the staff, the sled slowly burns and its trade name, printed on top, becomes visible through the flames: "Rosebud".
The beginning of the film's ending credits states that "Most of the principal actors in Citizen Kane are new to motion pictures. The Mercury Theatre is proud to introduce them." The cast is then listed in the following order, with Orson Welles' credit for playing Charles Foster Kane appearing last:
Additionally, Charles Bennett appears as the entertainer at the head of the chorus line in the Inquirer party sequence, and cinematographer Gregg Toland makes a cameo appearance as an interviewer depicted in part of the News on the March newsreel. Actor Alan Ladd, still unknown at that time, makes a small appearance as a reporter smoking a pipe at the end of the film.
Hollywood had shown interest in Welles as early as 1936. He turned down three scripts sent to him by Warner Bros. In 1937, he declined offers from David O. Selznick, who asked him to head his film company's story department, and William Wyler, who wanted him for a supporting role in Wuthering Heights. "Although the possibility of making huge amounts of money in Hollywood greatly attracted him," wrote biographer Frank Brady, "he was still totally, hopelessly, insanely in love with the theater, and it is there that he had every intention of remaining to make his mark."
Following "The War of the Worlds" broadcast of his CBS radio series The Mercury Theatre on the Air, Welles was lured to Hollywood with a remarkable contract. RKO Pictures studio head George J. Schaefer wanted to work with Welles after the notorious broadcast, believing that Welles had a gift for attracting mass attention. RKO was also uncharacteristically profitable and was entering into a series of independent production contracts that would add more artistically prestigious films to its roster. Throughout the spring and early summer of 1939, Schaefer constantly tried to lure the reluctant Welles to Hollywood. Welles was in financial trouble after failure of his plays Five Kings and The Green Goddess. At first he simply wanted to spend three months in Hollywood and earn enough money to pay his debts and fund his next theatrical season. Welles first arrived on July 20, 1939, and on his first tour, he called the movie studio "the greatest electric train set a boy ever had".
Welles signed his contract with RKO on August 21, which stipulated that Welles would act in, direct, produce and write two films. Mercury would get $100,000 for the first film by January 1, 1940, plus 20% of profits after RKO recouped $500,000, and $125,000 for a second film by January 1, 1941, plus 20% of profits after RKO recouped $500,000. The most controversial aspect of the contract was granting Welles complete artistic control of the two films so long as RKO approved both projects' stories and so long as the budget did not exceed $500,000. RKO executives would not be allowed to see any footage until Welles chose to show it to them, and no cuts could be made to either film without Welles's approval. Welles was allowed to develop the story without interference, select his own cast and crew, and have the right of final cut. Granting the final cut privilege was unprecedented for a studio because it placed artistic considerations over financial investment. The contract was deeply resented in the film industry, and the Hollywood press took every opportunity to mock RKO and Welles. Schaefer remained a great supporter and saw the unprecedented contract as good publicity. Film scholar Robert L. Carringer wrote: "The simple fact seems to be that Schaefer believed Welles was going to pull off something really big almost as much as Welles did himself."
Welles spent the first five months of his RKO contract trying to get his first project going, without success. "They are laying bets over on the RKO lot that the Orson Welles deal will end up without Orson ever doing a picture there," wrote The Hollywood Reporter. It was agreed that Welles would film Heart of Darkness, previously adapted for The Mercury Theatre on the Air, which would be presented entirely through a first-person camera. After elaborate pre-production and a day of test shooting with a hand-held camera—unheard of at the time—the project never reached production because Welles was unable to trim $50,000 from its budget. Schaefer told Welles that the $500,000 budget could not be exceeded; as war loomed, revenue was declining sharply in Europe by the fall of 1939.
He then started work on the idea that became Citizen Kane. Knowing the script would take time to prepare, Welles suggested to RKO that while that was being done—"so the year wouldn't be lost"—he make a humorous political thriller. Welles proposed The Smiler with a Knife, from a novel by Cecil Day-Lewis. When that project stalled in December 1939, Welles began brainstorming other story ideas with screenwriter Herman J. Mankiewicz, who had been writing Mercury radio scripts. "Arguing, inventing, discarding, these two powerful, headstrong, dazzlingly articulate personalities thrashed toward Kane", wrote biographer Richard Meryman.
One of the long-standing controversies about Citizen Kane has been the authorship of the screenplay. Welles conceived the project with screenwriter Herman J. Mankiewicz, who was writing radio plays for Welles's CBS Radio series, The Campbell Playhouse. Mankiewicz based the original outline on the life of William Randolph Hearst, whom he knew socially and came to hate after being exiled from Hearst's circle.
In February 1940 Welles supplied Mankiewicz with 300 pages of notes and put him under contract to write the first draft screenplay under the supervision of John Houseman, Welles's former partner in the Mercury Theatre. Welles later explained, "I left him on his own finally, because we'd started to waste too much time haggling. So, after mutual agreements on storyline and character, Mank went off with Houseman and did his version, while I stayed in Hollywood and wrote mine." Taking these drafts, Welles drastically condensed and rearranged them, then added scenes of his own. The industry accused Welles of underplaying Mankiewicz's contribution to the script, but Welles countered the attacks by saying, "At the end, naturally, I was the one making the picture, after all—who had to make the decisions. I used what I wanted of Mank's and, rightly or wrongly, kept what I liked of my own."
The terms of the contract stated that Mankiewicz was to receive no credit for his work, as he was hired as a script doctor. Before he signed the contract, Mankiewicz was specifically advised by his agents that all credit for his work belonged to Welles and the Mercury Theatre, the "author and creator". As the film neared release, however, Mankiewicz began pressing for a writing credit and even threatened to take out full-page advertisements in trade papers and to get his friend Ben Hecht to write an exposé for The Saturday Evening Post. Mankiewicz also threatened to go to the Screen Writers Guild and claim full credit for writing the entire script by himself.
After lodging a protest with the Screen Writers Guild, Mankiewicz withdrew it, then vacillated. The question was resolved in January 1941 when the studio, RKO Pictures, awarded Mankiewicz credit. The guild credit form listed Welles first, Mankiewicz second. Welles's assistant Richard Wilson said that the person who circled Mankiewicz's name in pencil, then drew an arrow that put it in first place, was Welles. The official credit reads, "Screenplay by Herman J. Mankiewicz and Orson Welles". Mankiewicz's rancor toward Welles grew over the remaining twelve years of his life.
Questions over the authorship of the Citizen Kane screenplay were revived in 1971 by influential film critic Pauline Kael, whose controversial 50,000-word essay "Raising Kane" was commissioned as an introduction to the shooting script in The Citizen Kane Book, published in October 1971. The book-length essay first appeared in February 1971, in two consecutive issues of The New Yorker magazine. In the ensuing controversy, Welles was defended by colleagues, critics, biographers and scholars, but his reputation was damaged by the essay's charges. Its thesis and some of Kael's findings were contested in later years.
Questions of authorship came into sharper focus with Carringer's thoroughly researched 1978 essay, "The Scripts of Citizen Kane". Carringer studied the collection of script records—"almost a day-to-day record of the history of the scripting"—that was then still intact at RKO. He reviewed all seven drafts and concluded that "the full evidence reveals that Welles's contribution to the Citizen Kane script was not only substantial but definitive."
Citizen Kane was a rare film in that its principal roles were played by actors new to motion pictures. Ten were billed as Mercury Actors, members of the skilled repertory company assembled by Welles for the stage and radio performances of the Mercury Theatre, an independent theater company he founded with Houseman in 1937. "He loved to use the Mercury players," wrote biographer Charles Higham, "and consequently he launched several of them on movie careers."
The film represents the feature film debuts of William Alland, Ray Collins, Joseph Cotten, Agnes Moorehead, Erskine Sanford, Everett Sloane, Paul Stewart, and Welles himself. Despite never having appeared in feature films, some of the cast members were already well known to the public. Cotten had recently become a Broadway star in the hit play The Philadelphia Story with Katharine Hepburn, and Sloane was well known for his role on the radio show The Goldbergs. Mercury actor George Coulouris was a star of the stage in New York and London.
Not all of the cast came from the Mercury Players. As Susan Alexander Kane, Welles cast Dorothy Comingore, an actress who had played supporting parts in films since 1934 under the name "Linda Winters". A discovery of Charlie Chaplin, Comingore was recommended to Welles by Chaplin; Welles then met her at a party in Los Angeles and immediately cast her.
Welles had met stage actress Ruth Warrick while visiting New York on a break from Hollywood and remembered her as a good fit for Emily Norton Kane, later saying that she looked the part. Warrick told Carringer that she was struck by the extraordinary resemblance between herself and Welles's mother when she saw a photograph of Beatrice Ives Welles. She characterized her own personal relationship with Welles as motherly.
"He trained us for films at the same time that he was training himself," recalled Agnes Moorehead. "Orson believed in good acting, and he realized that rehearsals were needed to get the most from his actors. That was something new in Hollywood: nobody seemed interested in bringing in a group to rehearse before scenes were shot. But Orson knew it was necessary, and we rehearsed every sequence before it was shot."
When The March of Time narrator Westbrook Van Voorhis asked for $25,000 to narrate the News on the March sequence, Alland demonstrated his ability to imitate Van Voorhis, and Welles cast him instead.
Welles later said that casting character actor Gino Corrado in the small part of the waiter at the El Rancho broke his heart. Corrado had appeared in many Hollywood films, often as a waiter, and Welles wanted all of the actors to be new to films.
Other uncredited roles went to Thomas A. Curran as Teddy Roosevelt in the faux newsreel; Richard Baer as Hillman, a man at Madison Square Garden, and a man in the News on the March screening room; and Alan Ladd, Arthur O'Connell and Louise Currie as reporters at Xanadu.
Ruth Warrick (died 2005) was the last surviving member of the principal cast. Sonny Bupp (died 2007), who played Kane's young son, was the last surviving credited cast member. Kathryn Trosper Popper (died March 6, 2016) was reported to have been the last surviving actor to have appeared in Citizen Kane. Jean Forward (died September 2016), a soprano who dubbed the singing voice of Susan Alexander, was the last surviving performer from the film.
Production advisor Miriam Geiger quickly compiled a handmade film textbook for Welles, a practical reference book of film techniques that he studied carefully. He then taught himself filmmaking by matching its visual vocabulary to The Cabinet of Dr. Caligari, which he ordered from the Museum of Modern Art, and films by Frank Capra, René Clair, Fritz Lang, King Vidor and Jean Renoir. The one film he genuinely studied was John Ford's Stagecoach, which he watched 40 times. "As it turned out, the first day I ever walked onto a set was my first day as a director," Welles said. "I'd learned whatever I knew in the projection room—from Ford. After dinner every night for about a month, I'd run Stagecoach, often with some different technician or department head from the studio, and ask questions. 'How was this done?' 'Why was this done?' It was like going to school."
Welles's cinematographer for the film was Gregg Toland, described by Welles as "just then, the number-one cameraman in the world." To Welles's astonishment, Toland visited him at his office and said, "I want you to use me on your picture." He had seen some of the Mercury stage productions (including Caesar) and said he wanted to work with someone who had never made a movie. RKO hired Toland on loan from Samuel Goldwyn Productions in the first week of June 1940.
"And he never tried to impress us that he was doing any miracles," Welles recalled. "I was calling for things only a beginner would have been ignorant enough to think anybody could ever do, and there he was, doing them." Toland later explained that he wanted to work with Welles because he anticipated the first-time director's inexperience and reputation for audacious experimentation in the theater would allow the cinematographer to try new and innovative camera techniques that typical Hollywood films would never have allowed him to do. Unaware of filmmaking protocol, Welles adjusted the lights on set as he was accustomed to doing in the theater; Toland quietly re-balanced them, and was angry when one of the crew informed Welles that he was infringing on Toland's responsibilities. During the first few weeks of June, Welles had lengthy discussions about the film with Toland and art director Perry Ferguson in the morning, and in the afternoon and evening he worked with actors and revised the script.
On June 29, 1940—a Saturday morning when few inquisitive studio executives would be around—Welles began filming Citizen Kane. After the disappointment of having Heart of Darkness canceled, Welles followed Ferguson's suggestion and deceived RKO into believing that he was simply shooting camera tests. "But we were shooting the picture," Welles said, "because we wanted to get started and be already into it before anybody knew about it."
At the time RKO executives were pressuring him to agree to direct a film called The Men from Mars, to capitalize on "The War of the Worlds" radio broadcast. Welles said that he would consider making the project but wanted to make a different film first. At this time he did not inform them that he had already begun filming Citizen Kane.
The early footage was called "Orson Welles Tests" on all paperwork. The first "test" shot was the News on the March projection room scene, economically filmed in a real studio projection room in darkness that masked many actors who appeared in other roles later in the film. "At $809 Orson did run substantially beyond the test budget of $528—to create one of the most famous scenes in movie history," wrote Barton Whaley.
The next scenes were the El Rancho nightclub scenes and the scene in which Susan attempts suicide. Welles later said that the nightclub set was available after another film had wrapped and that filming took 10 to 12 days to complete. For these scenes Welles had Comingore's throat sprayed with chemicals to give her voice a harsh, raspy tone. Other scenes shot in secret included those in which Thompson interviews Leland and Bernstein, which were also shot on sets built for other films.
During production, the film was referred to as RKO 281. Most of the filming took place in what is now Stage 19 on the Paramount Pictures lot in Hollywood. There was some location filming at Balboa Park in San Diego and the San Diego Zoo. Photographs of German-Jewish investment banker Otto Hermann Kahn's real-life estate Oheka Castle were used to portray the fictional Xanadu.
At the end of July, RKO approved the film and Welles was allowed to officially begin shooting, despite having already been filming "tests" for several weeks. Welles leaked stories to newspaper reporters that the "tests" had been so good that there was no need to re-shoot them. The first "official" scene to be shot was the breakfast montage sequence between Kane and his first wife Emily. To save money and appease the RKO executives who opposed him, Welles rehearsed scenes extensively before shooting and filmed very few takes of each set-up. Welles never shot master shots for any scene after Toland told him that Ford never shot them. To appease the increasingly curious press, Welles threw a cocktail party for selected reporters, promising that they could watch a scene being filmed. When the journalists arrived, Welles told them they had "just finished" shooting for the day, but held the party anyway. Welles told the press that he was ahead of schedule (without factoring in the month of "test shooting"), discrediting claims that, after a year in Hollywood without making a film, he was a failure in the film industry.
Welles usually worked 16 to 18 hours a day on the film. He often began work at 4 a.m. since the special effects make-up used to age him for certain scenes took up to four hours to apply. Welles used this time to discuss the day's shooting with Toland and other crew members. The special contact lenses used to make Welles look elderly proved very painful, and a doctor was employed to place them into Welles's eyes. Welles had difficulty seeing clearly while wearing them, which caused him to badly cut his wrist when shooting the scene in which Kane breaks up the furniture in Susan's bedroom. While shooting the scene in which Kane shouts at Gettys on the stairs of Susan Alexander's apartment building, Welles fell ten feet; an X-ray revealed two bone chips in his ankle.
The injury required him to direct the film from a wheelchair for two weeks. He eventually wore a steel brace to resume performing on camera; it is visible in the low-angle scene between Kane and Leland after Kane loses the election. For the final scene, a stage at the Selznick studio was equipped with a working furnace, and multiple takes were required to show the sled being put into the fire and the word "Rosebud" consumed. Paul Stewart recalled that on the ninth take the Culver City Fire Department arrived in full gear because the furnace had grown so hot the flue caught fire. "Orson was delighted with the commotion", he said.
When "Rosebud" was burned, Welles choreographed the scene while he had composer Bernard Herrmann's cue playing on the set.
Unlike Schaefer, many members of RKO's board of governors did not like Welles or the control that his contract gave him. However, such board members as Nelson Rockefeller and NBC chief David Sarnoff were sympathetic to Welles. Throughout production, Welles had problems with these executives, who did not respect his contract's stipulation of non-interference, and several spies arrived on set to report what they saw back to them. When executives sometimes arrived on set unannounced, the entire cast and crew would suddenly start playing softball until they left. Before official shooting began, the executives intercepted all copies of the script and delayed their delivery to Welles. They had one copy sent to their office in New York, resulting in it being leaked to the press.
Principal shooting wrapped October 24. Welles then took several weeks away from the film for a lecture tour, during which he also scouted additional locations with Toland and Ferguson. Filming resumed November 15 with some re-shoots. Toland had to leave because of a commitment to shoot Howard Hughes' The Outlaw; his camera crew continued working on the film, and he was replaced by RKO cinematographer Harry J. Wild. The final scene to be shot, on November 30, was Kane's death. Welles boasted that he only went 21 days over his official shooting schedule, without factoring in the month of "camera tests". According to RKO records, the film cost $839,727. Its estimated budget had been $723,800.
Citizen Kane was edited by Robert Wise and assistant editor Mark Robson, both of whom would go on to become successful film directors. Wise was hired after Welles finished shooting the "camera tests" and began officially making the film. Wise said that Welles "had an older editor assigned to him for those tests and evidently he was not too happy and asked to have somebody else. I was roughly Orson's age and had several good credits." Wise and Robson began editing the film while it was still being shot and said that they "could tell certainly that we were getting something very special. It was outstanding film day in and day out."
Welles gave Wise detailed instructions and was usually not present during the film's editing. The film was carefully planned and intentionally shot for such post-production techniques as slow dissolves. The lack of coverage made editing straightforward, since Welles and Toland had effectively edited the film "in camera" by leaving few options for how it could be put together. Wise said the breakfast table sequence took weeks to edit to get the correct "timing" and "rhythm" for the whip pans and overlapping dialogue. The News on the March sequence was edited by RKO's newsreel division to give it authenticity; they used stock footage from Pathé News and the General Film Library.
During post-production Welles and special effects artist Linwood G. Dunn experimented with an optical printer to improve certain scenes that Welles found unsatisfactory from the footage. Whereas Welles was often immediately pleased with Wise's work, he would require Dunn and post-production audio engineer James G. Stewart to re-do their work several times until he was satisfied.
Welles hired Bernard Herrmann to compose the film's score. Where most Hollywood film scores were written quickly, in as few as two or three weeks after filming was completed, Herrmann was given 12 weeks to write the music. He had sufficient time to do his own orchestrations and conducting, and worked on the film reel by reel as it was shot and cut. He wrote complete musical pieces for some of the montages, and Welles edited many of the scenes to match their length.
Film scholars and historians view Citizen Kane as Welles's attempt to create a new style of filmmaking by studying various forms of it and combining them into one. However, Welles stated that his love for cinema began only when he started working on the film. When asked where he got the confidence as a first-time director to direct a film so radically different from contemporary cinema, he responded, "Ignorance, ignorance, sheer ignorance—you know there's no confidence to equal it. It's only when you know something about a profession, I think, that you're timid or careful."
David Bordwell wrote that "The best way to understand Citizen Kane is to stop worshipping it as a triumph of technique." Bordwell argues that the film did not invent any of its famous techniques, such as deep focus cinematography, shots of ceilings, chiaroscuro lighting and temporal jump-cuts, and that many of these stylistic devices had been used in German Expressionist films of the 1920s, such as The Cabinet of Dr. Caligari. But Bordwell asserts that the film brought them all together for the first time and perfected the medium in a single film. In a 1948 interview, D. W. Griffith said, "I loved Citizen Kane and particularly loved the ideas he took from me."
Arguments against the film's cinematic innovations were made as early as 1946 when French historian Georges Sadoul wrote, "The film is an encyclopedia of old techniques." He pointed out such examples as compositions that used both the foreground and the background in the films of Auguste and Louis Lumière, special effects used in the films of Georges Méliès, shots of the ceiling in Erich von Stroheim's Greed and newsreel montages in the films of Dziga Vertov.
French film critic André Bazin defended the film, writing: "In this respect, the accusation of plagiarism could very well be extended to the film's use of panchromatic film or its exploitation of the properties of gelatinous silver halide." Bazin disagreed with Sadoul's comparison to Lumière's cinematography since Citizen Kane used more sophisticated lenses, but acknowledged that it had similarities to such previous works as The 49th Parallel and The Power and the Glory. Bazin stated that "even if Welles did not invent the cinematic devices employed in Citizen Kane, one should nevertheless credit him with the invention of their meaning." Bazin championed the film's techniques for their depiction of heightened reality, but Bordwell believed that the film's use of special effects contradicted some of Bazin's theories.
Citizen Kane rejects the traditional linear, chronological narrative and tells Kane's story entirely in flashbacks using different points of view, many of them from Kane's aged and forgetful associates, the cinematic equivalent of the unreliable narrator in literature. Welles also dispenses with the idea of a single storyteller and uses multiple narrators to recount Kane's life, a technique not used previously in Hollywood films. Each narrator recounts a different part of Kane's life, with each story overlapping another. The film depicts Kane as an enigma, a complicated man who leaves viewers with more questions than answers as to his character, such as the newsreel footage where he is attacked for being both a communist and a fascist.
The technique of flashbacks had been used in earlier films, notably The Power and the Glory (1933), but no film was as immersed in it as Citizen Kane. Thompson the reporter acts as a surrogate for the audience, questioning Kane's associates and piecing together his life.
Films at the time typically had an "omniscient perspective", which Marilyn Fabe says gives the audience the "illusion that we are looking with impunity into a world which is unaware of our gaze". Citizen Kane also begins in that fashion, until the News on the March sequence, after which the audience sees the film through the perspectives of others. The News on the March sequence gives an overview of Kane's entire life (and the film's entire story) at the beginning of the film, leaving the audience without the typical suspense of wondering how it will end. Instead, the film's repetition of events compels the audience to analyze and wonder why Kane's life happened the way that it did, under the pretext of finding out what "Rosebud" means. The film then returns to the omniscient perspective in the final scene, when only the audience discovers what "Rosebud" is.
The most innovative technical aspect of Citizen Kane is the extended use of deep focus, in which the foreground, background, and everything in between are all in sharp focus. Cinematographer Toland achieved this through his experimentation with lenses and lighting. Toland described the achievement, made possible by the sensitivity of modern high-speed film, in an article for Theatre Arts magazine:
New developments in the science of motion picture photography are not abundant at this advanced stage of the game but periodically one is perfected to make this a greater art. Of these I am in an excellent position to discuss what is termed "Pan-focus", as I have been active for two years in its development and used it for the first time in Citizen Kane. Through its use, it is possible to photograph action from a range of eighteen inches from the camera lens to over two hundred feet away, with extreme foreground and background figures and action both recorded in sharp relief. Hitherto, the camera had to be focused either for a close or a distant shot, all efforts to encompass both at the same time resulting in one or the other being out of focus. This handicap necessitated the breaking up of a scene into long and short angles, with much consequent loss of realism. With pan-focus, the camera, like the human eye, sees an entire panorama at once, with everything clear and lifelike.
Another unorthodox method used in the film was the low-angle shot facing upward, allowing ceilings to be shown in the background of several scenes. Every set was built with a ceiling, which broke with studio convention, and many of the ceilings were made of fabric that concealed microphones. Welles felt that the camera should show what the eye sees, and that it was a bad theatrical convention to pretend that there was no ceiling—"a big lie in order to get all those terrible lights up there," he said. He became fascinated with the look of low angles, which made even dull interiors look interesting. One extremely low angle was used to photograph the encounter between Kane and Leland after Kane loses the election; a hole was dug for the camera, which required drilling into the concrete floor.
Welles credited Toland on the same title card as himself. "It's impossible to say how much I owe to Gregg," he said. "He was superb." He called Toland "the best director of photography that ever existed."
Citizen Kane's sound was recorded by Bailey Fesler and re-recorded in post-production by audio engineer James G. Stewart, both of whom had worked in radio. Stewart said that Hollywood films never deviated from a basic pattern of how sound could be recorded or used, but with Welles "deviation from the pattern was possible because he demanded it." Although the film is known for its complex soundtrack, much of the audio is heard as it was recorded by Fesler and without manipulation.
Welles used techniques from radio, such as overlapping dialogue. The scene in which characters sing "Oh, Mr. Kane" was especially complicated and required mixing several soundtracks together. He also used different "sound perspectives" to create the illusion of distance, such as in scenes at Xanadu where characters speak to each other from far apart. Welles experimented with sound in post-production, creating audio montages, and chose to create all of the sound effects for the film instead of using RKO's library of sound effects.
Welles used an aural technique from radio called the "lightning-mix" to link complex montage sequences via a series of related sounds or phrases. For example, Kane grows from a child into a young man in just two shots: as Thatcher hands eight-year-old Kane a sled and wishes him a Merry Christmas, the sequence suddenly jumps to a shot of Thatcher fifteen years later, completing the sentence he began in both the previous shot and the chronological past. Other radio techniques include using a number of voices, each saying a sentence or sometimes merely a fragment of a sentence, and splicing the dialogue together in quick succession, as in the projection room scene. The film's sound cost $16,996, against an original budget of $7,288.
Film critic and director François Truffaut wrote that "Before Kane, nobody in Hollywood knew how to set music properly in movies. Kane was the first, in fact the only, great film that uses radio techniques. ... A lot of filmmakers know enough to follow Auguste Renoir's advice to fill the eyes with images at all costs, but only Orson Welles understood that the sound track had to be filled in the same way." Cedric Belfrage of The Clipper wrote "of all of the delectable flavours that linger on the palate after seeing Kane, the use of sound is the strongest."
The make-up for Citizen Kane was created and applied by Maurice Seiderman (1907–1989), a junior member of the RKO make-up department. He had not been accepted into the union, which recognized him as only an apprentice, but RKO nevertheless used him to make up principal actors. "Apprentices were not supposed to make up any principals, only extras, and an apprentice could not be on a set without a journeyman present," wrote make-up artist Dick Smith, who became friends with Seiderman in 1979. "During his years at RKO I suspect these rules were probably overlooked often." "Seiderman had gained a reputation as one of the most inventive and creatively precise up-and-coming makeup men in Hollywood," wrote biographer Frank Brady.
On an early tour of RKO, Welles met Seiderman in the small make-up lab that he created for himself in an unused dressing room. "Welles fastened on to him at once," wrote biographer Charles Higham, as Seiderman had developed his own makeup methods "that ensured complete naturalness of expression—a naturalness unrivaled in Hollywood." Seiderman developed a thorough plan for aging the principal characters, first making a plaster cast of the face of each of the actors who aged. He made a plaster mold of Welles's body down to the hips.
"My sculptural techniques for the characters' aging were handled by adding pieces of white modeling clay, which matched the plaster, onto the surface of each bust," Seiderman told Norman Gambill. When Seiderman achieved the desired effect, he cast the clay pieces in a soft plastic material that he formulated himself. These appliances were then placed onto the plaster bust and a four-piece mold was made for each phase of aging. The castings were then fully painted and paired with the appropriate wig for evaluation.
Before the actors went before the cameras each day, the pliable pieces were applied directly to their faces to recreate Seiderman's sculptural image. The facial surface was underpainted in a flexible red plastic compound; the red ground resulted in a warmth of tone that was picked up by the panchromatic film. Over that was applied liquid grease paint, and finally a colorless translucent talcum. Seiderman created the effect of skin pores on Kane's face by stippling the surface with a negative cast made from an orange peel.
Welles often arrived on the set at 2:30 am, as application of the sculptural make-up took 3½ hours for the oldest incarnation of Kane. The make-up included appliances to age Welles's shoulders, breast, and stomach. "In the film and production photographs, you can see that Kane had a belly that overhung," Seiderman said. "That was not a costume, it was the rubber sculpture that created the image. You could see how Kane's silk shirt clung wetly to the character's body. It could not have been done any other way."
Seiderman worked with Charles Wright on the wigs. These went over a flexible skull cover that Seiderman created and sewed into place with elastic thread. When he found the wigs too full, he untied one hair at a time to alter their shape. Kane's mustache was inserted into the makeup surface a few hairs at a time, to realistically vary the color and texture. He also made scleral lenses for Welles, Dorothy Comingore, George Coulouris, and Everett Sloane to dull the brightness of their young eyes. The lenses took a long time to fit properly, and Seiderman began work on them before devising any of the other makeup. "I painted them to age in phases, ending with the blood vessels and the arcus senilis of old age." Seiderman's tour de force was the breakfast montage, shot all in one day. "Twelve years, two years shot at each scene," he said.
The major studios gave screen credit for make-up only to the department head. When RKO make-up department head Mel Berns refused to share credit with Seiderman, who was only an apprentice, Welles told Berns that there would be no make-up credit. Welles signed a large advertisement in the Los Angeles newspaper:
THANKS TO EVERYBODY WHO GETS SCREEN CREDIT FOR "CITIZEN KANE"
AND THANKS TO THOSE WHO DON'T
TO ALL THE ACTORS, THE CREW, THE OFFICE, THE MUSICIANS, EVERYBODY
AND PARTICULARLY TO MAURICE SEIDERMAN, THE BEST MAKE-UP MAN IN THE WORLD
Although he received credit only as an assistant, Perry Ferguson was responsible for the film's art direction. Welles and Ferguson got along well during their collaboration. In the weeks before production began, Welles, Toland and Ferguson met regularly to discuss the film and plan every shot, set design and prop. Ferguson would take notes during these discussions and create rough designs of the sets and storyboards for individual shots. After Welles approved the rough sketches, Ferguson made miniature models for Welles and Toland to experiment on with a periscope in order to rehearse and perfect each shot. Ferguson then had detailed drawings made for the set design, including the film's lighting design. The set design was an integral part of the film's overall look and of Toland's cinematography.
In the original script the Great Hall at Xanadu was modeled after the Great Hall in Hearst Castle and its design included a mixture of Renaissance and Gothic styles. "The Hearstian element is brought out in the almost perverse juxtaposition of incongruous architectural styles and motifs," wrote Carringer. Before RKO cut the film's budget, Ferguson's designs were more elaborate and resembled the production designs of early Cecil B. DeMille films and Intolerance. The budget cuts reduced Ferguson's budget by 33 percent and his work cost $58,775 total, which was below average at that time.
To save costs, Ferguson and Welles rewrote scenes set in Xanadu's living room and moved them to the Great Hall. A large staircase from another film was found and used at no additional cost. When asked about the limited budget, Ferguson said, "Very often—as in that much-discussed 'Xanadu' set in Citizen Kane—we can make a foreground piece, a background piece, and imaginative lighting suggests a great deal more on the screen than actually exists on the stage." According to the film's official budget there were 81 sets built, but Ferguson said there were between 106 and 116.
Still photographs of Oheka Castle in Huntington, New York, were used in the opening montage, representing Kane's Xanadu estate. Ferguson also designed statues from Kane's collection with styles ranging from Greek to German Gothic. The sets were also built to accommodate Toland's camera movements: walls were built to fold away and furniture could quickly be moved. The film's famous ceilings were made of muslin fabric, and camera boxes were built into the floors for low-angle shots. Welles later said that he was proud that the film's production values looked far more expensive than its budget would suggest. Although neither worked with Welles again, Toland and Ferguson collaborated on several films in the 1940s.
The film's special effects were supervised by RKO department head Vernon L. Walker. Welles pioneered several visual effects in order to shoot things like crowd scenes and large interior spaces cheaply. For example, the scene in which the camera in the opera house rises dramatically to the rafters, to show the workmen registering their lack of appreciation for Susan Alexander Kane's performance, was achieved by craning the camera upward over the performance, then using a curtain wipe to a miniature of the upper regions of the house, and then another curtain wipe to match it again with the scene of the workmen. Other scenes effectively employed miniatures to make the film look much more expensive than it truly was, such as various shots of Xanadu.
Some shots included rear screen projection in the background, such as Thompson's interview of Leland and some of the ocean backgrounds at Xanadu. Bordwell claims that the scene where Thatcher agrees to be Kane's guardian used rear screen projection to depict young Kane in the background, despite this scene being cited as a prime example of Toland's deep focus cinematography. A special effects camera crew from Walker's department was required for the extreme close-up shots such as Kane's lips when he says "Rosebud" and the shot of the typewriter typing Susan's bad review.
Optical effects artist Dunn claimed that "up to 80 percent of some reels was optically printed." For years these shots were attributed to Toland. The optical printer improved some of the deep focus shots. One problem with the optical printer was that it sometimes created excessive graininess, as in the optical zoom out of the snow globe; Welles decided to superimpose falling snow to mask the graininess in these shots. Toland said that he disliked the results of the optical printer, but acknowledged that "RKO special effects expert Vernon Walker, ASC, and his staff handled their part of the production—a by no means inconsiderable assignment—with ability and fine understanding."
Any time deep focus was impossible—as in the scene in which Kane finishes a negative review of Susan's opera while at the same time firing the person who began writing the review—an optical printer was used to make the whole screen appear in focus, visually layering one piece of film onto another. However, some apparently deep-focus shots were the result of in-camera effects, as in the famous scene in which Kane breaks into Susan's room after her suicide attempt. In the background, Kane and another man break into the room, while simultaneously the medicine bottle and a glass with a spoon in it are in closeup in the foreground. The shot was an in-camera matte shot. The foreground was shot first, with the background dark. Then the background was lit, the foreground darkened, the film rewound, and the scene re-shot with the background action.
The film's music was composed by Bernard Herrmann. Herrmann had composed for Welles for his Mercury Theatre radio broadcasts. Because it was Herrmann's first motion picture score, RKO wanted to pay him only a small fee, but Welles insisted he be paid at the same rate as Max Steiner.
The score established Herrmann as an important new composer of film soundtracks and eschewed the typical Hollywood practice of scoring a film with virtually non-stop music. Instead Herrmann used what he later described as "radio scoring", musical cues typically 5–15 seconds in length that bridge the action or suggest a different emotional response. The breakfast montage sequence begins with a graceful waltz theme and gets darker with each variation on that theme as the passage of time leads to the hardening of Kane's personality and the breakdown of his first marriage.
Herrmann realized that musicians slated to play his music were hired for individual unique sessions; there was no need to write for existing ensembles. This meant that he was free to score for unusual combinations of instruments, even instruments that are not commonly heard. In the opening sequence, for example, the tour of Kane's estate Xanadu, Herrmann introduces a recurring leitmotif played by low woodwinds, including a quartet of alto flutes.
For Susan Alexander Kane's operatic sequence, Welles suggested that Herrmann compose a witty parody of a Mary Garden vehicle, an aria from Salammbô. "Our problem was to create something that would give the audience the feeling of the quicksand into which this simple little girl, having a charming but small voice, is suddenly thrown," Herrmann said. Writing in the style of a 19th-century French Oriental opera, Herrmann put the aria in a key that would force the singer to strain to reach the high notes, culminating in a high D, well outside the range of Susan Alexander. Soprano Jean Forward dubbed the vocal part for Comingore. Houseman claimed to have written the libretto, based on Jean Racine's Athalie and Phedre, although some confusion remains since Lucille Fletcher remembered preparing the lyrics. Fletcher, then Herrmann's wife, wrote the libretto for his opera Wuthering Heights.
Music enthusiasts consider the scene in which Susan Alexander Kane attempts to sing the famous cavatina "Una voce poco fa" from Il barbiere di Siviglia by Gioachino Rossini with vocal coach Signor Matiste especially memorable for depicting the horrors of learning music through mistakes.
In 1972, Herrmann said, "I was fortunate to start my career with a film like Citizen Kane, it's been a downhill run ever since!" Welles loved Herrmann's score and told director Henry Jaglom that it was 50 percent responsible for the film's artistic success.
Some incidental music came from other sources. Welles heard the tune used for the publisher's theme, "Oh, Mr. Kane", in Mexico. Called "A Poco No", the song was written by Pepe Guízar and special lyrics were written by Herman Ruby.
"In a Mizz", a 1939 jazz song by Charlie Barnet and Haven Johnson, bookends Thompson's second interview of Susan Alexander Kane. "I kind of based the whole scene around that song," Welles said. "The music is by Nat Cole—it's his trio." Later—beginning with the lyrics, "It can't be love"—"In a Mizz" is performed at the Everglades picnic, framing the fight in the tent between Susan and Kane. Musicians including bandleader Cee Pee Johnson (drums), Alton Redd (vocals), Raymond Tate (trumpet), Buddy Collette (alto sax) and Buddy Banks (tenor sax) are featured.
All of the music used in the newsreel came from the RKO music library, edited at Welles's request by the newsreel department to achieve what Herrmann called "their own crazy way of cutting". The News on the March theme that accompanies the newsreel titles is "Belgian March" by Anthony Collins, from the film Nurse Edith Cavell. Other examples are an excerpt from Alfred Newman's score for Gunga Din (the exploration of Xanadu), Roy Webb's theme for the film Reno (the growth of Kane's empire), and bits of Webb's score for Five Came Back (introducing Walter Parks Thatcher).
One of the editing techniques used in Citizen Kane was the use of montage to collapse time and space: an episodic sequence was shot on the same set while the characters changed costume and make-up between cuts, so that the scene following each cut appeared to take place in the same location but at a time long after the previous cut. In the breakfast montage, Welles chronicles the breakdown of Kane's first marriage in five vignettes that condense 16 years of story time into two minutes of screen time. Welles said that the idea for the breakfast scene "was stolen from The Long Christmas Dinner by Thornton Wilder ... a one-act play, which is a long Christmas dinner that takes you through something like 60 years of a family's life." The film often uses long dissolves to signify the passage of time and its psychological effect on the characters, such as the scene in which the abandoned sled is covered with snow after the young Kane is sent away with Thatcher.
Welles was influenced by the editing theories of Sergei Eisenstein, using jarring cuts that caused "sudden graphic or associative contrasts", such as the cut from Kane's deathbed to the beginning of the News on the March sequence and the sudden shot of a shrieking cockatoo at the beginning of Raymond's flashback. Although the film typically favors mise-en-scène over montage, the scene in which Kane goes to Susan Alexander's apartment after first meeting her is the only one cut primarily as close-ups, with shots and counter-shots between Kane and Susan. Fabe says that "by using a standard Hollywood technique sparingly, [Welles] revitalizes its psychological expressiveness."
Welles never confirmed a principal source for the character of Charles Foster Kane. Houseman wrote that Kane is a synthesis of different personalities, with Hearst's life used as the main source. Some events and details were invented, and Houseman wrote that he and Mankiewicz also "grafted anecdotes from other giants of journalism, including Pulitzer, Northcliffe and Mank's first boss, Herbert Bayard Swope." Welles said, "Mr. Hearst was quite a bit like Kane, although Kane isn't really founded on Hearst in particular. Many people sat for it, so to speak". He specifically acknowledged that aspects of Kane were drawn from the lives of two business tycoons familiar from his youth in Chicago—Samuel Insull and Harold Fowler McCormick.
The character of Jedediah Leland was based on drama critic Ashton Stevens, George Stevens's uncle and Welles's close boyhood friend. Some detail came from Mankiewicz's own experience as a drama critic in New York.
Many assumed that the character of Susan Alexander Kane was based on Marion Davies, Hearst's mistress whose career he managed and whom Hearst promoted as a motion picture actress. This assumption was a major reason Hearst tried to destroy Citizen Kane. Welles denied that the character was based on Davies, whom he called "an extraordinary woman—nothing like the character Dorothy Comingore played in the movie." He cited Insull's building of the Chicago Opera House, and McCormick's lavish promotion of the opera career of his second wife, Ganna Walska, as direct influences on the screenplay.
The character of political boss Jim W. Gettys is based on Charles F. Murphy, a leader in New York City's infamous Tammany Hall political machine.
Welles credited "Rosebud" to Mankiewicz. Biographer Richard Meryman wrote that the symbol of Mankiewicz's own damaged childhood was a treasured bicycle, stolen while he visited the public library and not replaced by his family as punishment. He regarded it as the prototype of Charles Foster Kane's sled. In his 2015 Welles biography, Patrick McGilligan reported that Mankiewicz himself stated that the word "Rosebud" was taken from the name of a famous racehorse, Old Rosebud. Mankiewicz had a bet on the horse in the 1914 Kentucky Derby, which he won, and McGilligan wrote that "Old Rosebud symbolized his lost youth, and the break with his family". In testimony for the Lundberg suit, Mankiewicz said, "I had undergone psycho-analysis, and Rosebud, under circumstances slightly resembling the circumstances in [Citizen Kane], played a prominent part." Gore Vidal has argued in the New York Review of Books that “Rosebud was what Hearst called his friend Marion Davies’s clitoris”.
The News on the March sequence that begins the film satirizes the journalistic style of The March of Time, the news documentary and dramatization series presented in movie theaters by Time Inc. From 1935 to 1938 Welles was a member of the uncredited company of actors that presented the original radio version.
Houseman claimed that banker Walter P. Thatcher was loosely based on J. P. Morgan. Bernstein was named for Dr. Maurice Bernstein, who had been appointed Welles's guardian; Sloane's portrayal was said to be based on Bernard Herrmann. Herbert Carter, editor of The Inquirer, was named for actor Jack Carter.
Laura Mulvey explored the anti-fascist themes of Citizen Kane in her 1992 monograph for the British Film Institute. The News on the March newsreel presents Kane keeping company with Hitler and other dictators while he smugly assures the public that there will be no war. She wrote that the film reflects "the battle between intervention and isolationism" then being waged in the United States; the film was released six months before the attack on Pearl Harbor, while President Franklin D. Roosevelt was laboring to win public opinion for entering World War II. "In the rhetoric of Citizen Kane," Mulvey writes, "the destiny of isolationism is realised in metaphor: in Kane's own fate, dying wealthy and lonely, surrounded by the detritus of European culture and history."
Journalist Ignacio Ramonet has cited the film as an early example of mass media manipulation of public opinion and of the power media conglomerates wield over the democratic process. He believes that this early example of a media mogul influencing politics is outdated and that today "there are media groups with the power of a thousand Citizen Kanes." Media mogul Rupert Murdoch is sometimes labeled as a latter-day Citizen Kane.
Comparisons have also been made between the career and character of Donald Trump and Charles Foster Kane. Citizen Kane is reported to be one of Trump's favorite films, and his biographer Tim O'Brien has said that Trump is fascinated by and identifies with Kane. In an interview with filmmaker Errol Morris, Trump explained his own interpretation of the film's themes, saying "You learn in 'Kane' maybe wealth isn't everything, because he had the wealth but he didn't have the happiness. In real life I believe that wealth does in fact isolate you from other people. It's a protective mechanism — you have your guard up much more so [than] if you didn't have wealth ... Perhaps I can understand that."
To keep Hearst's influence on Citizen Kane secret, Welles limited access to dailies and managed the film's publicity. A December 1940 feature story in Stage magazine compared the film's narrative to Faust and made no mention of Hearst.
The film was scheduled to premiere at RKO's flagship theater, Radio City Music Hall, on February 14, but in early January 1941 Welles had not finished post-production work and told RKO that the film still needed its musical score. Writers for national magazines had early deadlines, so a rough cut was previewed for a select few on January 3, 1941 for such magazines as Life, Look and Redbook. Gossip columnist Hedda Hopper (an arch-rival of Louella Parsons, the Hollywood correspondent for Hearst papers) showed up to the screening uninvited. Most of the critics at the preview said that they liked the film and gave it good advance reviews, but Hopper wrote negatively about it, calling the film a "vicious and irresponsible attack on a great man" and criticizing its corny writing and old-fashioned photography.
Friday magazine ran an article drawing point-by-point comparisons between Kane and Hearst and documenting how Welles had led Parsons on. Up until this point Welles had been friendly with Parsons. The magazine quoted Welles as saying that he could not understand why she was so nice to him and that she should "wait until the woman finds out that the picture's about her boss." Welles immediately denied making the statement and the editor of Friday admitted that it might be false. Welles apologized to Parsons and assured her that he had never made that remark.
Shortly after Friday's article, Hearst sent Parsons an angry letter complaining that he had learned about Citizen Kane from Hopper and not her. The incident made a fool of Parsons and compelled her to start attacking Welles and the film. Parsons demanded a private screening of the film and personally threatened Schaefer on Hearst's behalf, first with a lawsuit and then with a vague threat of consequences for everyone in Hollywood. On January 10 Parsons and two lawyers working for Hearst were given a private screening of the film. James G. Stewart was present at the screening and said that she walked out of the film.
Soon after, Parsons called Schaefer and threatened RKO with a lawsuit if they released Kane. She also contacted the management of Radio City Music Hall and demanded that they not screen it. The next day, the front page headline in Daily Variety read, "HEARST BANS RKO FROM PAPERS." Hearst began this ban by suppressing promotion of RKO's Kitty Foyle, but within two weeks the ban was lifted for everything except Kane.
When Schaefer did not submit to Parsons, she called other studio heads and made further threats on Hearst's behalf to expose the private lives of people throughout the film industry. Welles was threatened with an exposé about his romance with the married actress Dolores del Río, who wanted the affair kept secret until her divorce was finalized. In a statement to journalists, Welles denied that the film was about Hearst. Hearst began preparing an injunction against the film for libel and invasion of privacy, but Welles's lawyer told him that he doubted Hearst would proceed because of the negative publicity and the testimony that an injunction would require.
The Hollywood Reporter ran a front-page story on January 13 reporting that Hearst papers were about to run a series of editorials attacking Hollywood's practice of hiring refugees and immigrants for jobs that could be done by Americans. The goal was to pressure the other studios into forcing RKO to shelve Kane. Many of those immigrants had fled Europe after the rise of fascism and feared losing the haven of the United States. Soon afterwards, Schaefer was approached by Nicholas Schenck, head of Metro-Goldwyn-Mayer's parent company, with an offer of $805,000 on behalf of Louis B. Mayer and other Hollywood executives for RKO Pictures to destroy all prints of the film and burn the negative.
Once RKO's legal team reassured Schaefer, the studio announced on January 21 that Kane would be released as scheduled, with one of the largest promotional campaigns in the studio's history. Schaefer brought Welles to New York City for a private screening of the film with the New York corporate heads of the studios and their lawyers. There was no objection to its release provided that certain changes were made, including the removal or softening of specific references that might offend Hearst. Welles agreed and cut the running time from 122 minutes to 119 minutes, which satisfied the corporate lawyers.
Radio City Music Hall's management refused to screen Citizen Kane for its premiere. A possible factor was Parsons's threat that The American Weekly would run a defamatory story on the grandfather of major RKO stockholder Nelson Rockefeller. Other exhibitors feared being sued for libel by Hearst and refused to show the film. In March Welles threatened the RKO board of governors with a lawsuit if they did not release the film. Schaefer stood by Welles and opposed the board of governors. When RKO still delayed the film's release Welles offered to buy the film for $1 million and the studio finally agreed to release the film on May 1.
Schaefer managed to book a few theaters willing to show the film. Hearst papers refused to accept advertising. RKO's publicity advertisements for the film erroneously promoted it as a love story.
Kane opened at the RKO Palace Theatre on Broadway in New York on May 1, 1941, in Chicago on May 6, and in Los Angeles on May 8. Welles said that the theater was almost empty at the Chicago premiere he attended.
The day after the New York release, The New York Times said "it comes close to being the most sensational film ever made in Hollywood". The Washington Post called it "one of the most important films in the history" of filmmaking. The Washington Evening Star said Welles was a genius who created "a superbly dramatic biography of another genius" and "a picture that is revolutionary". The Chicago Tribune called the film interesting and different but "its sacrifice of simplicity to eccentricity robs it of distinction and general entertainment value". The Los Angeles Times gave the film a mixed review, saying it was brilliant and skillful at times with an ending that "rather fizzled".
The film did well in cities and larger towns, but it fared poorly in more remote areas. RKO still had problems getting exhibitors to show the film. For example, one chain controlling more than 500 theaters got Welles's film as part of a package but refused to play it, reportedly out of fear of Hearst. Hearst's disruption of the film's release damaged its box office performance and, as a result, it lost $160,000 during its initial run. The film earned $23,878 during its first week in New York. By the ninth week it only made $7,279. Overall it lost money in New York, Boston, Chicago, Los Angeles, San Francisco and Washington, D.C., but made a profit in Seattle.
Written and directed by Welles at Toland's suggestion, the theatrical trailer for Citizen Kane differs from other trailers in that it features no footage of the film itself, acting instead as a wholly original, tongue-in-cheek, pseudo-documentary piece on the film's production. Filmed at the same time as Citizen Kane itself, it offers the only existing behind-the-scenes footage of the film. The trailer, shot by Wild instead of Toland, follows an unseen Welles as he provides narration for a tour around the film set, introductions to the film's core cast members, and a brief overview of Kane's character. The trailer also contains a number of trick shots, including one of Everett Sloane appearing at first to be running into the camera, which turns out to be the reflection of the camera in a mirror.
At the time, it was almost unprecedented for a film trailer not to feature any footage of the film itself; and while Citizen Kane is frequently cited as a groundbreaking, influential film, Simon Callow argues its trailer was no less original in its approach. Callow writes that it has "great playful charm ... it is a miniature documentary, almost an introduction to the cinema ... Teasing, charming, completely original, it is a sort of conjuring trick: Without his face appearing once on the screen, Welles entirely dominates its five [sic] minutes' duration."
Hearing about Citizen Kane enraged Hearst so much that he banned any advertising, reviewing, or mentioning of it in his papers, and had his journalists libel Welles. Welles used Hearst's opposition as a pretext for previewing the film in several opinion-making screenings in Los Angeles, lobbying for its artistic worth against the hostile campaign that Hearst was waging. A special press screening took place in early March. Henry Luce was in attendance and reportedly wanted to buy the film from RKO for $1 million to distribute it himself. The reviews for this screening were positive. A Hollywood Review headline read, "Mr. Genius Comes Through; 'Kane' Astonishing Picture". The Motion Picture Herald reported about the screening and Hearst's intention to sue RKO. Time magazine wrote that "The objection of Mr. Hearst, who founded a publishing empire on sensationalism, is ironic. For to most of the several hundred people who have seen the film at private screenings, Citizen Kane is the most sensational product of the U.S. movie industry." A second press screening occurred in April.
When Schaefer rejected the offer to suppress the film, Hearst banned every newspaper and station in his media conglomerate from reviewing—or even mentioning—the film. He also had many movie theaters ban it, and many others refused to show it for fear of being exposed by his massive newspaper empire. The Oscar-nominated documentary The Battle Over Citizen Kane lays the blame for the film's relative failure squarely at the feet of Hearst. Although the film did respectable business in some cities, its commercial performance fell short of its creators' expectations. Hearst's biographer David Nasaw points out that Hearst's actions were not the only reason Kane failed, however: Welles's narrative innovations, as well as the dark message at the heart of the film (that the pursuit of success is ultimately futile), meant that a popular audience could not appreciate its merits.
Hearst's attacks against Welles went beyond attempting to suppress the film. Welles said that while he was on his post-filming lecture tour a police detective approached him at a restaurant and advised him not to go back to his hotel. A 14-year-old girl had reportedly been hidden in the closet of his room, and two photographers were waiting for him to walk in. Knowing he would be jailed after the resulting publicity, Welles did not return to the hotel but waited until the train left town the following morning. "But that wasn't Hearst," Welles said, "that was a hatchet man from the local Hearst paper who thought he would advance himself by doing it."
In March 1941, Welles directed a Broadway version of Richard Wright's Native Son (and, for luck, used a "Rosebud" sled as a prop). Native Son received positive reviews, but Hearst-owned papers used the opportunity to attack Welles as a communist. The Hearst papers vociferously attacked Welles after his April 1941 radio play, "His Honor, the Mayor", produced for The Free Company radio series on CBS.
Welles described his chance encounter with Hearst in an elevator at the Fairmont Hotel on the night Citizen Kane opened in San Francisco. Hearst and Welles's father were acquaintances, so Welles introduced himself and asked Hearst if he would like to come to the opening. Hearst did not respond. "As he was getting off at his floor, I said, 'Charles Foster Kane would have accepted.' No reply", recalled Welles. "And Kane would have, you know. That was his style—just as he finished Jed Leland's bad review of Susan as an opera singer."
In 1945, Hearst journalist Robert Shaw wrote that the film got "a full tide of insensate fury" from Hearst papers, "then it ebbed suddenly. With one brain cell working, the chief realized that such hysterical barking by the trained seals would attract too much attention to the picture. But to this day the name of Orson Welles is on the official son-of-a-bitch list of every Hearst newspaper".
Despite Hearst's attempts to destroy the film, since 1941 accounts of his life and career have usually included a reference to Citizen Kane, such as the headline "Son of Citizen Kane Dies" for the obituary of Hearst's son. In 2012, the Hearst estate agreed to screen the film at Hearst Castle in San Simeon, breaking Hearst's ban on the film.
Citizen Kane received acclaim from several critics. New York Daily News critic Kate Cameron called it "one of the most interesting and technically superior films that has ever come out of a Hollywood studio". New York World-Telegram critic William Boehnel said that the film was "staggering and belongs at once among the greatest screen achievements". Time magazine wrote that "it has found important new techniques in picture-making and story-telling." Life magazine's review said that "few movies have ever come from Hollywood with such powerful narrative, such original technique, such exciting photography." John C. Mosher of The New Yorker called the film's style "like fresh air" and raved "Something new has come to the movie world at last." Anthony Bower of The Nation called it "brilliant" and praised the cinematography and performances by Welles, Comingore and Cotten. John O'Hara's Newsweek review called it the best picture he'd ever seen and said Welles was "the best actor in the history of acting." Welles called O'Hara's review "the greatest review that anybody ever had."
The day following the premiere of Citizen Kane, The New York Times critic Bosley Crowther wrote that "... it comes close to being the most sensational film ever made in Hollywood."
Count on Mr. Welles: he doesn't do things by halves. ... Upon the screen he discovered an area large enough for his expansive whims to have free play. And the consequence is that he has made a picture of tremendous and overpowering scope, not in physical extent so much as in its rapid and graphic rotation of thoughts. Mr. Welles has put upon the screen a motion picture that really moves.
In the UK C. A. Lejeune of The Observer called it "The most exciting film that has come out of Hollywood in twenty-five years" and Dilys Powell of The Sunday Times said the film's style was made "with the ease and boldness and resource of one who controls and is not controlled by his medium." Edward Tangye Lean of Horizon praised the film's technical style, calling it "perhaps a decade ahead of its contemporaries."
A few reviews were mixed. Otis Ferguson of The New Republic said it was "the boldest free-hand stroke in major screen production since Griffith and Bitzer were running wild to unshackle the camera", but also criticized its style, calling it a "retrogression in film technique" and stating that "it holds no great place" in film history. Ferguson reacted to some of the film's celebrated visual techniques by calling them "just willful dabbling" and "the old shell game." In a rare film review, filmmaker Erich von Stroheim criticized the film's story and non-linear structure, but praised the technical style and performances, and wrote "Whatever the truth may be about it, Citizen Kane is a great picture and will go down in screen history. More power to Welles!"
Some prominent critics wrote negative reviews. In his 1941 review for Sur, Jorge Luis Borges famously called the film "a labyrinth with no center" and predicted that its legacy would be a film "whose historical value is undeniable but which no one cares to see again." The Argus Weekend Magazine critic Erle Cox called the film "amazing" but thought that Welles's break with Hollywood traditions was "overdone". Tatler's James Agate called it "the well-intentioned, muddled, amateurish thing one expects from high-brows" and "a quite good film which tries to run the psychological essay in harness with your detective thriller, and doesn't quite succeed." Eileen Creelman of The New York Sun called it "a cold picture, unemotional, a puzzle rather than a drama". W. H. Auden and James Agee also disliked the film. After watching it on January 29, 1942, the 15-year-old Kenneth Williams curtly described it in his first diary as "boshey rot".
Modern critics have given Citizen Kane an even more positive response. Review aggregation website Rotten Tomatoes reports that 99% of 125 critics gave the film a positive review, with an average rating of 9.70/10. The site's critical consensus reads: "Orson Welles's epic tale of a publishing tycoon's rise and fall is entertaining, poignant, and inventive in its storytelling, earning its reputation as a landmark achievement in film." The film had held a 100% rating on the site until early 2021; in April 2021, it was noted that the addition of an 80-year-old negative review from the Chicago Tribune had reduced the rating to 99%. On Metacritic, however, the film still has a rare weighted average score of 100 out of 100 based on 19 critics, indicating "universal acclaim".
It was widely believed the film would win most of its Academy Award nominations, but it received only the award for Best Original Screenplay. Variety reported that block voting by screen extras deprived Citizen Kane of Best Picture and Best Actor, and similar prejudices were likely to have been responsible for the film receiving no technical awards.
Citizen Kane was the only film made under Welles's original contract with RKO Pictures, which gave him complete creative control. Welles's new business manager and attorney permitted the contract to lapse. In July 1941, Welles reluctantly signed a new and less favorable deal with RKO under which he produced and directed The Magnificent Ambersons (1942), produced Journey into Fear (1943), and began It's All True, a film he agreed to do without payment. In the new contract Welles was an employee of the studio and lost the right to final cut, which later allowed RKO to modify and re-cut The Magnificent Ambersons over his objections. In June 1942, Schaefer resigned the presidency of RKO Pictures and Welles's contract was terminated by his successor.
During World War II, Citizen Kane was not seen in most European countries. It was shown in France for the first time on July 10, 1946, at the Marbeuf theater in Paris. Initially most French film critics were influenced by the negative reviews of Jean-Paul Sartre in 1945 and Georges Sadoul in 1946. At that time many French intellectuals and filmmakers shared Sartre's opinion that Hollywood filmmakers were uncultured. Sartre criticized the film's flashbacks for their nostalgic and romantic preoccupation with the past instead of the realities of the present and said that "the whole film is based on a misconception of what cinema is all about. The film is in the past tense, whereas we all know that cinema has got to be in the present tense."
André Bazin, a then little-known film critic working for Sartre's Les Temps modernes, was asked to give an impromptu speech about the film after a screening at the Colisée Theatre in the autumn of 1946 and changed the opinion of much of the audience. This speech led to Bazin's 1947 article "The Technique of Citizen Kane", which directly influenced public opinion about the film. Carringer wrote that Bazin was "the one who did the most to enhance the film's reputation." Both Bazin's critique of the film and his theories about cinema itself centered on his strong belief in mise-en-scène. These theories were diametrically opposed to both the popular Soviet montage theory and the politically Marxist, anti-Hollywood beliefs of most French film critics of the time. Bazin believed that a film should depict reality without the filmmaker imposing their "will" on the spectator, as Soviet montage theory advocated. Bazin wrote that Citizen Kane's mise-en-scène created a "new conception of filmmaking" and that the freedom its deep focus shots gave the audience was innovative, changing the entire concept of the cinematic image. Bazin wrote extensively about the mise-en-scène of the scene in which Susan Alexander attempts suicide, which Welles filmed in one long take where other films would have used four or five shots. Bazin wrote that the film's mise-en-scène "forces the spectator to participate in the meaning of the film" and creates "a psychological realism which brings the spectator back to the real conditions of perception."
In his 1950 essay "The Evolution of the Language of Cinema", Bazin placed Citizen Kane center stage as a work which ushered in a new period in cinema. One of the first critics to defend motion pictures as being on the same artistic level as literature or painting, Bazin often used the film as an example of cinema as an art form and wrote that "Welles has given the cinema a theoretical restoration. He has enriched his filmic repertory with new or forgotten effects that, in today's artistic context, take on a significance we didn't know they could have." Bazin also compared the film to Roberto Rossellini's Paisan for having "the same aesthetic concept of realism" and to the films of William Wyler shot by Toland (such as The Little Foxes and The Best Years of Our Lives), all of which used deep focus cinematography that Bazin called "a dialectical step forward in film language."
Bazin's praise of the film went beyond film theory and reflected his own philosophy of life. His metaphysical interpretations of the film concerned humankind's place in the universe. Bazin believed that the film examined one person's identity and search for meaning, and that it portrayed the world as ambiguous and full of contradictions, whereas films up until then had simply portrayed people's actions and motivations. Bazin's biographer Dudley Andrew wrote that:
The world of Citizen Kane, that mysterious, dark, and infinitely deep world of space and memory where voices trail off into distant echoes and where meaning dissolves into interpretation, seemed to Bazin to mark the starting point from which all of us try to construct provisionally the sense of our lives.
Bazin went on to co-found Cahiers du cinéma, whose contributors (including the future film directors François Truffaut and Jean-Luc Godard) also praised the film. The popularity of Truffaut's auteur theory further bolstered the reputations of both the film and Welles.
By 1942 Citizen Kane had run its course theatrically, and apart from a few showings at big-city arthouse cinemas it largely vanished; the reputations of both the film and Welles fell among American critics. In 1949, in his overview of cinema The Film Till Now, critic Richard Griffith dismissed Citizen Kane as "... tinpot if not crackpot Freud."
In the United States, it was neglected and forgotten until its revival on television in the mid-to-late 1950s. Three key events in 1956 led to its re-evaluation in the United States: first, RKO was one of the first studios to sell its library to television, and early that year Citizen Kane started to appear on television; second, the film was re-released theatrically to coincide with Welles's return to the New York stage, where he played King Lear; and third, American film critic Andrew Sarris wrote "Citizen Kane: The American Baroque" for Film Culture, describing it as "the great American film" and "the work that influenced the cinema more profoundly than any American film since The Birth of a Nation." Carringer considers Sarris's essay the most important influence on the film's reputation in the US.
During Expo 58, a poll of over 100 film historians named Kane one of the top ten greatest films ever made (the group gave first-place honors to Battleship Potemkin). When a group of young film directors announced their vote for the top six, they were booed for not including the film.
In the decades since, its critical status as one of the greatest films ever made has grown, aided by numerous essays and books, including Peter Cowie's The Cinema of Orson Welles; Ronald Gottesman's Focus on Citizen Kane, a collection of significant reviews and background pieces; and, most notably, Kael's essay "Raising Kane", which promoted the film to a much wider audience than it had previously reached. Despite its criticism of Welles, the essay further popularized the notion of Citizen Kane as the great American film. The rise of art house and film society circuits also aided in the film's rediscovery. David Thomson said that the film "grows with every year as America comes to resemble it."
The British magazine Sight & Sound has produced a Top Ten list surveying film critics every decade since 1952, and the poll is regarded as one of the most respected barometers of critical taste. Citizen Kane was a runner-up to the top ten in the 1952 poll but was voted the greatest film ever made in the 1962 poll, retaining the top spot in every subsequent poll until 2012, when Vertigo displaced it.
The film has also ranked number one in the following film "best of" lists: Julio Castedo's The 100 Best Films of the Century, Cahiers du cinéma's 100 films pour une cinémathèque idéale, Kinovedcheskie Zapiski, Time Out magazine's Top 100 Films (Centenary), The Village Voice's 100 Greatest Films, and The Royal Belgian Film Archive's Most Important and Misappreciated American Films.
Roger Ebert called Citizen Kane the greatest film ever made: "But people don't always ask about the greatest film. They ask, 'What's your favorite movie?' Again, I always answer with Citizen Kane."
In 1998 Time Out conducted a readers' poll in which Citizen Kane was voted the third-best film of all time. On February 18, 1999, the United States Postal Service honored Citizen Kane by including it in its Celebrate the Century series. The film was honored again on February 25, 2003, in a series of U.S. postage stamps marking the 75th anniversary of the Academy of Motion Picture Arts and Sciences. Art director Perry Ferguson represents the behind-the-scenes craftsmen of filmmaking in the series; he is depicted completing a sketch for Citizen Kane.
Citizen Kane was ranked number one in the American Film Institute's polls of film industry artists and leaders in 1998 and 2007. "Rosebud" was chosen as the 17th most memorable movie quotation in a 2005 AFI poll. The film's score was one of 250 nominees for the top 25 film scores in American cinema in another 2005 AFI poll. In 2005 the film was included on Time's All-Time 100 best movies list.
In 2012, the Motion Picture Editors Guild published a list of the 75 best-edited films of all time based on a survey of its membership. Citizen Kane was listed second. In 2015, Citizen Kane ranked 1st on BBC's "100 Greatest American Films" list, voted on by film critics from around the world.
Citizen Kane has been called the most influential film of all time. Richard Corliss has asserted that Jules Dassin's 1941 film The Tell-Tale Heart was the first example of its influence, and that the first pop-culture reference to the film occurred later in 1941, when the spoof comedy Hellzapoppin' featured a "Rosebud" sled. The film's cinematography was almost immediately influential; in 1942 American Cinematographer wrote, "without a doubt the most immediately noticeable trend in cinematography methods during the year was the trend toward crisper definition and increased depth of field."
The cinematography influenced John Huston's The Maltese Falcon. Cinematographer Arthur Edeson used a wider-angle lens than Toland and the film includes many long takes, low angles and shots of the ceiling, but it did not use deep focus shots on large sets to the extent that Citizen Kane did. Edeson and Toland are often credited together for revolutionizing cinematography in 1941. Toland's cinematography influenced his own work on The Best Years of Our Lives. Other films influenced include Gaslight, Mildred Pierce and Jane Eyre. Cinematographer Kazuo Miyagawa said that his use of deep focus was influenced by "the camera work of Gregg Toland in Citizen Kane" and not by traditional Japanese art.
Its cinematography, lighting, and flashback structure influenced such film noirs of the 1940s and 1950s as The Killers, Keeper of the Flame, Caught, The Great Man and This Gun for Hire. David Bordwell and Kristin Thompson have written that "For over a decade thereafter American films displayed exaggerated foregrounds and somber lighting, enhanced by long takes and exaggerated camera movements." However, by the 1960s filmmakers such as those from the French New Wave and Cinéma vérité movements favored "flatter, more shallow images with softer focus" and Citizen Kane's style became less fashionable. American filmmakers in the 1970s combined these two approaches by using long takes, rapid cutting, deep focus and telephoto shots all at once. Its use of long takes influenced films such as The Asphalt Jungle, and its use of deep focus cinematography influenced Gun Crazy, The Whip Hand, The Devil's General and Justice Is Done. The flashback structure in which different characters have conflicting versions of past events influenced La commare secca and Man of Marble.
The film's structure influenced the biographical films Lawrence of Arabia and Mishima: A Life in Four Chapters—which begin with the subject's death and show their life in flashbacks—as well as Welles's thriller Mr. Arkadin. Rosenbaum sees similarities between the film's plot and that of Mr. Arkadin, as well as the theme of nostalgia for the loss of innocence that runs throughout Welles's career, beginning with Citizen Kane and including The Magnificent Ambersons, Mr. Arkadin and Chimes at Midnight. Rosenbaum also points out how the film influenced Warren Beatty's Reds, which depicts the life of Jack Reed through the eyes of Louise Bryant, much as Kane's life is seen through the eyes of Thompson and the people he interviews. Rosenbaum also compared the romantic montage between Reed and Bryant with the breakfast table montage in Citizen Kane.
Akira Kurosawa's Rashomon is often compared to the film because both have complicated plot structures told from the perspectives of multiple characters. Welles said his initial idea for the film was "Basically, the idea Rashomon used later on"; Kurosawa, however, had not seen Citizen Kane before making Rashomon in 1950. Nigel Andrews has compared the film's complex plot structure to Rashomon, Last Year at Marienbad, Memento and Magnolia. Andrews also compares Charles Foster Kane to Michael Corleone in The Godfather, Jake LaMotta in Raging Bull and Daniel Plainview in There Will Be Blood for their portrayals of "haunted megalomaniac[s], presiding over the shards of [their] own [lives]."
The films of Paul Thomas Anderson have been compared to it. Variety compared There Will Be Blood to the film and called it "one that rivals Giant and Citizen Kane in our popular lore as origin stories about how we came to be the people we are." The Master has been called "movieland's only spiritual sequel to Citizen Kane that doesn't shrivel under the hefty comparison". The Social Network has been compared to the film for its depiction of a media mogul and for the character Erica Albright serving a role similar to "Rosebud". The controversy over the Sony hack before the release of The Interview prompted comparisons to Hearst's attempt to suppress Citizen Kane. The film's plot structure and some specific shots influenced Todd Haynes's Velvet Goldmine. Abbas Kiarostami's The Traveler has been called "the Citizen Kane of the Iranian children's cinema." The film's use of overlapping dialogue has influenced the films of Robert Altman and Carol Reed. Reed's films Odd Man Out, The Third Man (in which Welles and Cotten appeared) and Outcast of the Islands were also influenced by the film's cinematography.
Many directors have listed it as one of the greatest films ever made, including Woody Allen, Michael Apted, Les Blank, Kenneth Branagh, Paul Greengrass, Satyajit Ray, Michel Hazanavicius, Michael Mann, Sam Mendes, Jiří Menzel, Paul Schrader, Martin Scorsese, Denys Arcand, Gillian Armstrong, John Boorman, Roger Corman, Alex Cox, Miloš Forman, Norman Jewison, Richard Lester, Richard Linklater, Paul Mazursky, Ronald Neame, Sydney Pollack and Stanley Kubrick. Yasujirō Ozu said it was his favorite non-Japanese film and was impressed by its techniques. François Truffaut said that the film "has inspired more vocations to cinema throughout the world than any other" and recognized its influence in The Barefoot Contessa, Les Mauvaises Rencontres, Lola Montès, and 8 1/2. Truffaut's Day for Night pays tribute to the film in a dream sequence depicting a childhood memory of the character played by Truffaut stealing publicity photos from the film. Numerous film directors have cited the film as influential on their own films, including Theo Angelopoulos, Luc Besson, the Coen brothers, Francis Ford Coppola, Brian De Palma, John Frankenheimer, Stephen Frears, Sergio Leone, Michael Mann, Ridley Scott, Martin Scorsese, Bryan Singer and Steven Spielberg. Ingmar Bergman disliked the film and called it "a total bore. Above all, the performances are worthless. The amount of respect that movie has is absolutely unbelievable!"
William Friedkin said that the film influenced him and called it "a veritable quarry for filmmakers, just as Joyce's Ulysses is a quarry for writers." The film has also influenced other art forms. Carlos Fuentes's novel The Death of Artemio Cruz was partially inspired by the film and the rock band The White Stripes paid unauthorized tribute to the film in the song "The Union Forever".
In 1982, film director Steven Spielberg bought a "Rosebud" sled for $60,500; it was one of three balsa sleds used in the closing scenes and the only one that was not burned. Spielberg eventually donated the sled to the Academy Museum of Motion Pictures as he stated he felt it belonged in a museum. After the Spielberg purchase, it was reported that retiree Arthur Bauer claimed to own another "Rosebud" sled. In early 1942, when Bauer was 12, he had won an RKO publicity contest and selected the hardwood sled as his prize. In 1996, Bauer's estate offered the painted pine sled at auction through Christie's. Bauer's son told CBS News that his mother had once wanted to paint the sled and use it as a plant stand, but Bauer told her to "just save it and put it in the closet." The sled was sold to an anonymous bidder for $233,500.
Welles's Oscar for Best Original Screenplay was believed to be lost until it was rediscovered in 1994. It was withdrawn from a 2007 auction at Sotheby's when bidding failed to reach its estimate of $800,000 to $1.2 million. Owned by the charitable Dax Foundation, it was auctioned for $861,542 in 2011 to an anonymous buyer. Mankiewicz's Oscar was sold at least twice, in 1999 and again in 2012, the latest price being $588,455.
In 1989, Mankiewicz's personal copy of the Citizen Kane script was auctioned at Christie's. The leather-bound volume included the final shooting script and a carbon copy of American that bore handwritten annotations—purportedly made by Hearst's lawyers, who were said to have obtained it in the manner described by Kael in "Raising Kane". Estimated to bring $70,000 to $90,000, it sold for a record $231,000.
In 2007, Welles's personal copy of the last revised draft of Citizen Kane before the shooting script was sold at Sotheby's for $97,000. A second draft of the script titled American, marked "Mr. Welles' working copy", was auctioned by Sotheby's in 2014 for $164,692. A collection of 24 pages from a working script found in Welles's personal possessions by his daughter Beatrice Welles was auctioned in 2014 for $15,000.
In 2014, a collection of approximately 235 Citizen Kane stills and production photos that had belonged to Welles was sold at auction for $7,812.
The composited camera negative of Citizen Kane is believed to be lost forever. The most commonly reported explanation is that it was destroyed in a New Jersey film laboratory fire in the 1970s. In 2021, however, Nicolas Falacci revealed that he had been told "the real story" by a colleague when he was one of two employees in the film restoration lab that assembled the 1991 "restoration" from the best available elements. Falacci noted that throughout the process in 1990–91 he received daily visits from an unnamed "older RKO executive showing up every day – nervous and sweating". According to Falacci's colleague, this elderly man was keen to cover up a clerical error he had made decades earlier while in charge of the studio's inventory, which had resulted in the original camera negatives being sent to a silver reclamation plant, where the nitrate film was destroyed to extract its valuable silver content. Falacci's account is impossible to verify, but it would have been fully in keeping with standard industry practice for many decades, which was to destroy prints and negatives of countless older films deemed no longer commercially viable in order to extract the silver.
Subsequent prints were derived from a master positive (a fine-grain preservation element) made in the 1940s and originally intended for use in overseas distribution. Modern techniques were used to produce a pristine print for a 50th Anniversary theatrical reissue in 1991, which Paramount Pictures released for then-owner Turner Broadcasting System; the reissue earned $1.6 million in North America and $1.8 million worldwide.
In 1955, RKO sold the American television rights to its film library, including Citizen Kane, to C&C Television Corp. In 1960, television rights to RKO's pre-1959 live-action library were acquired by United Artists. RKO kept the non-broadcast television rights to its library.
In 1976, when home video was in its infancy, entrepreneur Snuff Garrett bought cassette rights to the RKO library for what United Press International termed "a pittance". In 1978 The Nostalgia Merchant released the film through Media Home Entertainment. By 1980 the 800-title library of The Nostalgia Merchant was earning $2.3 million a year. "Nobody wanted cassettes four years ago," Garrett told UPI. "It wasn't the first time people called me crazy. It was a hobby with me which became big business." RKO Home Video released the film on VHS and Betamax in 1985.
On December 3, 1984, The Criterion Collection released the film as its first LaserDisc. It was made from a fine grain master positive provided by the UCLA Film and Television Archive. When told about the then-new concept of having an audio commentary on the disc, Welles was skeptical but said "theoretically, that's good for teaching movies, so long as they don't talk nonsense." In 1992 Criterion released a new 50th Anniversary Edition LaserDisc. This version had an improved transfer and additional special features, including the documentary The Legacy of Citizen Kane and Welles's early short The Hearts of Age.
Turner Broadcasting System acquired broadcast television rights to the RKO library in 1986 and the full worldwide rights to the library in 1987. The RKO Home Video unit was reorganized into Turner Home Entertainment that year. In 1991 Turner released a 50th Anniversary Edition on VHS and as a collector's edition that included the film, the documentary Reflections On Citizen Kane, Harlan Lebo's 50th anniversary album, a poster and a copy of the original script. In 1996, Time Warner acquired Turner and Warner Home Video absorbed Turner Home Entertainment. By 2011, Warner Bros. held distribution rights for the film.
In 2001, Warner Home Video released a 60th Anniversary Collectors Edition DVD. The two-disc set included feature-length commentaries by Roger Ebert and Peter Bogdanovich, as well as a second DVD with the feature-length documentary The Battle Over Citizen Kane (1996). It was simultaneously released on VHS. The DVD was criticized for being "too bright, too clean; the dirt and grime had been cleared away, but so had a good deal of the texture, the depth, and the sense of film grain."
In 2003, Welles's daughter Beatrice Welles sued Turner Entertainment, claiming that the Welles estate was the legal copyright holder of the film. She claimed that Welles's deal to terminate his contracts with RKO meant that Turner's copyright of the film was null and void. She also claimed that the estate of Orson Welles was owed 20% of the film's profits if her copyright claim was not upheld. In 2007 she was allowed to proceed with the lawsuit, overturning the 2004 decision in favor of Turner Entertainment on the issue of video rights.
In 2011, it was released on Blu-ray and DVD in a 70th Anniversary Edition. The San Francisco Chronicle called it "the Blu-ray release of the year." Supplements included everything available on the 2001 Warner Home Video release, including The Battle Over Citizen Kane DVD. A 70th Anniversary Ultimate Collector's Edition added a third DVD with RKO 281 (1999), an award-winning TV movie about the making of the film. Its packaging extras included a hardcover book and a folio containing mini reproductions of the original souvenir program, lobby cards, and production memos and correspondence. The transfer for the US releases was scanned at 4K resolution from three different 35mm prints and rectified the quality issues of the 2001 DVD. The rest of the world continued to receive home video releases based on the older transfer; this was partially rectified in 2016 with the release of the 75th Anniversary Edition in both the UK and US, which was a straight repackaging of the main disc from the 70th Anniversary Edition.
On August 11, 2021, Criterion announced that its first slate of 4K Ultra HD releases, comprising six films, would include Citizen Kane. Criterion indicated each title would be available in a combo pack including a 4K UHD disc of the feature film as well as the film and special features on companion Blu-rays. Citizen Kane was released by the collection on November 23, 2021 as a package of one 4K disc and three Blu-ray discs. The release was recalled, however, because at the half-hour mark on the standard Blu-ray the contrast dropped sharply, resulting in a much darker image than intended; the issue did not affect the 4K disc itself.
In the 1980s, Citizen Kane became a catalyst in the controversy over the colorization of black-and-white films. One proponent of film colorization was Ted Turner, whose Turner Entertainment Company owned the RKO library. A Turner Entertainment spokesperson initially stated that Citizen Kane would not be colorized, but in July 1988 Turner said, "Citizen Kane? I'm thinking of colorizing it." In early 1989 it was reported that two companies were producing color tests for Turner Entertainment. Criticism increased when filmmaker Henry Jaglom stated that shortly before his death Welles had implored him "don't let Ted Turner deface my movie with his crayons."
In February 1989, Turner Entertainment President Roger Mayer announced that work to colorize the film had been stopped due to provisions in Welles's 1939 contract with RKO that "could be read to prohibit colorization without permission of the Welles estate." Mayer added that Welles's contract was "quite unusual" and "other contracts we have checked out are not like this at all." Turner had only colorized the final reel of the film before abandoning the project. In 1991 one minute of the colorized test footage was included in the BBC Arena documentary The Complete Citizen Kane.
The colorization controversy was a factor in the passage of the National Film Preservation Act in 1988 which created the National Film Registry the following year. ABC News anchor Peter Jennings reported that "one major reason for doing this is to require people like the broadcaster Ted Turner, who's been adding color to some movies and re-editing others for television, to put notices on those versions saying that the movies have been altered".
{
"paragraph_id": 0,
"text": "Citizen Kane is a 1941 American drama film directed by, produced by, and starring Orson Welles. Welles and Herman J. Mankiewicz wrote the screenplay. The picture was Welles' first feature film. Citizen Kane is frequently cited as the greatest film ever made. For 50 consecutive years, it stood at number 1 in the British Film Institute's Sight & Sound decennial poll of critics, and it topped the American Film Institute's 100 Years ... 100 Movies list in 1998, as well as its 2007 update. The film was nominated for Academy Awards in nine categories and it won for Best Writing (Original Screenplay) by Mankiewicz and Welles. Citizen Kane is praised for Gregg Toland's cinematography, Robert Wise's editing, Bernard Herrmann's music, and its narrative structure, all of which have been considered innovative and precedent-setting.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The quasi-biographical film examines the life and legacy of Charles Foster Kane, played by Welles, a composite character based on American media barons William Randolph Hearst and Joseph Pulitzer, Chicago tycoons Samuel Insull and Harold McCormick, as well as aspects of the screenwriters' own lives. Upon its release, Hearst prohibited any mention of the film in his newspapers.",
"title": ""
},
{
"paragraph_id": 2,
"text": "After the Broadway success of Welles's Mercury Theatre and the controversial 1938 radio broadcast \"The War of the Worlds\" on The Mercury Theatre on the Air, Welles was courted by Hollywood. He signed a contract with RKO Pictures in 1939. Although it was unusual for an untried director, he was given freedom to develop his own story, to use his own cast and crew, and to have final cut privilege. Following two abortive attempts to get a project off the ground, he wrote the screenplay for Citizen Kane, collaborating with Herman J. Mankiewicz. Principal photography took place in 1940, the same year its innovative trailer was shown, and the film was released in 1941.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Although it was a critical success, Citizen Kane failed to recoup its costs at the box office. The film faded from view after its release, but it returned to public attention when it was praised by French critics such as André Bazin and re-released in 1956. In 1958, the film was voted number 9 on the prestigious Brussels 12 list at the 1958 World Expo. Citizen Kane was selected by the Library of Congress as an inductee of the 1989 inaugural group of 25 films for preservation in the United States National Film Registry for being \"culturally, historically, or aesthetically significant\". Roger Ebert wrote of it: \"Its surface is as much fun as any movie ever made. Its depths surpass understanding. I have analyzed it a shot at a time with more than 30 groups, and together we have seen, I believe, pretty much everything that is there on the screen. The more clearly I can see its physical manifestation, the more I am stirred by its mystery.\"",
"title": ""
},
{
"paragraph_id": 4,
"text": "In a mansion called Xanadu, part of a vast palatial estate in Florida, the elderly Charles Foster Kane is on his deathbed. Holding a snow globe, he utters his last word, \"Rosebud\", and dies. A newsreel obituary tells the life story of Kane, an enormously wealthy newspaper publisher and industrial magnate. Kane's death becomes sensational news around the world, and the newsreel's producer tasks reporter Jerry Thompson with discovering the meaning of \"Rosebud\".",
"title": "Plot"
},
{
"paragraph_id": 5,
"text": "Thompson sets out to interview Kane's friends and associates. He tries to approach his second wife, Susan Alexander Kane, now an alcoholic who runs her own nightclub, but she refuses to talk to him. Thompson goes to the private archive of the late banker Walter Parks Thatcher. Through Thatcher's written memoirs, Thompson learns about Kane's rise from a Colorado boarding house and the decline of his fortune.",
"title": "Plot"
},
{
"paragraph_id": 6,
"text": "In 1871, gold was discovered through a mining deed belonging to Kane's mother, Mary Kane. She hired Thatcher to establish a trust that would provide for Kane's education and assume guardianship of him. While the parents and Thatcher discussed arrangements inside the boarding house, the young Kane played happily with a sled in the snow outside. When Kane's parents introduced him to Thatcher, the boy struck Thatcher with his sled and attempted to run away.",
"title": "Plot"
},
{
"paragraph_id": 7,
"text": "By the time Kane gained control of his trust at the age of 25, the mine's productivity and Thatcher's prudent investing had made Kane one of the richest men in the world. Kane took control of the New York Inquirer newspaper and embarked on a career of yellow journalism, publishing scandalous articles that attacked Thatcher's (and his own) business interests. Kane sold his newspaper empire to Thatcher after the 1929 stock market crash left Kane short of cash.",
"title": "Plot"
},
{
"paragraph_id": 8,
"text": "Thompson interviews Kane's personal business manager, Mr. Bernstein. Bernstein recalls that Kane hired the best journalists available to build the Inquirer's circulation. Kane rose to power by successfully manipulating public opinion regarding the Spanish–American War and marrying Emily Norton, the niece of the President of the United States.",
"title": "Plot"
},
{
"paragraph_id": 9,
"text": "Thompson interviews Kane's estranged best friend, Jedediah Leland, in a retirement home. Leland says that Kane's marriage to Emily disintegrated over the years, and he began an affair with amateur singer Susan Alexander while running for Governor of New York. Both his wife and his political opponent discovered the affair, and the public scandal ended his political career. Kane married Susan and forced her into a humiliating operatic career for which she had neither the talent nor the ambition, even building a large opera house for her. After Leland began to write a negative review of Susan's disastrous opera debut, Kane fired him but finished the negative review and printed it. Susan protested that she never wanted the opera career anyway, but Kane forced her to continue the season.",
"title": "Plot"
},
{
"paragraph_id": 10,
"text": "Susan consents to an interview with Thompson and describes the aftermath of her opera career. She attempted suicide, and so Kane finally allowed her to abandon singing. After many unhappy years and after being hit by Kane, she finally decided to leave him. Kane's butler Raymond recounts that, after Susan left him, he began violently destroying the contents of her bedroom. When he happened upon a snow globe, he grew calm and said \"Rosebud\". Thompson concludes that he cannot solve the mystery and that the meaning of Kane's last word will remain unknown.",
"title": "Plot"
},
{
"paragraph_id": 11,
"text": "Back at Xanadu, Kane's belongings are cataloged or discarded by the staff. They find the sled on which eight-year-old Kane was playing on the day that he was taken from his home in Colorado and throw it into a furnace with other items. Behind their backs, the sled slowly burns and its trade name, printed on top, becomes visible through the flames: \"Rosebud\".",
"title": "Plot"
},
{
"paragraph_id": 12,
"text": "The beginning of the film's ending credits states that \"Most of the principal actors in Citizen Kane are new to motion pictures. The Mercury Theatre is proud to introduce them.\" The cast is then listed in the following order, with Orson Welles' credit for playing Charles Foster Kane appearing last:",
"title": "Cast"
},
{
"paragraph_id": 13,
"text": "Additionally, Charles Bennett appears as the entertainer at the head of the chorus line in the Inquirer party sequence, and cinematographer Gregg Toland makes a cameo appearance as an interviewer depicted in part of the News on the March newsreel. Actor Alan Ladd, still unknown at that time, makes a small appearance as a reporter smoking a pipe at the end of the film.",
"title": "Cast"
},
{
"paragraph_id": 14,
"text": "Hollywood had shown interest in Welles as early as 1936. He turned down three scripts sent to him by Warner Bros. In 1937, he declined offers from David O. Selznick, who asked him to head his film company's story department, and William Wyler, who wanted him for a supporting role in Wuthering Heights. \"Although the possibility of making huge amounts of money in Hollywood greatly attracted him,\" wrote biographer Frank Brady, \"he was still totally, hopelessly, insanely in love with the theater, and it is there that he had every intention of remaining to make his mark.\"",
"title": "Production"
},
{
"paragraph_id": 15,
"text": "Following \"The War of the Worlds\" broadcast of his CBS radio series The Mercury Theatre on the Air, Welles was lured to Hollywood with a remarkable contract. RKO Pictures studio head George J. Schaefer wanted to work with Welles after the notorious broadcast, believing that Welles had a gift for attracting mass attention. RKO was also uncharacteristically profitable and was entering into a series of independent production contracts that would add more artistically prestigious films to its roster. Throughout the spring and early summer of 1939, Schaefer constantly tried to lure the reluctant Welles to Hollywood. Welles was in financial trouble after failure of his plays Five Kings and The Green Goddess. At first he simply wanted to spend three months in Hollywood and earn enough money to pay his debts and fund his next theatrical season. Welles first arrived on July 20, 1939, and on his first tour, he called the movie studio \"the greatest electric train set a boy ever had\".",
"title": "Production"
},
{
"paragraph_id": 16,
"text": "Welles signed his contract with RKO on August 21, which stipulated that Welles would act in, direct, produce and write two films. Mercury would get $100,000 for the first film by January 1, 1940, plus 20% of profits after RKO recouped $500,000, and $125,000 for a second film by January 1, 1941, plus 20% of profits after RKO recouped $500,000. The most controversial aspect of the contract was granting Welles complete artistic control of the two films so long as RKO approved both projects' stories and so long as the budget did not exceed $500,000. RKO executives would not be allowed to see any footage until Welles chose to show it to them, and no cuts could be made to either film without Welles's approval. Welles was allowed to develop the story without interference, select his own cast and crew, and have the right of final cut. Granting the final cut privilege was unprecedented for a studio because it placed artistic considerations over financial investment. The contract was deeply resented in the film industry, and the Hollywood press took every opportunity to mock RKO and Welles. Schaefer remained a great supporter and saw the unprecedented contract as good publicity. Film scholar Robert L. Carringer wrote: \"The simple fact seems to be that Schaefer believed Welles was going to pull off something really big almost as much as Welles did himself.\"",
"title": "Production"
},
{
"paragraph_id": 17,
"text": "Welles spent the first five months of his RKO contract trying to get his first project going, without success. \"They are laying bets over on the RKO lot that the Orson Welles deal will end up without Orson ever doing a picture there,\" wrote The Hollywood Reporter. It was agreed that Welles would film Heart of Darkness, previously adapted for The Mercury Theatre on the Air, which would be presented entirely through a first-person camera. After elaborate pre-production and a day of test shooting with a hand-held camera—unheard of at the time—the project never reached production because Welles was unable to trim $50,000 from its budget. Schaefer told Welles that the $500,000 budget could not be exceeded; as war loomed, revenue was declining sharply in Europe by the fall of 1939.",
"title": "Production"
},
{
"paragraph_id": 18,
"text": "He then started work on the idea that became Citizen Kane. Knowing the script would take time to prepare, Welles suggested to RKO that while that was being done—\"so the year wouldn't be lost\"—he make a humorous political thriller. Welles proposed The Smiler with a Knife, from a novel by Cecil Day-Lewis. When that project stalled in December 1939, Welles began brainstorming other story ideas with screenwriter Herman J. Mankiewicz, who had been writing Mercury radio scripts. \"Arguing, inventing, discarding, these two powerful, headstrong, dazzlingly articulate personalities thrashed toward Kane\", wrote biographer Richard Meryman.",
"title": "Production"
},
{
"paragraph_id": 19,
"text": "One of the long-standing controversies about Citizen Kane has been the authorship of the screenplay. Welles conceived the project with screenwriter Herman J. Mankiewicz, who was writing radio plays for Welles's CBS Radio series, The Campbell Playhouse. Mankiewicz based the original outline on the life of William Randolph Hearst, whom he knew socially and came to hate after being exiled from Hearst's circle.",
"title": "Production"
},
{
"paragraph_id": 20,
"text": "In February 1940 Welles supplied Mankiewicz with 300 pages of notes and put him under contract to write the first draft screenplay under the supervision of John Houseman, Welles's former partner in the Mercury Theatre. Welles later explained, \"I left him on his own finally, because we'd started to waste too much time haggling. So, after mutual agreements on storyline and character, Mank went off with Houseman and did his version, while I stayed in Hollywood and wrote mine.\" Taking these drafts, Welles drastically condensed and rearranged them, then added scenes of his own. The industry accused Welles of underplaying Mankiewicz's contribution to the script, but Welles countered the attacks by saying, \"At the end, naturally, I was the one making the picture, after all—who had to make the decisions. I used what I wanted of Mank's and, rightly or wrongly, kept what I liked of my own.\"",
"title": "Production"
},
{
"paragraph_id": 21,
"text": "The terms of the contract stated that Mankiewicz was to receive no credit for his work, as he was hired as a script doctor. Before he signed the contract Mankiewicz was particularly advised by his agents that all credit for his work belonged to Welles and the Mercury Theatre, the \"author and creator\". As the film neared release, however, Mankiewicz began wanting a writing credit for the film and even threatened to take out full-page advertisements in trade papers and to get his friend Ben Hecht to write an exposé for The Saturday Evening Post. Mankiewicz also threatened to go to the Screen Writers Guild and claim full credit for writing the entire script by himself.",
"title": "Production"
},
{
"paragraph_id": 22,
"text": "After lodging a protest with the Screen Writers Guild, Mankiewicz withdrew it, then vacillated. The question was resolved in January 1941 when the studio, RKO Pictures, awarded Mankiewicz credit. The guild credit form listed Welles first, Mankiewicz second. Welles's assistant Richard Wilson said that the person who circled Mankiewicz's name in pencil, then drew an arrow that put it in first place, was Welles. The official credit reads, \"Screenplay by Herman J. Mankiewicz and Orson Welles\". Mankiewicz's rancor toward Welles grew over the remaining twelve years of his life.",
"title": "Production"
},
{
"paragraph_id": 23,
"text": "Questions over the authorship of the Citizen Kane screenplay were revived in 1971 by influential film critic Pauline Kael, whose controversial 50,000-word essay \"Raising Kane\" was commissioned as an introduction to the shooting script in The Citizen Kane Book, published in October 1971. The book-length essay first appeared in February 1971, in two consecutive issues of The New Yorker magazine. In the ensuing controversy, Welles was defended by colleagues, critics, biographers and scholars, but his reputation was damaged by its charges. The essay's thesis was later questioned and some of Kael's findings were also contested in later years.",
"title": "Production"
},
{
"paragraph_id": 24,
"text": "Questions of authorship continued to come into sharper focus with Carringer's 1978 thoroughly researched essay, \"The Scripts of Citizen Kane\". Carringer studied the collection of script records—\"almost a day-to-day record of the history of the scripting\"—that was then still intact at RKO. He reviewed all seven drafts and concluded that \"the full evidence reveals that Welles's contribution to the Citizen Kane script was not only substantial but definitive.\"",
"title": "Production"
},
{
"paragraph_id": 25,
"text": "Citizen Kane was a rare film in that its principal roles were played by actors new to motion pictures. Ten were billed as Mercury Actors, members of the skilled repertory company assembled by Welles for the stage and radio performances of the Mercury Theatre, an independent theater company he founded with Houseman in 1937. \"He loved to use the Mercury players,\" wrote biographer Charles Higham, \"and consequently he launched several of them on movie careers.\"",
"title": "Production"
},
{
"paragraph_id": 26,
"text": "The film represents the feature film debuts of William Alland, Ray Collins, Joseph Cotten, Agnes Moorehead, Erskine Sanford, Everett Sloane, Paul Stewart, and Welles himself. Despite never having appeared in feature films, some of the cast members were already well known to the public. Cotten had recently become a Broadway star in the hit play The Philadelphia Story with Katharine Hepburn and Sloane was well known for his role on the radio show The Goldbergs. Mercury actor George Coulouris was a star of the stage in New York and London.",
"title": "Production"
},
{
"paragraph_id": 27,
"text": "Not all of the cast came from the Mercury Players. Welles cast Dorothy Comingore, an actress who played supporting parts in films since 1934 using the name \"Linda Winters\", as Susan Alexander Kane. A discovery of Charlie Chaplin, Comingore was recommended to Welles by Chaplin, who then met Comingore at a party in Los Angeles and immediately cast her.",
"title": "Production"
},
{
"paragraph_id": 28,
"text": "Welles had met stage actress Ruth Warrick while visiting New York on a break from Hollywood and remembered her as a good fit for Emily Norton Kane, later saying that she looked the part. Warrick told Carringer that she was struck by the extraordinary resemblance between herself and Welles's mother when she saw a photograph of Beatrice Ives Welles. She characterized her own personal relationship with Welles as motherly.",
"title": "Production"
},
{
"paragraph_id": 29,
"text": "\"He trained us for films at the same time that he was training himself,\" recalled Agnes Moorehead. \"Orson believed in good acting, and he realized that rehearsals were needed to get the most from his actors. That was something new in Hollywood: nobody seemed interested in bringing in a group to rehearse before scenes were shot. But Orson knew it was necessary, and we rehearsed every sequence before it was shot.\"",
"title": "Production"
},
{
"paragraph_id": 30,
"text": "When The March of Time narrator Westbrook Van Voorhis asked for $25,000 to narrate the News on the March sequence, Alland demonstrated his ability to imitate Van Voorhis and Welles cast him.",
"title": "Production"
},
{
"paragraph_id": 31,
"text": "Welles later said that casting character actor Gino Corrado in the small part of the waiter at the El Rancho broke his heart. Corrado had appeared in many Hollywood films, often as a waiter, and Welles wanted all of the actors to be new to films.",
"title": "Production"
},
{
"paragraph_id": 32,
"text": "Other uncredited roles went to Thomas A. Curran as Teddy Roosevelt in the faux newsreel; Richard Baer as Hillman, a man at Madison Square Garden, and a man in the News on the March screening room; and Alan Ladd, Arthur O'Connell and Louise Currie as reporters at Xanadu.",
"title": "Production"
},
{
"paragraph_id": 33,
"text": "Ruth Warrick (died 2005) was the last surviving member of the principal cast. Sonny Bupp (died 2007), who played Kane's young son, was the last surviving credited cast member. Kathryn Trosper Popper (died March 6, 2016) was reported to have been the last surviving actor to have appeared in Citizen Kane. Jean Forward (died September 2016), a soprano who dubbed the singing voice of Susan Alexander, was the last surviving performer from the film.",
"title": "Production"
},
{
"paragraph_id": 34,
"text": "Production advisor Miriam Geiger quickly compiled a handmade film textbook for Welles, a practical reference book of film techniques that he studied carefully. He then taught himself filmmaking by matching its visual vocabulary to The Cabinet of Dr. Caligari, which he ordered from the Museum of Modern Art, and films by Frank Capra, René Clair, Fritz Lang, King Vidor and Jean Renoir. The one film he genuinely studied was John Ford's Stagecoach, which he watched 40 times. \"As it turned out, the first day I ever walked onto a set was my first day as a director,\" Welles said. \"I'd learned whatever I knew in the projection room—from Ford. After dinner every night for about a month, I'd run Stagecoach, often with some different technician or department head from the studio, and ask questions. 'How was this done?' 'Why was this done?' It was like going to school.\"",
"title": "Production"
},
{
"paragraph_id": 35,
"text": "Welles's cinematographer for the film was Gregg Toland, described by Welles as \"just then, the number-one cameraman in the world.\" To Welles's astonishment, Toland visited him at his office and said, \"I want you to use me on your picture.\" He had seen some of the Mercury stage productions (including Caesar) and said he wanted to work with someone who had never made a movie. RKO hired Toland on loan from Samuel Goldwyn Productions in the first week of June 1940.",
"title": "Production"
},
{
"paragraph_id": 36,
"text": "\"And he never tried to impress us that he was doing any miracles,\" Welles recalled. \"I was calling for things only a beginner would have been ignorant enough to think anybody could ever do, and there he was, doing them.\" Toland later explained that he wanted to work with Welles because he anticipated the first-time director's inexperience and reputation for audacious experimentation in the theater would allow the cinematographer to try new and innovative camera techniques that typical Hollywood films would never have allowed him to do. Unaware of filmmaking protocol, Welles adjusted the lights on set as he was accustomed to doing in the theater; Toland quietly re-balanced them, and was angry when one of the crew informed Welles that he was infringing on Toland's responsibilities. During the first few weeks of June, Welles had lengthy discussions about the film with Toland and art director Perry Ferguson in the morning, and in the afternoon and evening he worked with actors and revised the script.",
"title": "Production"
},
{
"paragraph_id": 37,
"text": "On June 29, 1940—a Saturday morning when few inquisitive studio executives would be around—Welles began filming Citizen Kane. After the disappointment of having Heart of Darkness canceled, Welles followed Ferguson's suggestion and deceived RKO into believing that he was simply shooting camera tests. \"But we were shooting the picture,\" Welles said, \"because we wanted to get started and be already into it before anybody knew about it.\"",
"title": "Production"
},
{
"paragraph_id": 38,
"text": "At the time RKO executives were pressuring him to agree to direct a film called The Men from Mars, to capitalize on \"The War of the Worlds\" radio broadcast. Welles said that he would consider making the project but wanted to make a different film first. At this time he did not inform them that he had already begun filming Citizen Kane.",
"title": "Production"
},
{
"paragraph_id": 39,
"text": "The early footage was called \"Orson Welles Tests\" on all paperwork. The first \"test\" shot was the News on the March projection room scene, economically filmed in a real studio projection room in darkness that masked many actors who appeared in other roles later in the film. \"At $809 Orson did run substantially beyond the test budget of $528—to create one of the most famous scenes in movie history,\" wrote Barton Whaley.",
"title": "Production"
},
{
"paragraph_id": 40,
"text": "The next scenes were the El Rancho nightclub scenes and the scene in which Susan attempts suicide. Welles later said that the nightclub set was available after another film had wrapped and that filming took 10 to 12 days to complete. For these scenes Welles had Comingore's throat sprayed with chemicals to give her voice a harsh, raspy tone. Other scenes shot in secret included those in which Thompson interviews Leland and Bernstein, which were also shot on sets built for other films.",
"title": "Production"
},
{
"paragraph_id": 41,
"text": "During production, the film was referred to as RKO 281. Most of the filming took place in what is now Stage 19 on the Paramount Pictures lot in Hollywood. There was some location filming at Balboa Park in San Diego and the San Diego Zoo. Photographs of German-Jewish investment banker Otto Hermann Kahn's real-life estate Oheka Castle were used to portray the fictional Xanadu.",
"title": "Production"
},
{
"paragraph_id": 42,
"text": "In the end of July, RKO approved the film and Welles was allowed to officially begin shooting, despite having already been filming \"tests\" for several weeks. Welles leaked stories to newspaper reporters that the \"tests\" had been so good that there was no need to re-shoot them. The first \"official\" scene to be shot was the breakfast montage sequence between Kane and his first wife Emily. To strategically save money and appease the RKO executives who opposed him, Welles rehearsed scenes extensively before actually shooting and filmed very few takes of each shot set-up. Welles never shot master shots for any scene after Toland told him that Ford never shot them. To appease the increasingly curious press, Welles threw a cocktail party for selected reporters, promising that they could watch a scene being filmed. When the journalists arrived Welles told them they had \"just finished\" shooting for the day but still had the party. Welles told the press that he was ahead of schedule (without factoring in the month of \"test shooting\"), thus discrediting claims that after a year in Hollywood without making a film he was a failure in the film industry.",
"title": "Production"
},
{
"paragraph_id": 43,
"text": "Welles usually worked 16 to 18 hours a day on the film. He often began work at 4 a.m. since the special effects make-up used to age him for certain scenes took up to four hours to apply. Welles used this time to discuss the day's shooting with Toland and other crew members. The special contact lenses used to make Welles look elderly proved very painful, and a doctor was employed to place them into Welles's eyes. Welles had difficulty seeing clearly while wearing them, which caused him to badly cut his wrist when shooting the scene in which Kane breaks up the furniture in Susan's bedroom. While shooting the scene in which Kane shouts at Gettys on the stairs of Susan Alexander's apartment building, Welles fell ten feet; an X-ray revealed two bone chips in his ankle.",
"title": "Production"
},
{
"paragraph_id": 44,
"text": "The injury required him to direct the film from a wheelchair for two weeks. He eventually wore a steel brace to resume performing on camera; it is visible in the low-angle scene between Kane and Leland after Kane loses the election. For the final scene, a stage at the Selznick studio was equipped with a working furnace, and multiple takes were required to show the sled being put into the fire and the word \"Rosebud\" consumed. Paul Stewart recalled that on the ninth take the Culver City Fire Department arrived in full gear because the furnace had grown so hot the flue caught fire. \"Orson was delighted with the commotion\", he said.",
"title": "Production"
},
{
"paragraph_id": 45,
"text": "When \"Rosebud\" was burned, Welles choreographed the scene while he had composer Bernard Herrmann's cue playing on the set.",
"title": "Production"
},
{
"paragraph_id": 46,
"text": "Unlike Schaefer, many members of RKO's board of governors did not like Welles or the control that his contract gave him. However such board members as Nelson Rockefeller and NBC chief David Sarnoff were sympathetic to Welles. Throughout production Welles had problems with these executives not respecting his contract's stipulation of non-interference and several spies arrived on set to report what they saw to the executives. When the executives would sometimes arrive on set unannounced the entire cast and crew would suddenly start playing softball until they left. Before official shooting began the executives intercepted all copies of the script and delayed their delivery to Welles. They had one copy sent to their office in New York, resulting in it being leaked to press.",
"title": "Production"
},
{
"paragraph_id": 47,
"text": "Principal shooting wrapped October 24. Welles then took several weeks away from the film for a lecture tour, during which he also scouted additional locations with Toland and Ferguson. Filming resumed November 15 with some re-shoots. Toland had to leave due to a commitment to shoot Howard Hughes' The Outlaw, but Toland's camera crew continued working on the film and Toland was replaced by RKO cinematographer Harry J. Wild. The final day of shooting on November 30 was Kane's death scene. Welles boasted that he only went 21 days over his official shooting schedule, without factoring in the month of \"camera tests\". According to RKO records, the film cost $839,727. Its estimated budget had been $723,800.",
"title": "Production"
},
{
"paragraph_id": 48,
"text": "Citizen Kane was edited by Robert Wise and assistant editor Mark Robson. Both would become successful film directors. Wise was hired after Welles finished shooting the \"camera tests\" and began officially making the film. Wise said that Welles \"had an older editor assigned to him for those tests and evidently he was not too happy and asked to have somebody else. I was roughly Orson's age and had several good credits.\" Wise and Robson began editing the film while it was still shooting and said that they \"could tell certainly that we were getting something very special. It was outstanding film day in and day out.\"",
"title": "Production"
},
{
"paragraph_id": 49,
"text": "Welles gave Wise detailed instructions and was usually not present during the film's editing. The film was very well planned out and intentionally shot for such post-production techniques as slow dissolves. The lack of coverage made editing easy since Welles and Toland edited the film \"in camera\" by leaving few options of how it could be put together. Wise said the breakfast table sequence took weeks to edit and get the correct \"timing\" and \"rhythm\" for the whip pans and overlapping dialogue. The News on the March sequence was edited by RKO's newsreel division to give it authenticity. They used stock footage from Pathé News and the General Film Library.",
"title": "Production"
},
{
"paragraph_id": 50,
"text": "During post-production Welles and special effects artist Linwood G. Dunn experimented with an optical printer to improve certain scenes that Welles found unsatisfactory from the footage. Whereas Welles was often immediately pleased with Wise's work, he would require Dunn and post-production audio engineer James G. Stewart to re-do their work several times until he was satisfied.",
"title": "Production"
},
{
"paragraph_id": 51,
"text": "Welles hired Bernard Herrmann to compose the film's score. Where most Hollywood film scores were written quickly, in as few as two or three weeks after filming was completed, Herrmann was given 12 weeks to write the music. He had sufficient time to do his own orchestrations and conducting, and worked on the film reel by reel as it was shot and cut. He wrote complete musical pieces for some of the montages, and Welles edited many of the scenes to match their length.",
"title": "Production"
},
{
"paragraph_id": 52,
"text": "Film scholars and historians view Citizen Kane as Welles's attempt to create a new style of filmmaking by studying various forms of it and combining them into one. However, Welles stated that his love for cinema began only when he started working on the film. When asked where he got the confidence as a first-time director to direct a film so radically different from contemporary cinema, he responded, \"Ignorance, ignorance, sheer ignorance—you know there's no confidence to equal it. It's only when you know something about a profession, I think, that you're timid or careful.\"",
"title": "Style"
},
{
"paragraph_id": 53,
"text": "David Bordwell wrote that \"The best way to understand Citizen Kane is to stop worshipping it as a triumph of technique.\" Bordwell argues that the film did not invent any of its famous techniques such as deep focus cinematography, shots of the ceilings, chiaroscuro lighting and temporal jump-cuts, and that many of these stylistics had been used in German Expressionist films of the 1920s, such as The Cabinet of Dr. Caligari. But Bordwell asserts that the film did put them all together for the first time and perfected the medium in one single film. In a 1948 interview, D. W. Griffith said, \"I loved Citizen Kane and particularly loved the ideas he took from me.\"",
"title": "Style"
},
{
"paragraph_id": 54,
"text": "Arguments against the film's cinematic innovations were made as early as 1946 when French historian Georges Sadoul wrote, \"The film is an encyclopedia of old techniques.\" He pointed out such examples as compositions that used both the foreground and the background in the films of Auguste and Louis Lumière, special effects used in the films of Georges Méliès, shots of the ceiling in Erich von Stroheim's Greed and newsreel montages in the films of Dziga Vertov.",
"title": "Style"
},
{
"paragraph_id": 55,
"text": "French film critic André Bazin defended the film, writing: \"In this respect, the accusation of plagiarism could very well be extended to the film's use of panchromatic film or its exploitation of the properties of gelatinous silver halide.\" Bazin disagreed with Sadoul's comparison to Lumière's cinematography since Citizen Kane used more sophisticated lenses, but acknowledged that it had similarities to such previous works as The 49th Parallel and The Power and the Glory. Bazin stated that \"even if Welles did not invent the cinematic devices employed in Citizen Kane, one should nevertheless credit him with the invention of their meaning.\" Bazin championed the techniques in the film for its depiction of heightened reality, but Bordwell believed that the film's use of special effects contradicted some of Bazin's theories.",
"title": "Style"
},
{
"paragraph_id": 56,
"text": "Citizen Kane rejects the traditional linear, chronological narrative and tells Kane's story entirely in flashbacks using different points of view, many of them from Kane's aged and forgetful associates, the cinematic equivalent of the unreliable narrator in literature. Welles also dispenses with the idea of a single storyteller and uses multiple narrators to recount Kane's life, a technique not used previously in Hollywood films. Each narrator recounts a different part of Kane's life, with each story overlapping another. The film depicts Kane as an enigma, a complicated man who leaves viewers with more questions than answers as to his character, such as the newsreel footage where he is attacked for being both a communist and a fascist.",
"title": "Style"
},
{
"paragraph_id": 57,
"text": "The technique of flashbacks had been used in earlier films, notably The Power and the Glory (1933), but no film was as immersed in it as Citizen Kane. Thompson the reporter acts as a surrogate for the audience, questioning Kane's associates and piecing together his life.",
"title": "Style"
},
{
"paragraph_id": 58,
"text": "Films typically had an \"omniscient perspective\" at the time, which Marilyn Fabe says give the audience the \"illusion that we are looking with impunity into a world which is unaware of our gaze\". Citizen Kane also begins in that fashion until the News on the March sequence, after which we the audience see the film through the perspectives of others. The News on the March sequence gives an overview of Kane's entire life (and the film's entire story) at the beginning of the film, leaving the audience without the typical suspense of wondering how it will end. Instead, the film's repetitions of events compels the audience to analyze and wonder why Kane's life happened the way that it did, under the pretext of finding out what \"Rosebud\" means. The film then returns to the omniscient perspective in the final scene, when only the audience discovers what \"Rosebud\" is.",
"title": "Style"
},
{
"paragraph_id": 59,
"text": "The most innovative technical aspect of Citizen Kane is the extended use of deep focus, where the foreground, background, and everything in between are all in sharp focus. Cinematographer Toland did this through his experimentation with lenses and lighting. Toland described the achievement in an article for Theatre Arts magazine, made possible by the sensitivity of modern speed film:",
"title": "Style"
},
{
"paragraph_id": 60,
"text": "New developments in the science of motion picture photography are not abundant at this advanced stage of the game but periodically one is perfected to make this a greater art. Of these I am in an excellent position to discuss what is termed \"Pan-focus\", as I have been active for two years in its development and used it for the first time in Citizen Kane. Through its use, it is possible to photograph action from a range of eighteen inches from the camera lens to over two hundred feet away, with extreme foreground and background figures and action both recorded in sharp relief. Hitherto, the camera had to be focused either for a close or a distant shot, all efforts to encompass both at the same time resulting in one or the other being out of focus. This handicap necessitated the breaking up of a scene into long and short angles, with much consequent loss of realism. With pan-focus, the camera, like the human eye, sees an entire panorama at once, with everything clear and lifelike.",
"title": "Style"
},
{
"paragraph_id": 61,
"text": "Another unorthodox method used in the film was the low-angle shots facing upwards, thus allowing ceilings to be shown in the background of several scenes. Every set was built with a ceiling which broke with studio convention, and many were constructed of fabric that concealed microphones. Welles felt that the camera should show what the eye sees, and that it was a bad theatrical convention to pretend that there was no ceiling—\"a big lie in order to get all those terrible lights up there,\" he said. He became fascinated with the look of low angles, which made even dull interiors look interesting. One extremely low angle is used to photograph the encounter between Kane and Leland after Kane loses the election. A hole was dug for the camera, which required drilling into the concrete floor.",
"title": "Style"
},
{
"paragraph_id": 62,
"text": "Welles credited Toland on the same title card as himself. \"It's impossible to say how much I owe to Gregg,\" he said. \"He was superb.\" He called Toland \"the best director of photography that ever existed.\"",
"title": "Style"
},
{
"paragraph_id": 63,
"text": "Citizen Kane's sound was recorded by Bailey Fesler and re-recorded in post-production by audio engineer James G. Stewart, both of whom had worked in radio. Stewart said that Hollywood films never deviated from a basic pattern of how sound could be recorded or used, but with Welles \"deviation from the pattern was possible because he demanded it.\" Although the film is known for its complex soundtrack, much of the audio is heard as it was recorded by Fesler and without manipulation.",
"title": "Style"
},
{
"paragraph_id": 64,
"text": "Welles used techniques from radio like overlapping dialogue. The scene in which characters sing \"Oh, Mr. Kane\" was especially complicated and required mixing several soundtracks together. He also used different \"sound perspectives\" to create the illusion of distances, such as in scenes at Xanadu where characters speak to each other at far distances. Welles experimented with sound in post-production, creating audio montages, and chose to create all of the sound effects for the film instead of using RKO's library of sound effects.",
"title": "Style"
},
{
"paragraph_id": 65,
"text": "Welles used an aural technique from radio called the \"lightning-mix\". Welles used this technique to link complex montage sequences via a series of related sounds or phrases. For example, Kane grows from a child into a young man in just two shots. As Thatcher hands eight-year-old Kane a sled and wishes him a Merry Christmas, the sequence suddenly jumps to a shot of Thatcher fifteen years later, completing the sentence he began in both the previous shot and the chronological past. Other radio techniques include using a number of voices, each saying a sentence or sometimes merely a fragment of a sentence, and splicing the dialogue together in quick succession, such as the projection room scene. The film's sound cost $16,996, but was originally budgeted at $7,288.",
"title": "Style"
},
{
"paragraph_id": 66,
"text": "Film critic and director François Truffaut wrote that \"Before Kane, nobody in Hollywood knew how to set music properly in movies. Kane was the first, in fact the only, great film that uses radio techniques. ... A lot of filmmakers know enough to follow Auguste Renoir's advice to fill the eyes with images at all costs, but only Orson Welles understood that the sound track had to be filled in the same way.\" Cedric Belfrage of The Clipper wrote \"of all of the delectable flavours that linger on the palate after seeing Kane, the use of sound is the strongest.\"",
"title": "Style"
},
{
"paragraph_id": 67,
"text": "The make-up for Citizen Kane was created and applied by Maurice Seiderman (1907–1989), a junior member of the RKO make-up department. He had not been accepted into the union, which recognized him as only an apprentice, but RKO nevertheless used him to make up principal actors. \"Apprentices were not supposed to make up any principals, only extras, and an apprentice could not be on a set without a journeyman present,\" wrote make-up artist Dick Smith, who became friends with Seiderman in 1979. \"During his years at RKO I suspect these rules were probably overlooked often.\" \"Seiderman had gained a reputation as one of the most inventive and creatively precise up-and-coming makeup men in Hollywood,\" wrote biographer Frank Brady.",
"title": "Style"
},
{
"paragraph_id": 68,
"text": "On an early tour of RKO, Welles met Seiderman in the small make-up lab that he created for himself in an unused dressing room. \"Welles fastened on to him at once,\" wrote biographer Charles Higham, as Seiderman had developed his own makeup methods \"that ensured complete naturalness of expression—a naturalness unrivaled in Hollywood.\" Seiderman developed a thorough plan for aging the principal characters, first making a plaster cast of the face of each of the actors who aged. He made a plaster mold of Welles's body down to the hips.",
"title": "Style"
},
{
"paragraph_id": 69,
"text": "\"My sculptural techniques for the characters' aging were handled by adding pieces of white modeling clay, which matched the plaster, onto the surface of each bust,\" Seiderman told Norman Gambill. When Seiderman achieved the desired effect, he cast the clay pieces in a soft plastic material that he formulated himself. These appliances were then placed onto the plaster bust and a four-piece mold was made for each phase of aging. The castings were then fully painted and paired with the appropriate wig for evaluation.",
"title": "Style"
},
{
"paragraph_id": 70,
"text": "Before the actors went before the cameras each day, the pliable pieces were applied directly to their faces to recreate Seiderman's sculptural image. The facial surface was underpainted in a flexible red plastic compound; The red ground resulted in a warmth of tone that was picked up by the panchromatic film. Over that was applied liquid grease paint, and finally a colorless translucent talcum. Seiderman created the effect of skin pores on Kane's face by stippling the surface with a negative cast made from an orange peel.",
"title": "Style"
},
{
"paragraph_id": 71,
"text": "Welles often arrived on the set at 2:30 am, as application of the sculptural make-up took 3½ hours for the oldest incarnation of Kane. The make-up included appliances to age Welles's shoulders, breast, and stomach. \"In the film and production photographs, you can see that Kane had a belly that overhung,\" Seiderman said. \"That was not a costume, it was the rubber sculpture that created the image. You could see how Kane's silk shirt clung wetly to the character's body. It could not have been done any other way.\"",
"title": "Style"
},
{
"paragraph_id": 72,
"text": "Seiderman worked with Charles Wright on the wigs. These went over a flexible skull cover that Seiderman created and sewed into place with elastic thread. When he found the wigs too full, he untied one hair at a time to alter their shape. Kane's mustache was inserted into the makeup surface a few hairs at a time, to realistically vary the color and texture. He also made scleral lenses for Welles, Dorothy Comingore, George Coulouris, and Everett Sloane to dull the brightness of their young eyes. The lenses took a long time to fit properly, and Seiderman began work on them before devising any of the other makeup. \"I painted them to age in phases, ending with the blood vessels and the arcus senilis of old age.\" Seiderman's tour de force was the breakfast montage, shot all in one day. \"Twelve years, two years shot at each scene,\" he said.",
"title": "Style"
},
{
"paragraph_id": 73,
"text": "The major studios gave screen credit for make-up only to the department head. When RKO make-up department head Mel Berns refused to share credit with Seiderman, who was only an apprentice, Welles told Berns that there would be no make-up credit. Welles signed a large advertisement in the Los Angeles newspaper:",
"title": "Style"
},
{
"paragraph_id": 74,
"text": "THANKS TO EVERYBODY WHO GETS SCREEN CREDIT FOR \"CITIZEN KANE\"AND THANKS TO THOSE WHO DON'TTO ALL THE ACTORS, THE CREW, THE OFFICE, THE MUSICIANS, EVERYBODYAND PARTICULARLY TO MAURICE SEIDERMAN, THE BEST MAKE-UP MAN IN THE WORLD",
"title": "Style"
},
{
"paragraph_id": 75,
"text": "Although credited as an assistant, the film's art direction was done by Perry Ferguson. Welles and Ferguson got along during their collaboration. In the weeks before production began Welles, Toland and Ferguson met regularly to discuss the film and plan every shot, set design and prop. Ferguson would take notes during these discussions and create rough designs of the sets and story boards for individual shots. After Welles approved the rough sketches, Ferguson made miniature models for Welles and Toland to experiment on with a periscope in order to rehearse and perfect each shot. Ferguson then had detailed drawings made for the set design, including the film's lighting design. The set design was an integral part of the film's overall look and Toland's cinematography.",
"title": "Style"
},
{
"paragraph_id": 76,
"text": "In the original script the Great Hall at Xanadu was modeled after the Great Hall in Hearst Castle and its design included a mixture of Renaissance and Gothic styles. \"The Hearstian element is brought out in the almost perverse juxtaposition of incongruous architectural styles and motifs,\" wrote Carringer. Before RKO cut the film's budget, Ferguson's designs were more elaborate and resembled the production designs of early Cecil B. DeMille films and Intolerance. The budget cuts reduced Ferguson's budget by 33 percent and his work cost $58,775 total, which was below average at that time.",
"title": "Style"
},
{
"paragraph_id": 77,
"text": "To save costs Ferguson and Welles re-wrote scenes in Xanadu's living room and transported them to the Great Hall. A large staircase from another film was found and used at no additional cost. When asked about the limited budget, Ferguson said \"Very often—as in that much-discussed 'Xanadu' set in Citizen Kane—we can make a foreground piece, a background piece, and imaginative lighting suggests a great deal more on the screen than actually exists on the stage.\" According to the film's official budget there were 81 sets built, but Ferguson said there were between 106 and 116.",
"title": "Style"
},
{
"paragraph_id": 78,
"text": "Still photographs of Oheka Castle in Huntington, New York, were used in the opening montage, representing Kane's Xanadu estate. Ferguson also designed statues from Kane's collection with styles ranging from Greek to German Gothic. The sets were also built to accommodate Toland's camera movements. Walls were built to fold and furniture could quickly be moved. The film's famous ceilings were made out of muslin fabric and camera boxes were built into the floors for low angle shots. Welles later said that he was proud that the film production value looked much more expensive than the film's budget. Although neither worked with Welles again, Toland and Ferguson collaborated in several films in the 1940s.",
"title": "Style"
},
{
"paragraph_id": 79,
"text": "The film's special effects were supervised by RKO department head Vernon L. Walker. Welles pioneered several visual effects to cheaply shoot things like crowd scenes and large interior spaces. For example, the scene in which the camera in the opera house rises dramatically to the rafters, to show the workmen showing a lack of appreciation for Susan Alexander Kane's performance, was shot by a camera craning upwards over the performance scene, then a curtain wipe to a miniature of the upper regions of the house, and then another curtain wipe matching it again with the scene of the workmen. Other scenes effectively employed miniatures to make the film look much more expensive than it truly was, such as various shots of Xanadu.",
"title": "Style"
},
{
"paragraph_id": 80,
"text": "Some shots included rear screen projection in the background, such as Thompson's interview of Leland and some of the ocean backgrounds at Xanadu. Bordwell claims that the scene where Thatcher agrees to be Kane's guardian used rear screen projection to depict young Kane in the background, despite this scene being cited as a prime example of Toland's deep focus cinematography. A special effects camera crew from Walker's department was required for the extreme close-up shots such as Kane's lips when he says \"Rosebud\" and the shot of the typewriter typing Susan's bad review.",
"title": "Style"
},
{
"paragraph_id": 81,
"text": "Optical effects artist Dunn claimed that \"up to 80 percent of some reels was optically printed.\" These shots were traditionally attributed to Toland for years. The optical printer improved some of the deep focus shots. One problem with the optical printer was that it sometimes created excessive graininess, such as the optical zoom out of the snow globe. Welles decided to superimpose snow falling to mask the graininess in these shots. Toland said that he disliked the results of the optical printer, but acknowledged that \"RKO special effects expert Vernon Walker, ASC, and his staff handled their part of the production—a by no means inconsiderable assignment—with ability and fine understanding.\"",
"title": "Style"
},
{
"paragraph_id": 82,
"text": "Any time deep focus was impossible—as in the scene in which Kane finishes a negative review of Susan's opera while at the same time firing the person who began writing the review—an optical printer was used to make the whole screen appear in focus, visually layering one piece of film onto another. However, some apparently deep-focus shots were the result of in-camera effects, as in the famous scene in which Kane breaks into Susan's room after her suicide attempt. In the background, Kane and another man break into the room, while simultaneously the medicine bottle and a glass with a spoon in it are in closeup in the foreground. The shot was an in-camera matte shot. The foreground was shot first, with the background dark. Then the background was lit, the foreground darkened, the film rewound, and the scene re-shot with the background action.",
"title": "Style"
},
{
"paragraph_id": 83,
"text": "The film's music was composed by Bernard Herrmann. Herrmann had composed for Welles for his Mercury Theatre radio broadcasts. Because it was Herrmann's first motion picture score, RKO wanted to pay him only a small fee, but Welles insisted he be paid at the same rate as Max Steiner.",
"title": "Style"
},
{
"paragraph_id": 84,
"text": "The score established Herrmann as an important new composer of film soundtracks and eschewed the typical Hollywood practice of scoring a film with virtually non-stop music. Instead Herrmann used what he later described as \"radio scoring\", musical cues typically 5–15 seconds in length that bridge the action or suggest a different emotional response. The breakfast montage sequence begins with a graceful waltz theme and gets darker with each variation on that theme as the passage of time leads to the hardening of Kane's personality and the breakdown of his first marriage.",
"title": "Style"
},
{
"paragraph_id": 85,
"text": "Herrmann realized that musicians slated to play his music were hired for individual unique sessions; there was no need to write for existing ensembles. This meant that he was free to score for unusual combinations of instruments, even instruments that are not commonly heard. In the opening sequence, for example, the tour of Kane's estate Xanadu, Herrmann introduces a recurring leitmotif played by low woodwinds, including a quartet of alto flutes.",
"title": "Style"
},
{
"paragraph_id": 86,
"text": "For Susan Alexander Kane's operatic sequence, Welles suggested that Herrmann compose a witty parody of a Mary Garden vehicle, an aria from Salammbô. \"Our problem was to create something that would give the audience the feeling of the quicksand into which this simple little girl, having a charming but small voice, is suddenly thrown,\" Herrmann said. Writing in the style of a 19th-century French Oriental opera, Herrmann put the aria in a key that would force the singer to strain to reach the high notes, culminating in a high D, well outside the range of Susan Alexander. Soprano Jean Forward dubbed the vocal part for Comingore. Houseman claimed to have written the libretto, based on Jean Racine's Athalie and Phedre, although some confusion remains since Lucille Fletcher remembered preparing the lyrics. Fletcher, then Herrmann's wife, wrote the libretto for his opera Wuthering Heights.",
"title": "Style"
},
{
"paragraph_id": 87,
"text": "Music enthusiasts consider the scene in which Susan Alexander Kane attempts to sing the famous cavatina \"Una voce poco fa\" from Il barbiere di Siviglia by Gioachino Rossini with vocal coach Signor Matiste as especially memorable for depicting the horrors of learning music through mistakes.",
"title": "Style"
},
{
"paragraph_id": 88,
"text": "In 1972, Herrmann said, \"I was fortunate to start my career with a film like Citizen Kane, it's been a downhill run ever since!\" Welles loved Herrmann's score and told director Henry Jaglom that it was 50 percent responsible for the film's artistic success.",
"title": "Style"
},
{
"paragraph_id": 89,
"text": "Some incidental music came from other sources. Welles heard the tune used for the publisher's theme, \"Oh, Mr. Kane\", in Mexico. Called \"A Poco No\", the song was written by Pepe Guízar and special lyrics were written by Herman Ruby.",
"title": "Style"
},
{
"paragraph_id": 90,
"text": "\"In a Mizz\", a 1939 jazz song by Charlie Barnet and Haven Johnson, bookends Thompson's second interview of Susan Alexander Kane. \"I kind of based the whole scene around that song,\" Welles said. \"The music is by Nat Cole—it's his trio.\" Later—beginning with the lyrics, \"It can't be love\"—\"In a Mizz\" is performed at the Everglades picnic, framing the fight in the tent between Susan and Kane. Musicians including bandleader Cee Pee Johnson (drums), Alton Redd (vocals), Raymond Tate (trumpet), Buddy Collette (alto sax) and Buddy Banks (tenor sax) are featured.",
"title": "Style"
},
{
"paragraph_id": 91,
"text": "All of the music used in the newsreel came from the RKO music library, edited at Welles's request by the newsreel department to achieve what Herrmann called \"their own crazy way of cutting\". The News on the March theme that accompanies the newsreel titles is \"Belgian March\" by Anthony Collins, from the film Nurse Edith Cavell. Other examples are an excerpt from Alfred Newman's score for Gunga Din (the exploration of Xanadu), Roy Webb's theme for the film Reno (the growth of Kane's empire), and bits of Webb's score for Five Came Back (introducing Walter Parks Thatcher).",
"title": "Style"
},
{
"paragraph_id": 92,
"text": "One of the editing techniques used in Citizen Kane was the use of montage to collapse time and space, using an episodic sequence on the same set while the characters changed costume and make-up between cuts so that the scene following each cut would look as if it took place in the same location, but at a time long after the previous cut. In the breakfast montage, Welles chronicles the breakdown of Kane's first marriage in five vignettes that condense 16 years of story time into two minutes of screen time. Welles said that the idea for the breakfast scene \"was stolen from The Long Christmas Dinner by Thornton Wilder ... a one-act play, which is a long Christmas dinner that takes you through something like 60 years of a family's life.\" The film often uses long dissolves to signify the passage of time and its psychological effect of the characters, such as the scene in which the abandoned sled is covered with snow after the young Kane is sent away with Thatcher.",
"title": "Style"
},
{
"paragraph_id": 93,
"text": "Welles was influenced by the editing theories of Sergei Eisenstein by using jarring cuts that caused \"sudden graphic or associative contrasts\", such as the cut from Kane's deathbed to the beginning of the News on the March sequence and a sudden shot of a shrieking cockatoo at the beginning of Raymond's flashback. Although the film typically favors mise-en-scène over montage, the scene in which Kane goes to Susan Alexander's apartment after first meeting her is the only one that is primarily cut as close-ups with shots and counter shots between Kane and Susan. Fabe says that \"by using a standard Hollywood technique sparingly, [Welles] revitalizes its psychological expressiveness.\"",
"title": "Style"
},
{
"paragraph_id": 94,
"text": "Welles never confirmed a principal source for the character of Charles Foster Kane. Houseman wrote that Kane is a synthesis of different personalities, with Hearst's life used as the main source. Some events and details were invented, and Houseman wrote that he and Mankiewicz also \"grafted anecdotes from other giants of journalism, including Pulitzer, Northcliffe and Mank's first boss, Herbert Bayard Swope.\" Welles said, \"Mr. Hearst was quite a bit like Kane, although Kane isn't really founded on Hearst in particular. Many people sat for it, so to speak\". He specifically acknowledged that aspects of Kane were drawn from the lives of two business tycoons familiar from his youth in Chicago—Samuel Insull and Harold Fowler McCormick.",
"title": "Sources"
},
{
"paragraph_id": 95,
"text": "The character of Jedediah Leland was based on drama critic Ashton Stevens, George Stevens's uncle and Welles's close boyhood friend. Some detail came from Mankiewicz's own experience as a drama critic in New York.",
"title": "Sources"
},
{
"paragraph_id": 96,
"text": "Many assumed that the character of Susan Alexander Kane was based on Marion Davies, Hearst's mistress whose career he managed and whom Hearst promoted as a motion picture actress. This assumption was a major reason Hearst tried to destroy Citizen Kane. Welles denied that the character was based on Davies, whom he called \"an extraordinary woman—nothing like the character Dorothy Comingore played in the movie.\" He cited Insull's building of the Chicago Opera House, and McCormick's lavish promotion of the opera career of his second wife, Ganna Walska, as direct influences on the screenplay.",
"title": "Sources"
},
{
"paragraph_id": 97,
"text": "The character of political boss Jim W. Gettys is based on Charles F. Murphy, a leader in New York City's infamous Tammany Hall political machine.",
"title": "Sources"
},
{
"paragraph_id": 98,
"text": "Welles credited \"Rosebud\" to Mankiewicz. Biographer Richard Meryman wrote that the symbol of Mankiewicz's own damaged childhood was a treasured bicycle, stolen while he visited the public library and not replaced by his family as punishment. He regarded it as the prototype of Charles Foster Kane's sled. In his 2015 Welles biography, Patrick McGilligan reported that Mankiewicz himself stated that the word \"Rosebud\" was taken from the name of a famous racehorse, Old Rosebud. Mankiewicz had a bet on the horse in the 1914 Kentucky Derby, which he won, and McGilligan wrote that \"Old Rosebud symbolized his lost youth, and the break with his family\". In testimony for the Lundberg suit, Mankiewicz said, \"I had undergone psycho-analysis, and Rosebud, under circumstances slightly resembling the circumstances in [Citizen Kane], played a prominent part.\" Gore Vidal has argued in the New York Review of Books that “Rosebud was what Hearst called his friend Marion Davies’s clitoris”.",
"title": "Sources"
},
{
"paragraph_id": 99,
"text": "The News on the March sequence that begins the film satirizes the journalistic style of The March of Time, the news documentary and dramatization series presented in movie theaters by Time Inc. From 1935 to 1938 Welles was a member of the uncredited company of actors that presented the original radio version.",
"title": "Sources"
},
{
"paragraph_id": 100,
"text": "Houseman claimed that banker Walter P. Thatcher was loosely based on J. P. Morgan. Bernstein was named for Dr. Maurice Bernstein, appointed Welles's guardian; Sloane's portrayal was said to be based on Bernard Herrmann. Herbert Carter, editor of The Inquirer, was named for actor Jack Carter.",
"title": "Sources"
},
{
"paragraph_id": 101,
"text": "Laura Mulvey explored the anti-fascist themes of Citizen Kane in her 1992 monograph for the British Film Institute. The News on the March newsreel presents Kane keeping company with Hitler and other dictators while he smugly assures the public that there will be no war. She wrote that the film reflects \"the battle between intervention and isolationism\" then being waged in the United States; the film was released six months before the attack on Pearl Harbor, while President Franklin D. Roosevelt was laboring to win public opinion for entering World War II. \"In the rhetoric of Citizen Kane,\" Mulvey writes, \"the destiny of isolationism is realised in metaphor: in Kane's own fate, dying wealthy and lonely, surrounded by the detritus of European culture and history.\"",
"title": "Political themes"
},
{
"paragraph_id": 102,
"text": "Journalist Ignacio Ramonet has cited the film as an early example of mass media manipulation of public opinion and the power that media conglomerates have on influencing the democratic process. He believes that this early example of a media mogul influencing politics is outdated and that today \"there are media groups with the power of a thousand Citizen Kanes.\" Media mogul Rupert Murdoch is sometimes labeled as a latter-day Citizen Kane.",
"title": "Political themes"
},
{
"paragraph_id": 103,
"text": "Comparisons have also been made between the career and character of Donald Trump and Charles Foster Kane. Citizen Kane is reported to be one of Trump's favorite films, and his biographer Tim O’Brien has said that Trump is fascinated by and identifies with Kane. In an interview with filmmaker Errol Morris, Trump explained his own interpretation of the film's themes, saying \"You learn in 'Kane' maybe wealth isn't everything, because he had the wealth but he didn't have the happiness. In real life I believe that wealth does in fact isolate you from other people. It's a protective mechanism — you have your guard up much more so [than] if you didn't have wealth...Perhaps I can understand that.\"",
"title": "Political themes"
},
{
"paragraph_id": 104,
"text": "To ensure that Hearst's life's influence on Citizen Kane was a secret, Welles limited access to dailies and managed the film's publicity. A December 1940 feature story in Stage magazine compared the film's narrative to Faust and made no mention of Hearst.",
"title": "Pre-release controversy"
},
{
"paragraph_id": 105,
"text": "The film was scheduled to premiere at RKO's flagship theater Radio City Music Hall on February 14, but in early January 1941 Welles was not finished with post-production work and told RKO that it still needed its musical score. Writers for national magazines had early deadlines and so a rough cut was previewed for a select few on January 3, 1941 for such magazines as Life, Look and Redbook. Gossip columnist Hedda Hopper (an arch-rival of Louella Parsons, the Hollywood correspondent for Hearst papers) showed up to the screening uninvited. Most of the critics at the preview said that they liked the film and gave it good advanced reviews. Hopper wrote negatively about it, calling the film a \"vicious and irresponsible attack on a great man\" and criticizing its corny writing and old fashioned photography.",
"title": "Pre-release controversy"
},
{
"paragraph_id": 106,
"text": "Friday magazine ran an article drawing point-by-point comparisons between Kane and Hearst and documented how Welles had led on Parsons. Up until this Welles had been friendly with Parsons. The magazine quoted Welles as saying that he could not understand why she was so nice to him and that she should \"wait until the woman finds out that the picture's about her boss.\" Welles immediately denied making the statement and the editor of Friday admitted that it might be false. Welles apologized to Parsons and assured her that he had never made that remark.",
"title": "Pre-release controversy"
},
{
"paragraph_id": 107,
"text": "Shortly after Friday's article, Hearst sent Parsons an angry letter complaining that he had learned about Citizen Kane from Hopper and not her. The incident made a fool of Parsons and compelled her to start attacking Welles and the film. Parsons demanded a private screening of the film and personally threatened Schaefer on Hearst's behalf, first with a lawsuit and then with a vague threat of consequences for everyone in Hollywood. On January 10 Parsons and two lawyers working for Hearst were given a private screening of the film. James G. Stewart was present at the screening and said that she walked out of the film.",
"title": "Pre-release controversy"
},
{
"paragraph_id": 108,
"text": "Soon after, Parsons called Schaefer and threatened RKO with a lawsuit if they released Kane. She also contacted the management of Radio City Music Hall and demanded that they should not screen it. The next day, the front page headline in Daily Variety read, \"HEARST BANS RKO FROM PAPERS.\" Hearst began this ban by suppressing promotion of RKO's Kitty Foyle, but in two weeks the ban was lifted for everything except Kane.",
"title": "Pre-release controversy"
},
{
"paragraph_id": 109,
"text": "When Schaefer did not submit to Parsons she called other studio heads and made more threats on behalf of Hearst to expose the private lives of people throughout the entire film industry. Welles was threatened with an exposé about his romance with the married actress Dolores del Río, who wanted the affair kept secret until her divorce was finalized. In a statement to journalists Welles denied that the film was about Hearst. Hearst began preparing an injunction against the film for libel and invasion of privacy, but Welles's lawyer told him that he doubted Hearst would proceed due to the negative publicity and required testimony that an injunction would bring.",
"title": "Pre-release controversy"
},
{
"paragraph_id": 110,
"text": "The Hollywood Reporter ran a front-page story on January 13 that Hearst papers were about to run a series of editorials attacking Hollywood's practice of hiring refugees and immigrants for jobs that could be done by Americans. The goal was to put pressure on the other studios to force RKO to shelve Kane. Many of those immigrants had fled Europe after the rise of fascism and feared losing the haven of the United States. Soon afterwards, Schaefer was approached by Nicholas Schenck, head of Metro-Goldwyn-Mayer's parent company, with an offer on the behalf of Louis B. Mayer and other Hollywood executives to RKO Pictures of $805,000 to destroy all prints of the film and burn the negative.",
"title": "Pre-release controversy"
},
{
"paragraph_id": 111,
"text": "Once RKO's legal team reassured Schaefer, the studio announced on January 21 that Kane would be released as scheduled, and with one of the largest promotional campaigns in the studio's history. Schaefer brought Welles to New York City for a private screening of the film with the New York corporate heads of the studios and their lawyers. There was no objection to its release provided that certain changes, including the removal or softening of specific references that might offend Hearst, were made. Welles agreed and cut the running time from 122 minutes to 119 minutes. The cuts satisfied the corporate lawyers.",
"title": "Pre-release controversy"
},
{
"paragraph_id": 112,
"text": "Radio City Music Hall's management refused to screen Citizen Kane for its premiere. A possible factor was Parsons's threat that The American Weekly would run a defamatory story on the grandfather of major RKO stockholder Nelson Rockefeller. Other exhibitors feared being sued for libel by Hearst and refused to show the film. In March Welles threatened the RKO board of governors with a lawsuit if they did not release the film. Schaefer stood by Welles and opposed the board of governors. When RKO still delayed the film's release Welles offered to buy the film for $1 million and the studio finally agreed to release the film on May 1.",
"title": "Release"
},
{
"paragraph_id": 113,
"text": "Schaefer managed to book a few theaters willing to show the film. Hearst papers refused to accept advertising. RKO's publicity advertisements for the film erroneously promoted it as a love story.",
"title": "Release"
},
{
"paragraph_id": 114,
"text": "Kane opened at the RKO Palace Theatre on Broadway in New York on May 1, 1941, in Chicago on May 6, and in Los Angeles on May 8. Welles said that at the Chicago premiere that he attended the theater was almost empty.",
"title": "Release"
},
{
"paragraph_id": 115,
"text": "The day after the New York release, The New York Times said \"it comes close to being the most sensational film ever made in Hollywood\". The Washington Post called it \"one of the most important films in the history\" of filmmaking. The Washington Evening Star said Welles was a genius who created \"a superbly dramatic biography of another genius\" and \"a picture that is revolutionary\". The Chicago Tribune called the film interesting and different but \"its sacrifice of simplicity to eccentricity robs it of distinction and general entertainment value\". The Los Angeles Times gave the film a mixed review, saying it was brilliant and skillful at times with an ending that \"rather fizzled\".",
"title": "Release"
},
{
"paragraph_id": 116,
"text": "The film did well in cities and larger towns, but it fared poorly in more remote areas. RKO still had problems getting exhibitors to show the film. For example, one chain controlling more than 500 theaters got Welles's film as part of a package but refused to play it, reportedly out of fear of Hearst. Hearst's disruption of the film's release damaged its box office performance and, as a result, it lost $160,000 during its initial run. The film earned $23,878 during its first week in New York. By the ninth week it only made $7,279. Overall it lost money in New York, Boston, Chicago, Los Angeles, San Francisco and Washington, D.C., but made a profit in Seattle.",
"title": "Release"
},
{
"paragraph_id": 117,
"text": "Written and directed by Welles at Toland's suggestion, the theatrical trailer for Citizen Kane differs from other trailers in that it did not feature a single second of footage of the actual film itself, but acts as a wholly original, tongue-in-cheek, pseudo-documentary piece on the film's production. Filmed at the same time as Citizen Kane itself, it offers the only existing behind-the-scenes footage of the film. The trailer, shot by Wild instead of Toland, follows an unseen Welles as he provides narration for a tour around the film set, introductions to the film's core cast members, and a brief overview of Kane's character. The trailer also contains a number of trick shots, including one of Everett Sloane appearing at first to be running into the camera, which turns out to be the reflection of the camera in a mirror.",
"title": "Release"
},
{
"paragraph_id": 118,
"text": "At the time, it was almost unprecedented for a film trailer to not actually feature anything of the film itself; and while Citizen Kane is frequently cited as a groundbreaking, influential film, Simon Callow argues its trailer was no less original in its approach. Callow writes that it has \"great playful charm ... it is a miniature documentary, almost an introduction to the cinema ... Teasing, charming, completely original, it is a sort of conjuring trick: Without his face appearing once on the screen, Welles entirely dominates its five [sic] minutes' duration.\"",
"title": "Release"
},
{
"paragraph_id": 119,
"text": "Hearing about Citizen Kane enraged Hearst so much that he banned any advertising, reviewing, or mentioning of it in his papers, and had his journalists libel Welles. Welles used Hearst's opposition as a pretext for previewing the film in several opinion-making screenings in Los Angeles, lobbying for its artistic worth against the hostile campaign that Hearst was waging. A special press screening took place in early March. Henry Luce was in attendance and reportedly wanted to buy the film from RKO for $1 million to distribute it himself. The reviews for this screening were positive. A Hollywood Review headline read, \"Mr. Genius Comes Through; 'Kane' Astonishing Picture\". The Motion Picture Herald reported about the screening and Hearst's intention to sue RKO. Time magazine wrote that \"The objection of Mr. Hearst, who founded a publishing empire on sensationalism, is ironic. For to most of the several hundred people who have seen the film at private screenings, Citizen Kane is the most sensational product of the U.S. movie industry.\" A second press screening occurred in April.",
"title": "Release"
},
{
"paragraph_id": 120,
"text": "When Schaefer rejected Hearst's offer to suppress the film, Hearst banned every newspaper and station in his media conglomerate from reviewing—or even mentioning—the film. He also had many movie theaters ban it, and many did not show it through fear of being socially exposed by his massive newspaper empire. The Oscar-nominated documentary The Battle Over Citizen Kane lays the blame for the film's relative failure squarely at the feet of Hearst. The film did decent business at the box office; it went on to be the sixth highest grossing film in its year of release, a modest success its backers found acceptable. Nevertheless, the film's commercial performance fell short of its creators' expectations. Hearst's biographer David Nasaw points out that Hearst's actions were not the only reason Kane failed, however: the innovations Welles made with narrative, as well as the dark message at the heart of the film (that the pursuit of success is ultimately futile) meant that a popular audience could not appreciate its merits.",
"title": "Release"
},
{
"paragraph_id": 121,
"text": "Hearst's attacks against Welles went beyond attempting to suppress the film. Welles said that while he was on his post-filming lecture tour a police detective approached him at a restaurant and advised him not to go back to his hotel. A 14-year-old girl had reportedly been hidden in the closet of his room, and two photographers were waiting for him to walk in. Knowing he would be jailed after the resulting publicity, Welles did not return to the hotel but waited until the train left town the following morning. \"But that wasn't Hearst,\" Welles said, \"that was a hatchet man from the local Hearst paper who thought he would advance himself by doing it.\"",
"title": "Release"
},
{
"paragraph_id": 122,
"text": "In March 1941, Welles directed a Broadway version of Richard Wright's Native Son (and, for luck, used a \"Rosebud\" sled as a prop). Native Son received positive reviews, but Hearst-owned papers used the opportunity to attack Welles as a communist. The Hearst papers vociferously attacked Welles after his April 1941 radio play, \"His Honor, the Mayor\", produced for The Free Company radio series on CBS.",
"title": "Release"
},
{
"paragraph_id": 123,
"text": "Welles described his chance encounter with Hearst in an elevator at the Fairmont Hotel on the night Citizen Kane opened in San Francisco. Hearst and Welles's father were acquaintances, so Welles introduced himself and asked Hearst if he would like to come to the opening. Hearst did not respond. \"As he was getting off at his floor, I said, 'Charles Foster Kane would have accepted.' No reply\", recalled Welles. \"And Kane would have, you know. That was his style—just as he finished Jed Leland's bad review of Susan as an opera singer.\"",
"title": "Release"
},
{
"paragraph_id": 124,
"text": "In 1945, Hearst journalist Robert Shaw wrote that the film got \"a full tide of insensate fury\" from Hearst papers, \"then it ebbed suddenly. With one brain cell working, the chief realized that such hysterical barking by the trained seals would attract too much attention to the picture. But to this day the name of Orson Welles is on the official son-of-a-bitch list of every Hearst newspaper\".",
"title": "Release"
},
{
"paragraph_id": 125,
"text": "Despite Hearst's attempts to destroy the film, since 1941 references to his life and career have usually included a reference to Citizen Kane, such as the headline 'Son of Citizen Kane Dies' for the obituary of Hearst's son. In 2012, the Hearst estate agreed to screen the film at Hearst Castle in San Simeon, breaking Hearst's ban on the film.",
"title": "Release"
},
{
"paragraph_id": 126,
"text": "Citizen Kane received acclaim from several critics. New York Daily News critic Kate Cameron called it \"one of the most interesting and technically superior films that has ever come out of a Hollywood studio\". New York World-Telegram critic William Boehnel said that the film was \"staggering and belongs at once among the greatest screen achievements\". Time magazine wrote that \"it has found important new techniques in picture-making and story-telling.\" Life magazine's review said that \"few movies have ever come from Hollywood with such powerful narrative, such original technique, such exciting photography.\" John C. Mosher of The New Yorker called the film's style \"like fresh air\" and raved \"Something new has come to the movie world at last.\" Anthony Bower of The Nation called it \"brilliant\" and praised the cinematography and performances by Welles, Comingore and Cotten. John O'Hara's Newsweek review called it the best picture he'd ever seen and said Welles was \"the best actor in the history of acting.\" Welles called O'Hara's review \"the greatest review that anybody ever had.\"",
"title": "Release"
},
{
"paragraph_id": 127,
"text": "The day following the premiere of Citizen Kane, The New York Times critic Bosley Crowther wrote that \"... it comes close to being the most sensational film ever made in Hollywood.\"",
"title": "Release"
},
{
"paragraph_id": 128,
"text": "Count on Mr. Welles: he doesn't do things by halves. ... Upon the screen he discovered an area large enough for his expansive whims to have free play. And the consequence is that he has made a picture of tremendous and overpowering scope, not in physical extent so much as in its rapid and graphic rotation of thoughts. Mr. Welles has put upon the screen a motion picture that really moves.",
"title": "Release"
},
{
"paragraph_id": 129,
"text": "In the UK C. A. Lejeune of The Observer called it \"The most exciting film that has come out of Hollywood in twenty-five years\" and Dilys Powell of The Sunday Times said the film's style was made \"with the ease and boldness and resource of one who controls and is not controlled by his medium.\" Edward Tangye Lean of Horizon praised the film's technical style, calling it \"perhaps a decade ahead of its contemporaries.\"",
"title": "Release"
},
{
"paragraph_id": 130,
"text": "A few reviews were mixed. Otis Ferguson of The New Republic said it was \"the boldest free-hand stroke in major screen production since Griffith and Bitzer were running wild to unshackle the camera\", but also criticized its style, calling it a \"retrogression in film technique\" and stating that \"it holds no great place\" in film history. Ferguson reacted to some of the film's celebrated visual techniques by calling them \"just willful dabbling\" and \"the old shell game.\" In a rare film review, filmmaker Erich von Stroheim criticized the film's story and non-linear structure, but praised the technical style and performances, and wrote \"Whatever the truth may be about it, Citizen Kane is a great picture and will go down in screen history. More power to Welles!\"",
"title": "Release"
},
{
"paragraph_id": 131,
"text": "Some prominent critics wrote negative reviews. In his 1941 review for Sur, Jorge Luis Borges famously called the film \"a labyrinth with no center\" and predicted that its legacy would be a film \"whose historical value is undeniable but which no one cares to see again.\" The Argus Weekend Magazine critic Erle Cox called the film \"amazing\" but thought that Welles's break with Hollywood traditions was \"overdone\". Tatler's James Agate called it \"the well-intentioned, muddled, amateurish thing one expects from high-brows\" and \"a quite good film which tries to run the psychological essay in harness with your detective thriller, and doesn't quite succeed.\" Eileen Creelman of The New York Sun called it \"a cold picture, unemotional, a puzzle rather than a drama\". Other people who disliked the film were W. H. Auden and James Agee. After watching the film on January 29, 1942 Kenneth Williams, then aged 15, writing in his first diary curtly described it as \"boshey rot\".",
"title": "Release"
},
{
"paragraph_id": 132,
"text": "Modern critics have given Citizen Kane an even more positive response. Review aggregation website Rotten Tomatoes reports that 99% of 125 critics gave the film a positive review, with an average rating of 9.70/10. The site's critical consensus reads: \"Orson Welles's epic tale of a publishing tycoon's rise and fall is entertaining, poignant, and inventive in its storytelling, earning its reputation as a landmark achievement in film.\" In April 2021, it was noted that the addition of an 80-year-old negative review from the Chicago Tribune reduced the film's rating from 100% to 99% on the site; Citizen Kane held its 100% rating until early 2021. On Metacritic, however, the film still has a rare weighted average score of 100 out of 100 based on 19 critics, indicating \"universal acclaim\".",
"title": "Release"
},
{
"paragraph_id": 133,
"text": "It was widely believed the film would win most of its Academy Award nominations, but it received only the award for Best Original Screenplay. Variety reported that block voting by screen extras deprived Citizen Kane of Best Picture and Best Actor, and similar prejudices were likely to have been responsible for the film receiving no technical awards.",
"title": "Release"
},
{
"paragraph_id": 134,
"text": "Citizen Kane was the only film made under Welles's original contract with RKO Pictures, which gave him complete creative control. Welles's new business manager and attorney permitted the contract to lapse. In July 1941, Welles reluctantly signed a new and less favorable deal with RKO under which he produced and directed The Magnificent Ambersons (1942), produced Journey into Fear (1943), and began It's All True, a film he agreed to do without payment. In the new contract Welles was an employee of the studio and lost the right to final cut, which later allowed RKO to modify and re-cut The Magnificent Ambersons over his objections. In June 1942, Schaefer resigned the presidency of RKO Pictures and Welles's contract was terminated by his successor.",
"title": "Legacy"
},
{
"paragraph_id": 135,
"text": "During World War II, Citizen Kane was not seen in most European countries. It was shown in France for the first time on July 10, 1946, at the Marbeuf theater in Paris. Initially most French film critics were influenced by the negative reviews of Jean-Paul Sartre in 1945 and Georges Sadoul in 1946. At that time many French intellectuals and filmmakers shared Sartre's negative opinion that Hollywood filmmakers were uncultured. Sartre criticized the film's flashbacks for its nostalgic and romantic preoccupation with the past instead of the realities of the present and said that \"the whole film is based on a misconception of what cinema is all about. The film is in the past tense, whereas we all know that cinema has got to be in the present tense.\"",
"title": "Legacy"
},
{
"paragraph_id": 136,
"text": "André Bazin, a then little-known film critic working for Sartre's Les Temps modernes, was asked to give an impromptu speech about the film after a screening at the Colisée Theatre in the autumn of 1946 and changed the opinion of much of the audience. This speech led to Bazin's 1947 article \"The Technique of Citizen Kane\", which directly influenced public opinion about the film. Carringer wrote that Bazin was \"the one who did the most to enhance the film's reputation.\" Both Bazin's critique of the film and his theories about cinema itself centered around his strong belief in mise-en-scène. These theories were diametrically opposed to both the popular Soviet montage theory and the politically Marxist and anti-Hollywood beliefs of most French film critics at that time. Bazin believed that a film should depict reality without the filmmaker imposing their \"will\" on the spectator, which the Soviet theory supported. Bazin wrote that Citizen Kane's mise-en-scène created a \"new conception of filmmaking\" and that the freedom given to the audience from the deep focus shots was innovative by changing the entire concept of the cinematic image. Bazin wrote extensively about the mise-en-scène in the scene where Susan Alexander attempts suicide, which was one long take while other films would have used four or five shots in the scene. Bazin wrote that the film's mise-en-scène \"forces the spectator to participate in the meaning of the film\" and creates \"a psychological realism which brings the spectator back to the real conditions of perception.\"",
"title": "Legacy"
},
{
"paragraph_id": 137,
"text": "In his 1950 essay \"The Evolution of the Language of Cinema\", Bazin placed Citizen Kane center stage as a work which ushered in a new period in cinema. One of the first critics to defend motion pictures as being on the same artistic level as literature or painting, Bazin often used the film as an example of cinema as an art form and wrote that \"Welles has given the cinema a theoretical restoration. He has enriched his filmic repertory with new or forgotten effects that, in today's artistic context, take on a significance we didn't know they could have.\" Bazin also compared the film to Roberto Rossellini's Paisan for having \"the same aesthetic concept of realism\" and to the films of William Wyler shot by Toland (such as The Little Foxes and The Best Years of Our Lives), all of which used deep focus cinematography that Bazin called \"a dialectical step forward in film language.\"",
"title": "Legacy"
},
{
"paragraph_id": 138,
"text": "Bazin's praise of the film went beyond film theory and reflected his own philosophy towards life itself. His metaphysical interpretations about the film reflected humankind's place in the universe. Bazin believed that the film examined one person's identity and search for meaning. It portrayed the world as ambiguous and full of contradictions, whereas films up until then simply portrayed people's actions and motivations. Bazin's biographer Dudley Andrew wrote that:",
"title": "Legacy"
},
{
"paragraph_id": 139,
"text": "The world of Citizen Kane, that mysterious, dark, and infinitely deep world of space and memory where voices trail off into distant echoes and where meaning dissolves into interpretation, seemed to Bazin to mark the starting point from which all of us try to construct provisionally the sense of our lives.",
"title": "Legacy"
},
{
"paragraph_id": 140,
"text": "Bazin went on to co-found Cahiers du cinéma, whose contributors (including future film directors François Truffaut and Jean-Luc Godard) also praised the film. The popularity of Truffaut's auteur theory helped the film's and Welles's reputation.",
"title": "Legacy"
},
{
"paragraph_id": 141,
"text": "By 1942 Citizen Kane had run its course theatrically and, apart from a few showings at big city arthouse cinemas, it largely vanished and both the film's and Welles's reputation fell among American critics. In 1949 critic Richard Griffith in his overview of cinema, The Film Till Now, dismissed Citizen Kane as \"... tinpot if not crackpot Freud.\"",
"title": "Legacy"
},
{
"paragraph_id": 142,
"text": "In the United States, it was neglected and forgotten until its revival on television in the mid-to-late 1950s. Three key events in 1956 led to its re-evaluation in the United States: first, RKO was one of the first studios to sell its library to television, and early that year Citizen Kane started to appear on television; second, the film was re-released theatrically to coincide with Welles's return to the New York stage, where he played King Lear; and third, American film critic Andrew Sarris wrote \"Citizen Kane: The American Baroque\" for Film Culture, and described it as \"the great American film\" and \"the work that influenced the cinema more profoundly than any American film since The Birth of a Nation.\" Carringer considers Sarris's essay as the most important influence on the film's reputation in the US.",
"title": "Legacy"
},
{
"paragraph_id": 143,
"text": "During Expo 58, a poll of over 100 film historians named Kane one of the top ten greatest films ever made (the group gave first-place honors to Battleship Potemkin). When a group of young film directors announced their vote for the top six, they were booed for not including the film.",
"title": "Legacy"
},
{
"paragraph_id": 144,
"text": "In the decades since, its critical status as one of the greatest films ever made has grown, with numerous essays and books on it including Peter Cowie's The Cinema of Orson Welles, Ronald Gottesman's Focus on Citizen Kane, a collection of significant reviews and background pieces, and most notably Kael's essay, \"Raising Kane\", which promoted the value of the film to a much wider audience than it had reached before. Despite its criticism of Welles, it further popularized the notion of Citizen Kane as the great American film. The rise of art house and film society circuits also aided in the film's rediscovery. David Thomson said that the film 'grows with every year as America comes to resemble it.\"",
"title": "Legacy"
},
{
"paragraph_id": 145,
"text": "The British magazine Sight & Sound has produced a Top Ten list surveying film critics every decade since 1952, and is regarded as one of the most respected barometers of critical taste. Citizen Kane was a runner up to the top 10 in its 1952 poll but was voted as the greatest film ever made in its 1962 poll, retaining the top spot in every subsequent poll until 2012, when Vertigo displaced it.",
"title": "Legacy"
},
{
"paragraph_id": 146,
"text": "The film has also ranked number one in the following film \"best of\" lists: Julio Castedo's The 100 Best Films of the Century, Cahiers du cinéma's 100 films pour une cinémathèque idéale, Kinovedcheskie Zapiski, Time Out magazine's Top 100 Films (Centenary), The Village Voice's 100 Greatest Films, and The Royal Belgian Film Archive's Most Important and Misappreciated American Films.",
"title": "Legacy"
},
{
"paragraph_id": 147,
"text": "Roger Ebert called Citizen Kane the greatest film ever made: \"But people don't always ask about the greatest film. They ask, 'What's your favorite movie?' Again, I always answer with Citizen Kane.\"",
"title": "Legacy"
},
{
"paragraph_id": 148,
"text": "In 1998 Time Out conducted a reader's poll and Citizen Kane was voted 3rd best film of all time. On February 18, 1999, the United States Postal Service honored Citizen Kane by including it in its Celebrate the Century series. The film was honored again in February 25, 2003, in a series of U.S. postage stamps marking the 75th anniversary of the Academy of Motion Picture Arts and Sciences. Art director Perry Ferguson represents the behind-the-scenes craftsmen of filmmaking in the series; he is depicted completing a sketch for Citizen Kane.",
"title": "Legacy"
},
{
"paragraph_id": 149,
"text": "Citizen Kane was ranked number one in the American Film Institute's polls of film industry artists and leaders in 1998 and 2007. \"Rosebud\" was chosen as the 17th most memorable movie quotation in a 2005 AFI poll. The film's score was one of 250 nominees for the top 25 film scores in American cinema in another 2005 AFI poll. In 2005 the film was included on Time's All-Time 100 best movies list.",
"title": "Legacy"
},
{
"paragraph_id": 150,
"text": "In 2012, the Motion Picture Editors Guild published a list of the 75 best-edited films of all time based on a survey of its membership. Citizen Kane was listed second. In 2015, Citizen Kane ranked 1st on BBC's \"100 Greatest American Films\" list, voted on by film critics from around the world.",
"title": "Legacy"
},
{
"paragraph_id": 151,
"text": "Citizen Kane has been called the most influential film of all time. Richard Corliss has asserted that Jules Dassin's 1941 film The Tell-Tale Heart was the first example of its influence and the first pop culture reference to the film occurred later in 1941 when the spoof comedy Hellzapoppin' featured a \"Rosebud\" sled. The film's cinematography was almost immediately influential and in 1942 American Cinematographer wrote \"without a doubt the most immediately noticeable trend in cinematography methods during the year was the trend toward crisper definition and increased depth of field.\"",
"title": "Legacy"
},
{
"paragraph_id": 152,
"text": "The cinematography influenced John Huston's The Maltese Falcon. Cinematographer Arthur Edeson used a wider-angle lens than Toland and the film includes many long takes, low angles and shots of the ceiling, but it did not use deep focus shots on large sets to the extent that Citizen Kane did. Edeson and Toland are often credited together for revolutionizing cinematography in 1941. Toland's cinematography influenced his own work on The Best Years of Our Lives. Other films influenced include Gaslight, Mildred Pierce and Jane Eyre. Cinematographer Kazuo Miyagawa said that his use of deep focus was influenced by \"the camera work of Gregg Toland in Citizen Kane\" and not by traditional Japanese art.",
"title": "Legacy"
},
{
"paragraph_id": 153,
"text": "Its cinematography, lighting, and flashback structure influenced such film noirs of the 1940s and 1950s as The Killers, Keeper of the Flame, Caught, The Great Man and This Gun for Hire. David Bordwell and Kristin Thompson have written that \"For over a decade thereafter American films displayed exaggerated foregrounds and somber lighting, enhanced by long takes and exaggerated camera movements.\" However, by the 1960s filmmakers such as those from the French New Wave and Cinéma vérité movements favored \"flatter, more shallow images with softer focus\" and Citizen Kane's style became less fashionable. American filmmakers in the 1970s combined these two approaches by using long takes, rapid cutting, deep focus and telephoto shots all at once. Its use of long takes influenced films such as The Asphalt Jungle, and its use of deep focus cinematography influenced Gun Crazy, The Whip Hand, The Devil's General and Justice Is Done. The flashback structure in which different characters have conflicting versions of past events influenced La commare secca and Man of Marble.",
"title": "Legacy"
},
{
"paragraph_id": 154,
"text": "The film's structure influenced the biographical films Lawrence of Arabia and Mishima: A Life in Four Chapters—which begin with the subject's death and show their life in flashbacks—as well as Welles's thriller Mr. Arkadin. Rosenbaum sees similarities in the film's plot to Mr. Arkadin, as well as the theme of nostalgia for loss of innocence throughout Welles's career, beginning with Citizen Kane and including The Magnificent Ambersons, Mr. Arkadin and Chimes at Midnight. Rosenbaum also points out how the film influenced Warren Beatty's Reds. The film depicts the life of Jack Reed through the eyes of Louise Bryant, much as Kane's life is seen through the eyes of Thompson and the people who he interviews. Rosenbaum also compared the romantic montage between Reed and Bryant with the breakfast table montage in Citizen Kane.",
"title": "Legacy"
},
{
"paragraph_id": 155,
"text": "Akira Kurosawa's Rashomon is often compared to the film due to both having complicated plot structures told by multiple characters in the film. Welles said his initial idea for the film was \"Basically, the idea Rashomon used later on,\" however Kurosawa had not yet seen the film before making Rashomon in 1950. Nigel Andrews has compared the film's complex plot structure to Rashomon, Last Year at Marienbad, Memento and Magnolia. Andrews also compares Charles Foster Kane to Michael Corleone in The Godfather, Jake LaMotta in Raging Bull and Daniel Plainview in There Will Be Blood for their portrayals of \"haunted megalomaniac[s], presiding over the shards of [their] own [lives].\"",
"title": "Legacy"
},
{
"paragraph_id": 156,
"text": "The films of Paul Thomas Anderson have been compared to it. Variety compared There Will Be Blood to the film and called it \"one that rivals Giant and Citizen Kane in our popular lore as origin stories about how we came to be the people we are.\" The Master has been called \"movieland's only spiritual sequel to Citizen Kane that doesn't shrivel under the hefty comparison\". The Social Network has been compared to the film for its depiction of a media mogul and by the character Erica Albright being similar to \"Rosebud\". The controversy of the Sony hacking before the release of The Interview brought comparisons of Hearst's attempt to suppress the film. The film's plot structure and some specific shots influenced Todd Haynes's Velvet Goldmine. Abbas Kiarostami's The Traveler has been called \"the Citizen Kane of the Iranian children's cinema.\" The film's use of overlapping dialogue has influenced the films of Robert Altman and Carol Reed. Reed's films Odd Man Out, The Third Man (in which Welles and Cotten appeared) and Outcast of the Islands were also influenced by the film's cinematography.",
"title": "Legacy"
},
{
"paragraph_id": 157,
"text": "Many directors have listed it as one of the greatest films ever made, including Woody Allen, Michael Apted, Les Blank, Kenneth Branagh, Paul Greengrass, Satyajit Ray, Michel Hazanavicius, Michael Mann, Sam Mendes, Jiří Menzel, Paul Schrader, Martin Scorsese, Denys Arcand, Gillian Armstrong, John Boorman, Roger Corman, Alex Cox, Miloš Forman, Norman Jewison, Richard Lester, Richard Linklater, Paul Mazursky, Ronald Neame, Sydney Pollack and Stanley Kubrick. Yasujirō Ozu said it was his favorite non-Japanese film and was impressed by its techniques. François Truffaut said that the film \"has inspired more vocations to cinema throughout the world than any other\" and recognized its influence in The Barefoot Contessa, Les Mauvaises Rencontres, Lola Montès, and 8 1/2. Truffaut's Day for Night pays tribute to the film in a dream sequence depicting a childhood memory of the character played by Truffaut stealing publicity photos from the film. Numerous film directors have cited the film as influential on their own films, including Theo Angelopoulos, Luc Besson, the Coen brothers, Francis Ford Coppola, Brian De Palma, John Frankenheimer, Stephen Frears, Sergio Leone, Michael Mann, Ridley Scott, Martin Scorsese, Bryan Singer and Steven Spielberg. Ingmar Bergman disliked the film and called it \"a total bore. Above all, the performances are worthless. The amount of respect that movie has is absolutely unbelievable!\"",
"title": "Legacy"
},
{
"paragraph_id": 158,
"text": "William Friedkin said that the film influenced him and called it \"a veritable quarry for filmmakers, just as Joyce's Ulysses is a quarry for writers.\" The film has also influenced other art forms. Carlos Fuentes's novel The Death of Artemio Cruz was partially inspired by the film and the rock band The White Stripes paid unauthorized tribute to the film in the song \"The Union Forever\".",
"title": "Legacy"
},
{
"paragraph_id": 159,
"text": "In 1982, film director Steven Spielberg bought a \"Rosebud\" sled for $60,500; it was one of three balsa sleds used in the closing scenes and the only one that was not burned. Spielberg eventually donated the sled to the Academy Museum of Motion Pictures as he stated he felt it belonged in a museum. After the Spielberg purchase, it was reported that retiree Arthur Bauer claimed to own another \"Rosebud\" sled. In early 1942, when Bauer was 12, he had won an RKO publicity contest and selected the hardwood sled as his prize. In 1996, Bauer's estate offered the painted pine sled at auction through Christie's. Bauer's son told CBS News that his mother had once wanted to paint the sled and use it as a plant stand, but Bauer told her to \"just save it and put it in the closet.\" The sled was sold to an anonymous bidder for $233,500.",
"title": "Legacy"
},
{
"paragraph_id": 160,
"text": "Welles's Oscar for Best Original Screenplay was believed to be lost until it was rediscovered in 1994. It was withdrawn from a 2007 auction at Sotheby's when bidding failed to reach its estimate of $800,000 to $1.2 million. Owned by the charitable Dax Foundation, it was auctioned for $861,542 in 2011 to an anonymous buyer. Mankiewicz's Oscar was sold at least twice, in 1999 and again in 2012, the latest price being $588,455.",
"title": "Legacy"
},
{
"paragraph_id": 161,
"text": "In 1989, Mankiewicz's personal copy of the Citizen Kane script was auctioned at Christie's. The leather-bound volume included the final shooting script and a carbon copy of American that bore handwritten annotations—purportedly made by Hearst's lawyers, who were said to have obtained it in the manner described by Kael in \"Raising Kane\". Estimated to bring $70,000 to $90,000, it sold for a record $231,000.",
"title": "Legacy"
},
{
"paragraph_id": 162,
"text": "In 2007, Welles's personal copy of the last revised draft of Citizen Kane before the shooting script was sold at Sotheby's for $97,000. A second draft of the script titled American, marked \"Mr. Welles' working copy\", was auctioned by Sotheby's in 2014 for $164,692. A collection of 24 pages from a working script found in Welles's personal possessions by his daughter Beatrice Welles was auctioned in 2014 for $15,000.",
"title": "Legacy"
},
{
"paragraph_id": 163,
"text": "In 2014, a collection of approximately 235 Citizen Kane stills and production photos that had belonged to Welles was sold at auction for $7,812.",
"title": "Legacy"
},
{
"paragraph_id": 164,
"text": "The composited camera negative of Citizen Kane is believed to be lost forever. The most commonly-reported explanation is that it was destroyed in a New Jersey film laboratory fire in the 1970s. However, in 2021, Nicolas Falacci revealed that he had been told \"the real story\" by a colleague, when he was one of two employees in the film restoration lab which assembled the 1991 \"restoration\" from the best available elements. Falacci noted that throughout the process he had daily visits in 1990-1 from an unnamed \"older RKO executive showing up every day – nervous and sweating\". According to Falacci's colleague, this elderly man was keen to cover up a clerical error he had made decades earlier when in charge of the studio's inventory, which had resulted in the original camera negatives being sent to a silver reclamation plant, destroying the nitrate film to extract its valuable silver content. Falacci's account is impossible to verify, but it would have been fully in keeping with industry standard practice for many decades, which was to destroy prints and negatives of countless older films deemed non-commercially viable, to extract the silver.",
"title": "Rights and home media"
},
{
"paragraph_id": 165,
"text": "Subsequent prints were derived from a master positive (a fine-grain preservation element) made in the 1940s and originally intended for use in overseas distribution. Modern techniques were used to produce a pristine print for a 50th Anniversary theatrical reissue in 1991 which Paramount Pictures released for then-owner Turner Broadcasting System, which earned $1.6 million in North America and $1.8 million worldwide.",
"title": "Rights and home media"
},
{
"paragraph_id": 166,
"text": "In 1955, RKO sold the American television rights to its film library, including Citizen Kane, to C&C Television Corp. In 1960, television rights to the pre-1959 RKO's live-action library were acquired by United Artists. RKO kept the non-broadcast television rights to its library.",
"title": "Rights and home media"
},
{
"paragraph_id": 167,
"text": "In 1976, when home video was in its infancy, entrepreneur Snuff Garrett bought cassette rights to the RKO library for what United Press International termed \"a pittance\". In 1978 The Nostalgia Merchant released the film through Media Home Entertainment. By 1980 the 800-title library of The Nostalgia Merchant was earning $2.3 million a year. \"Nobody wanted cassettes four years ago,\" Garrett told UPI. \"It wasn't the first time people called me crazy. It was a hobby with me which became big business.\" RKO Home Video released the film on VHS and Betamax in 1985.",
"title": "Rights and home media"
},
{
"paragraph_id": 168,
"text": "On December 3, 1984, The Criterion Collection released the film as its first LaserDisc. It was made from a fine grain master positive provided by the UCLA Film and Television Archive. When told about the then-new concept of having an audio commentary on the disc, Welles was skeptical but said \"theoretically, that's good for teaching movies, so long as they don't talk nonsense.\" In 1992 Criterion released a new 50th Anniversary Edition LaserDisc. This version had an improved transfer and additional special features, including the documentary The Legacy of Citizen Kane and Welles's early short The Hearts of Age.",
"title": "Rights and home media"
},
{
"paragraph_id": 169,
"text": "Turner Broadcasting System acquired broadcast television rights to the RKO library in 1986 and the full worldwide rights to the library in 1987. The RKO Home Video unit was reorganized into Turner Home Entertainment that year. In 1991 Turner released a 50th Anniversary Edition on VHS and as a collector's edition that includes the film, the documentary Reflections On Citizen Kane, Harlan Lebo's 50th anniversary album, a poster and a copy of the original script. In 1996, Time Warner acquired Turner and Warner Home Video absorbed Turner Home Entertainment. In 2011, Warner Bros. Discovery's Warner Bros. unit had distribution rights for the film.",
"title": "Rights and home media"
},
{
"paragraph_id": 170,
"text": "In 2001, Warner Home Video released a 60th Anniversary Collectors Edition DVD. The two-disc DVD included feature-length commentaries by Roger Ebert and Peter Bogdanovich, as well as a second DVD with the feature length documentary The Battle Over Citizen Kane (1999). It was simultaneously released on VHS. The DVD was criticized for being \"too bright, too clean; the dirt and grime had been cleared away, but so had a good deal of the texture, the depth, and the sense of film grain.\"",
"title": "Rights and home media"
},
{
"paragraph_id": 171,
"text": "In 2003, Welles's daughter Beatrice Welles sued Turner Entertainment, claiming the Welles estate is the legal copyright holder of the film. She claimed that Welles's deal to terminate his contracts with RKO meant that Turner's copyright of the film was null and void. She also claimed that the estate of Orson Welles was owed 20% of the film's profits if her copyright claim was not upheld. In 2007 she was allowed to proceed with the lawsuit, overturning the 2004 decision in favor of Turner Entertainment on the issue of video rights.",
"title": "Rights and home media"
},
{
"paragraph_id": 172,
"text": "In 2011, it was released on Blu-ray and DVD in a 70th Anniversary Edition. The San Francisco Chronicle called it \"the Blu-ray release of the year.\" Supplements included everything available on the 2001 Warner Home Video release, including The Battle Over Citizen Kane DVD. A 70th Anniversary Ultimate Collector's Edition added a third DVD with RKO 281 (1999), an award winning TV movie about the making of the film. Its packaging extras included a hardcover book and a folio containing mini reproductions of the original souvenir program, lobby cards, and production memos and correspondence. The transfer for the US releases were scanned as 4K resolution from three different 35mm prints and rectified the quality issues of the 2001 DVD. The rest of the world continued to receive home video releases based on the older transfer. This was partially rectified in 2016 with the release of the 75th Anniversary Edition in both the UK and US, which was a straight repackaging of the main disc from the 70th Anniversary Edition.",
"title": "Rights and home media"
},
{
"paragraph_id": 173,
"text": "On August 11, 2021 Criterion announced their first 4K Ultra HD releases, a six-film slate, would include Citizen Kane. Criterion indicated each title was to be available in a combo pack including a 4K UHD disc of the feature film as well as the film and special features on the companion Blu-rays. Citizen Kane was released on November 23, 2021 by the collection as a 4K and 3 Blu-ray disc package. However, the release was recalled because at the half-hour mark on the regular blu-ray, the contrast fell sharply, which resulted in a much darker image compared to what was supposed to occur. However this issue does not apply to the 4K version itself.",
"title": "Rights and home media"
},
{
"paragraph_id": 174,
"text": "In the 1980s, Citizen Kane became a catalyst in the controversy over the colorization of black-and-white films. One proponent of film colorization was Ted Turner, whose Turner Entertainment Company owned the RKO library. A Turner Entertainment spokesperson initially stated that Citizen Kane would not be colorized, but in July 1988 Turner said, \"Citizen Kane? I'm thinking of colorizing it.\" In early 1989 it was reported that two companies were producing color tests for Turner Entertainment. Criticism increased when filmmaker Henry Jaglom stated that shortly before his death Welles had implored him \"don't let Ted Turner deface my movie with his crayons.\"",
"title": "Rights and home media"
},
{
"paragraph_id": 175,
"text": "In February 1989, Turner Entertainment President Roger Mayer announced that work to colorize the film had been stopped due to provisions in Welles's 1939 contract with RKO that \"could be read to prohibit colorization without permission of the Welles estate.\" Mayer added that Welles's contract was \"quite unusual\" and \"other contracts we have checked out are not like this at all.\" Turner had only colorized the final reel of the film before abandoning the project. In 1991 one minute of the colorized test footage was included in the BBC Arena documentary The Complete Citizen Kane.",
"title": "Rights and home media"
},
{
"paragraph_id": 176,
"text": "The colorization controversy was a factor in the passage of the National Film Preservation Act in 1988 which created the National Film Registry the following year. ABC News anchor Peter Jennings reported that \"one major reason for doing this is to require people like the broadcaster Ted Turner, who's been adding color to some movies and re-editing others for television, to put notices on those versions saying that the movies have been altered\".",
"title": "Rights and home media"
}
] | Citizen Kane is a 1941 American drama film directed by, produced by, and starring Orson Welles. Welles and Herman J. Mankiewicz wrote the screenplay. The picture was Welles' first feature film. Citizen Kane is frequently cited as the greatest film ever made. For 50 consecutive years, it stood at number 1 in the British Film Institute's Sight & Sound decennial poll of critics, and it topped the American Film Institute's 100 Years ... 100 Movies list in 1998, as well as its 2007 update. The film was nominated for Academy Awards in nine categories and it won for Best Writing by Mankiewicz and Welles. Citizen Kane is praised for Gregg Toland's cinematography, Robert Wise's editing, Bernard Herrmann's music, and its narrative structure, all of which have been considered innovative and precedent-setting. The quasi-biographical film examines the life and legacy of Charles Foster Kane, played by Welles, a composite character based on American media barons William Randolph Hearst and Joseph Pulitzer, Chicago tycoons Samuel Insull and Harold McCormick, as well as aspects of the screenwriters' own lives. Upon its release, Hearst prohibited any mention of the film in his newspapers. After the Broadway success of Welles's Mercury Theatre and the controversial 1938 radio broadcast "The War of the Worlds" on The Mercury Theatre on the Air, Welles was courted by Hollywood. He signed a contract with RKO Pictures in 1939. Although it was unusual for an untried director, he was given freedom to develop his own story, to use his own cast and crew, and to have final cut privilege. Following two abortive attempts to get a project off the ground, he wrote the screenplay for Citizen Kane, collaborating with Herman J. Mankiewicz. Principal photography took place in 1940, the same year its innovative trailer was shown, and the film was released in 1941. Although it was a critical success, Citizen Kane failed to recoup its costs at the box office. The film faded from view after its release, but it returned to public attention when it was praised by French critics such as André Bazin and re-released in 1956. In 1958, the film was voted number 9 on the prestigious Brussels 12 list at the 1958 World Expo. Citizen Kane was selected by the Library of Congress as an inductee of the 1989 inaugural group of 25 films for preservation in the United States National Film Registry for being "culturally, historically, or aesthetically significant".
Roger Ebert wrote of it: "Its surface is as much fun as any movie ever made. Its depths surpass understanding. I have analyzed it a shot at a time with more than 30 groups, and together we have seen, I believe, pretty much everything that is there on the screen. The more clearly I can see its physical manifestation, the more I am stirred by its mystery." | 2001-08-31T17:53:47Z | 2023-12-26T00:27:20Z | [
"Template:For",
"Template:ISBN",
"Template:Cite AV media",
"Template:Cite journal",
"Template:Citizen Kane",
"Template:Orson Welles",
"Template:Authority control",
"Template:Short description",
"Template:Infobox film",
"Template:TOC limit",
"Template:Main",
"Template:Won",
"Template:Official website",
"Template:Portal bar",
"Template:Use American English",
"Template:Nom",
"Template:US$",
"Template:Div col",
"Template:Div col end",
"Template:Cite web",
"Template:Efn",
"Template:Em",
"Template:Cite book",
"Template:Cite video",
"Template:Cite tweet",
"Template:IMDb title",
"Template:Rp",
"Template:R",
"Template:Cite news",
"Template:Cite magazine",
"Template:Navboxes",
"Template:'",
"Template:Sister project links",
"Template:Metacritic film",
"Template:TCMDb title",
"Template:Nowrap",
"Template:Use mdy dates",
"Template:Clarify",
"Template:Notelist",
"Template:Reflist",
"Template:Open access",
"Template:AllMovie title",
"Template:Rotten Tomatoes",
"Template:Good article",
"Template:AFI film",
"Template:Multiple image"
] | https://en.wikipedia.org/wiki/Citizen_Kane |
5,225 | Code | In communications and information processing, code is a system of rules to convert information—such as a letter, word, sound, image, or gesture—into another form, sometimes shortened or secret, for communication through a communication channel or storage in a storage medium. An early example is an invention of language, which enabled a person, through speech, to communicate what they thought, saw, heard, or felt to others. But speech limits the range of communication to the distance a voice can carry and limits the audience to those present when the speech is uttered. The invention of writing, which converted spoken language into visual symbols, extended the range of communication across space and time.
The process of encoding converts information from a source into symbols for communication or storage. Decoding is the reverse process, converting code symbols back into a form that the recipient understands, such as English or Spanish.
One reason for coding is to enable communication in places where ordinary plain language, spoken or written, is difficult or impossible. For example, semaphore, where the configuration of flags held by a signaler or the arms of a semaphore tower encodes parts of the message, typically individual letters and numbers. Another person standing a great distance away can interpret the flags and reproduce the words sent.
In information theory and computer science, a code is usually considered as an algorithm that uniquely represents symbols from some source alphabet, by encoded strings, which may be in some other target alphabet. An extension of the code for representing sequences of symbols over the source alphabet is obtained by concatenating the encoded strings.
Before giving a mathematically precise definition, this is a brief example. The mapping

C = {a ↦ 0, b ↦ 01, c ↦ 011}

is a code, whose source alphabet is the set {a, b, c} and whose target alphabet is the set {0, 1}. Using the extension of the code, the encoded string 0011001 can be grouped into codewords as 0 011 0 01, and these in turn can be decoded to the sequence of source symbols acab.
Using terms from formal language theory, the precise mathematical definition of this concept is as follows: let S and T be two finite sets, called the source and target alphabets, respectively. A code C : S → T* is a total function mapping each symbol from S to a sequence of symbols over T. The extension C′ of C is a homomorphism of S* into T*, which naturally maps each sequence of source symbols to a sequence of target symbols.
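To make the definition concrete, here is a minimal sketch (not part of the original article) in Python of the example code above, with its extension by concatenation and a small backtracking decoder; the function names and the backtracking strategy are choices made for this illustration, not a standard API.

```python
# A code as a mapping from source symbols to target strings; its extension
# encodes a message by concatenating codewords. The decoder recovers a
# source sequence by trying codewords and backtracking, which suffices
# for this small example.
CODE = {"a": "0", "b": "01", "c": "011"}

def encode(message):
    """Extension of the code: concatenate the codeword of each symbol."""
    return "".join(CODE[symbol] for symbol in message)

def decode(encoded):
    """Return one source sequence whose encoding is `encoded`, or None."""
    if encoded == "":
        return ""
    for symbol, codeword in CODE.items():
        if encoded.startswith(codeword):
            rest = decode(encoded[len(codeword):])
            if rest is not None:
                return symbol + rest
    return None

print(encode("acab"))     # 0011001
print(decode("0011001"))  # acab
```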
In this section, we consider codes that encode each source (clear text) character by a code word from some dictionary, and concatenation of such code words give us an encoded string. Variable-length codes are especially useful when clear text characters have different probabilities; see also entropy encoding.
A prefix code is a code with the "prefix property": there is no valid code word in the system that is a prefix (start) of any other valid code word in the set. Huffman coding is the most known algorithm for deriving prefix codes. Prefix codes are widely referred to as "Huffman codes" even when the code was not produced by a Huffman algorithm. Other examples of prefix codes are country calling codes, the country and publisher parts of ISBNs, and the Secondary Synchronization Codes used in the UMTS WCDMA 3G Wireless Standard.
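As a rough sketch of how such a prefix code can be derived, the following Python fragment implements the core of Huffman's algorithm; the toy letter frequencies are invented for the example, and real implementations usually build an explicit tree rather than carrying partial codebooks as done here.

```python
import heapq

def huffman_code(freqs):
    """Build a prefix code from a {symbol: weight} dict by repeatedly
    merging the two lightest subtrees (kept here as partial codebooks)."""
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:               # degenerate single-symbol case
        return {sym: "0" for sym in freqs}
    tie = len(heap)                  # tie-breaker so dicts are never compared
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        tie += 1
        heapq.heappush(heap, (w1 + w2, tie, merged))
    return heap[0][2]

print(huffman_code({"e": 45, "t": 13, "a": 12, "o": 8, "i": 7}))
# e.g. {'i': '000', 'o': '001', 'a': '010', 't': '011', 'e': '1'}
```

Note how the most frequent symbol receives the shortest codeword, and no codeword is a prefix of another.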
Kraft's inequality characterizes the sets of codeword lengths that are possible in a prefix code. Virtually any uniquely decodable one-to-many code, not necessarily a prefix one, must satisfy Kraft's inequality.
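For reference (the statement is not quoted in the paragraph above), Kraft's inequality for codeword lengths l_1, …, l_n over a target alphabet of size r can be written as:

```latex
% Kraft's inequality (r = 2 for binary codes)
\sum_{i=1}^{n} r^{-\ell_i} \le 1
```

For example, binary lengths 1, 2 and 3 give 1/2 + 1/4 + 1/8 = 7/8 ≤ 1, so a prefix code with exactly those lengths exists, such as {0, 10, 110}.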
Codes may also be used to represent data in a way more resistant to errors in transmission or storage. This so-called error-correcting code works by including carefully crafted redundancy with the stored (or transmitted) data. Examples include Hamming codes, Reed–Solomon, Reed–Muller, Walsh–Hadamard, Bose–Chaudhuri–Hochquenghem, Turbo, Golay, algebraic geometry codes, low-density parity-check codes, and space–time codes. Error detecting codes can be optimised to detect burst errors, or random errors.
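As a deliberately tiny illustration of the redundancy idea (far simpler than the codes named above, and not drawn from the article), a 3-fold repetition code in Python stores each bit three times and decodes by majority vote, correcting any single flipped bit per group:

```python
def encode_repetition(bits):
    """Repeat every bit three times."""
    return [b for b in bits for _ in range(3)]

def decode_repetition(coded):
    """Majority vote over each group of three received bits."""
    groups = [coded[i:i + 3] for i in range(0, len(coded), 3)]
    return [1 if sum(g) >= 2 else 0 for g in groups]

sent = encode_repetition([1, 0, 1])   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
sent[4] = 1                           # simulate one transmission error
print(decode_repetition(sent))        # [1, 0, 1] -- the error is corrected
```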
A cable code replaces words (e.g. ship or invoice) with shorter words, allowing the same information to be sent with fewer characters, more quickly, and less expensively.
Codes can be used for brevity. When telegraph messages were the state of the art in rapid long-distance communication, elaborate systems of commercial codes that encoded complete phrases into single words (commonly five-letter groups) were developed, so that telegraphers became conversant with such "words" as BYOXO ("Are you trying to weasel out of our deal?"), LIOUY ("Why do you not answer my question?"), BMULD ("You're a skunk!"), or AYYLU ("Not clearly coded, repeat more clearly."). Code words were chosen for various reasons: length, pronounceability, etc. Meanings were chosen to fit perceived needs: commercial negotiations, military terms for military codes, diplomatic terms for diplomatic codes, any and all of the preceding for espionage codes. Codebooks and codebook publishers proliferated, including one run as a front for the American Black Chamber run by Herbert Yardley between the First and Second World Wars. The purpose of most of these codes was to save on cable costs. The use of data coding for data compression predates the computer era; an early example is the telegraph Morse code where more-frequently used characters have shorter representations. Techniques such as Huffman coding are now used by computer-based algorithms to compress large data files into a more compact form for storage or transmission.
Character encodings are representations of textual data. A given character encoding may be associated with a specific character set (the collection of characters which it can represent), though some character sets have multiple character encodings and vice versa. Character encodings may be broadly grouped according to the number of bytes required to represent a single character: there are single-byte encodings, multibyte (also called wide) encodings, and variable-width (also called variable-length) encodings. The earliest character encodings were single-byte, the best-known example of which is ASCII. ASCII remains in use today, for example in HTTP headers. However, single-byte encodings cannot model character sets with more than 256 characters. Scripts that require large character sets such as Chinese, Japanese and Korean must be represented with multibyte encodings. Early multibyte encodings were fixed-length, meaning that although each character was represented by more than one byte, all characters used the same number of bytes ("word length"), making them suitable for decoding with a lookup table. The final group, variable-width encodings, is a subset of multibyte encodings. These use more complex encoding and decoding logic to efficiently represent large character sets while keeping the representations of more commonly used characters shorter or maintaining backward compatibility properties. This group includes UTF-8, an encoding of the Unicode character set; UTF-8 is the most common encoding of text media on the Internet.
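The variable-width behaviour is easy to observe with Python's built-in UTF-8 support (the particular characters below are just examples): ASCII characters occupy one byte, while characters from other scripts take two to four bytes.

```python
for ch in ["A", "é", "€", "中"]:
    data = ch.encode("utf-8")            # bytes of the UTF-8 encoding
    print(ch, len(data), data.hex(" "))  # character, byte count, hex bytes
# A 1 41
# é 2 c3 a9
# € 3 e2 82 ac
# 中 3 e4 b8 ad
```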
Biological organisms contain genetic material that is used to control their function and development. This is DNA, which contains units named genes from which messenger RNA is derived. This in turn produces proteins through a genetic code in which a series of triplets (codons) of four possible nucleotides can be translated into one of twenty possible amino acids. A sequence of codons results in a corresponding sequence of amino acids that form a protein molecule; a type of codon called a stop codon signals the end of the sequence.
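The codon-to-amino-acid mapping can be made concrete with a small translation sketch. The table below contains only a handful of the 64 codons of the standard genetic code, just enough to translate the sample sequence, so it is illustrative rather than complete.

```python
# Tiny excerpt of the standard genetic code: RNA codons (triplets over A, U, G, C)
# map to amino acids; UAA, UAG and UGA are stop codons that end translation.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UGG": "Trp", "GAU": "Asp", "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):           # read the sequence codon by codon
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":                   # a stop codon signals the end of the sequence
            break
        protein.append(amino_acid)
    return "-".join(protein)

if __name__ == "__main__":
    print(translate("AUGUUUGGCAAAUAA"))            # Met-Phe-Gly-Lys (stops at UAA)
```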
In mathematics, a Gödel code was the basis for the proof of Gödel's incompleteness theorem. Here, the idea was to map mathematical notation to a natural number (using a Gödel numbering).
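A toy version of Gödel numbering can be sketched by giving each symbol a number and packing a symbol sequence into one natural number as a product of prime powers; unique prime factorization makes the encoding reversible. The symbol assignment below is arbitrary and chosen only for illustration.

```python
# Toy Gödel numbering: a sequence of symbols with numbers s1, ..., sn is encoded
# as 2**s1 * 3**s2 * 5**s3 * ...; unique prime factorization makes it reversible.

SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}   # arbitrary toy assignment
NUMBERS = {v: k for k, v in SYMBOLS.items()}

def first_primes(n):
    """Return the first n primes by simple trial division (fine at toy sizes)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula):
    g = 1
    for p, ch in zip(first_primes(len(formula)), formula):
        g *= p ** SYMBOLS[ch]
    return g

def godel_decode(g):
    formula = []
    for p in first_primes(64):          # more primes than any toy formula needs
        if g == 1:
            break
        exponent = 0
        while g % p == 0:
            g //= p
            exponent += 1
        formula.append(NUMBERS[exponent])
    return "".join(formula)

if __name__ == "__main__":
    n = godel_number("S0=0+S0")
    print(n, "->", godel_decode(n))
```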
There are codes that use colors, such as traffic lights, the color code used to mark the nominal values of electrical resistors, or the color-coding of trashcans devoted to specific types of garbage (paper, glass, organic, etc.).
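As one concrete case, the four-band resistor color code maps the first two bands to digits and the third to a power-of-ten multiplier; a minimal lookup sketch follows (the tolerance band is omitted for brevity).

```python
# Four-band resistor color code: the first two bands give digits, the third a
# power-of-ten multiplier (the tolerance band is ignored in this sketch).
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}

def resistance_ohms(band1, band2, multiplier):
    return (10 * DIGITS[band1] + DIGITS[band2]) * 10 ** DIGITS[multiplier]

if __name__ == "__main__":
    # yellow-violet-red decodes to 47 x 10^2 = 4700 ohms (4.7 kΩ)
    print(resistance_ohms("yellow", "violet", "red"), "ohms")
```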
In marketing, coupon codes can be used for a financial discount or rebate when purchasing a product from a (usually online) retailer.
In military environments, specific cornet calls are used for different purposes: to mark certain moments of the day, to command the infantry on the battlefield, etc.
Communication systems for sensory impairments, such as sign language for deaf people and braille for blind people, are based on movement or tactile codes.
Musical scores are the most common way to encode music.
Specific games have their own code systems to record the matches, e.g. chess notation.
In the history of cryptography, codes were once common for ensuring the confidentiality of communications, although ciphers are now used instead.
Secret codes intended to obscure the real messages, ranging from serious (mainly espionage in military, diplomacy, business, etc.) to trivial (romance, games) can be any kind of imaginative encoding: flowers, game cards, clothes, fans, hats, melodies, birds, etc., in which the sole requirement is the pre-agreement on the meaning by both the sender and the receiver.
Other examples of encoding include:
Other examples of decoding include:
Acronyms and abbreviations can be considered codes, and in a sense, all languages and writing systems are codes for human thought.
International Air Transport Association airport codes are three-letter codes used to designate airports and used for bag tags. Station codes are similarly used on railways but are usually national, so the same code can be used for different stations if they are in different countries.
Occasionally, a code word achieves an independent existence (and meaning) while the original equivalent phrase is forgotten or at least no longer has the precise meaning attributed to the code word. For example, '30' was widely used in journalism to mean "end of story", and has been used in other contexts to signify "the end". | [
{
"paragraph_id": 0,
"text": "In communications and information processing, code is a system of rules to convert information—such as a letter, word, sound, image, or gesture—into another form, sometimes shortened or secret, for communication through a communication channel or storage in a storage medium. An early example is an invention of language, which enabled a person, through speech, to communicate what they thought, saw, heard, or felt to others. But speech limits the range of communication to the distance a voice can carry and limits the audience to those present when the speech is uttered. The invention of writing, which converted spoken language into visual symbols, extended the range of communication across space and time.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The process of encoding converts information from a source into symbols for communication or storage. Decoding is the reverse process, converting code symbols back into a form that the recipient understands, such as English or/and Spanish.",
"title": ""
},
{
"paragraph_id": 2,
"text": "One reason for coding is to enable communication in places where ordinary plain language, spoken or written, is difficult or impossible. For example, semaphore, where the configuration of flags held by a signaler or the arms of a semaphore tower encodes parts of the message, typically individual letters, and numbers. Another person standing a great distance away can interpret the flags and reproduce the words sent.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In information theory and computer science, a code is usually considered as an algorithm that uniquely represents symbols from some source alphabet, by encoded strings, which may be in some other target alphabet. An extension of the code for representing sequences of symbols over the source alphabet is obtained by concatenating the encoded strings.",
"title": "Theory"
},
{
"paragraph_id": 4,
"text": "Before giving a mathematically precise definition, this is a brief example. The mapping",
"title": "Theory"
},
{
"paragraph_id": 5,
"text": "is a code, whose source alphabet is the set { a , b , c } {\\displaystyle \\{a,b,c\\}} and whose target alphabet is the set { 0 , 1 } {\\displaystyle \\{0,1\\}} . Using the extension of the code, the encoded string 0011001 can be grouped into codewords as 0 011 0 01, and these in turn can be decoded to the sequence of source symbols acab.",
"title": "Theory"
},
{
"paragraph_id": 6,
"text": "Using terms from formal language theory, the precise mathematical definition of this concept is as follows: let S and T be two finite sets, called the source and target alphabets, respectively. A code C : S → T ∗ {\\displaystyle C:\\,S\\to T^{*}} is a total function mapping each symbol from S to a sequence of symbols over T. The extension C ′ {\\displaystyle C'} of C {\\displaystyle C} , is a homomorphism of S ∗ {\\displaystyle S^{*}} into T ∗ {\\displaystyle T^{*}} , which naturally maps each sequence of source symbols to a sequence of target symbols.",
"title": "Theory"
},
{
"paragraph_id": 7,
"text": "In this section, we consider codes that encode each source (clear text) character by a code word from some dictionary, and concatenation of such code words give us an encoded string. Variable-length codes are especially useful when clear text characters have different probabilities; see also entropy encoding.",
"title": "Theory"
},
{
"paragraph_id": 8,
"text": "A prefix code is a code with the \"prefix property\": there is no valid code word in the system that is a prefix (start) of any other valid code word in the set. Huffman coding is the most known algorithm for deriving prefix codes. Prefix codes are widely referred to as \"Huffman codes\" even when the code was not produced by a Huffman algorithm. Other examples of prefix codes are country calling codes, the country and publisher parts of ISBNs, and the Secondary Synchronization Codes used in the UMTS WCDMA 3G Wireless Standard.",
"title": "Theory"
},
{
"paragraph_id": 9,
"text": "Kraft's inequality characterizes the sets of codeword lengths that are possible in a prefix code. Virtually any uniquely decodable one-to-many code, not necessarily a prefix one, must satisfy Kraft's inequality.",
"title": "Theory"
},
{
"paragraph_id": 10,
"text": "Codes may also be used to represent data in a way more resistant to errors in transmission or storage. This so-called error-correcting code works by including carefully crafted redundancy with the stored (or transmitted) data. Examples include Hamming codes, Reed–Solomon, Reed–Muller, Walsh–Hadamard, Bose–Chaudhuri–Hochquenghem, Turbo, Golay, algebraic geometry codes, low-density parity-check codes, and space–time codes. Error detecting codes can be optimised to detect burst errors, or random errors.",
"title": "Theory"
},
{
"paragraph_id": 11,
"text": "A cable code replaces words (e.g. ship or invoice) with shorter words, allowing the same information to be sent with fewer characters, more quickly, and less expensively.",
"title": "Examples"
},
{
"paragraph_id": 12,
"text": "Codes can be used for brevity. When telegraph messages were the state of the art in rapid long-distance communication, elaborate systems of commercial codes that encoded complete phrases into single mouths (commonly five-minute groups) were developed, so that telegraphers became conversant with such \"words\" as BYOXO (\"Are you trying to weasel out of our deal?\"), LIOUY (\"Why do you not answer my question?\"), BMULD (\"You're a skunk!\"), or AYYLU (\"Not clearly coded, repeat more clearly.\"). Code words were chosen for various reasons: length, pronounceability, etc. Meanings were chosen to fit perceived needs: commercial negotiations, military terms for military codes, diplomatic terms for diplomatic codes, any and all of the preceding for espionage codes. Codebooks and codebook publishers proliferated, including one run as a front for the American Black Chamber run by Herbert Yardley between the First and Second World Wars. The purpose of most of these codes was to save on cable costs. The use of data coding for data compression predates the computer era; an early example is the telegraph Morse code where more-frequently used characters have shorter representations. Techniques such as Huffman coding are now used by computer-based algorithms to compress large data files into a more compact form for storage or transmission.",
"title": "Examples"
},
{
"paragraph_id": 13,
"text": "Character encodings are representations of textual data. A given character encoding may be associated with a specific character set (the collection of characters which it can represent), though some character sets have multiple character encodings and vice versa. Character encodings may be broadly grouped according to the number of bytes required to represent a single character: there are single-byte encodings, multibyte (also called wide) encodings, and variable-width (also called variable-length) encodings. The earliest character encodings were single-byte, the best-known example of which is ASCII. ASCII remains in use today, for example in HTTP headers. However, single-byte encodings cannot model character sets with more than 256 characters. Scripts that require large character sets such as Chinese, Japanese and Korean must be represented with multibyte encodings. Early multibyte encodings were fixed-length, meaning that although each character was represented by more than one byte, all characters used the same number of bytes (\"word length\"), making them suitable for decoding with a lookup table. The final group, variable-width encodings, is a subset of multibyte encodings. These use more complex encoding and decoding logic to efficiently represent large character sets while keeping the representations of more commonly used characters shorter or maintaining backward compatibility properties. This group includes UTF-8, an encoding of the Unicode character set; UTF-8 is the most common encoding of text media on the Internet.",
"title": "Examples"
},
{
"paragraph_id": 14,
"text": "Biological organisms contain genetic material that is used to control their function and development. This is DNA, which contains units named genes from which messenger RNA is derived. This in turn produces proteins through a genetic code in which a series of triplets (codons) of four possible nucleotides can be translated into one of twenty possible amino acids. A sequence of codons results in a corresponding sequence of amino acids that form a protein molecule; a type of codon called a stop codon signals the end of the sequence.",
"title": "Examples"
},
{
"paragraph_id": 15,
"text": "In mathematics, a Gödel code was the basis for the proof of Gödel's incompleteness theorem. Here, the idea was to map mathematical notation to a natural number (using a Gödel numbering).",
"title": "Examples"
},
{
"paragraph_id": 16,
"text": "There are codes using colors, like traffic lights, the color code employed to mark the nominal value of the electrical resistors or that of the trashcans devoted to specific types of garbage (paper, glass, organic, etc.).",
"title": "Examples"
},
{
"paragraph_id": 17,
"text": "In marketing, coupon codes can be used for a financial discount or rebate when purchasing a product from a (usual internet) retailer.",
"title": "Examples"
},
{
"paragraph_id": 18,
"text": "In military environments, specific sounds with the cornet are used for different uses: to mark some moments of the day, to command the infantry on the battlefield, etc.",
"title": "Examples"
},
{
"paragraph_id": 19,
"text": "Communication systems for sensory impairments, such as sign language for deaf people and braille for blind people, are based on movement or tactile codes.",
"title": "Examples"
},
{
"paragraph_id": 20,
"text": "Musical scores are the most common way to encode music.",
"title": "Examples"
},
{
"paragraph_id": 21,
"text": "Specific games have their own code systems to record the matches, e.g. chess notation.",
"title": "Examples"
},
{
"paragraph_id": 22,
"text": "In the history of cryptography, codes were once common for ensuring the confidentiality of communications, although ciphers are now used instead.",
"title": "Examples"
},
{
"paragraph_id": 23,
"text": "Secret codes intended to obscure the real messages, ranging from serious (mainly espionage in military, diplomacy, business, etc.) to trivial (romance, games) can be any kind of imaginative encoding: flowers, game cards, clothes, fans, hats, melodies, birds, etc., in which the sole requirement is the pre-agreement on the meaning by both the sender and the receiver.",
"title": "Examples"
},
{
"paragraph_id": 24,
"text": "Other examples of encoding include:",
"title": "Other examples"
},
{
"paragraph_id": 25,
"text": "Other examples of decoding include:",
"title": "Other examples"
},
{
"paragraph_id": 26,
"text": "Acronyms and abbreviations can be considered codes, and in a sense, all languages and writing systems are codes for human thought.",
"title": "Codes and acronyms"
},
{
"paragraph_id": 27,
"text": "International Air Transport Association airport codes are three-letter codes used to designate airports and used for bag tags. Station codes are similarly used on railways but are usually national, so the same code can be used for different stations if they are in different countries.",
"title": "Codes and acronyms"
},
{
"paragraph_id": 28,
"text": "Occasionally, a code word achieves an independent existence (and meaning) while the original equivalent phrase is forgotten or at least no longer has the precise meaning attributed to the code word. For example, '30' was widely used in journalism to mean \"end of story\", and has been used in other contexts to signify \"the end\".",
"title": "Codes and acronyms"
}
] | In communications and information processing, code is a system of rules to convert information—such as a letter, word, sound, image, or gesture—into another form, sometimes shortened or secret, for communication through a communication channel or storage in a storage medium. An early example is an invention of language, which enabled a person, through speech, to communicate what they thought, saw, heard, or felt to others. But speech limits the range of communication to the distance a voice can carry and limits the audience to those present when the speech is uttered. The invention of writing, which converted spoken language into visual symbols, extended the range of communication across space and time. The process of encoding converts information from a source into symbols for communication or storage. Decoding is the reverse process, converting code symbols back into a form that the recipient understands, such as English or Spanish. One reason for coding is to enable communication in places where ordinary plain language, spoken or written, is difficult or impossible. For example, semaphore, where the configuration of flags held by a signaler or the arms of a semaphore tower encodes parts of the message, typically individual letters, and numbers. Another person standing a great distance away can interpret the flags and reproduce the words sent. | 2001-11-10T16:54:15Z | 2023-11-13T18:41:10Z | [
"Template:Cite web",
"Template:Cite journal",
"Template:Hatgrp",
"Template:More citations needed",
"Template:Main",
"Template:See also",
"Template:Reflist",
"Template:Cite book",
"Template:Pp-semi-indef",
"Template:Short description",
"Template:Technical reasons",
"Template:Commons category",
"Template:Webarchive"
] | https://en.wikipedia.org/wiki/Code |
5,228 | Cheirogaleidae | The Cheirogaleidae are the family of strepsirrhine primates containing the various dwarf and mouse lemurs. Like all other lemurs, cheirogaleids live exclusively on the island of Madagascar.
Cheirogaleids are smaller than the other lemurs and, in fact, they are the smallest primates. They have soft, long fur, colored grey-brown to reddish on top, with a generally brighter underbelly. Typically, they have small ears, large, close-set eyes, and long hind legs. Like all strepsirrhines, they have fine claws at the second toe of the hind legs. They grow to a size of only 13 to 28 cm, with a tail that is very long, sometimes up to one and a half times as long as the body. They weigh no more than 500 grams, with some species weighing as little as 60 grams.
Dwarf and mouse lemurs are nocturnal and arboreal. They are excellent climbers and can also jump far, using their long tails for balance. When on the ground (a rare occurrence), they move by hopping on their hind legs. They spend the day in tree hollows or leaf nests. Cheirogaleids are typically solitary, but sometimes live together in pairs.
Their eyes possess a tapetum lucidum, a light-reflecting layer that improves their night vision. Some species, such as the lesser dwarf lemur, store fat at the hind legs and the base of the tail, and hibernate. Unlike lemurids, they have long upper incisors, although they do have the comb-like teeth typical of all strepsirrhines. They have the dental formula: 2.1.3.3 / 2.1.3.3 (upper / lower).
Cheirogaleids are omnivores, eating fruits, flowers and leaves (and sometimes nectar), as well as insects, spiders, and small vertebrates.
The females usually have three pairs of nipples. After a meager 60-day gestation, they will bear two to four (usually two or three) young. After five to six weeks, the young are weaned and become fully mature near the end of their first year or sometime in their second year, depending on the species. In human care, they can live for up to 15 years, although their life expectancy in the wild is probably significantly shorter.
The five genera of cheirogaleids contain 42 species. | [
{
"paragraph_id": 0,
"text": "The Cheirogaleidae are the family of strepsirrhine primates containing the various dwarf and mouse lemurs. Like all other lemurs, cheirogaleids live exclusively on the island of Madagascar.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cheirogaleids are smaller than the other lemurs and, in fact, they are the smallest primates. They have soft, long fur, colored grey-brown to reddish on top, with a generally brighter underbelly. Typically, they have small ears, large, close-set eyes, and long hind legs. Like all strepsirrhines, they have fine claws at the second toe of the hind legs. They grow to a size of only 13 to 28 cm, with a tail that is very long, sometimes up to one and a half times as long as the body. They weigh no more than 500 grams, with some species weighing as little as 60 grams.",
"title": "Characteristics"
},
{
"paragraph_id": 2,
"text": "Dwarf and mouse lemurs are nocturnal and arboreal. They are excellent climbers and can also jump far, using their long tails for balance. When on the ground (a rare occurrence), they move by hopping on their hind legs. They spend the day in tree hollows or leaf nests. Cheirogaleids are typically solitary, but sometimes live together in pairs.",
"title": "Characteristics"
},
{
"paragraph_id": 3,
"text": "Their eyes possess a tapetum lucidum, a light-reflecting layer that improves their night vision. Some species, such as the lesser dwarf lemur, store fat at the hind legs and the base of the tail, and hibernate. Unlike lemurids, they have long upper incisors, although they do have the comb-like teeth typical of all strepsirhines. They have the dental formula: 2.1.3.32.1.3.3",
"title": "Characteristics"
},
{
"paragraph_id": 4,
"text": "Cheirogaleids are omnivores, eating fruits, flowers and leaves (and sometimes nectar), as well as insects, spiders, and small vertebrates.",
"title": "Characteristics"
},
{
"paragraph_id": 5,
"text": "The females usually have three pairs of nipples. After a meager 60-day gestation, they will bear two to four (usually two or three) young. After five to six weeks, the young are weaned and become fully mature near the end of their first year or sometime in their second year, depending on the species. In human care, they can live for up to 15 years, although their life expectancy in the wild is probably significantly shorter.",
"title": "Characteristics"
},
{
"paragraph_id": 6,
"text": "The five genera of cheirogaleids contain 42 species.",
"title": "Classification"
}
] | The Cheirogaleidae are the family of strepsirrhine primates containing the various dwarf and mouse lemurs. Like all other lemurs, cheirogaleids live exclusively on the island of Madagascar. | 2001-03-08T13:14:02Z | 2023-12-25T02:58:12Z | [
"Template:Ref label",
"Template:Wikispecies",
"Template:Cite book",
"Template:Short description",
"Template:Note label",
"Template:Reflist",
"Template:Cite iucn",
"Template:Authority control",
"Template:Main",
"Template:Cite journal",
"Template:Primates",
"Template:Strepsirrhini",
"Template:Cheirogaleidae nav",
"Template:Taxonbar",
"Template:Automatic taxobox",
"Template:Cite web",
"Template:DentalFormula"
] | https://en.wikipedia.org/wiki/Cheirogaleidae |
5,229 | Callitrichidae | The Callitrichidae (also called Arctopitheci or Hapalidae) are a family of New World monkeys, including marmosets, tamarins, and lion tamarins. At times, this group of animals has been regarded as a subfamily, called the Callitrichinae, of the family Cebidae.
This taxon was traditionally thought to be a primitive lineage, from which all the larger-bodied platyrrhines evolved. However, some works argue that callitrichids are actually a dwarfed lineage.
Ancestral stem-callitrichids likely were "normal-sized" ceboids that were dwarfed through evolutionary time. This may exemplify a rare example of insular dwarfing in a mainland context, with the "islands" being formed by biogeographic barriers during arid climatic periods when forest distribution became patchy, and/or by the extensive river networks in the Amazon Basin.
All callitrichids are arboreal. They are the smallest of the simian primates. They eat insects, fruit, and the sap or gum from trees; occasionally, they take small vertebrates. The marmosets rely quite heavily on tree exudates, with some species (e.g. Callithrix jacchus and Cebuella pygmaea) considered obligate exudativores.
Callitrichids typically live in small, territorial groups of about five or six animals. Their social organization is unique among primates, and is called a "cooperative polyandrous group". This communal breeding system involves groups of multiple males and females, but only one female is reproductively active. Females mate with more than one male and each shares the responsibility of carrying the offspring.
They are the only primate group that regularly produces twins, which constitute over 80% of births in species that have been studied. Unlike other male primates, male callitrichids generally provide as much parental care as females. Parental duties may include carrying, protecting, feeding, comforting, and even engaging in play behavior with offspring. In some cases, such as in the cotton-top tamarin (Saguinus oedipus), males, particularly those that are paternal, even show a greater involvement in caregiving than females. The typical social structure seems to constitute a breeding group, with several of their previous offspring living in the group and providing significant help in rearing the young.
Taxa included in the Callitrichidae are:
Media related to Callitrichinae at Wikimedia Commons. Data related to Callitrichinae at Wikispecies.
{
"paragraph_id": 0,
"text": "The Callitrichidae (also called Arctopitheci or Hapalidae) are a family of New World monkeys, including marmosets, tamarins, and lion tamarins. At times, this group of animals has been regarded as a subfamily, called the Callitrichinae, of the family Cebidae.",
"title": ""
},
{
"paragraph_id": 1,
"text": "This taxon was traditionally thought to be a primitive lineage, from which all the larger-bodied platyrrhines evolved. However, some works argue that callitrichids are actually a dwarfed lineage.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Ancestral stem-callitrichids likely were \"normal-sized\" ceboids that were dwarfed through evolutionary time. This may exemplify a rare example of insular dwarfing in a mainland context, with the \"islands\" being formed by biogeographic barriers during arid climatic periods when forest distribution became patchy, and/or by the extensive river networks in the Amazon Basin.",
"title": ""
},
{
"paragraph_id": 3,
"text": "All callitrichids are arboreal. They are the smallest of the simian primates. They eat insects, fruit, and the sap or gum from trees; occasionally, they take small vertebrates. The marmosets rely quite heavily on tree exudates, with some species (e.g. Callithrix jacchus and Cebuella pygmaea) considered obligate exudativores.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Callitrichids typically live in small, territorial groups of about five or six animals. Their social organization is unique among primates, and is called a \"cooperative polyandrous group\". This communal breeding system involves groups of multiple males and females, but only one female is reproductively active. Females mate with more than one male and each shares the responsibility of carrying the offspring.",
"title": ""
},
{
"paragraph_id": 5,
"text": "They are the only primate group that regularly produces twins, which constitute over 80% of births in species that have been studied. Unlike other male primates, male callitrichids generally provide as much parental care as females. Parental duties may include carrying, protecting, feeding, comforting, and even engaging in play behavior with offspring. In some cases, such as in the cotton-top tamarin (Saguinus oedipus), males, particularly those that are paternal, even show a greater involvement in caregiving than females. The typical social structure seems to constitute a breeding group, with several of their previous offspring living in the group and providing significant help in rearing the young.",
"title": ""
},
{
"paragraph_id": 6,
"text": "Taxa included in the Callitrichidae are:",
"title": "Species and subspecies list"
},
{
"paragraph_id": 7,
"text": "Media related to Callitrichinae at Wikimedia Commons Data related to Callitrichinae at Wikispecies",
"title": "External links"
}
] | The Callitrichidae are a family of New World monkeys, including marmosets, tamarins, and lion tamarins. At times, this group of animals has been regarded as a subfamily, called the Callitrichinae, of the family Cebidae. This taxon was traditionally thought to be a primitive lineage, from which all the larger-bodied platyrrhines evolved. However, some works argue that callitrichids are actually a dwarfed lineage. Ancestral stem-callitrichids likely were "normal-sized" ceboids that were dwarfed through evolutionary time. This may exemplify a rare example of insular dwarfing in a mainland context, with the "islands" being formed by biogeographic barriers during arid climatic periods when forest distribution became patchy, and/or by the extensive river networks in the Amazon Basin. All callitrichids are arboreal. They are the smallest of the simian primates. They eat insects, fruit, and the sap or gum from trees; occasionally, they take small vertebrates. The marmosets rely quite heavily on tree exudates, with some species considered obligate exudativores. Callitrichids typically live in small, territorial groups of about five or six animals. Their social organization is unique among primates, and is called a "cooperative polyandrous group". This communal breeding system involves groups of multiple males and females, but only one female is reproductively active. Females mate with more than one male and each shares the responsibility of carrying the offspring. They are the only primate group that regularly produces twins, which constitute over 80% of births in species that have been studied. Unlike other male primates, male callitrichids generally provide as much parental care as females. Parental duties may include carrying, protecting, feeding, comforting, and even engaging in play behavior with offspring. In some cases, such as in the cotton-top tamarin, males, particularly those that are paternal, even show a greater involvement in caregiving than females. The typical social structure seems to constitute a breeding group, with several of their previous offspring living in the group and providing significant help in rearing the young. | 2001-03-09T00:05:32Z | 2023-10-22T04:06:35Z | [
"Template:Automatic taxobox",
"Template:Reflist",
"Template:Cite book",
"Template:Primates",
"Template:Haplorhini",
"Template:Taxonbar",
"Template:Callitrichidae nav",
"Template:Short description",
"Template:Extinct",
"Template:Cite journal",
"Template:Cite bioRxiv",
"Template:Commonscat-inline",
"Template:Wikispecies-inline"
] | https://en.wikipedia.org/wiki/Callitrichidae |
5,230 | Cebidae | The Cebidae are one of the five families of New World monkeys now recognised. Extant members are the capuchin and squirrel monkeys. These species are found throughout tropical and subtropical South and Central America.
Cebid monkeys are arboreal animals that only rarely travel on the ground. They are generally small monkeys, ranging in size up to that of the brown capuchin, with a body length of 33 to 56 cm, and a weight of 2.5 to 3.9 kilograms. They are somewhat variable in form and coloration, but all have the wide, flat, noses typical of New World monkeys.
They are omnivorous, mostly eating fruit and insects, although the proportions of these foods vary greatly between species. They have the dental formula: 2.1.3.2-3 / 2.1.3.2-3 (upper / lower).
Females give birth to one or two young after a gestation period of between 130 and 170 days, depending on species. They are social animals, living in groups of between five and forty individuals, with the smaller species typically forming larger groups. They are generally diurnal in habit.
Previously, New World monkeys were divided between Callitrichidae and this family. For a few recent years, marmosets, tamarins, and lion tamarins were placed as a subfamily (Callitrichinae) in Cebidae, while moving other genera from Cebidae into the families Aotidae, Pitheciidae and Atelidae. The most recent classification of New World monkeys again splits the callitrichids off, leaving only the capuchins and squirrel monkeys in this family. | [
{
"paragraph_id": 0,
"text": "The Cebidae are one of the five families of New World monkeys now recognised. Extant members are the capuchin and squirrel monkeys. These species are found throughout tropical and subtropical South and Central America.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cebid monkeys are arboreal animals that only rarely travel on the ground. They are generally small monkeys, ranging in size up to that of the brown capuchin, with a body length of 33 to 56 cm, and a weight of 2.5 to 3.9 kilograms. They are somewhat variable in form and coloration, but all have the wide, flat, noses typical of New World monkeys.",
"title": "Characteristics"
},
{
"paragraph_id": 2,
"text": "They are omnivorous, mostly eating fruit and insects, although the proportions of these foods vary greatly between species. They have the dental formula:2.1.3.2-32.1.3.2-3",
"title": "Characteristics"
},
{
"paragraph_id": 3,
"text": "Females give birth to one or two young after a gestation period of between 130 and 170 days, depending on species. They are social animals, living in groups of between five and forty individuals, with the smaller species typically forming larger groups. They are generally diurnal in habit.",
"title": "Characteristics"
},
{
"paragraph_id": 4,
"text": "Previously, New World monkeys were divided between Callitrichidae and this family. For a few recent years, marmosets, tamarins, and lion tamarins were placed as a subfamily (Callitrichinae) in Cebidae, while moving other genera from Cebidae into the families Aotidae, Pitheciidae and Atelidae. The most recent classification of New World monkeys again splits the callitrichids off, leaving only the capuchins and squirrel monkeys in this family.",
"title": "Classification"
}
] | The Cebidae are one of the five families of New World monkeys now recognised. Extant members are the capuchin and squirrel monkeys. These species are found throughout tropical and subtropical South and Central America. | 2001-03-08T13:25:13Z | 2023-10-22T04:06:43Z | [
"Template:Cebidae nav",
"Template:Authority control",
"Template:Automatic taxobox",
"Template:DentalFormula",
"Template:Reflist",
"Template:Primates",
"Template:Taxonbar",
"Template:Short description",
"Template:Wikispecies",
"Template:Cite book",
"Template:Haplorhini"
] | https://en.wikipedia.org/wiki/Cebidae |
5,232 | Chondrichthyes | Chondrichthyes (/kɒnˈdrɪkθi.iːz/; from Ancient Greek χόνδρος (khóndros) 'cartilage', and ἰχθύς (ikhthús) 'fish') is a class of jawed fish that contains the cartilaginous fish or chondrichthyians, which all have skeletons primarily composed of cartilage. They can be contrasted with the Osteichthyes or bony fish, which have skeletons primarily composed of bone tissue. Chondrichthyes are aquatic vertebrates with paired fins, paired nares, placoid scales, conus arteriosus in the heart, and a lack of opecula and swim bladders. Within the infraphylum Gnathostomata, cartilaginous fishes are distinct from all other jawed vertebrates.
The class is divided into two subclasses: Elasmobranchii (sharks, rays, skates and sawfish) and Holocephali (chimaeras, sometimes called ghost sharks, which are sometimes separated into their own class). Extant Chondrichthyes range in size from the 10 cm (3.9 in) finless sleeper ray to the over 10 m (33 ft) whale shark.
The skeleton is cartilaginous. The notochord is gradually replaced by a vertebral column during development, except in Holocephali, where the notochord stays intact. In some deepwater sharks, the column is reduced.
As they do not have bone marrow, red blood cells are produced in the spleen and the epigonal organ (special tissue around the gonads, which is also thought to play a role in the immune system). They are also produced in the Leydig's organ, which is only found in certain cartilaginous fishes. The subclass Holocephali, which is a very specialized group, lacks both the Leydig's and epigonal organs.
Apart from electric rays, which have a thick and flabby body, with soft, loose skin, chondrichthyans have tough skin covered with dermal teeth (again, Holocephali is an exception, as the teeth are lost in adults, only kept on the clasping organ seen on the caudal ventral surface of the male), also called placoid scales (or dermal denticles), making it feel like sandpaper. In most species, all dermal denticles are oriented in one direction, making the skin feel very smooth if rubbed in one direction and very rough if rubbed in the other.
Originally, the pectoral and pelvic girdles, which do not contain any dermal elements, did not connect. In later forms, each pair of fins became ventrally connected in the middle when scapulocoracoid and puboischiadic bars evolved. In rays, the pectoral fins are connected to the head and are very flexible.
One of the primary characteristics present in most sharks is the heterocercal tail, which aids in locomotion.
Chondrichthyans have tooth-like scales called dermal denticles or placoid scales. Denticles usually provide protection, and in most cases, streamlining. Mucous glands exist in some species, as well.
It is assumed that their oral teeth evolved from dermal denticles that migrated into the mouth, but it could be the other way around, as the teleost bony fish Denticeps clupeoides has most of its head covered by dermal teeth (as does, probably, Atherion elymus, another bony fish). This is most likely a secondary evolved characteristic, which means there is not necessarily a connection between the teeth and the original dermal scales.
The old placoderms did not have teeth at all, but had sharp bony plates in their mouth. Thus, it is unknown whether the dermal or oral teeth evolved first. It has even been suggested that the original bony plates of all vertebrates are now gone and that the present scales are just modified teeth, even if both the teeth and body armor had a common origin a long time ago. However, there is currently no evidence of this.
All chondrichthyans breathe through five to seven pairs of gills, depending on the species. In general, pelagic species must keep swimming to keep oxygenated water moving through their gills, whilst demersal species can actively pump water in through their spiracles and out through their gills. However, this is only a general rule and many species differ.
A spiracle is a small hole found behind each eye. These can be tiny and circular, such as found on the nurse shark (Ginglymostoma cirratum), to extended and slit-like, such as found on the wobbegongs (Orectolobidae). Many larger, pelagic species, such as the mackerel sharks (Lamnidae) and the thresher sharks (Alopiidae), no longer possess them.
In chondrichthyans, the nervous system is composed of a small brain, 8–10 pairs of cranial nerves, and a spinal cord with spinal nerves. They have several sensory organs which provide information to be processed. Ampullae of Lorenzini are a network of small jelly filled pores called electroreceptors which help the fish sense electric fields in water. This aids in finding prey, navigation, and sensing temperature. The Lateral line system has modified epithelial cells located externally which sense motion, vibration, and pressure in the water around them. Most species have large well-developed eyes. Also, they have very powerful nostrils and olfactory organs. Their inner ears consist of 3 large semicircular canals which aid in balance and orientation. Their sound detecting apparatus has limited range and is typically more powerful at lower frequencies. Some species have electric organs which can be used for defense and predation. They have relatively simple brains with the forebrain not greatly enlarged. The structure and formation of myelin in their nervous systems are nearly identical to that of tetrapods, which has led evolutionary biologists to believe that Chondrichthyes were a cornerstone group in the evolutionary timeline of myelin development.
Like all other jawed vertebrates, members of Chondrichthyes have an adaptive immune system.
Fertilization is internal. Development is usually live birth (ovoviviparous species) but can be through eggs (oviparous). Some rare species are viviparous. There is no parental care after birth; however, some chondrichthyans do guard their eggs.
Capture-induced premature birth and abortion (collectively called capture-induced parturition) occurs frequently in sharks/rays when fished. Capture-induced parturition is often mistaken for natural birth by recreational fishers and is rarely considered in commercial fisheries management despite being shown to occur in at least 12% of live bearing sharks and rays (88 species to date).
The class Chondrichthyes has two subclasses: the subclass Elasmobranchii (sharks, rays, skates, and sawfish) and the subclass Holocephali (chimaeras).
Cartilaginous fish are considered to have evolved from acanthodians. The discovery of Entelognathus and several examinations of acanthodian characteristics indicate that bony fish evolved directly from placoderm like ancestors, while acanthodians represent a paraphyletic assemblage leading to Chondrichthyes. Some characteristics previously thought to be exclusive to acanthodians are also present in basal cartilaginous fish. In particular, new phylogenetic studies find cartilaginous fish to be well nested among acanthodians, with Doliodus and Tamiobatis being the closest relatives to Chondrichthyes. Recent studies vindicate this, as Doliodus had a mosaic of chondrichthyan and acanthodian traits. Dating back to the Middle and Late Ordovician Period, many isolated scales, made of dentine and bone, have a structure and growth form that is chondrichthyan-like. They may be the remains of stem-chondrichthyans, but their classification remains uncertain.
The earliest unequivocal fossils of acanthodian-grade cartilaginous fishes are Qianodus and Fanjingshania from the early Silurian (Aeronian) of Guizhou, China around 439 million years ago, which are also the oldest unambiguous remains of any jawed vertebrates. Shenacanthus vermiformis, which lived 436 million years ago, had thoracic armour plates resembling those of placoderms.
By the start of the Early Devonian, 419 million years ago, jawed fishes had divided into three distinct groups: the now extinct placoderms (a paraphyletic assemblage of ancient armoured fishes), the bony fishes, and the clade that includes spiny sharks and early cartilaginous fish. The modern bony fishes, class Osteichthyes, appeared in the late Silurian or early Devonian, about 416 million years ago. The first abundant genus of shark, Cladoselache, appeared in the oceans during the Devonian Period. The first Cartilaginous fishes evolved from Doliodus-like spiny shark ancestors. | [
{
"paragraph_id": 0,
"text": "Chondrichthyes (/kɒnˈdrɪkθi.iːz/; from Ancient Greek χόνδρος (khóndros) 'cartilage', and ἰχθύς (ikhthús) 'fish') is a class of jawed fish that contains the cartilaginous fish or chondrichthyians, which all have skeletons primarily composed of cartilage. They can be contrasted with the Osteichthyes or bony fish, which have skeletons primarily composed of bone tissue. Chondrichthyes are aquatic vertebrates with paired fins, paired nares, placoid scales, conus arteriosus in the heart, and a lack of opecula and swim bladders. Within the infraphylum Gnathostomata, cartilaginous fishes are distinct from all other jawed vertebrates.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The class is divided into two subclasses: Elasmobranchii (sharks, rays, skates and sawfish) and Holocephali (chimaeras, sometimes called ghost sharks, which are sometimes separated into their own class). Extant Chondrichthyes range in size from the 10 cm (3.9 in) finless sleeper ray to the over 10 m (33 ft) whale shark.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The skeleton is cartilaginous. The notochord is gradually replaced by a vertebral column during development, except in Holocephali, where the notochord stays intact. In some deepwater sharks, the column is reduced.",
"title": "Anatomy"
},
{
"paragraph_id": 3,
"text": "As they do not have bone marrow, red blood cells are produced in the spleen and the epigonal organ (special tissue around the gonads, which is also thought to play a role in the immune system). They are also produced in the Leydig's organ, which is only found in certain cartilaginous fishes. The subclass Holocephali, which is a very specialized group, lacks both the Leydig's and epigonal organs.",
"title": "Anatomy"
},
{
"paragraph_id": 4,
"text": "Apart from electric rays, which have a thick and flabby body, with soft, loose skin, chondrichthyans have tough skin covered with dermal teeth (again, Holocephali is an exception, as the teeth are lost in adults, only kept on the clasping organ seen on the caudal ventral surface of the male), also called placoid scales (or dermal denticles), making it feel like sandpaper. In most species, all dermal denticles are oriented in one direction, making the skin feel very smooth if rubbed in one direction and very rough if rubbed in the other.",
"title": "Anatomy"
},
{
"paragraph_id": 5,
"text": "Originally, the pectoral and pelvic girdles, which do not contain any dermal elements, did not connect. In later forms, each pair of fins became ventrally connected in the middle when scapulocoracoid and puboischiadic bars evolved. In rays, the pectoral fins are connected to the head and are very flexible.",
"title": "Anatomy"
},
{
"paragraph_id": 6,
"text": "One of the primary characteristics present in most sharks is the heterocercal tail, which aids in locomotion.",
"title": "Anatomy"
},
{
"paragraph_id": 7,
"text": "Chondrichthyans have tooth-like scales called dermal denticles or placoid scales. Denticles usually provide protection, and in most cases, streamlining. Mucous glands exist in some species, as well.",
"title": "Anatomy"
},
{
"paragraph_id": 8,
"text": "It is assumed that their oral teeth evolved from dermal denticles that migrated into the mouth, but it could be the other way around, as the teleost bony fish Denticeps clupeoides has most of its head covered by dermal teeth (as does, probably, Atherion elymus, another bony fish). This is most likely a secondary evolved characteristic, which means there is not necessarily a connection between the teeth and the original dermal scales.",
"title": "Anatomy"
},
{
"paragraph_id": 9,
"text": "The old placoderms did not have teeth at all, but had sharp bony plates in their mouth. Thus, it is unknown whether the dermal or oral teeth evolved first. It has even been suggested that the original bony plates of all vertebrates are now gone and that the present scales are just modified teeth, even if both the teeth and body armor had a common origin a long time ago. However, there is currently no evidence of this.",
"title": "Anatomy"
},
{
"paragraph_id": 10,
"text": "All chondrichthyans breathe through five to seven pairs of gills, depending on the species. In general, pelagic species must keep swimming to keep oxygenated water moving through their gills, whilst demersal species can actively pump water in through their spiracles and out through their gills. However, this is only a general rule and many species differ.",
"title": "Anatomy"
},
{
"paragraph_id": 11,
"text": "A spiracle is a small hole found behind each eye. These can be tiny and circular, such as found on the nurse shark (Ginglymostoma cirratum), to extended and slit-like, such as found on the wobbegongs (Orectolobidae). Many larger, pelagic species, such as the mackerel sharks (Lamnidae) and the thresher sharks (Alopiidae), no longer possess them.",
"title": "Anatomy"
},
{
"paragraph_id": 12,
"text": "In chondrichthyans, the nervous system is composed of a small brain, 8–10 pairs of cranial nerves, and a spinal cord with spinal nerves. They have several sensory organs which provide information to be processed. Ampullae of Lorenzini are a network of small jelly filled pores called electroreceptors which help the fish sense electric fields in water. This aids in finding prey, navigation, and sensing temperature. The Lateral line system has modified epithelial cells located externally which sense motion, vibration, and pressure in the water around them. Most species have large well-developed eyes. Also, they have very powerful nostrils and olfactory organs. Their inner ears consist of 3 large semicircular canals which aid in balance and orientation. Their sound detecting apparatus has limited range and is typically more powerful at lower frequencies. Some species have electric organs which can be used for defense and predation. They have relatively simple brains with the forebrain not greatly enlarged. The structure and formation of myelin in their nervous systems are nearly identical to that of tetrapods, which has led evolutionary biologists to believe that Chondrichthyes were a cornerstone group in the evolutionary timeline of myelin development.",
"title": "Anatomy"
},
{
"paragraph_id": 13,
"text": "Like all other jawed vertebrates, members of Chondrichthyes have an adaptive immune system.",
"title": "Anatomy"
},
{
"paragraph_id": 14,
"text": "Fertilization is internal. Development is usually live birth (ovoviviparous species) but can be through eggs (oviparous). Some rare species are viviparous. There is no parental care after birth; however, some chondrichthyans do guard their eggs.",
"title": "Reproduction"
},
{
"paragraph_id": 15,
"text": "Capture-induced premature birth and abortion (collectively called capture-induced parturition) occurs frequently in sharks/rays when fished. Capture-induced parturition is often mistaken for natural birth by recreational fishers and is rarely considered in commercial fisheries management despite being shown to occur in at least 12% of live bearing sharks and rays (88 species to date).",
"title": "Reproduction"
},
{
"paragraph_id": 16,
"text": "The class Chondrichthyes has two subclasses: the subclass Elasmobranchii (sharks, rays, skates, and sawfish) and the subclass Holocephali (chimaeras). To see the full list of the species, click here.",
"title": "Classification"
},
{
"paragraph_id": 17,
"text": "Cartilaginous fish are considered to have evolved from acanthodians. The discovery of Entelognathus and several examinations of acanthodian characteristics indicate that bony fish evolved directly from placoderm like ancestors, while acanthodians represent a paraphyletic assemblage leading to Chondrichthyes. Some characteristics previously thought to be exclusive to acanthodians are also present in basal cartilaginous fish. In particular, new phylogenetic studies find cartilaginous fish to be well nested among acanthodians, with Doliodus and Tamiobatis being the closest relatives to Chondrichthyes. Recent studies vindicate this, as Doliodus had a mosaic of chondrichthyan and acanthodian traits. Dating back to the Middle and Late Ordovician Period, many isolated scales, made of dentine and bone, have a structure and growth form that is chondrichthyan-like. They may be the remains of stem-chondrichthyans, but their classification remains uncertain.",
"title": "Evolution"
},
{
"paragraph_id": 18,
"text": "The earliest unequivocal fossils of acanthodian-grade cartilaginous fishes are Qianodus and Fanjingshania from the early Silurian (Aeronian) of Guizhou, China around 439 million years ago, which are also the oldest unambiguous remains of any jawed vertebrates. Shenacanthus vermiformis, which lived 436 million years ago, had thoracic armour plates resembling those of placoderms.",
"title": "Evolution"
},
{
"paragraph_id": 19,
"text": "By the start of the Early Devonian, 419 million years ago, jawed fishes had divided into three distinct groups: the now extinct placoderms (a paraphyletic assemblage of ancient armoured fishes), the bony fishes, and the clade that includes spiny sharks and early cartilaginous fish. The modern bony fishes, class Osteichthyes, appeared in the late Silurian or early Devonian, about 416 million years ago. The first abundant genus of shark, Cladoselache, appeared in the oceans during the Devonian Period. The first Cartilaginous fishes evolved from Doliodus-like spiny shark ancestors.",
"title": "Evolution"
},
{
"paragraph_id": 20,
"text": "",
"title": "Taxonomy"
}
] | Chondrichthyes is a class of jawed fish that contains the cartilaginous fish or chondrichthyans, which all have skeletons primarily composed of cartilage. They can be contrasted with the Osteichthyes or bony fish, which have skeletons primarily composed of bone tissue. Chondrichthyes are aquatic vertebrates with paired fins, paired nares, placoid scales, conus arteriosus in the heart, and a lack of opercula and swim bladders. Within the infraphylum Gnathostomata, cartilaginous fishes are distinct from all other jawed vertebrates. The class is divided into two subclasses: Elasmobranchii and Holocephali. Extant Chondrichthyes range in size from the 10 cm (3.9 in) finless sleeper ray to the over 10 m (33 ft) whale shark. | 2001-03-08T17:43:39Z | 2023-12-26T18:46:17Z | [
"Template:Authority control",
"Template:See also",
"Template:Clear",
"Template:Cite book",
"Template:Evolution of fish",
"Template:Short description",
"Template:Cvt",
"Template:Center",
"Template:Chordata",
"Template:Chondrichthyes",
"Template:Taxonbar",
"Template:IPAc-en",
"Template:Etymology",
"Template:Reflist",
"Template:Cite journal",
"Template:Wikispecies",
"Template:Wikibooks",
"Template:Use dmy dates",
"Template:Automatic taxobox",
"Template:By whom",
"Template:Further"
] | https://en.wikipedia.org/wiki/Chondrichthyes |
5,233 | Carl Linnaeus | Carl Linnaeus (23 May 1707 – 10 January 1778), also known after ennoblement in 1761 as Carl von Linné, was a Swedish biologist and physician who formalised binomial nomenclature, the modern system of naming organisms. He is known as the "father of modern taxonomy". Many of his writings were in Latin; his name is rendered in Latin as Carolus Linnæus and, after his 1761 ennoblement, as Carolus a Linné.
Linnaeus was the son of a curate and he was born in Råshult, the countryside of Småland, in southern Sweden. He received most of his higher education at Uppsala University and began giving lectures in botany there in 1730. He lived abroad between 1735 and 1738, where he studied and also published the first edition of his Systema Naturae in the Netherlands. He then returned to Sweden where he became professor of medicine and botany at Uppsala. In the 1740s, he was sent on several journeys through Sweden to find and classify plants and animals. In the 1750s and 1760s, he continued to collect and classify animals, plants, and minerals, while publishing several volumes. By the time of his death in 1778, he was one of the most acclaimed scientists in Europe.
Philosopher Jean-Jacques Rousseau sent him the message: "Tell him I know no greater man on Earth." Johann Wolfgang von Goethe wrote: "With the exception of Shakespeare and Spinoza, I know no one among the no longer living who has influenced me more strongly." Swedish author August Strindberg wrote: "Linnaeus was in reality a poet who happened to become a naturalist." Linnaeus has been called Princeps botanicorum (Prince of Botanists) and "The Pliny of the North". He is also considered one of the founders of modern ecology.
In botany and zoology, the abbreviation L. is used to indicate Linnaeus as the authority for a species' name. In older publications, the abbreviation "Linn." is found. Linnaeus's remains constitute the type specimen for the species Homo sapiens following the International Code of Zoological Nomenclature, since the sole specimen that he is known to have examined was himself.
Linnaeus was born in the village of Råshult in Småland, Sweden, on 23 May 1707. He was the first child of Nicolaus (Nils) Ingemarsson (who later adopted the family name Linnaeus) and Christina Brodersonia. His siblings were Anna Maria Linnæa, Sofia Juliana Linnæa, Samuel Linnæus (who would eventually succeed their father as rector of Stenbrohult and write a manual on beekeeping), and Emerentia Linnæa. His father taught him Latin as a small child.
One of a long line of peasants and priests, Nils was an amateur botanist, a Lutheran minister, and the curate of the small village of Stenbrohult in Småland. Christina was the daughter of the rector of Stenbrohult, Samuel Brodersonius.
A year after Linnaeus's birth, his grandfather Samuel Brodersonius died, and his father Nils became the rector of Stenbrohult. The family moved into the rectory from the curate's house.
Even in his early years, Linnaeus seemed to have a liking for plants, flowers in particular. Whenever he was upset, he was given a flower, which immediately calmed him. Nils spent much time in his garden and often showed flowers to Linnaeus and told him their names. Soon Linnaeus was given his own patch of earth where he could grow plants.
Carl's father was the first in his ancestry to adopt a permanent surname. Before that, ancestors had used the patronymic naming system of Scandinavian countries: his father was named Ingemarsson after his father Ingemar Bengtsson. When Nils was admitted to the University of Lund, he had to take on a family name. He adopted the Latinate name Linnæus after a giant linden tree (or lime tree), lind in Swedish, that grew on the family homestead. This name was spelled with the æ ligature. When Carl was born, he was named Carl Linnæus, with his father's family name. The son also always spelled it with the æ ligature, both in handwritten documents and in publications. Carl's patronymic would have been Nilsson, as in Carl Nilsson Linnæus.
Linnaeus's father began teaching him basic Latin, religion, and geography at an early age. When Linnaeus was seven, Nils decided to hire a tutor for him. The parents picked Johan Telander, a son of a local yeoman. Linnaeus did not like him, writing in his autobiography that Telander "was better calculated to extinguish a child's talents than develop them".
Two years after his tutoring had begun, he was sent to the Lower Grammar School at Växjö in 1717. Linnaeus rarely studied, often going to the countryside to look for plants. At some point, his father went to visit him and, after hearing critical assessments from his preceptors, decided to apprentice the youth to some honest cobbler. When Linnaeus was fifteen, he reached the last year of the Lower School, which was taught by the headmaster, Daniel Lannerus, who was interested in botany. Lannerus noticed Linnaeus's interest in botany and gave him the run of his garden.
He also introduced him to Johan Rothman, the state doctor of Småland and a teacher at Katedralskolan (a gymnasium) in Växjö. Also a botanist, Rothman broadened Linnaeus's interest in botany and helped him develop an interest in medicine. By the age of 17, Linnaeus had become well acquainted with the existing botanical literature. He remarks in his journal that he "read day and night, knowing like the back of my hand, Arvidh Månsson's Rydaholm Book of Herbs, Tillandz's Flora Åboensis, Palmberg's Serta Florea Suecana, Bromelii's Chloros Gothica and Rudbeckii's Hortus Upsaliensis".
Linnaeus entered the Växjö Katedralskola in 1724, where he studied mainly Greek, Hebrew, theology and mathematics, a curriculum designed for boys preparing for the priesthood. In the last year at the gymnasium, Linnaeus's father visited to ask the professors how his son's studies were progressing; to his dismay, most said that the boy would never become a scholar. Rothman believed otherwise, suggesting Linnaeus could have a future in medicine. The doctor offered to have Linnaeus live with his family in Växjö and to teach him physiology and botany. Nils accepted this offer.
Rothman showed Linnaeus that botany was a serious subject. He taught Linnaeus to classify plants according to Tournefort's system, and also introduced him to the sexual reproduction of plants following the ideas of Sébastien Vaillant. In 1727, Linnaeus, age 21, enrolled in Lund University in Skåne. He was registered as Carolus Linnæus, the Latin form of his full name, which he also used later for his Latin publications.
Professor Kilian Stobæus, natural scientist, physician and historian, offered Linnaeus tutoring and lodging, as well as the use of his library, which included many books about botany. He also gave the student free admission to his lectures. In his spare time, Linnaeus explored the flora of Skåne, together with students sharing the same interests.
In August 1728, Linnaeus decided to attend Uppsala University on the advice of Rothman, who believed it would be a better choice if Linnaeus wanted to study both medicine and botany. Rothman based this recommendation on the two professors who taught at the medical faculty at Uppsala: Olof Rudbeck the Younger and Lars Roberg. Although Rudbeck and Roberg had undoubtedly been good professors, by then they were older and not so interested in teaching. Rudbeck no longer gave public lectures, and had others stand in for him. The botany, zoology, pharmacology and anatomy lectures were not in their best state. In Uppsala, Linnaeus met a new benefactor, Olof Celsius, who was a professor of theology and an amateur botanist. He received Linnaeus into his home and allowed him use of his library, which was one of the richest botanical libraries in Sweden.
In 1729, Linnaeus wrote a thesis, Praeludia Sponsaliorum Plantarum on plant sexual reproduction. This attracted the attention of Rudbeck; in May 1730, he selected Linnaeus to give lectures at the University although the young man was only a second-year student. His lectures were popular, and Linnaeus often addressed an audience of 300 people. In June, Linnaeus moved from Celsius's house to Rudbeck's to become the tutor of the three youngest of his 24 children. His friendship with Celsius did not wane and they continued their botanical expeditions. Over that winter, Linnaeus began to doubt Tournefort's system of classification and decided to create one of his own. His plan was to divide the plants by the number of stamens and pistils. He began writing several books, which would later result in, for example, Genera Plantarum and Critica Botanica. He also produced a book on the plants grown in the Uppsala Botanical Garden, Adonis Uplandicus.
Rudbeck's former assistant, Nils Rosén, returned to the University in March 1731 with a degree in medicine. Rosén started giving anatomy lectures and tried to take over Linnaeus's botany lectures, but Rudbeck prevented that. Until December, Rosén gave Linnaeus private tutoring in medicine. In December, Linnaeus had a "disagreement" with Rudbeck's wife and had to move out of his mentor's house; his relationship with Rudbeck did not appear to suffer. That Christmas, Linnaeus returned home to Stenbrohult to visit his parents for the first time in about three years. His mother had disapproved of his failing to become a priest, but she was pleased to learn he was teaching at the University.
During a visit with his parents, Linnaeus told them about his plan to travel to Lapland; Rudbeck had made the journey in 1695, but the detailed results of his exploration were lost in a fire seven years afterwards. Linnaeus's hope was to find new plants, animals and possibly valuable minerals. He was also curious about the customs of the native Sami people, reindeer-herding nomads who wandered Scandinavia's vast tundras. In April 1732, Linnaeus was awarded a grant from the Royal Society of Sciences in Uppsala for his journey.
Linnaeus began his expedition from Uppsala on 12 May 1732, just before he turned 25. He travelled on foot and horse, bringing with him his journal, botanical and ornithological manuscripts and sheets of paper for pressing plants. Near Gävle he found great quantities of Campanula serpyllifolia, later known as Linnaea borealis, the twinflower that would become his favourite. He sometimes dismounted on the way to examine a flower or rock and was particularly interested in mosses and lichens, the latter a main part of the diet of the reindeer, a common and economically important animal in Lapland.
Linnaeus travelled clockwise around the coast of the Gulf of Bothnia, making major inland incursions from Umeå, Luleå and Tornio. He returned from his six-month-long, over 2,000 kilometres (1,200 mi) expedition in October, having gathered and observed many plants, birds and rocks. Although Lapland was a region with limited biodiversity, Linnaeus described about 100 previously unidentified plants. These became the basis of his book Flora Lapponica. However, on the expedition to Lapland, Linnaeus used Latin names to describe organisms because he had not yet developed the binomial system.
In Flora Lapponica, Linnaeus's ideas about nomenclature and classification were first used in a practical way, making this the first proto-modern Flora. The account covered 534 species, used the Linnaean classification system and included, for the described species, geographical distribution and taxonomic notes. It was Augustin Pyramus de Candolle who credited Linnaeus's Flora Lapponica as the first example of the botanical genre of Flora writing. Botanical historian E. L. Greene described Flora Lapponica as "the most classic and delightful" of Linnaeus's works.
It was also during this expedition that Linnaeus had a flash of insight regarding the classification of mammals. Upon observing the lower jawbone of a horse at the side of a road he was travelling, Linnaeus remarked: "If I only knew how many teeth and of what kind every animal had, how many teats and where they were placed, I should perhaps be able to work out a perfectly natural system for the arrangement of all quadrupeds."
In 1734, Linnaeus led a small group of students to Dalarna. Funded by the Governor of Dalarna, the expedition was to catalogue known natural resources and discover new ones, but also to gather intelligence on Norwegian mining activities at Røros.
His relations with Nils Rosén having worsened, Linnaeus accepted an invitation from Claes Sohlberg, son of a mining inspector, to spend the Christmas holiday in Falun, where Linnaeus was permitted to visit the mines.
In April 1735, at the suggestion of Sohlberg's father, Linnaeus and Sohlberg set out for the Dutch Republic, where Linnaeus intended to study medicine at the University of Harderwijk while tutoring Sohlberg in exchange for an annual salary. At the time, it was common for Swedes to pursue doctoral degrees in the Netherlands, then a highly revered place to study natural history.
On the way, the pair stopped in Hamburg, where they met the mayor, who proudly showed them a supposed wonder of nature in his possession: the taxidermied remains of a seven-headed hydra. Linnaeus quickly discovered the specimen was a fake, cobbled together from the jaws and paws of weasels and the skins of snakes. The provenance of the hydra suggested to Linnaeus that it had been manufactured by monks to represent the Beast of Revelation. Even at the risk of incurring the mayor's wrath, Linnaeus made his observations public, dashing the mayor's dreams of selling the hydra for an enormous sum. Linnaeus and Sohlberg were forced to flee from Hamburg.
Linnaeus began working towards his degree as soon as he reached Harderwijk, a university known for awarding degrees in as little as a week. He submitted a dissertation, written back in Sweden, entitled Dissertatio medica inauguralis in qua exhibetur hypothesis nova de febrium intermittentium causa, in which he laid out his hypothesis that malaria arose only in areas with clay-rich soils. Although he failed to identify the true source of disease transmission (i.e., the Anopheles mosquito), he did correctly predict that Artemisia annua (wormwood) would become a source of antimalarial medications.
Within two weeks he had completed his oral and practical examinations and was awarded a doctoral degree.
That summer Linnaeus reunited with Peter Artedi, a friend from Uppsala with whom he had once made a pact that should either of the two predecease the other, the survivor would finish the decedent's work. Ten weeks later, Artedi drowned in the canals of Amsterdam, leaving behind an unfinished manuscript on the classification of fish.
One of the first scientists Linnaeus met in the Netherlands was Johan Frederik Gronovius, to whom Linnaeus showed one of the several manuscripts he had brought with him from Sweden. The manuscript described a new system for classifying plants. When Gronovius saw it, he was very impressed, and offered to help pay for the printing. With an additional monetary contribution by the Scottish doctor Isaac Lawson, the manuscript was published as Systema Naturae (1735).
Linnaeus became acquainted with one of the most respected physicians and botanists in the Netherlands, Herman Boerhaave, who tried to convince Linnaeus to make a career there. Boerhaave offered him a journey to South Africa and America, but Linnaeus declined, stating he would not stand the heat. Instead, Boerhaave convinced Linnaeus that he should visit the botanist Johannes Burman. After his visit, Burman, impressed with his guest's knowledge, decided Linnaeus should stay with him during the winter. During his stay, Linnaeus helped Burman with his Thesaurus Zeylanicus. Burman also helped Linnaeus with the books on which he was working: Fundamenta Botanica and Bibliotheca Botanica.
In August 1735, during Linnaeus's stay with Burman, he met George Clifford III, a director of the Dutch East India Company and the owner of a rich botanical garden at the estate of Hartekamp in Heemstede. Clifford was very impressed with Linnaeus's ability to classify plants, and invited him to become his physician and superintendent of his garden. Linnaeus had already agreed to stay with Burman over the winter, and could thus not accept immediately. However, Clifford offered to compensate Burman by offering him a copy of Sir Hans Sloane's Natural History of Jamaica, a rare book, if he let Linnaeus stay with him, and Burman accepted. On 24 September 1735, Linnaeus moved to Hartekamp to become personal physician to Clifford, and curator of Clifford's herbarium. He was paid 1,000 florins a year, with free board and lodging. Though the agreement was only for the winter of that year, Linnaeus practically stayed there until 1738. It was here that he wrote the book Hortus Cliffortianus, in the preface of which he described his experience as "the happiest time of my life". (A portion of Hartekamp was declared a public garden in April 1956 by the Heemstede local authority and named "Linnaeushof". It is claimed to have eventually become the biggest playground in Europe.)
In July 1736, Linnaeus travelled to England, at Clifford's expense. He went to London to visit Sir Hans Sloane, a collector of natural history, and to see his cabinet, as well as to visit the Chelsea Physic Garden and its keeper, Philip Miller. He taught Miller about his new system of subdividing plants, as described in Systema Naturae. Miller was in fact reluctant to use the new binomial nomenclature, at first preferring the classifications of Joseph Pitton de Tournefort and John Ray. Linnaeus nevertheless applauded Miller's Gardeners Dictionary; the conservative Scot actually retained in his dictionary a number of pre-Linnaean binomial signifiers discarded by Linnaeus but since retained by modern botanists. Miller only fully changed to the Linnaean system in the 1768 edition of The Gardeners Dictionary. He was ultimately impressed, and from then on arranged the garden according to Linnaeus's system.
Linnaeus also travelled to Oxford University to visit the botanist Johann Jacob Dillenius. He failed to persuade Dillenius to fully accept his new classification system in public, though the two men remained in correspondence for many years afterwards. Linnaeus dedicated his Critica Botanica to him, as "opus botanicum quo absolutius mundus non vidit". Linnaeus would later name a genus of tropical tree Dillenia in his honour. He then returned to Hartekamp, bringing with him many specimens of rare plants. The next year, 1737, he published Genera Plantarum, in which he described 935 genera of plants, and shortly thereafter he supplemented it with Corollarium Generum Plantarum, containing a further sixty (sexaginta) genera.
His work at Hartekamp led to another book, Hortus Cliffortianus, a catalogue of the botanical holdings in the herbarium and botanical garden of Hartekamp. He wrote it in nine months (completed in July 1737), but it was not published until 1738. It contains the first use of the name Nepenthes, which Linnaeus used to describe a genus of pitcher plants.
Linnaeus stayed with Clifford at Hartekamp until 18 October 1737 (new style), when he left the house to return to Sweden. Illness and the kindness of Dutch friends obliged him to stay some months longer in Holland. In May 1738, he set out for Sweden again. On the way home, he stayed in Paris for about a month, visiting botanists such as Antoine de Jussieu. After his return, Linnaeus never again left Sweden.
When Linnaeus returned to Sweden on 28 June 1738, he went to Falun, where he entered into an engagement to Sara Elisabeth Moræa. Three months later, he moved to Stockholm to find employment as a physician, and thus to make it possible to support a family. Once again, Linnaeus found a patron; he became acquainted with Count Carl Gustav Tessin, who helped him get work as a physician at the Admiralty. During this time in Stockholm, Linnaeus helped found the Royal Swedish Academy of Science; he became the first Praeses of the academy by drawing of lots.
Because his finances had improved and were now sufficient to support a family, he received permission to marry his fiancée, Sara Elisabeth Moræa. Their wedding was held 26 June 1739. Seventeen months later, Sara gave birth to their first son, Carl. Two years later, a daughter, Elisabeth Christina, was born, and the subsequent year Sara gave birth to Sara Magdalena, who died when 15 days old. Sara and Linnaeus would later have four other children: Lovisa, Sara Christina [sv], Johannes and Sophia.
In May 1741, Linnaeus was appointed Professor of Medicine at Uppsala University, at first with responsibility for medicine-related matters. Soon he exchanged duties with the other Professor of Medicine, Nils Rosén, and thus became responsible instead for the Botanical Garden (which he would thoroughly reconstruct and expand), botany and natural history. In October that same year, his wife and nine-month-old son followed him to live in Uppsala.
Ten days after he was appointed Professor, he undertook an expedition to the island provinces of Öland and Gotland with six students from the university to look for plants useful in medicine. First, they travelled to Öland and stayed there until 21 June, when they sailed to Visby in Gotland. Linnaeus and the students stayed on Gotland for about a month, and then returned to Uppsala. During this expedition, they found 100 previously unrecorded plants. The observations from the expedition were later published in Öländska och Gothländska Resa, written in Swedish. Like Flora Lapponica, it contained both zoological and botanical observations, as well as observations concerning the culture in Öland and Gotland.
During the summer of 1745, Linnaeus published two more books: Flora Suecica and Fauna Suecica. Flora Suecica was a strictly botanical book, while Fauna Suecica was zoological. Anders Celsius had created the temperature scale named after him in 1742. Celsius's scale was inverted compared to today's, with the boiling point of water at 0 °C and the freezing point at 100 °C. In 1745, Linnaeus inverted the scale to its present standard.
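As a minimal arithmetic sketch of that inversion (the function names are illustrative only), a reading on the original scale maps to the modern scale by subtraction from 100, and the same formula maps back:

    # Celsius's original scale put boiling at 0 and freezing at 100;
    # Linnaeus's inversion gives the modern orientation. The mapping is
    # its own inverse: t -> 100 - t.
    def original_to_modern(t_original: float) -> float:
        return 100.0 - t_original

    def modern_to_original(t_modern: float) -> float:
        return 100.0 - t_modern

    assert original_to_modern(0.0) == 100.0   # boiling point of water
    assert original_to_modern(100.0) == 0.0   # freezing point of water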
In the summer of 1746, Linnaeus was once again commissioned by the Government to carry out an expedition, this time to the Swedish province of Västergötland. He set out from Uppsala on 12 June and returned on 11 August. On the expedition his primary companion was Erik Gustaf Lidbeck, a student who had accompanied him on his previous journey. Linnaeus described his findings from the expedition in the book Wästgöta-Resa, published the next year. After he returned from the journey, the Government decided Linnaeus should take on another expedition to the southernmost province Scania. This journey was postponed, as Linnaeus felt too busy.
In 1747, Linnaeus was given the title archiater, or chief physician, by the Swedish king Adolf Frederick—a mark of great respect. The same year he was elected member of the Academy of Sciences in Berlin.
In the spring of 1749, Linnaeus could finally journey to Scania, again commissioned by the Government. With him he brought his student, Olof Söderberg. On the way to Scania, he made his last visit to his brothers and sisters in Stenbrohult since his father had died the previous year. The expedition was similar to the previous journeys in most aspects, but this time he was also ordered to find the best place to grow walnut and Swedish whitebeam trees; these trees were used by the military to make rifles. While there, they also visited the Ramlösa mineral spa, where he remarked on the quality of its ferruginous water. The journey was successful, and Linnaeus's observations were published the next year in Skånska Resa.
In 1750, Linnaeus became rector of Uppsala University, starting a period where natural sciences were esteemed. Perhaps the most important contribution he made during his time at Uppsala was to teach; many of his students travelled to various places in the world to collect botanical samples. Linnaeus called the best of these students his "apostles". His lectures were normally very popular and were often held in the Botanical Garden. He tried to teach the students to think for themselves and not trust anybody, not even him. Even more popular than the lectures were the botanical excursions made every Saturday during summer, where Linnaeus and his students explored the flora and fauna in the vicinity of Uppsala.
Linnaeus published Philosophia Botanica in 1751. The book contained a complete survey of the taxonomy system he had been using in his earlier works. It also contained information of how to keep a journal on travels and how to maintain a botanical garden.
During Linnaeus's time it was normal for upper-class women to have wet nurses for their babies. Linnaeus joined an ongoing campaign to end this practice in Sweden and promote breast-feeding by mothers. In 1752 Linnaeus published a thesis along with Frederick Lindberg, a physician student, based on their experiences. In the tradition of the period, this dissertation was essentially an idea of the presiding reviewer (praeses) expounded upon by the student. Linnaeus's dissertation was translated into French by J. E. Gilibert in 1770 as La Nourrice marâtre, ou Dissertation sur les suites funestes du nourrisage mercénaire. Linnaeus suggested that children might absorb the personality of their wet nurse through the milk. He admired the child care practices of the Lapps and pointed out how healthy their babies were compared to those of Europeans who employed wet nurses. He compared the behaviour of wild animals and pointed out how none of them denied their newborns their breastmilk. It is thought that his activism played a role in his choice of the term Mammalia for the class of organisms.
Linnaeus published Species Plantarum, the work which is now internationally accepted as the starting point of modern botanical nomenclature, in 1753. The first volume was issued on 24 May, the second volume followed on 16 August of the same year. The book contained 1,200 pages and was published in two volumes; it described over 7,300 species. The same year the king dubbed him knight of the Order of the Polar Star, the first civilian in Sweden to become a knight in this order. He was then seldom seen not wearing the order's insignia.
Linnaeus felt Uppsala was too noisy and unhealthy, so he bought two farms in 1758: Hammarby and Sävja. The next year, he bought a neighbouring farm, Edeby. He spent the summers with his family at Hammarby; initially it only had a small one-storey house, but in 1762 a new, larger main building was added. In Hammarby, Linnaeus made a garden where he could grow plants that could not be grown in the Botanical Garden in Uppsala. He began constructing a museum on a hill behind Hammarby in 1766, where he moved his library and collection of plants. A fire that destroyed about one third of Uppsala and had threatened his residence there necessitated the move.
Since the initial release of Systema Naturae in 1735, the book had been expanded and reprinted several times; the tenth edition was released in 1758. This edition established itself as the starting point for zoological nomenclature, the equivalent of Species Plantarum.
The Swedish King Adolf Frederick granted Linnaeus nobility in 1757, but he was not ennobled until 1761. With his ennoblement, he took the name Carl von Linné (Latinised as Carolus a Linné), 'Linné' being a shortened and gallicised version of 'Linnæus', and the German nobiliary particle 'von' signifying his ennoblement. The noble family's coat of arms prominently features a twinflower, one of Linnaeus's favourite plants; it was given the scientific name Linnaea borealis in his honour by Gronovius. The shield in the coat of arms is divided into thirds: red, black and green for the three kingdoms of nature (animal, mineral and vegetable) in Linnaean classification; in the centre is an egg "to denote Nature, which is continued and perpetuated in ovo." At the bottom is a phrase in Latin, borrowed from the Aeneid, which reads "Famam extendere factis": we extend our fame by our deeds. Linnaeus inscribed this personal motto in books that were given to him by friends.
After his ennoblement, Linnaeus continued teaching and writing. His reputation had spread over the world, and he corresponded with many different people. For example, Catherine II of Russia sent him seeds from her country. He also corresponded with Giovanni Antonio Scopoli, "the Linnaeus of the Austrian Empire", who was a doctor and a botanist in Idrija, Duchy of Carniola (nowadays Slovenia). Scopoli communicated all of his research, findings, and descriptions (for example of the olm and the dormouse, two little animals hitherto unknown to Linnaeus). Linnaeus greatly respected Scopoli and showed great interest in his work. He named a solanaceous genus, Scopolia, the source of scopolamine, after him, but because of the great distance between them, they never met.
Linnaeus was relieved of his duties in the Royal Swedish Academy of Science in 1763, but continued his work there as usual for more than ten years after. In 1769 he was elected to the American Philosophical Society for his work. He stepped down as rector at Uppsala University in December 1772, mostly due to his declining health.
Linnaeus's last years were troubled by illness. He had suffered from a disease called the Uppsala fever in 1764, but survived thanks to the care of Rosén. He developed sciatica in 1773, and the next year, he had a stroke which partially paralysed him. He had a second stroke in 1776, which cost him the use of his right side and left him bereft of his memory; while still able to admire his own writings, he could not recognise himself as their author.
In December 1777, he had another stroke which greatly weakened him, and eventually led to his death on 10 January 1778 in Hammarby. Despite his desire to be buried in Hammarby, he was buried in Uppsala Cathedral on 22 January.
His library and collections were left to his widow Sara and their children. Joseph Banks, an eminent botanist, wished to purchase the collection, but Linnaeus's son Carl refused the offer and instead moved the collection to Uppsala. In 1783 Carl died and Sara inherited the collection, having outlived both her husband and son. She tried to sell it to Banks, but he was no longer interested; instead an acquaintance of his agreed to buy the collection. The acquaintance was a 24-year-old medical student, James Edward Smith, who bought the whole collection: 14,000 plants, 3,198 insects, 1,564 shells, about 3,000 letters and 1,600 books. Smith founded the Linnean Society of London five years later.
The von Linné name ended with his son Carl, who never married. His other son, Johannes, had died aged 3. There are over two hundred descendants of Linnaeus through two of his daughters.
During Linnaeus's time as Professor and Rector of Uppsala University, he taught many devoted students, 17 of whom he called "apostles". They were the most promising, most committed students, and all of them made botanical expeditions to various places in the world, often with his help. The amount of this help varied; sometimes he used his influence as Rector to grant his apostles a scholarship or a place on an expedition. To most of the apostles he gave instructions on what to look for on their journeys. Abroad, the apostles collected and organised new plants, animals and minerals according to Linnaeus's system. Most of them also gave some of their collection to Linnaeus when their journey was finished. Thanks to these students, the Linnaean system of taxonomy spread through the world without Linnaeus ever having to travel outside Sweden after his return from Holland. The British botanist William T. Stearn notes that, without Linnaeus's new system, it would not have been possible for the apostles to collect and organise so many new specimens. Many of the apostles died during their expeditions.
Christopher Tärnström, the first apostle and a 43-year-old pastor with a wife and children, made his journey in 1746. He boarded a Swedish East India Company ship headed for China. Tärnström never reached his destination, dying of a tropical fever on Côn Sơn Island the same year. Tärnström's widow blamed Linnaeus for making her children fatherless, causing Linnaeus to prefer sending out younger, unmarried students after Tärnström. Six other apostles later died on their expeditions, including Pehr Forsskål and Pehr Löfling.
Two years after Tärnström's expedition, Finnish-born Pehr Kalm set out as the second apostle to North America. There he spent two-and-a-half years studying the flora and fauna of Pennsylvania, New York, New Jersey and Canada. Linnaeus was overjoyed when Kalm returned, bringing back with him many pressed flowers and seeds. At least 90 of the 700 North American species described in Species Plantarum had been brought back by Kalm.
Daniel Solander was living in Linnaeus's house during his time as a student in Uppsala. Linnaeus was very fond of him, promising Solander his eldest daughter's hand in marriage. On Linnaeus's recommendation, Solander travelled to England in 1760, where he met the English botanist Joseph Banks. With Banks, Solander joined James Cook on his expedition to Oceania on the Endeavour in 1768–71. Solander was not the only apostle to journey with James Cook; Anders Sparrman followed on the Resolution in 1772–75 bound for, among other places, Oceania and South America. Sparrman made many other expeditions, one of them to South Africa.
Perhaps the most famous and successful apostle was Carl Peter Thunberg, who embarked on a nine-year expedition in 1770. He stayed in South Africa for three years, then travelled to Japan. All foreigners in Japan were forced to stay on the island of Dejima outside Nagasaki, so it was thus hard for Thunberg to study the flora. He did, however, manage to persuade some of the translators to bring him different plants, and he also found plants in the gardens of Dejima. He returned to Sweden in 1779, one year after Linnaeus's death.
The first edition of Systema Naturae was printed in the Netherlands in 1735. It was a twelve-page work. By the time it reached its 10th edition in 1758, it classified 4,400 species of animals and 7,700 species of plants. People from all over the world sent their specimens to Linnaeus to be included. By the time he started work on the 12th edition, Linnaeus needed a new invention—the index card—to track classifications.
In Systema Naturae, the unwieldy names mostly used at the time, such as "Physalis annua ramosissima, ramis angulosis glabris, foliis dentato-serratis", were supplemented with concise and now familiar "binomials", composed of the generic name, followed by a specific epithet—in the case given, Physalis angulata. These binomials could serve as a label to refer to the species. Higher taxa were constructed and arranged in a simple and orderly manner. Although the system, now known as binomial nomenclature, was partially developed by the Bauhin brothers (see Gaspard Bauhin and Johann Bauhin) almost 200 years earlier, Linnaeus was the first to use it consistently throughout the work, including in monospecific genera, and may be said to have popularised it within the scientific community.
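A brief, hypothetical sketch of the idea (the class below is illustrative, not any established library): a binomial pairs the generic name with a specific epithet, replacing the long descriptive phrase names quoted above.

    # Illustrative only: model a Linnaean binomial as genus + specific epithet.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Binomial:
        genus: str
        epithet: str

        def __str__(self) -> str:
            return f"{self.genus} {self.epithet}"

    # The unwieldy pre-Linnaean phrase name quoted in the text ...
    phrase_name = ("Physalis annua ramosissima, ramis angulosis glabris, "
                   "foliis dentato-serratis")
    # ... and its concise binomial label.
    print(Binomial("Physalis", "angulata"))   # Physalis angulata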
After the decline in Linnaeus's health in the early 1770s, publication of editions of Systema Naturae went in two different directions. Another Swedish scientist, Johan Andreas Murray, issued the Regnum Vegetabile section separately in 1774 as the Systema Vegetabilium, rather confusingly labelled the 13th edition. Meanwhile, a 13th edition of the entire Systema appeared in parts between 1788 and 1793 under the editorship of Johann Friedrich Gmelin. It was through the Systema Vegetabilium that Linnaeus's work became widely known in England, following its translation from the Latin by the Lichfield Botanical Society as A System of Vegetables (1783–1785).
Published in 1740, this small octavo-sized pamphlet (whose title translates as 'Opinion of the learned world on the writings of Carl Linnaeus, Doctor') was presented to the State Library of New South Wales by the Linnean Society of NSW in 2018. It is considered among the rarest of all the writings of Linnaeus, and it was crucial to his career, securing him his appointment to a professorship of medicine at Uppsala University. From this position he laid the groundwork for his radical new theory of classifying and naming organisms, for which he is considered the founder of modern taxonomy.
Species Plantarum (or, more fully, Species Plantarum, exhibentes plantas rite cognitas, ad genera relatas, cum differentiis specificis, nominibus trivialibus, synonymis selectis, locis natalibus, secundum systema sexuale digestas) was first published in 1753, as a two-volume work. Its prime importance is perhaps that it is the primary starting point of plant nomenclature as it exists today.
Genera plantarum: eorumque characteres naturales secundum numerum, figuram, situm, et proportionem omnium fructificationis partium was first published in 1737, delineating plant genera. Around 10 editions were published, not all of them by Linnaeus himself; the most important is the fifth edition of 1754. In it Linnaeus divided the plant kingdom into 24 classes. One, Cryptogamia, included all the plants with concealed reproductive parts (algae, fungi, mosses, liverworts and ferns).
Philosophia Botanica (1751) was a summary of Linnaeus's thinking on plant classification and nomenclature, and an elaboration of the work he had previously published in Fundamenta Botanica (1736) and Critica Botanica (1737). Other publications forming part of his plan to reform the foundations of botany include his Classes Plantarum and Bibliotheca Botanica: all were printed in Holland (as were Genera Plantarum (1737) and Systema Naturae (1735)), the Philosophia being simultaneously released in Stockholm.
At the end of his lifetime, the Linnean collection in Uppsala was considered one of the finest collections of natural history objects in Sweden. In addition to his own collection, he had also built up a museum for the University of Uppsala, which was supplied with material donated by Carl Gyllenborg (in 1744–1745), crown prince Adolf Fredrik (in 1745), Erik Petreus (in 1746), Claes Grill (in 1746), Magnus Lagerström (in 1748 and 1750) and Jonas Alströmer (in 1749). The relation between the museum and the private collection was not formalised, and the steady flow of material from Linnean pupils was incorporated into the private collection rather than into the museum. Linnaeus felt his work reflected the harmony of nature, saying in 1754 "the earth is then nothing else but a museum of the all-wise creator's masterpieces, divided into three chambers". He had turned his own estate into a microcosm of that 'world museum'.
In April 1766 parts of the town were destroyed by a fire and the Linnean private collection was subsequently moved to a barn outside the town, and shortly afterwards to a single-room stone building close to his country house at Hammarby near Uppsala. This resulted in a physical separation between the two collections; the museum collection remained in the botanical garden of the university. Some material which needed special care (alcohol specimens) or ample storage space was moved from the private collection to the museum.
In Hammarby, the Linnean private collections suffered seriously from damp and the depredations of mice and insects. Carl von Linné's son (Carl Linnaeus) inherited the collections in 1778 and retained them until his own death in 1783. Shortly after Carl von Linné's death, his son confirmed that mice had caused "horrible damage" to the plants and that moths and mould had also caused considerable damage. He tried to rescue the collections from the neglect they had suffered during his father's later years, and also added further specimens. This last activity, however, reduced rather than augmented the scientific value of the original material.
In 1784 the young medical student James Edward Smith purchased the entire specimen collection, library, manuscripts, and correspondence of Carl Linnaeus from his widow and daughter and transferred the collections to London. Not all material in Linné's private collection was transported to England. Thirty-three fish specimens preserved in alcohol were not sent and were later lost.
In London Smith tended to neglect the zoological parts of the collection; he added some specimens and also gave some specimens away. Over the following centuries the Linnean collection in London suffered enormously at the hands of scientists who studied the collection, and in the process disturbed the original arrangement and labels, added specimens that did not belong to the original series and withdrew precious original type material.
Much material which had been intensively studied by Linné in his scientific career belonged to the collection of Queen Lovisa Ulrika (1720–1782) (in the Linnean publications referred to as "Museum Ludovicae Ulricae" or "M. L. U."). This collection was donated by her grandson King Gustav IV Adolf (1778–1837) to the museum in Uppsala in 1804. Another important collection in this respect was that of her husband King Adolf Fredrik (1710–1771) (in the Linnean sources known as "Museum Adolphi Friderici" or "Mus. Ad. Fr."), the wet parts (alcohol collection) of which were later donated to the Royal Swedish Academy of Sciences, and is today housed in the Swedish Museum of Natural History at Stockholm. The dry material was transferred to Uppsala.
The establishment of universally accepted conventions for the naming of organisms was Linnaeus's main contribution to taxonomy—his work marks the starting point of consistent use of binomial nomenclature. During the 18th-century expansion of natural history knowledge, Linnaeus also developed what became known as the Linnaean taxonomy, the system of scientific classification now widely used in the biological sciences. An earlier zoologist, Rumphius (1627–1702), had more or less approximated the Linnaean system, and his material contributed to the later development of the binomial scientific classification by Linnaeus.
The Linnaean system classified nature within a nested hierarchy, starting with three kingdoms. Kingdoms were divided into classes and they, in turn, into orders, and thence into genera (singular: genus), which were divided into species (singular: species). Below the rank of species he sometimes recognised taxa of a lower (unnamed) rank; these have since acquired standardised names such as variety in botany and subspecies in zoology. Modern taxonomy includes a rank of family between order and genus and a rank of phylum between kingdom and class that were not present in Linnaeus's original system.
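A rough sketch of that nesting follows, assuming for the worked example the placement of humans described later in this article (class Mammalia, order Primates, Homo sapiens) together with the animal kingdom, Animalia:

    # Illustrative only: the original Linnaean ranks, from most to least
    # inclusive, with a worked classification of Homo sapiens.
    linnaean_ranks = ["kingdom", "class", "order", "genus", "species"]

    homo_sapiens = {
        "kingdom": "Animalia",
        "class": "Mammalia",
        "order": "Primates",
        "genus": "Homo",
        "species": "sapiens",
    }

    # Walk the hierarchy from the most inclusive rank down to the species.
    for rank in linnaean_ranks:
        print(f"{rank}: {homo_sapiens[rank]}")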
Linnaeus's groupings were based upon shared physical characteristics, and not based upon differences. Of his higher groupings, only those for animals are still in use, and the groupings themselves have been significantly changed since their conception, as have the principles behind them. Nevertheless, Linnaeus is credited with establishing the idea of a hierarchical structure of classification which is based upon observable characteristics and intended to reflect natural relationships. While the underlying details concerning what are considered to be scientifically valid "observable characteristics" have changed with expanding knowledge (for example, DNA sequencing, unavailable in Linnaeus's time, has proven to be a tool of considerable utility for classifying living organisms and establishing their evolutionary relationships), the fundamental principle remains sound.
Linnaeus's system of taxonomy was especially noted as the first to include humans (Homo) taxonomically grouped with apes (Simia), under the header of Anthropomorpha. German biologist Ernst Haeckel speaking in 1907 noted this as the "most important sign of Linnaeus's genius".
Linnaeus classified humans among the primates beginning with the first edition of Systema Naturae. During his time at Hartekamp, he had the opportunity to examine several monkeys and noted similarities between them and man. He pointed out both species basically have the same anatomy; except for speech, he found no other differences. Thus he placed man and monkeys under the same category, Anthropomorpha, meaning "manlike." This classification received criticism from other biologists such as Johan Gottschalk Wallerius, Jacob Theodor Klein and Johann Georg Gmelin on the ground that it is illogical to describe man as human-like. In a letter to Gmelin from 1747, Linnaeus replied:
It does not please [you] that I've placed Man among the Anthropomorpha, perhaps because of the term 'with human form', but man learns to know himself. Let's not quibble over words. It will be the same to me whatever name we apply. But I seek from you and from the whole world a generic difference between man and simian that [follows] from the principles of Natural History. I absolutely know of none. If only someone might tell me a single one! If I would have called man a simian or vice versa, I would have brought together all the theologians against me. Perhaps I ought to have by virtue of the law of the discipline.
The theological concerns were twofold: first, putting man at the same level as monkeys or apes would lower the spiritually higher position that man was assumed to have in the great chain of being, and second, because the Bible says man was created in the image of God (theomorphism), if monkeys/apes and humans were not distinctly and separately designed, that would mean monkeys and apes were created in the image of God as well. This was something many could not accept. The conflict between world views that was caused by asserting man was a type of animal would simmer for a century until the much greater, and still ongoing, creation–evolution controversy began in earnest with the publication of On the Origin of Species by Charles Darwin in 1859.
After such criticism, Linnaeus felt he needed to explain himself more clearly. The 10th edition of Systema Naturae introduced new terms, including Mammalia and Primates, the latter of which would replace Anthropomorpha as well as giving humans the full binomial Homo sapiens. The new classification received less criticism, but many natural historians still believed he had demoted humans from their former place of ruling over nature and not being a part of it. Linnaeus believed that man biologically belongs to the animal kingdom and had to be included in it. In his book Dieta Naturalis, he said, "One should not vent one's wrath on animals, Theology decree that man has a soul and that the animals are mere 'automata mechanica,' but I believe they would be better advised that animals have a soul and that the difference is of nobility."
Linnaeus added a second species to the genus Homo in Systema Naturae based on a figure and description by Jacobus Bontius from a 1658 publication: Homo troglodytes ("caveman") and published a third in 1771: Homo lar. Swedish historian Gunnar Broberg states that the new human species Linnaeus described were actually simians or native people clad in skins to frighten colonial settlers, whose appearance had been exaggerated in accounts to Linnaeus. For Homo troglodytes Linnaeus asked the Swedish East India Company to search for one, but they did not find any signs of its existence. Homo lar has since been reclassified as Hylobates lar, the lar gibbon.
In the first edition of Systema Naturae, Linnaeus subdivided the human species into four varieties: "Europæus albesc[ens]" (whitish European), "Americanus rubesc[ens]" (reddish American), "Asiaticus fuscus" (tawny Asian) and "Africanus nigr[iculus]" (blackish African). In the tenth edition of Systema Naturae he further detailed phenotypical characteristics for each variety, based on the concept of the four temperaments from classical antiquity, and changed the description of Asians' skin tone to "luridus" (yellow). While Linnaeus believed that these varieties resulted from environmental differences between the four known continents, the Linnean Society acknowledges that his categorization's focus on skin color and later inclusion of cultural and behavioral traits cemented colonial stereotypes and provided the foundations for scientific racism. Additionally, Linnaeus created a wastebasket taxon "monstrosus" for "wild and monstrous humans, unknown groups, and more or less abnormal people".
In 1959, W. T. Stearn designated Linnaeus to be the lectotype of H. sapiens.
Linnaeus's applied science was inspired not only by the instrumental utilitarianism general to the early Enlightenment, but also by his adherence to the older economic doctrine of Cameralism. Additionally, Linnaeus was a state interventionist. He supported tariffs, levies, export bounties, quotas, embargoes, navigation acts, subsidised investment capital, ceilings on wages, cash grants, state-licensed producer monopolies, and cartels.
Anniversaries of Linnaeus's birth, especially in centennial years, have been marked by major celebrations. Linnaeus has appeared on numerous Swedish postage stamps and banknotes. There are numerous statues of Linnaeus in countries around the world. The Linnean Society of London has awarded the Linnean Medal for excellence in botany or zoology since 1888. Following approval by the Riksdag of Sweden, Växjö University and Kalmar College merged on 1 January 2010 to become Linnaeus University. Other things named after Linnaeus include the twinflower genus Linnaea, Linnaeosicyos (a monotypic genus in the family Cucurbitaceae), the crater Linné on the Earth's moon, a street in Cambridge, Massachusetts, and the cobalt sulfide mineral Linnaeite.
Andrew Dickson White wrote in A History of the Warfare of Science with Theology in Christendom (1896):
Linnaeus ... was the most eminent naturalist of his time, a wide observer, a close thinker; but the atmosphere in which he lived and moved and had his being was saturated with biblical theology, and this permeated all his thinking. ... Toward the end of his life he timidly advanced the hypothesis that all the species of one genus constituted at the creation one species; and from the last edition of his Systema Naturæ he quietly left out the strongly orthodox statement of the fixity of each species, which he had insisted upon in his earlier works. ... warnings came speedily both from the Catholic and Protestant sides.
The PageRank algorithm, applied to 24 multilingual Wikipedia editions in 2014 in a study published in PLOS ONE in 2015, ranked Carl Linnaeus as the top historical figure, above Jesus, Aristotle, Napoleon, and Adolf Hitler (in that order).
In the 21st century, Linnæus's taxonomy of human "races" has been problematised and discussed. Some critics claim that Linnæus was one of the forebears of the modern pseudoscientific notion of scientific racism, while others hold the view that while his classification was stereotyped, it did not imply that certain human "races" were superior to others.
"title": "Years in the Dutch Republic (1735–38)"
},
{
"paragraph_id": 25,
"text": "In April 1735, at the suggestion of Sohlberg's father, Linnaeus and Sohlberg set out for the Dutch Republic, where Linnaeus intended to study medicine at the University of Harderwijk while tutoring Sohlberg in exchange for an annual salary. At the time, it was common for Swedes to pursue doctoral degrees in the Netherlands, then a highly revered place to study natural history.",
"title": "Years in the Dutch Republic (1735–38)"
},
{
"paragraph_id": 26,
"text": "On the way, the pair stopped in Hamburg, where they met the mayor, who proudly showed them a supposed wonder of nature in his possession: the taxidermied remains of a seven-headed hydra. Linnaeus quickly discovered the specimen was a fake, cobbled together from the jaws and paws of weasels and the skins of snakes. The provenance of the hydra suggested to Linnaeus that it had been manufactured by monks to represent the Beast of Revelation. Even at the risk of incurring the mayor's wrath, Linnaeus made his observations public, dashing the mayor's dreams of selling the hydra for an enormous sum. Linnaeus and Sohlberg were forced to flee from Hamburg.",
"title": "Years in the Dutch Republic (1735–38)"
},
{
"paragraph_id": 27,
"text": "Linnaeus began working towards his degree as soon as he reached Harderwijk, a university known for awarding degrees in as little as a week. He submitted a dissertation, written back in Sweden, entitled Dissertatio medica inauguralis in qua exhibetur hypothesis nova de febrium intermittentium causa, in which he laid out his hypothesis that malaria arose only in areas with clay-rich soils. Although he failed to identify the true source of disease transmission, (i.e., the Anopheles mosquito), he did correctly predict that Artemisia annua (wormwood) would become a source of antimalarial medications.",
"title": "Years in the Dutch Republic (1735–38)"
},
{
"paragraph_id": 28,
"text": "Within two weeks he had completed his oral and practical examinations and was awarded a doctoral degree.",
"title": "Years in the Dutch Republic (1735–38)"
},
{
"paragraph_id": 29,
"text": "That summer Linnaeus reunited with Peter Artedi, a friend from Uppsala with whom he had once made a pact that should either of the two predecease the other, the survivor would finish the decedent's work. Ten weeks later, Artedi drowned in the canals of Amsterdam, leaving behind an unfinished manuscript on the classification of fish.",
"title": "Years in the Dutch Republic (1735–38)"
},
{
"paragraph_id": 30,
"text": "One of the first scientists Linnaeus met in the Netherlands was Johan Frederik Gronovius, to whom Linnaeus showed one of the several manuscripts he had brought with him from Sweden. The manuscript described a new system for classifying plants. When Gronovius saw it, he was very impressed, and offered to help pay for the printing. With an additional monetary contribution by the Scottish doctor Isaac Lawson, the manuscript was published as Systema Naturae (1735).",
"title": "Years in the Dutch Republic (1735–38)"
},
{
"paragraph_id": 31,
"text": "Linnaeus became acquainted with one of the most respected physicians and botanists in the Netherlands, Herman Boerhaave, who tried to convince Linnaeus to make a career there. Boerhaave offered him a journey to South Africa and America, but Linnaeus declined, stating he would not stand the heat. Instead, Boerhaave convinced Linnaeus that he should visit the botanist Johannes Burman. After his visit, Burman, impressed with his guest's knowledge, decided Linnaeus should stay with him during the winter. During his stay, Linnaeus helped Burman with his Thesaurus Zeylanicus. Burman also helped Linnaeus with the books on which he was working: Fundamenta Botanica and Bibliotheca Botanica.",
"title": "Years in the Dutch Republic (1735–38)"
},
{
"paragraph_id": 32,
"text": "In August 1735, during Linnaeus's stay with Burman, he met George Clifford III, a director of the Dutch East India Company and the owner of a rich botanical garden at the estate of Hartekamp in Heemstede. Clifford was very impressed with Linnaeus's ability to classify plants, and invited him to become his physician and superintendent of his garden. Linnaeus had already agreed to stay with Burman over the winter, and could thus not accept immediately. However, Clifford offered to compensate Burman by offering him a copy of Sir Hans Sloane's Natural History of Jamaica, a rare book, if he let Linnaeus stay with him, and Burman accepted. On 24 September 1735, Linnaeus moved to Hartekamp to become personal physician to Clifford, and curator of Clifford's herbarium. He was paid 1,000 florins a year, with free board and lodging. Though the agreement was only for a winter of that year, Linnaeus practically stayed there until 1738. It was here that he wrote a book Hortus Cliffortianus, in the preface of which he described his experience as \"the happiest time of my life\". (A portion of Hartekamp was declared as public garden in April 1956 by the Heemstede local authority, and was named \"Linnaeushof\". It eventually became, as it is claimed, the biggest playground in Europe.)",
"title": "Years in the Dutch Republic (1735–38)"
},
{
"paragraph_id": 33,
"text": "In July 1736, Linnaeus travelled to England, at Clifford's expense. He went to London to visit Sir Hans Sloane, a collector of natural history, and to see his cabinet, as well as to visit the Chelsea Physic Garden and its keeper, Philip Miller. He taught Miller about his new system of subdividing plants, as described in Systema Naturae. Miller was in fact reluctant to use the new binomial nomenclature, preferring the classifications of Joseph Pitton de Tournefort and John Ray at first. Linnaeus, nevertheless, applauded Miller's Gardeners Dictionary, the conservative Scot actually retained in his dictionary a number of pre-Linnaean binomial signifiers discarded by Linnaeus but which have been retained by modern botanists. He only fully changed to the Linnaean system in the edition of The Gardeners Dictionary of 1768. Miller ultimately was impressed, and from then on started to arrange the garden according to Linnaeus's system.",
"title": "Years in the Dutch Republic (1735–38)"
},
{
"paragraph_id": 34,
"text": "Linnaeus also travelled to Oxford University to visit the botanist Johann Jacob Dillenius. He failed to make Dillenius publicly fully accept his new classification system, though the two men remained in correspondence for many years afterwards. Linnaeus dedicated his Critica Botanica to him, as \"opus botanicum quo absolutius mundus non-vidit\". Linnaeus would later name a genus of tropical tree Dillenia in his honour. He then returned to Hartekamp, bringing with him many specimens of rare plants. The next year, 1737, he published Genera Plantarum, in which he described 935 genera of plants, and shortly thereafter he supplemented it with Corollarium Generum Plantarum, with another sixty (sexaginta) genera.",
"title": "Years in the Dutch Republic (1735–38)"
},
{
"paragraph_id": 35,
"text": "His work at Hartekamp led to another book, Hortus Cliffortianus, a catalogue of the botanical holdings in the herbarium and botanical garden of Hartekamp. He wrote it in nine months (completed in July 1737), but it was not published until 1738. It contains the first use of the name Nepenthes, which Linnaeus used to describe a genus of pitcher plants.",
"title": "Years in the Dutch Republic (1735–38)"
},
{
"paragraph_id": 36,
"text": "Linnaeus stayed with Clifford at Hartekamp until 18 October 1737 (new style), when he left the house to return to Sweden. Illness and the kindness of Dutch friends obliged him to stay some months longer in Holland. In May 1738, he set out for Sweden again. On the way home, he stayed in Paris for about a month, visiting botanists such as Antoine de Jussieu. After his return, Linnaeus never again left Sweden.",
"title": "Years in the Dutch Republic (1735–38)"
},
{
"paragraph_id": 37,
"text": "When Linnaeus returned to Sweden on 28 June 1738, he went to Falun, where he entered into an engagement to Sara Elisabeth Moræa. Three months later, he moved to Stockholm to find employment as a physician, and thus to make it possible to support a family. Once again, Linnaeus found a patron; he became acquainted with Count Carl Gustav Tessin, who helped him get work as a physician at the Admiralty. During this time in Stockholm, Linnaeus helped found the Royal Swedish Academy of Science; he became the first Praeses of the academy by drawing of lots.",
"title": "Return to Sweden"
},
{
"paragraph_id": 38,
"text": "Because his finances had improved and were now sufficient to support a family, he received permission to marry his fiancée, Sara Elisabeth Moræa. Their wedding was held 26 June 1739. Seventeen months later, Sara gave birth to their first son, Carl. Two years later, a daughter, Elisabeth Christina, was born, and the subsequent year Sara gave birth to Sara Magdalena, who died when 15 days old. Sara and Linnaeus would later have four other children: Lovisa, Sara Christina [sv], Johannes and Sophia.",
"title": "Return to Sweden"
},
{
"paragraph_id": 39,
"text": "In May 1741, Linnaeus was appointed Professor of Medicine at Uppsala University, first with responsibility for medicine-related matters. Soon, he changed place with the other Professor of Medicine, Nils Rosén, and thus was responsible for the Botanical Garden (which he would thoroughly reconstruct and expand), botany and natural history, instead. In October that same year, his wife and nine-month-old son followed him to live in Uppsala.",
"title": "Return to Sweden"
},
{
"paragraph_id": 40,
"text": "Ten days after he was appointed Professor, he undertook an expedition to the island provinces of Öland and Gotland with six students from the university to look for plants useful in medicine. First, they travelled to Öland and stayed there until 21 June, when they sailed to Visby in Gotland. Linnaeus and the students stayed on Gotland for about a month, and then returned to Uppsala. During this expedition, they found 100 previously unrecorded plants. The observations from the expedition were later published in Öländska och Gothländska Resa, written in Swedish. Like Flora Lapponica, it contained both zoological and botanical observations, as well as observations concerning the culture in Öland and Gotland.",
"title": "Return to Sweden"
},
{
"paragraph_id": 41,
"text": "During the summer of 1745, Linnaeus published two more books: Flora Suecica and Fauna Suecica. Flora Suecica was a strictly botanical book, while Fauna Suecica was zoological. Anders Celsius had created the temperature scale named after him in 1742. Celsius's scale was inverted compared to today, the boiling point at 0 °C and freezing point at 100 °C. In 1745, Linnaeus inverted the scale to its present standard.",
"title": "Return to Sweden"
},
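The inversion of the temperature scale mentioned in the paragraph above amounts to a single subtraction. The following is a minimal illustrative sketch (not from the source; the function name and example values are assumptions) showing how a reading on Celsius's original scale, with boiling at 0 and freezing at 100, maps onto the modern scale that resulted from the inversion.

```python
# Illustrative sketch only: Celsius's original 1742 scale ran from 0 at the
# boiling point of water to 100 at the freezing point; inverting it gives
# the modern scale, so a reading r becomes 100 - r.
def original_to_modern(reading: float) -> float:
    """Convert a reading on Celsius's original (inverted) scale to the modern scale."""
    return 100.0 - reading

print(original_to_modern(0.0))    # boiling water: 100.0 on the modern scale
print(original_to_modern(100.0))  # freezing water: 0.0 on the modern scale
print(original_to_modern(85.0))   # 15.0 on the modern scale
```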
{
"paragraph_id": 42,
"text": "In the summer of 1746, Linnaeus was once again commissioned by the Government to carry out an expedition, this time to the Swedish province of Västergötland. He set out from Uppsala on 12 June and returned on 11 August. On the expedition his primary companion was Erik Gustaf Lidbeck, a student who had accompanied him on his previous journey. Linnaeus described his findings from the expedition in the book Wästgöta-Resa, published the next year. After he returned from the journey, the Government decided Linnaeus should take on another expedition to the southernmost province Scania. This journey was postponed, as Linnaeus felt too busy.",
"title": "Return to Sweden"
},
{
"paragraph_id": 43,
"text": "In 1747, Linnaeus was given the title archiater, or chief physician, by the Swedish king Adolf Frederick—a mark of great respect. The same year he was elected member of the Academy of Sciences in Berlin.",
"title": "Return to Sweden"
},
{
"paragraph_id": 44,
"text": "In the spring of 1749, Linnaeus could finally journey to Scania, again commissioned by the Government. With him he brought his student, Olof Söderberg. On the way to Scania, he made his last visit to his brothers and sisters in Stenbrohult since his father had died the previous year. The expedition was similar to the previous journeys in most aspects, but this time he was also ordered to find the best place to grow walnut and Swedish whitebeam trees; these trees were used by the military to make rifles. While there, they also visited the Ramlösa mineral spa, where he remarked on the quality of its ferruginous water. The journey was successful, and Linnaeus's observations were published the next year in Skånska Resa.",
"title": "Return to Sweden"
},
{
"paragraph_id": 45,
"text": "In 1750, Linnaeus became rector of Uppsala University, starting a period where natural sciences were esteemed. Perhaps the most important contribution he made during his time at Uppsala was to teach; many of his students travelled to various places in the world to collect botanical samples. Linnaeus called the best of these students his \"apostles\". His lectures were normally very popular and were often held in the Botanical Garden. He tried to teach the students to think for themselves and not trust anybody, not even him. Even more popular than the lectures were the botanical excursions made every Saturday during summer, where Linnaeus and his students explored the flora and fauna in the vicinity of Uppsala.",
"title": "Return to Sweden"
},
{
"paragraph_id": 46,
"text": "Linnaeus published Philosophia Botanica in 1751. The book contained a complete survey of the taxonomy system he had been using in his earlier works. It also contained information of how to keep a journal on travels and how to maintain a botanical garden.",
"title": "Return to Sweden"
},
{
"paragraph_id": 47,
"text": "During Linnaeus's time it was normal for upper class women to have wet nurses for their babies. Linnaeus joined an ongoing campaign to end this practice in Sweden and promote breast-feeding by mothers. In 1752 Linnaeus published a thesis along with Frederick Lindberg, a physician student, based on their experiences. In the tradition of the period, this dissertation was essentially an idea of the presiding reviewer (prases) expounded upon by the student. Linnaeus's dissertation was translated into French by J. E. Gilibert in 1770 as La Nourrice marâtre, ou Dissertation sur les suites funestes du nourrisage mercénaire. Linnaeus suggested that children might absorb the personality of their wet nurse through the milk. He admired the child care practices of the Lapps and pointed out how healthy their babies were compared to those of Europeans who employed wet nurses. He compared the behaviour of wild animals and pointed out how none of them denied their newborns their breastmilk. It is thought that his activism played a role in his choice of the term Mammalia for the class of organisms.",
"title": "Return to Sweden"
},
{
"paragraph_id": 48,
"text": "Linnaeus published Species Plantarum, the work which is now internationally accepted as the starting point of modern botanical nomenclature, in 1753. The first volume was issued on 24 May, the second volume followed on 16 August of the same year. The book contained 1,200 pages and was published in two volumes; it described over 7,300 species. The same year the king dubbed him knight of the Order of the Polar Star, the first civilian in Sweden to become a knight in this order. He was then seldom seen not wearing the order's insignia.",
"title": "Return to Sweden"
},
{
"paragraph_id": 49,
"text": "Linnaeus felt Uppsala was too noisy and unhealthy, so he bought two farms in 1758: Hammarby and Sävja. The next year, he bought a neighbouring farm, Edeby. He spent the summers with his family at Hammarby; initially it only had a small one-storey house, but in 1762 a new, larger main building was added. In Hammarby, Linnaeus made a garden where he could grow plants that could not be grown in the Botanical Garden in Uppsala. He began constructing a museum on a hill behind Hammarby in 1766, where he moved his library and collection of plants. A fire that destroyed about one third of Uppsala and had threatened his residence there necessitated the move.",
"title": "Return to Sweden"
},
{
"paragraph_id": 50,
"text": "Since the initial release of Systema Naturae in 1735, the book had been expanded and reprinted several times; the tenth edition was released in 1758. This edition established itself as the starting point for zoological nomenclature, the equivalent of Species Plantarum.",
"title": "Return to Sweden"
},
{
"paragraph_id": 51,
"text": "The Swedish King Adolf Frederick granted Linnaeus nobility in 1757, but he was not ennobled until 1761. With his ennoblement, he took the name Carl von Linné (Latinised as Carolus a Linné), 'Linné' being a shortened and gallicised version of 'Linnæus', and the German nobiliary particle 'von' signifying his ennoblement. The noble family's coat of arms prominently features a twinflower, one of Linnaeus's favourite plants; it was given the scientific name Linnaea borealis in his honour by Gronovius. The shield in the coat of arms is divided into thirds: red, black and green for the three kingdoms of nature (animal, mineral and vegetable) in Linnaean classification; in the centre is an egg \"to denote Nature, which is continued and perpetuated in ovo.\" At the bottom is a phrase in Latin, borrowed from the Aeneid, which reads \"Famam extendere factis\": we extend our fame by our deeds. Linnaeus inscribed this personal motto in books that were given to him by friends.",
"title": "Return to Sweden"
},
{
"paragraph_id": 52,
"text": "After his ennoblement, Linnaeus continued teaching and writing. His reputation had spread over the world, and he corresponded with many different people. For example, Catherine II of Russia sent him seeds from her country. He also corresponded with Giovanni Antonio Scopoli, \"the Linnaeus of the Austrian Empire\", who was a doctor and a botanist in Idrija, Duchy of Carniola (nowadays Slovenia). Scopoli communicated all of his research, findings, and descriptions (for example of the olm and the dormouse, two little animals hitherto unknown to Linnaeus). Linnaeus greatly respected Scopoli and showed great interest in his work. He named a solanaceous genus, Scopolia, the source of scopolamine, after him, but because of the great distance between them, they never met.",
"title": "Return to Sweden"
},
{
"paragraph_id": 53,
"text": "Linnaeus was relieved of his duties in the Royal Swedish Academy of Science in 1763, but continued his work there as usual for more than ten years after. In 1769 he was elected to the American Philosophical Society for his work. He stepped down as rector at Uppsala University in December 1772, mostly due to his declining health.",
"title": "Final years"
},
{
"paragraph_id": 54,
"text": "Linnaeus's last years were troubled by illness. He had had a disease called the Uppsala fever in 1764, but survived due to the care of Rosén. He developed sciatica in 1773, and the next year, he had a stroke which partially paralysed him. He had a second stroke in 1776, losing the use of his right side and leaving him bereft of his memory; while still able to admire his own writings, he could not recognise himself as their author.",
"title": "Final years"
},
{
"paragraph_id": 55,
"text": "In December 1777, he had another stroke which greatly weakened him, and eventually led to his death on 10 January 1778 in Hammarby. Despite his desire to be buried in Hammarby, he was buried in Uppsala Cathedral on 22 January.",
"title": "Final years"
},
{
"paragraph_id": 56,
"text": "His library and collections were left to his widow Sara and their children. Joseph Banks, an eminent botanist, wished to purchase the collection, but his son Carl refused the offer and instead moved the collection to Uppsala. In 1783 Carl died and Sara inherited the collection, having outlived both her husband and son. She tried to sell it to Banks, but he was no longer interested; instead an acquaintance of his agreed to buy the collection. The acquaintance was a 24-year-old medical student, James Edward Smith, who bought the whole collection: 14,000 plants, 3,198 insects, 1,564 shells, about 3,000 letters and 1,600 books. Smith founded the Linnean Society of London five years later.",
"title": "Final years"
},
{
"paragraph_id": 57,
"text": "The von Linné name ended with his son Carl, who never married. His other son, Johannes, had died aged 3. There are over two hundred descendants of Linnaeus through two of his daughters.",
"title": "Final years"
},
{
"paragraph_id": 58,
"text": "During Linnaeus's time as Professor and Rector of Uppsala University, he taught many devoted students, 17 of whom he called \"apostles\". They were the most promising, most committed students, and all of them made botanical expeditions to various places in the world, often with his help. The amount of this help varied; sometimes he used his influence as Rector to grant his apostles a scholarship or a place on an expedition. To most of the apostles he gave instructions of what to look for on their journeys. Abroad, the apostles collected and organised new plants, animals and minerals according to Linnaeus's system. Most of them also gave some of their collection to Linnaeus when their journey was finished. Thanks to these students, the Linnaean system of taxonomy spread through the world without Linnaeus ever having to travel outside Sweden after his return from Holland. The British botanist William T. Stearn notes, without Linnaeus's new system, it would not have been possible for the apostles to collect and organise so many new specimens. Many of the apostles died during their expeditions.",
"title": "Apostles"
},
{
"paragraph_id": 59,
"text": "Christopher Tärnström, the first apostle and a 43-year-old pastor with a wife and children, made his journey in 1746. He boarded a Swedish East India Company ship headed for China. Tärnström never reached his destination, dying of a tropical fever on Côn Sơn Island the same year. Tärnström's widow blamed Linnaeus for making her children fatherless, causing Linnaeus to prefer sending out younger, unmarried students after Tärnström. Six other apostles later died on their expeditions, including Pehr Forsskål and Pehr Löfling.",
"title": "Apostles"
},
{
"paragraph_id": 60,
"text": "Two years after Tärnström's expedition, Finnish-born Pehr Kalm set out as the second apostle to North America. There he spent two-and-a-half years studying the flora and fauna of Pennsylvania, New York, New Jersey and Canada. Linnaeus was overjoyed when Kalm returned, bringing back with him many pressed flowers and seeds. At least 90 of the 700 North American species described in Species Plantarum had been brought back by Kalm.",
"title": "Apostles"
},
{
"paragraph_id": 61,
"text": "Daniel Solander was living in Linnaeus's house during his time as a student in Uppsala. Linnaeus was very fond of him, promising Solander his eldest daughter's hand in marriage. On Linnaeus's recommendation, Solander travelled to England in 1760, where he met the English botanist Joseph Banks. With Banks, Solander joined James Cook on his expedition to Oceania on the Endeavour in 1768–71. Solander was not the only apostle to journey with James Cook; Anders Sparrman followed on the Resolution in 1772–75 bound for, among other places, Oceania and South America. Sparrman made many other expeditions, one of them to South Africa.",
"title": "Apostles"
},
{
"paragraph_id": 62,
"text": "Perhaps the most famous and successful apostle was Carl Peter Thunberg, who embarked on a nine-year expedition in 1770. He stayed in South Africa for three years, then travelled to Japan. All foreigners in Japan were forced to stay on the island of Dejima outside Nagasaki, so it was thus hard for Thunberg to study the flora. He did, however, manage to persuade some of the translators to bring him different plants, and he also found plants in the gardens of Dejima. He returned to Sweden in 1779, one year after Linnaeus's death.",
"title": "Apostles"
},
{
"paragraph_id": 63,
"text": "The first edition of Systema Naturae was printed in the Netherlands in 1735. It was a twelve-page work. By the time it reached its 10th edition in 1758, it classified 4,400 species of animals and 7,700 species of plants. People from all over the world sent their specimens to Linnaeus to be included. By the time he started work on the 12th edition, Linnaeus needed a new invention—the index card—to track classifications.",
"title": "Major publications"
},
{
"paragraph_id": 64,
"text": "In Systema Naturae, the unwieldy names mostly used at the time, such as \"Physalis annua ramosissima, ramis angulosis glabris, foliis dentato-serratis\", were supplemented with concise and now familiar \"binomials\", composed of the generic name, followed by a specific epithet—in the case given, Physalis angulata. These binomials could serve as a label to refer to the species. Higher taxa were constructed and arranged in a simple and orderly manner. Although the system, now known as binomial nomenclature, was partially developed by the Bauhin brothers (see Gaspard Bauhin and Johann Bauhin) almost 200 years earlier, Linnaeus was the first to use it consistently throughout the work, including in monospecific genera, and may be said to have popularised it within the scientific community.",
"title": "Major publications"
},
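As a worked illustration of the naming scheme described in the paragraph above, the sketch below (an illustrative assumption, not from the source; the class name and helper are invented for the example) pairs the quoted pre-Linnaean phrase name with its concise binomial, which is simply a generic name followed by a specific epithet.

```python
# Illustrative sketch: a binomial is just a generic name plus a specific epithet.
from dataclasses import dataclass

@dataclass(frozen=True)
class Binomial:
    genus: str    # generic name, e.g. "Physalis"
    epithet: str  # specific epithet, e.g. "angulata"

    def label(self) -> str:
        return f"{self.genus} {self.epithet}"

phrase_name = ("Physalis annua ramosissima, ramis angulosis glabris, "
               "foliis dentato-serratis")       # unwieldy pre-Linnaean phrase name
binomial = Binomial("Physalis", "angulata")     # the concise Linnaean label
print(f"{phrase_name} -> {binomial.label()}")
```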
{
"paragraph_id": 65,
"text": "After the decline in Linnaeus's health in the early 1770s, publication of editions of Systema Naturae went in two different directions. Another Swedish scientist, Johan Andreas Murray issued the Regnum Vegetabile section separately in 1774 as the Systema Vegetabilium, rather confusingly labelled the 13th edition. Meanwhile, a 13th edition of the entire Systema appeared in parts between 1788 and 1793 under the editorship of Johann Friedrich Gmelin. It was through the Systema Vegetabilium that Linnaeus's work became widely known in England, following its translation from the Latin by the Lichfield Botanical Society as A System of Vegetables (1783–1785).",
"title": "Major publications"
},
{
"paragraph_id": 66,
"text": "('Opinion of the learned world on the writings of Carl Linnaeus, Doctor') Published in 1740, this small octavo-sized pamphlet was presented to the State Library of New South Wales by the Linnean Society of NSW in 2018. This is considered among the rarest of all the writings of Linnaeus, and crucial to his career, securing him his appointment to a professorship of medicine at Uppsala University. From this position he laid the groundwork for his radical new theory of classifying and naming organisms for which he was considered the founder of modern taxonomy.",
"title": "Major publications"
},
{
"paragraph_id": 67,
"text": "Species Plantarum (or, more fully, Species Plantarum, exhibentes plantas rite cognitas, ad genera relatas, cum differentiis specificis, nominibus trivialibus, synonymis selectis, locis natalibus, secundum systema sexuale digestas) was first published in 1753, as a two-volume work. Its prime importance is perhaps that it is the primary starting point of plant nomenclature as it exists today.",
"title": "Major publications"
},
{
"paragraph_id": 68,
"text": "Genera plantarum: eorumque characteres naturales secundum numerum, figuram, situm, et proportionem omnium fructificationis partium was first published in 1737, delineating plant genera. Around 10 editions were published, not all of them by Linnaeus himself; the most important is the 1754 fifth edition. In it Linnaeus divided the plant Kingdom into 24 classes. One, Cryptogamia, included all the plants with concealed reproductive parts (algae, fungi, mosses and liverworts and ferns).",
"title": "Major publications"
},
{
"paragraph_id": 69,
"text": "Philosophia Botanica (1751) was a summary of Linnaeus's thinking on plant classification and nomenclature, and an elaboration of the work he had previously published in Fundamenta Botanica (1736) and Critica Botanica (1737). Other publications forming part of his plan to reform the foundations of botany include his Classes Plantarum and Bibliotheca Botanica: all were printed in Holland (as were Genera Plantarum (1737) and Systema Naturae (1735)), the Philosophia being simultaneously released in Stockholm.",
"title": "Major publications"
},
{
"paragraph_id": 70,
"text": "At the end of his lifetime the Linnean collection in Uppsala was considered one of the finest collections of natural history objects in Sweden. Next to his own collection he had also built up a museum for the university of Uppsala, which was supplied by material donated by Carl Gyllenborg (in 1744–1745), crown-prince Adolf Fredrik (in 1745), Erik Petreus (in 1746), Claes Grill (in 1746), Magnus Lagerström (in 1748 and 1750) and Jonas Alströmer (in 1749). The relation between the museum and the private collection was not formalised and the steady flow of material from Linnean pupils were incorporated to the private collection rather than to the museum. Linnaeus felt his work was reflecting the harmony of nature and he said in 1754 \"the earth is then nothing else but a museum of the all-wise creator's masterpieces, divided into three chambers\". He had turned his own estate into a microcosm of that 'world museum'.",
"title": "Collections"
},
{
"paragraph_id": 71,
"text": "In April 1766 parts of the town were destroyed by a fire and the Linnean private collection was subsequently moved to a barn outside the town, and shortly afterwards to a single-room stone building close to his country house at Hammarby near Uppsala. This resulted in a physical separation between the two collections; the museum collection remained in the botanical garden of the university. Some material which needed special care (alcohol specimens) or ample storage space was moved from the private collection to the museum.",
"title": "Collections"
},
{
"paragraph_id": 72,
"text": "In Hammarby the Linnean private collections suffered seriously from damp and the depredations by mice and insects. Carl von Linné's son (Carl Linnaeus) inherited the collections in 1778 and retained them until his own death in 1783. Shortly after Carl von Linné's death his son confirmed that mice had caused \"horrible damage\" to the plants and that also moths and mould had caused considerable damage. He tried to rescue them from the neglect they had suffered during his father's later years, and also added further specimens. This last activity however reduced rather than augmented the scientific value of the original material.",
"title": "Collections"
},
{
"paragraph_id": 73,
"text": "In 1784 the young medical student James Edward Smith purchased the entire specimen collection, library, manuscripts, and correspondence of Carl Linnaeus from his widow and daughter and transferred the collections to London. Not all material in Linné's private collection was transported to England. Thirty-three fish specimens preserved in alcohol were not sent and were later lost.",
"title": "Collections"
},
{
"paragraph_id": 74,
"text": "In London Smith tended to neglect the zoological parts of the collection; he added some specimens and also gave some specimens away. Over the following centuries the Linnean collection in London suffered enormously at the hands of scientists who studied the collection, and in the process disturbed the original arrangement and labels, added specimens that did not belong to the original series and withdrew precious original type material.",
"title": "Collections"
},
{
"paragraph_id": 75,
"text": "Much material which had been intensively studied by Linné in his scientific career belonged to the collection of Queen Lovisa Ulrika (1720–1782) (in the Linnean publications referred to as \"Museum Ludovicae Ulricae\" or \"M. L. U.\"). This collection was donated by her grandson King Gustav IV Adolf (1778–1837) to the museum in Uppsala in 1804. Another important collection in this respect was that of her husband King Adolf Fredrik (1710–1771) (in the Linnean sources known as \"Museum Adolphi Friderici\" or \"Mus. Ad. Fr.\"), the wet parts (alcohol collection) of which were later donated to the Royal Swedish Academy of Sciences, and is today housed in the Swedish Museum of Natural History at Stockholm. The dry material was transferred to Uppsala.",
"title": "Collections"
},
{
"paragraph_id": 76,
"text": "The establishment of universally accepted conventions for the naming of organisms was Linnaeus's main contribution to taxonomy—his work marks the starting point of consistent use of binomial nomenclature. During the 18th century expansion of natural history knowledge, Linnaeus also developed what became known as the Linnaean taxonomy; the system of scientific classification now widely used in the biological sciences. A previous zoologist Rumphius (1627–1702) had more or less approximated the Linnaean system and his material contributed to the later development of the binomial scientific classification by Linnaeus.",
"title": "System of taxonomy"
},
{
"paragraph_id": 77,
"text": "The Linnaean system classified nature within a nested hierarchy, starting with three kingdoms. Kingdoms were divided into classes and they, in turn, into orders, and thence into genera (singular: genus), which were divided into species (singular: species). Below the rank of species he sometimes recognised taxa of a lower (unnamed) rank; these have since acquired standardised names such as variety in botany and subspecies in zoology. Modern taxonomy includes a rank of family between order and genus and a rank of phylum between kingdom and class that were not present in Linnaeus's original system.",
"title": "System of taxonomy"
},
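The nested hierarchy described in the paragraph above can be made concrete with a short sketch. The code below is an illustrative assumption (the function name and example values are not from the source): it lists Linnaeus's original ranks, the modern rank order with phylum and family added, and orders an example classification from kingdom down to species.

```python
# Illustrative sketch of the nested Linnaean hierarchy and its modern extension.
LINNAEAN_RANKS = ["kingdom", "class", "order", "genus", "species"]
MODERN_RANKS = ["kingdom", "phylum", "class", "order", "family", "genus", "species"]

def ordered_path(ranks: dict, rank_order: list) -> list:
    """Order the supplied rank/value pairs according to the given rank order."""
    return [(rank, ranks[rank]) for rank in rank_order if rank in ranks]

# Example path for Homo sapiens, the binomial Linnaeus gave to humans.
human = {"kingdom": "Animalia", "phylum": "Chordata", "class": "Mammalia",
         "order": "Primates", "family": "Hominidae", "genus": "Homo",
         "species": "sapiens"}
print(ordered_path(human, LINNAEAN_RANKS))  # the original five ranks
print(ordered_path(human, MODERN_RANKS))    # with phylum and family inserted
```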
{
"paragraph_id": 78,
"text": "Linnaeus's groupings were based upon shared physical characteristics, and not based upon differences. Of his higher groupings, only those for animals are still in use, and the groupings themselves have been significantly changed since their conception, as have the principles behind them. Nevertheless, Linnaeus is credited with establishing the idea of a hierarchical structure of classification which is based upon observable characteristics and intended to reflect natural relationships. While the underlying details concerning what are considered to be scientifically valid \"observable characteristics\" have changed with expanding knowledge (for example, DNA sequencing, unavailable in Linnaeus's time, has proven to be a tool of considerable utility for classifying living organisms and establishing their evolutionary relationships), the fundamental principle remains sound.",
"title": "System of taxonomy"
},
{
"paragraph_id": 79,
"text": "Linnaeus's system of taxonomy was especially noted as the first to include humans (Homo) taxonomically grouped with apes (Simia), under the header of Anthropomorpha. German biologist Ernst Haeckel speaking in 1907 noted this as the \"most important sign of Linnaeus's genius\".",
"title": "System of taxonomy"
},
{
"paragraph_id": 80,
"text": "Linnaeus classified humans among the primates beginning with the first edition of Systema Naturae. During his time at Hartekamp, he had the opportunity to examine several monkeys and noted similarities between them and man. He pointed out both species basically have the same anatomy; except for speech, he found no other differences. Thus he placed man and monkeys under the same category, Anthropomorpha, meaning \"manlike.\" This classification received criticism from other biologists such as Johan Gottschalk Wallerius, Jacob Theodor Klein and Johann Georg Gmelin on the ground that it is illogical to describe man as human-like. In a letter to Gmelin from 1747, Linnaeus replied:",
"title": "System of taxonomy"
},
{
"paragraph_id": 81,
"text": "It does not please [you] that I've placed Man among the Anthropomorpha, perhaps because of the term 'with human form', but man learns to know himself. Let's not quibble over words. It will be the same to me whatever name we apply. But I seek from you and from the whole world a generic difference between man and simian that [follows] from the principles of Natural History. I absolutely know of none. If only someone might tell me a single one! If I would have called man a simian or vice versa, I would have brought together all the theologians against me. Perhaps I ought to have by virtue of the law of the discipline.",
"title": "System of taxonomy"
},
{
"paragraph_id": 82,
"text": "The theological concerns were twofold: first, putting man at the same level as monkeys or apes would lower the spiritually higher position that man was assumed to have in the great chain of being, and second, because the Bible says man was created in the image of God (theomorphism), if monkeys/apes and humans were not distinctly and separately designed, that would mean monkeys and apes were created in the image of God as well. This was something many could not accept. The conflict between world views that was caused by asserting man was a type of animal would simmer for a century until the much greater, and still ongoing, creation–evolution controversy began in earnest with the publication of On the Origin of Species by Charles Darwin in 1859.",
"title": "System of taxonomy"
},
{
"paragraph_id": 83,
"text": "After such criticism, Linnaeus felt he needed to explain himself more clearly. The 10th edition of Systema Naturae introduced new terms, including Mammalia and Primates, the latter of which would replace Anthropomorpha as well as giving humans the full binomial Homo sapiens. The new classification received less criticism, but many natural historians still believed he had demoted humans from their former place of ruling over nature and not being a part of it. Linnaeus believed that man biologically belongs to the animal kingdom and had to be included in it. In his book Dieta Naturalis, he said, \"One should not vent one's wrath on animals, Theology decree that man has a soul and that the animals are mere 'automata mechanica,' but I believe they would be better advised that animals have a soul and that the difference is of nobility.\"",
"title": "System of taxonomy"
},
{
"paragraph_id": 85,
"text": "Linnaeus added a second species to the genus Homo in Systema Naturae based on a figure and description by Jacobus Bontius from a 1658 publication: Homo troglodytes (\"caveman\") and published a third in 1771: Homo lar. Swedish historian Gunnar Broberg states that the new human species Linnaeus described were actually simians or native people clad in skins to frighten colonial settlers, whose appearance had been exaggerated in accounts to Linnaeus. For Homo troglodytes Linnaeus asked the Swedish East India Company to search for one, but they did not find any signs of its existence. Homo lar has since been reclassified as Hylobates lar, the lar gibbon.",
"title": "System of taxonomy"
},
{
"paragraph_id": 87,
"text": "In the first edition of Systema Naturae, Linnaeus subdivided the human species into four varieties: \"Europæus albesc[ens]\" (whitish European), \"Americanus rubesc[ens]\" (reddish American), \"Asiaticus fuscus\" (tawny Asian) and \"Africanus nigr[iculus]\" (blackish African). In the tenth edition of Systema Naturae he further detailed phenotypical characteristics for each variety, based on the concept of the four temperaments from classical antiquity, and changed the description of Asians' skin tone to \"luridus\" (yellow). While Linnaeus believed that these varieties resulted from environmental differences between the four known continents, the Linnean Society acknowledges that his categorization's focus on skin color and later inclusion of cultural and behavioral traits cemented colonial stereotypes and provided the foundations for scientific racism. Additionally, Linnaeus created a wastebasket taxon \"monstrosus\" for \"wild and monstrous humans, unknown groups, and more or less abnormal people\".",
"title": "System of taxonomy"
},
{
"paragraph_id": 88,
"text": "In 1959, W. T. Stearn designated Linnaeus to be the lectotype of H. sapiens.",
"title": "System of taxonomy"
},
{
"paragraph_id": 89,
"text": "Linnaeus's applied science was inspired not only by the instrumental utilitarianism general to the early Enlightenment, but also by his adherence to the older economic doctrine of Cameralism. Additionally, Linnaeus was a state interventionist. He supported tariffs, levies, export bounties, quotas, embargoes, navigation acts, subsidised investment capital, ceilings on wages, cash grants, state-licensed producer monopolies, and cartels.",
"title": "Influences and economic beliefs"
},
{
"paragraph_id": 90,
"text": "Anniversaries of Linnaeus's birth, especially in centennial years, have been marked by major celebrations. Linnaeus has appeared on numerous Swedish postage stamps and banknotes. There are numerous statues of Linnaeus in countries around the world. The Linnean Society of London has awarded the Linnean Medal for excellence in botany or zoology since 1888. Following approval by the Riksdag of Sweden, Växjö University and Kalmar College merged on 1 January 2010 to become Linnaeus University. Other things named after Linnaeus include the twinflower genus Linnaea, Linnaeosicyos (a monotypic genus in the family Cucurbitaceae), the crater Linné on the Earth's moon, a street in Cambridge, Massachusetts, and the cobalt sulfide mineral Linnaeite.",
"title": "Commemoration"
},
{
"paragraph_id": 91,
"text": "Andrew Dickson White wrote in A History of the Warfare of Science with Theology in Christendom (1896):",
"title": "Commentary"
},
{
"paragraph_id": 92,
"text": "Linnaeus ... was the most eminent naturalist of his time, a wide observer, a close thinker; but the atmosphere in which he lived and moved and had his being was saturated with biblical theology, and this permeated all his thinking. ... Toward the end of his life he timidly advanced the hypothesis that all the species of one genus constituted at the creation one species; and from the last edition of his Systema Naturæ he quietly left out the strongly orthodox statement of the fixity of each species, which he had insisted upon in his earlier works. ... warnings came speedily both from the Catholic and Protestant sides.",
"title": "Commentary"
},
{
"paragraph_id": 93,
"text": "The mathematical PageRank algorithm, applied to 24 multilingual Wikipedia editions in 2014, published in PLOS ONE in 2015, placed Carl Linnaeus at the top historical figure, above Jesus, Aristotle, Napoleon, and Adolf Hitler (in that order).",
"title": "Commentary"
},
{
"paragraph_id": 94,
"text": "In the 21st century, Linnæus's taxonomy of human \"races\" has been problematised and discussed. Some critics claim that Linnæus was one of the forebears of the modern pseudoscientific notion of scientific racism, while others hold the view that while his classification was stereotyped, it did not imply that certain human \"races\" were superior to others.",
"title": "Commentary"
},
{
"paragraph_id": 95,
"text": "Biographies",
"title": "External links"
},
{
"paragraph_id": 96,
"text": "Resources",
"title": "External links"
},
{
"paragraph_id": 97,
"text": "Other",
"title": "External links"
}
] | Carl Linnaeus, also known after ennoblement in 1761 as Carl von Linné, was a Swedish biologist and physician who formalised binomial nomenclature, the modern system of naming organisms. He is known as the "father of modern taxonomy". Many of his writings were in Latin; his name is rendered in Latin as Carolus Linnæus and, after his 1761 ennoblement, as Carolus a Linné. Linnaeus was the son of a curate and he was born in Råshult, the countryside of Småland, in southern Sweden. He received most of his higher education at Uppsala University and began giving lectures in botany there in 1730. He lived abroad between 1735 and 1738, where he studied and also published the first edition of his Systema Naturae in the Netherlands. He then returned to Sweden where he became professor of medicine and botany at Uppsala. In the 1740s, he was sent on several journeys through Sweden to find and classify plants and animals. In the 1750s and 1760s, he continued to collect and classify animals, plants, and minerals, while publishing several volumes. By the time of his death in 1778, he was one of the most acclaimed scientists in Europe. Philosopher Jean-Jacques Rousseau sent him the message: "Tell him I know no greater man on Earth." Johann Wolfgang von Goethe wrote: "With the exception of Shakespeare and Spinoza, I know no one among the no longer living who has influenced me more strongly." Swedish author August Strindberg wrote: "Linnaeus was in reality a poet who happened to become a naturalist." Linnaeus has been called Princeps botanicorum and "The Pliny of the North". He is also considered one of the founders of modern ecology. In botany and zoology, the abbreviation L. is used to indicate Linnaeus as the authority for a species' name. In older publications, the abbreviation "Linn." is found. Linnaeus's remains constitute the type specimen for the species Homo sapiens following the International Code of Zoological Nomenclature, since the sole specimen that he is known to have examined was himself. | 2001-11-14T00:43:21Z | 2023-12-26T05:03:43Z | [
"Template:Convert",
"Template:Anchor",
"Template:Dubious",
"Template:Clear",
"Template:Botanist",
"Template:Webarchive",
"Template:Good article",
"Template:See also",
"Template:Refn",
"Template:Div col end",
"Template:Quote needed",
"Template:Cite magazine",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Redirect-multi",
"Template:Infobox scientist",
"Template:Lang",
"Template:Refbegin",
"Template:Div col",
"Template:Commons category",
"Template:Wikisource author",
"Template:Short description",
"Template:Wikivoyage",
"Template:Cite journal",
"Template:Cite news",
"Template:Carl Linnaeus",
"Template:Linnaeus1758",
"Template:Sfn",
"Template:Pp-vandalism",
"Template:Cite book",
"Template:Wikispecies",
"Template:Historical definitions of race",
"Template:Distinguish",
"Template:Multiple image",
"Template:Blockquote",
"Template:Refend",
"Template:Harvnb",
"Template:Wikiquote",
"Template:Zoology",
"Template:Efn",
"Template:Main",
"Template:Interlanguage link",
"Template:Cite web",
"Template:Internet Archive author",
"Template:Use British English",
"Template:Isbn",
"Template:Bibleverse",
"Template:Gutenberg author",
"Template:Natural history",
"Template:Reflist"
] | https://en.wikipedia.org/wiki/Carl_Linnaeus |
5,236 | Coast | The coast, also known as the coastline, shoreline or seashore, is defined as the area where land meets the ocean, or as a line that forms the boundary between the land and the ocean. Shores are influenced by the topography of the surrounding landscape, as well as by water-induced erosion, such as waves. The geological composition of rock and soil dictates the type of shore that is created. The Earth has around 620,000 kilometres (390,000 mi) of coastline. Coasts are important zones in natural ecosystems, often home to a wide range of biodiversity. On land, they harbor important ecosystems such as freshwater or estuarine wetlands, which are important for bird populations and other terrestrial animals. In wave-protected areas they harbor saltmarshes, mangroves or seagrasses, all of which can provide nursery habitat for finfish, shellfish, and other aquatic species. Rocky shores are usually found along exposed coasts and provide habitat for a wide range of sessile animals (e.g. mussels, starfish, barnacles) and various kinds of seaweeds. In physical oceanography, a shore is the wider fringe that is geologically modified by the action of the body of water past and present, while the beach is at the edge of the shore, representing the intertidal zone where there is one. Along tropical coasts with clear, nutrient-poor water, coral reefs can often be found between depths of 1–50 meters (3.3–164.0 feet).
According to an atlas prepared by the United Nations, 44% of all humans live within 150 km (93 mi) of the sea. Because of its importance in society and its high population concentrations, the coast plays a major role in the global food and economic system, and it provides many ecosystem services to humankind. For example, important human activities happen in port cities. Coastal fisheries (commercial, recreational, and subsistence) and aquaculture are major economic activities and create jobs, livelihoods, and protein for the majority of coastal human populations. Other coastal spaces like beaches and seaside resorts generate large revenues through tourism. Marine coastal ecosystems can also provide protection against sea level rise and tsunamis. In many countries, mangroves are the primary source of wood for fuel (e.g. charcoal) and building material. Coastal ecosystems like mangroves and seagrasses have a much higher capacity for carbon sequestration than many terrestrial ecosystems, and as such can play a critical role in the near future to help mitigate climate change effects by uptake of atmospheric anthropogenic carbon dioxide.
However, the economic importance of coasts makes many of these communities vulnerable to climate change, which causes increases in extreme weather and sea level rise, and related issues such as coastal erosion, saltwater intrusion and coastal flooding. Other coastal issues, such as marine pollution, marine debris, coastal development, and marine ecosystem destruction, further complicate the human uses of the coast and threaten coastal ecosystems. The interactive effects of climate change, habitat destruction, overfishing and water pollution (especially eutrophication) have led to the demise of coastal ecosystems around the globe. This has resulted in the population collapse of fish stocks, loss of biodiversity, increased invasion of alien species, and loss of healthy habitats. International attention to these issues has been captured in Sustainable Development Goal 14 "Life Below Water" which sets goals for international policy focused on preserving marine coastal ecosystems and supporting more sustainable economic practices for coastal communities. Likewise, the United Nations has declared 2021-2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems has received insufficient attention.
Because coasts are constantly changing, a coastline's exact perimeter cannot be determined; this measurement challenge is called the coastline paradox. The term coastal zone is used to refer to a region where interactions of sea and land processes occur. Both the terms coast and coastal are often used to describe a geographic location or region located on a coastline (e.g., New Zealand's West Coast, or the East, West, and Gulf Coasts of the United States). Coasts with a narrow continental shelf that are close to the open ocean are called pelagic coasts, while other coasts are more sheltered coasts in a gulf or bay. A shore, on the other hand, may refer to parts of land adjoining any large body of water, including oceans (sea shore) and lakes (lake shore).
The Earth has approximately 620,000 kilometres (390,000 mi) of coastline. Coastal habitats, which extend to the margins of the continental shelves, make up about 7 percent of the Earth's oceans, but at least 85% of commercially harvested fish depend on coastal environments during at least part of their life cycle. As of October 2010, about 2.86% of exclusive economic zones were part of marine protected areas.
The definition of coasts varies. Marine scientists think of the "wet" (aquatic or intertidal) vegetated habitats as being coastal ecosystems (including seagrass, salt marsh etc.), whilst some terrestrial scientists might think of coastal ecosystems as comprising only the terrestrial plants that live close to the seashore (see also estuaries and coastal ecosystems).
While there is general agreement in the scientific community regarding the definition of coast, in the political sphere, the delineation of the extent of a coast differs according to jurisdiction. Government authorities in various countries may define coast differently for economic and social policy reasons.
The coastline paradox is the counterintuitive observation that the coastline of a landmass does not have a well-defined length. This results from the fractal curve–like properties of coastlines; i.e., the fact that a coastline typically has a fractal dimension. Although the "paradox of length" was previously noted by Hugo Steinhaus, the first systematic study of this phenomenon was by Lewis Fry Richardson, and it was expanded upon by Benoit Mandelbrot.
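A minimal sketch of the effect described above uses Richardson's empirical relation; the prefactor and fractal dimension below are assumed, illustrative values, not measurements from the source. The length measured with a ruler of size s behaves like F * s^(1 - D), so for D > 1 the measured coastline keeps growing as the ruler shrinks.

```python
# Illustrative sketch of Richardson's relation behind the coastline paradox.
# measured_length(s) = F * s**(1 - D); with D > 1 the total grows as s shrinks.
def measured_length(ruler_km: float, prefactor: float = 3000.0,
                    fractal_dimension: float = 1.25) -> float:
    """Coastline length (km) obtained by stepping a ruler of the given size along it."""
    return prefactor * ruler_km ** (1.0 - fractal_dimension)

for ruler in (200.0, 100.0, 50.0, 10.0, 1.0):
    print(f"ruler {ruler:6.1f} km -> measured length {measured_length(ruler):7.0f} km")
```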
Tides often determine the range over which sediment is deposited or eroded. Areas with high tidal ranges allow waves to reach farther up the shore, and areas with lower tidal ranges produce deposition at a smaller elevation interval. The tidal range is influenced by the size and shape of the coastline. Tides do not typically cause erosion by themselves; however, tidal bores can erode as the waves surge up the river estuaries from the ocean.
Geologists classify coasts on the basis of tidal range into macrotidal coasts with a tidal range greater than 4 m (13 ft); mesotidal coasts with a tidal range of 2 to 4 m (6.6 to 13 ft); and microtidal coasts with a tidal range of less than 2 m (6.6 ft). The distinction between macrotidal and mesotidal coasts is the more important one. Macrotidal coasts lack barrier islands and lagoons, and are characterized by funnel-shaped estuaries containing sand ridges aligned with tidal currents. Wave action is much more important for determining bedforms of sediments deposited along mesotidal and microtidal coasts than along macrotidal coasts.
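As a small illustration of these thresholds (the function name and the example tidal ranges are hypothetical, not from the source), a coast can be classified directly from its tidal range:

```python
def classify_by_tidal_range(tidal_range_m: float) -> str:
    """Classify a coast by tidal range in metres, using the thresholds described above."""
    if tidal_range_m > 4.0:
        return "macrotidal"
    if tidal_range_m >= 2.0:
        return "mesotidal"
    return "microtidal"

# Hypothetical example values, one per category.
for rng in (5.5, 3.0, 1.2):
    print(f"{rng} m tidal range -> {classify_by_tidal_range(rng)} coast")
```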
Waves erode the coastline as they break on the shore and release their energy; the larger the wave, the more energy it releases and the more sediment it moves. Coastlines with longer shores have more room for the waves to disperse their energy, while coasts with cliffs and short shore faces give little room for the wave energy to be dispersed. In these areas, the wave energy breaking against the cliffs is higher, and air and water are compressed into cracks in the rock, forcing the rock apart and breaking it down. Sediment deposited by waves comes from eroded cliff faces and is moved along the coastline by the waves. This forms an abrasion or cliffed coast.
Sediment deposited by rivers is the dominant influence on the amount of sediment along coastlines that have estuaries. Today, riverine deposition at the coast is often blocked by dams and other human regulatory devices, which remove the sediment from the stream by causing it to be deposited inland. Coral reefs are also a provider of sediment for the coastlines of tropical islands.
Like the ocean which shapes them, coasts are a dynamic environment subject to constant change. The Earth's natural processes, particularly sea level rise, waves and various weather phenomena, have resulted in the erosion, accretion and reshaping of coasts as well as flooding and the creation of continental shelves and drowned river valleys (rias).
More and more of the world's people live in coastal regions. According to a United Nations atlas, 44% of all people live within 150 km (93 mi) of the sea. Many major cities are on or near good harbors and have port facilities. Some landlocked places have achieved port status by building canals.
Nations defend their coasts against military invaders, smugglers and illegal migrants. Fixed coastal defenses have long been erected in many nations, and coastal countries typically have a navy and some form of coast guard.
Coasts, especially those with beaches and warm water, attract tourists, often leading to the development of seaside resort communities. In many island nations, such as those of the Mediterranean, South Pacific Ocean and Caribbean, tourism is central to the economy. Coasts offer recreational activities such as swimming, fishing, surfing, boating, and sunbathing.
Growth management and coastal management can be a challenge for coastal local authorities, who often struggle to provide the infrastructure required by new residents, and poor construction practices often leave these communities and their infrastructure vulnerable to processes like coastal erosion and sea level rise. In many of these communities, management practices such as beach nourishment are used, or, when the coastal infrastructure is no longer financially sustainable, managed retreat is undertaken to move communities away from the coast.
Estuarine and marine coastal ecosystems are both marine ecosystems. Together, these ecosystems perform the four categories of ecosystem services in a variety of ways: "Regulating services" include climate regulation as well as waste treatment and disease regulation and buffer zones. The "provisioning services" include forest products, marine products, fresh water, raw materials, biochemical and genetic resources. "Cultural services" of coastal ecosystems include inspirational aspects, recreation and tourism, science and education. "Supporting services" of coastal ecosystems include nutrient cycling, biologically mediated habitats and primary production.
According to one principle of classification, an emergent coastline is a coastline that has experienced a fall in sea level, because of either a global sea-level change or local uplift. Emergent coastlines are identifiable by coastal landforms that lie above the high tide mark, such as raised beaches. In contrast, a submergent coastline is one where the sea level has risen, due to a global sea-level change, local subsidence, or isostatic rebound. Submergent coastlines are identifiable by their submerged, or "drowned", landforms, such as rias (drowned valleys) and fjords.
According to the second principle of classification, a concordant coastline is a coastline where bands of different rock types run parallel to the shore. These rock types are usually of varying resistance, so the coastline forms distinctive landforms, such as coves. Discordant coastlines, where bands of rock run perpendicular to the shore, feature distinctive landforms because the rocks are eroded by the ocean waves at different rates. The less resistant rocks erode faster, creating inlets or bays; the more resistant rocks erode more slowly, remaining as headlands or outcroppings.
Riviera is an Italian word for "shoreline", ultimately derived from Latin ripa ("riverbank"). It came to be applied as a proper name to the coast of the Ligurian Sea, in the form riviera ligure, then shortened to riviera. Historically, the Ligurian Riviera extended from Capo Corvo (Punta Bianca) south of Genoa, north and west into what is now French territory past Monaco and sometimes as far as Marseilles. Today, this coast is divided into the Italian Riviera and the French Riviera, although the French use the term "Riviera" to refer to the Italian Riviera and call the French portion the "Côte d'Azur".
As a result of the fame of the Ligurian rivieras, the term came into English to refer to any shoreline, especially one that is sunny, topographically diverse and popular with tourists. Such places using the term include the Australian Riviera in Queensland and the Turkish Riviera along the Aegean Sea.
The following articles describe some coastal landforms:
"Coastal waters" (or "coastal seas") is a rather general term used differently in different contexts, ranging geographically from the waters within a few kilometers of the coast, through to the entire continental shelf which may stretch for more than a hundred kilometers from land. Thus the term coastal waters is used in a slightly different way in discussions of legal and economic boundaries (see territorial waters and international waters) or when considering the geography of coastal landforms or the ecological systems operating through the continental shelf (marine coastal ecosystems). The research on coastal waters often divides into these separate areas too.
The dynamic fluid nature of the ocean means that all components of the whole ocean system are ultimately connected, although certain regional classifications are useful and relevant. The waters of the continental shelves represent such a region. The term "coastal waters" has been used in a wide variety of different ways in different contexts. In European Union environmental management it extends from the coast to just a few nautical miles while in the United States the US EPA considers this region to extend much further offshore.
"Coastal waters" has specific meanings in the context of commercial coastal shipping, and somewhat different meanings in the context of naval littoral warfare. Oceanographers and marine biologists have yet other takes. Coastal waters have a wide range of marine habitats from enclosed estuaries to the open waters of the continental shelf.
Similarly, the term littoral zone has no single definition. It is the part of a sea, lake, or river that is close to the shore. In coastal environments, the littoral zone extends from the high water mark, which is rarely inundated, to shoreline areas that are permanently submerged.
Coastal waters can be threatened by coastal eutrophication and harmful algal blooms.
The identification of bodies of rock formed from sediments deposited in shoreline and nearshore environments (shoreline and nearshore facies) is extremely important to geologists. These provide vital clues for reconstructing the geography of ancient continents (paleogeography). The locations of these beds show the extent of ancient seas at particular points in geological time, and provide clues to the magnitudes of tides in the distant past.
Sediments deposited in the shoreface are preserved as lenses of sandstone in which the upper part of the sandstone is coarser than the lower part (a coarsening-upwards sequence). Geologists refer to these as parasequences. Each records an episode of retreat of the ocean from the shoreline over a period of 10,000 to 1,000,000 years. These often show laminations reflecting various kinds of tidal cycles.
Some of the best-studied shoreline deposits in the world are found along the former western shore of the Western Interior Seaway, a shallow sea that flooded central North America during the late Cretaceous Period (about 100 to 66 million years ago). These are beautifully exposed along the Book Cliffs of Utah and Colorado.
The following articles describe the various geologic processes that affect a coastal zone:
Larger animals that live in coastal areas include puffins, sea turtles and rockhopper penguins, among many others. Sea snails and various kinds of barnacles live on rocky coasts and scavenge on food deposited by the sea. Some coastal animals are accustomed to humans in developed areas, such as dolphins and seagulls that eat food thrown to them by tourists. Since coastal areas are all part of the littoral zone, there is a profusion of marine life found just off-coast, including sessile animals such as corals, sponges and sea anemones, as well as mussels, starfish, seaweeds and fishes.
There are many kinds of seabirds on various coasts. These include pelicans and cormorants, who join up with terns and oystercatchers to forage for fish and shellfish. There are sea lions on the coast of Wales and other countries.
Coastal fish, also called inshore fish or neritic fish, inhabit the sea between the shoreline and the edge of the continental shelf. Since the continental shelf is usually less than 200 metres (660 ft) deep, it follows that pelagic coastal fish are generally epipelagic fish, inhabiting the sunlit epipelagic zone. Coastal fish can be contrasted with oceanic fish or offshore fish, which inhabit the deep seas beyond the continental shelves.
Many coastal areas are famous for their kelp beds. Kelp is a fast-growing seaweed that can grow up to half a meter a day in ideal conditions. Mangroves are an important coastal vegetation type in tropical environments, while salt marshes fill a similar role in temperate environments; seagrasses and macroalgal beds occur in both. Restinga is another type of coastal vegetation.
Coasts also face many human-induced environmental impacts and coastal development hazards. The most important ones are:
The pollution of coastlines is connected to marine pollution, which can occur from a number of sources: marine debris (garbage and industrial debris); the transportation of petroleum in tankers, increasing the probability of large oil spills; and small oil spills created by large and small vessels, which flush bilge water into the ocean.
Marine pollution occurs when substances used or spread by humans, such as industrial, agricultural and residential waste, particles, noise, excess carbon dioxide or invasive organisms enter the ocean and cause harmful effects there. The majority of this waste (80%) comes from land-based activity, although marine transportation significantly contributes as well. It is a combination of chemicals and trash, most of which comes from land sources and is washed or blown into the ocean. This pollution results in damage to the environment, to the health of all organisms, and to economic structures worldwide. Since most inputs come from land, either via the rivers, sewage or the atmosphere, it means that continental shelves are more vulnerable to pollution. Air pollution is also a contributing factor by carrying off iron, carbonic acid, nitrogen, silicon, sulfur, pesticides or dust particles into the ocean. The pollution often comes from nonpoint sources such as agricultural runoff, wind-blown debris, and dust. These nonpoint sources are largely due to runoff that enters the ocean through rivers, but wind-blown debris and dust can also play a role, as these pollutants can settle into waterways and oceans. Pathways of pollution include direct discharge, land runoff, ship pollution, bilge pollution, atmospheric pollution and, potentially, deep sea mining.
Marine debris, also known as marine litter, is human-created waste that has deliberately or accidentally been released in a sea or ocean. Floating oceanic debris tends to accumulate at the center of gyres and on coastlines, frequently washing aground, when it is known as beach litter or tidewrack. Deliberate disposal of wastes at sea is called ocean dumping. Naturally occurring debris, such as driftwood and drift seeds, is also present. With the increasing use of plastic, human influence has become an issue as many types of (petrochemical) plastics do not biodegrade quickly, as natural or organic materials would. The largest single type of plastic pollution (about 10%), and the majority of large plastic in the oceans, is discarded and lost nets from the fishing industry. Waterborne plastic poses a serious threat to fish, seabirds, marine reptiles, and marine mammals, as well as to boats and coasts.
A growing concern regarding plastic pollution in the marine ecosystem is the use of microplastics. Microplastics are beads of plastic less than 5 millimeters wide, and they are commonly found in hand soaps, face cleansers, and other exfoliators. When these products are used, the microplastics pass through the water filtration system and into the ocean; because of their small size, they are likely to escape capture by the preliminary treatment screens at wastewater plants. These beads are harmful to organisms in the ocean, especially filter feeders, because they can easily ingest the plastic and become sick. Microplastics are of particular concern because their size makes them difficult to clean up, so consumers can help by avoiding products that contain them in favor of products that use environmentally safe exfoliants.
Between 1901 and 2018, the average global sea level rose by 15–25 cm (6–10 in), or an average of 1–2 mm per year. This rate accelerated to 4.62 mm/yr for the decade 2013–2022. Climate change due to human activities is the main cause. Between 1993 and 2018, thermal expansion of water accounted for 42% of sea level rise. Melting temperate glaciers accounted for 21%, with Greenland accounting for 15% and Antarctica 8%. Sea level rise lags changes in the Earth's temperature. So sea level rise will continue to accelerate between now and 2050 in response to warming that is already happening. What happens after that will depend on what happens with human greenhouse gas emissions. Sea level rise may slow down between 2050 and 2100 if there are deep cuts in emissions. It could then reach a little over 30 cm (1 ft) from now by 2100. With high emissions it may accelerate. It could rise by 1 m (3+1⁄2 ft) or even 2 m (6+1⁄2 ft) by then. In the long run, sea level rise would amount to 2–3 m (7–10 ft) over the next 2000 years if warming amounts to 1.5 °C (2.7 °F). It would be 19–22 metres (62–72 ft) if warming peaks at 5 °C (9.0 °F).
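As a quick arithmetic check of the historical figures above (an illustrative sketch only; it simply converts the stated totals into average rates), the 1901–2018 rise of 15–25 cm works out to roughly 1.3–2.1 mm per year, consistent with the stated 1–2 mm per year:

```python
# Convert the 1901-2018 rise of 15-25 cm into an average rate in mm/yr.
years = 2018 - 1901          # 117 years
low_mm, high_mm = 150, 250   # 15 cm and 25 cm expressed in mm
print(f"average historical rate: {low_mm / years:.1f}-{high_mm / years:.1f} mm/yr")

# At the 2013-2022 rate of 4.62 mm/yr, the rise accumulated over one decade:
print(f"decadal rise at 4.62 mm/yr: {4.62 * 10:.1f} mm (~4.6 cm)")
```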
International attention to addressing the threats to coasts has been captured in Sustainable Development Goal 14, "Life Below Water", which sets goals for international policy focused on preserving marine coastal ecosystems and supporting more sustainable economic practices for coastal communities. Likewise, the United Nations has declared 2021–2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems has received insufficient attention.
https://en.wikipedia.org/wiki/Coast
Catatonia

Catatonia is a complex neuropsychiatric behavioral syndrome that is characterized by abnormal movements, immobility, abnormal behaviors, and withdrawal. The onset of catatonia can be acute or subtle and symptoms can wax, wane, or change during episodes. It has historically been related to schizophrenia (catatonic schizophrenia), but catatonia is most often seen in mood disorders. It is now known that catatonic symptoms are nonspecific and may be observed in other mental, neurological, and medical conditions. Catatonia is not a stand-alone diagnosis (although some experts disagree), and the term is used to describe a feature of the underlying disorder.
There are several subtypes of catatonia: akinetic catatonia, excited catatonia, malignant catatonia, and delirious mania.
Recognizing and treating catatonia is very important, as failure to do so can lead to poor outcomes and can be potentially fatal. Treatment with benzodiazepines or electroconvulsive therapy (ECT) can lead to remission of catatonia. There is growing evidence of the effectiveness of the NMDA receptor antagonists amantadine and memantine for benzodiazepine-resistant catatonia. Antipsychotics are sometimes employed, but they can worsen symptoms and have serious adverse effects.
The presentation of a patient with catatonia varies greatly depending on the subtype and underlying cause, and can be acute or subtle.
Because most patients with catatonia have an underlying psychiatric illness, the majority will present with worsening depression, mania, or psychosis followed by catatonia symptoms. Catatonia presents as a motor disturbance in which patients will display marked reduction in movement, marked agitation, or a mixture of both despite having the physical capacity to move normally. These patients may be unable to start an action or stop one. Movements and mannerisms may be repetitive, or purposeless.
The most common signs of catatonia are immobility, mutism, withdrawal and refusal to eat, staring, negativism, posturing, rigidity, waxy flexibility/catalepsy, stereotypy (purposeless, repetitive movements), echolalia or echopraxia (repeating another person's words or actions), and verbigeration (repetition of meaningless phrases). It should not be assumed that patients presenting with catatonia are unaware of their surroundings, as some patients can recall in detail their catatonic state and their actions.
There are several subtypes of catatonia and they are characterized by the specific movement disturbance and associated features. Although catatonia can be divided into various subtypes, the natural history of catatonia is often fluctuant and different states can exist within the same individual.
Withdrawn Catatonia: This form of catatonia is characterized by decreased response to external stimuli, immobility or inhibited movement, mutism, staring, posturing, and negativism. Patients may sit or stand in the same position for hours, may hold odd positions, and may resist movement of their extremities.
Excited Catatonia: Excited catatonia is characterized by odd mannerisms/gestures, performing purposeless or inappropriate actions, excessive motor activity, restlessness, stereotypy, impulsivity, agitation, and combativeness. Speech and actions may be repetitive or mimic another person's. People in this state are extremely hyperactive and may have delusions and hallucinations. Catatonic excitement is commonly cited as one of the most dangerous mental states in psychiatry.
Malignant Catatonia: Malignant catatonia is a life-threatening condition that may progress rapidly within a few days. It is characterized by fever, abnormalities in blood pressure, heart rate, respiratory rate, diaphoresis (sweating), and delirium. Certain lab findings are common with this presentation; however, they are nonspecific, which means that they are also present in other conditions and do not diagnose catatonia. These lab findings include: leukocytosis, elevated creatine kinase, low serum iron. The signs and symptoms of malignant catatonia overlap significantly with neuroleptic malignant syndrome (NMS) and so a careful history, review of medications, and physical exam are critical to properly differentiate these conditions. For example, if the patient has waxy flexibility and holds a position against gravity when passively moved into that position, then it is likely catatonia. If the patient has a "lead-pipe rigidity" then NMS should be the prime suspect.
Other forms:
Patients may experience several complications from being in a catatonic state. The nature of these complications will depend on the type of catatonia being experienced by the patient. For example, patients presenting with withdrawn catatonia may refuse to eat, which will in turn lead to malnutrition and dehydration. Furthermore, if immobility is a presenting symptom, the patient may develop pressure ulcers and muscle contractures, and is at risk of developing deep vein thrombosis (DVT) and pulmonary embolism (PE). Patients with excited catatonia may be aggressive and violent, and physical trauma may result from this. Catatonia may progress to the malignant type, which presents with autonomic instability and may be life-threatening. Other complications also include the development of pneumonia and neuroleptic malignant syndrome.
Catatonia is almost always secondary to another underlying illness, often a psychiatric disorder. Mood disorders such as a bipolar disorder and depression are the most common etiologies to progress to catatonia. Other psychiatric associations include schizophrenia and other primary psychotic disorders. It also is related to autism spectrum disorders and ADHD. Psychodynamic theorists have interpreted catatonia as a defense against the potentially destructive consequences of responsibility, and the passivity of the disorder provides relief.
Catatonia is also seen in many medical disorders, including infections (such as encephalitis), autoimmune disorders, meningitis, focal neurological lesions (including strokes), alcohol withdrawal, abrupt or overly rapid benzodiazepine withdrawal, cerebrovascular disease, neoplasms, head injury, and some metabolic conditions (homocystinuria, diabetic ketoacidosis, hepatic encephalopathy, and hypercalcaemia).
The pathophysiology that leads to catatonia is still poorly understood and a definite mechanism remains unknown. Neurologic studies have implicated several pathways; however, it remains unclear whether these findings are the cause or the consequence of the disorder.
Abnormalities in GABA, glutamate signaling, serotonin, and dopamine transmission are believed to be implicated in catatonia.
Furthermore, it has also been hypothesized that the pathways that connect the basal ganglia with the cortex and thalamus are involved in the development of catatonia.
There is not yet a definitive consensus regarding diagnostic criteria of catatonia. In the fifth edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-5, 2013) and the World Health Organization's eleventh edition of the International Classification of Diseases (ICD-11, 2022), the classification is more homogeneous than in earlier editions. Prominent researchers in the field have other suggestions for diagnostic criteria.
DSM-5 classification
The DSM-5 does not classify catatonia as an independent disorder, but rather it classifies it as catatonia associated with another mental disorder, due to another medical condition, or as unspecified catatonia.
Catatonia is diagnosed by the presence of three or more of the following 12 psychomotor symptoms in association with a mental disorder, medical condition, or unspecified: stupor; catalepsy; waxy flexibility; mutism; negativism; posturing; mannerism; stereotypy; agitation not influenced by external stimuli; grimacing; echolalia; and echopraxia.
Other disorders (additional code 293.89 [F06.1] to indicate the presence of the co-morbid catatonia):
If catatonic symptoms are present but do not form the catatonic syndrome, a medication- or substance-induced aetiology should first be considered.
ICD-11 classification
In ICD-11 catatonia is defined as a syndrome of primarily psychomotor disturbances that is characterized by the simultaneous occurrence of several symptoms such as stupor; catalepsy; waxy flexibility; mutism; negativism; posturing; mannerisms; stereotypies; psychomotor agitation; grimacing; echolalia and echopraxia. Catatonia may occur in the context of specific mental disorders, including mood disorders, schizophrenia or other primary psychotic disorders, and neurodevelopmental disorders, and may be induced by psychoactive substances, including medications. Catatonia may also be caused by a medical condition not classified under mental, behavioral, or neurodevelopmental disorders.
Catatonia is often overlooked and under-diagnosed. Patients with catatonia most commonly have an underlying psychiatric disorder, and for this reason physicians may overlook signs of catatonia due to the severity of the psychosis the patient is presenting with. Furthermore, the patient may not be presenting with the common signs of catatonia such as mutism and posturing. Additionally, the motor abnormalities seen in catatonia are also present in psychiatric disorders. For example, a patient with mania will show increased motor activity that may progress to excited catatonia. One way in which physicians can differentiate between the two is to observe the motor abnormality. Patients with mania present with increased goal-directed activity. On the other hand, the increased activity in catatonia is not goal-directed and is often repetitive.
Catatonia is a clinical diagnosis and there is no specific laboratory test to diagnose it. However, certain testing can help determine what is causing the catatonia. An EEG will likely show diffuse slowing. If seizure activity is driving the syndrome, then an EEG would also be helpful in detecting this. CT or MRI will not show catatonia; however, they might reveal abnormalities that might be leading to the syndrome. Metabolic screens, inflammatory markers, or autoantibodies may reveal reversible medical causes of catatonia.
Vital signs should be frequently monitored as catatonia can progress to malignant catatonia which is life-threatening. Malignant catatonia is characterized by fever, hypertension, tachycardia, and tachypnea.
Various rating scales for catatonia have been developed; however, their utility for clinical care has not been well established. The most commonly used scale is the Bush-Francis Catatonia Rating Scale (BFCRS). The scale is composed of 23 items, with the first 14 items being used as the screening tool. If 2 of the first 14 are positive, this prompts further evaluation and completion of the remaining 9 items.
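As a minimal sketch of the screening logic just described (the function name and the example ratings are hypothetical, and the actual BFCRS items and scoring anchors are not reproduced here), the two-stage rule can be expressed as: rate the first 14 items, and complete the remaining 9 only if at least two screening items are present.

```python
def bfcrs_screen_positive(screening_scores):
    """Return True if >= 2 of the 14 BFCRS screening items are rated present (> 0)."""
    if len(screening_scores) != 14:
        raise ValueError("expected ratings for the 14 screening items")
    return sum(1 for score in screening_scores if score > 0) >= 2

# Hypothetical ratings for the 14 screening items (0 = absent, higher = more severe).
example_ratings = [0, 2, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

if bfcrs_screen_positive(example_ratings):
    print("Screen positive: complete the remaining 9 items of the 23-item scale.")
else:
    print("Screen negative: full-scale completion not prompted.")
```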
A diagnosis can be supported by the lorazepam challenge or the zolpidem challenge. Although barbiturates proved useful in the past, they are no longer commonly used in psychiatry; the remaining options are therefore benzodiazepines or ECT.
The differential diagnosis of catatonia is extensive as signs and symptoms of catatonia may overlap significantly with those of other conditions. Therefore, a careful and detailed history, medication review, and physical exam are key to diagnosing catatonia and differentiating it from other conditions. Furthermore, some of these conditions can themselves lead to catatonia. The differential diagnosis is as follows:
The initial treatment of catatonia is to stop any medication that could potentially be contributing to the syndrome. These may include steroids, stimulants, anticonvulsants, neuroleptics, dopamine blockers, etc. The next step is to provide a "lorazepam challenge," in which patients are given 2 mg of IV lorazepam (or another benzodiazepine). Most patients with catatonia will respond significantly to this within the first 15–30 minutes. If no change is observed after the first dose, then a second dose is given and the patient is re-examined. If the patient responds to the lorazepam challenge, then lorazepam can be scheduled at interval doses until the catatonia resolves. The lorazepam must be tapered slowly; otherwise, the catatonic symptoms may return. The underlying cause of the catatonia should also be treated during this time. If the catatonia has not resolved within a week, then ECT can be used to reverse the symptoms. ECT in combination with benzodiazepines is used to treat malignant catatonia. In France, zolpidem has also been used in diagnosis, and response may occur within the same time period. Ultimately the underlying cause needs to be treated.
Electroconvulsive therapy (ECT) is a well-established and effective treatment for catatonia. ECT has also shown favorable outcomes in patients with chronic catatonia. However, it has been pointed out that further high-quality randomized controlled trials are needed to evaluate the efficacy, tolerance, and protocols of ECT in catatonia.
Antipsychotics should be used with care, as they can worsen catatonia and can cause neuroleptic malignant syndrome, a dangerous condition that can mimic catatonia and requires immediate discontinuation of the antipsychotic.
According to a recent systematic review, there is evidence that clozapine works better than other antipsychotics for treating catatonia.
Excessive glutamate activity is believed to be involved in catatonia; when first-line treatment options fail, NMDA antagonists such as amantadine or memantine may be used. Amantadine may have an increased incidence of tolerance with prolonged use and can cause psychosis, due to its additional effects on the dopamine system. Memantine has a more targeted pharmacological profile for the glutamate system and a reduced incidence of psychosis, and may therefore be preferred for individuals who cannot tolerate amantadine. Topiramate is another treatment option for resistant catatonia; it exerts its therapeutic effects through glutamate antagonism via modulation of AMPA receptors.
Patients who experience an episode of catatonia are more likely to experience recurring episodes. Treatment response rates for patients with catatonia are 50–70%, and responders have a good prognosis; failure to respond to medication, however, portends a very poor prognosis. Many of these patients will require long-term and continuous mental health care. For patients with catatonia and underlying schizophrenia, the prognosis is much poorer.
Catatonia has been mostly studied in acutely ill psychiatric patients. Catatonia frequently goes unrecognized, leading to the belief that the syndrome is rare; however, this is not true, and prevalence has been reported to be as high as 10% in patients with acute psychiatric illnesses. One large population estimate has suggested that the incidence of catatonia is 10.6 episodes per 100,000 person-years. It occurs in males and females in approximately equal numbers. Between 21% and 46% of all catatonia cases can be attributed to a general medical condition.
Reports of stupor-like and catatonia-like states abound in the history of psychiatry. After the middle of the 19th century there was an increase of interest in the motor disorders accompanying madness, culminating in the publication by Karl Ludwig Kahlbaum in 1874 of Die Katatonie oder das Spannungsirresein (Catatonia or Tension Insanity). | [
{
"paragraph_id": 0,
"text": "Catatonia is a complex neuropsychiatric behavioral syndrome that is characterized by abnormal movements, immobility, abnormal behaviors, and withdrawal. The onset of catatonia can be acute or subtle and symptoms can wax, wane, or change during episodes. It has historically been related to schizophrenia (catatonic schizophrenia), but catatonia is most often seen in mood disorders. It is now known that catatonic symptoms are nonspecific and may be observed in other mental, neurological, and medical conditions. Catatonia is not a stand-alone diagnosis (although some experts disagree), and the term is used to describe a feature of the underlying disorder.",
"title": ""
},
{
"paragraph_id": 1,
"text": "There are several subtypes of catatonia: akinetic catatonia, excited catatonia, malignant catatonia, and delirious mania.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Recognizing and treating catatonia is very important as failure to do so can lead to poor outcomes and can be potentially fatal. Treatment with benzodiazepines or ECT can lead to remission of catatonia. There is growing evidence of the effectiveness of the NMDA receptor antagonists amantadine and memantine for benzodiazepine-resistant catatonia. Antipsychotics are sometimes employed, but they can worsen symptoms and have serious adverse effects.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The presentation of a patient with catatonia varies greatly depending on the subtype and underlying cause, and can be acute or subtle.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 4,
"text": "Because most patients with catatonia have an underlying psychiatric illness, the majority will present with worsening depression, mania, or psychosis followed by catatonia symptoms. Catatonia presents as a motor disturbance in which patients will display marked reduction in movement, marked agitation, or a mixture of both despite having the physical capacity to move normally. These patients may be unable to start an action or stop one. Movements and mannerisms may be repetitive, or purposeless.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 5,
"text": "The most common signs of catatonia are immobility, mutism, withdrawal and refusal to eat, staring, negativism, posturing (rigidity), rigidity, waxy flexibility/catalepsy, stereotypy (purposeless, repetitive movements), echolalia or echopraxia, verbigeration (repeat meaningless phrases). It should not be assumed that patients presenting with catatonia are unaware of their surroundings as some patients can recall in detail their catatonic state and their actions.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 6,
"text": "There are several subtypes of catatonia and they are characterized by the specific movement disturbance and associated features. Although catatonia can be divided into various subtypes, the natural history of catatonia is often fluctuant and different states can exist within the same individual.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 7,
"text": "Withdrawn Catatonia: This form of catatonia is characterized by decreased response to external stimuli, immobility or inhibited movement, mutism, staring, posturing, and negativism. Patients may sit or stand in the same position for hours, may hold odd positions, and may resist movement of their extremities.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 8,
"text": "Excited Catatonia: Excited catatonia is characterized by odd mannerisms/gestures, performing purposeless or inappropriate actions, excessive motor activity, restlessness, stereotypy, impulsivity, agitation, and combativeness. Speech and actions may be repetitive or mimic another person's. People in this state are extremely hyperactive and may have delusions and hallucinations. Catatonic excitement is commonly cited as one of the most dangerous mental states in psychiatry.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 9,
"text": "Malignant Catatonia: Malignant catatonia is a life-threatening condition that may progress rapidly within a few days. It is characterized by fever, abnormalities in blood pressure, heart rate, respiratory rate, diaphoresis (sweating), and delirium. Certain lab findings are common with this presentation; however, they are nonspecific, which means that they are also present in other conditions and do not diagnose catatonia. These lab findings include: leukocytosis, elevated creatine kinase, low serum iron. The signs and symptoms of malignant catatonia overlap significantly with neuroleptic malignant syndrome (NMS) and so a careful history, review of medications, and physical exam are critical to properly differentiate these conditions. For example, if the patient has waxy flexibility and holds a position against gravity when passively moved into that position, then it is likely catatonia. If the patient has a \"lead-pipe rigidity\" then NMS should be the prime suspect.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 10,
"text": "Other forms:",
"title": "Signs and symptoms"
},
{
"paragraph_id": 11,
"text": "Patients may experience several complications from being in a catatonic state. The nature of these complications will depend on the type of catatonia being experienced by the patient. For example, patients presenting with withdrawn catatonia may have refusal to eat which will in turn lead to malnutrition and dehydration. Furthermore, if immobility is a symptom the patient is presenting with, then they may develop pressure ulcers, muscle contractions, and are at risk of developing deep vein thrombosis (DVT) and pulmonary embolus (PE). Patients with excited catatonia may be aggressive and violent, and physical trauma may result from this. Catatonia may progress to the malignant type which will present with autonomic instability and may be life-threatening. Other complications also include the development of pneumonia and neuroleptic malignant syndrome.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 12,
"text": "Catatonia is almost always secondary to another underlying illness, often a psychiatric disorder. Mood disorders such as a bipolar disorder and depression are the most common etiologies to progress to catatonia. Other psychiatric associations include schizophrenia and other primary psychotic disorders. It also is related to autism spectrum disorders and ADHD. Psychodynamic theorists have interpreted catatonia as a defense against the potentially destructive consequences of responsibility, and the passivity of the disorder provides relief.",
"title": "Causes"
},
{
"paragraph_id": 13,
"text": "Catatonia is also seen in many medical disorders, including infections (such as encephalitis), autoimmune disorders, meningitis, focal neurological lesions (including strokes), alcohol withdrawal, abrupt or overly rapid benzodiazepine withdrawal, cerebrovascular disease, neoplasms, head injury, and some metabolic conditions (homocystinuria, diabetic ketoacidosis, hepatic encephalopathy, and hypercalcaemia).",
"title": "Causes"
},
{
"paragraph_id": 14,
"text": "The pathophysiology that leads to catatonia is still poorly understood and a definite mechanism remains unknown. Neurologic studies have implicated several pathways; however, it remains unclear whether these findings are the cause or the consequence of the disorder.",
"title": "Pathogenesis"
},
{
"paragraph_id": 15,
"text": "Abnormalities in GABA, glutamate signaling, serotonin, and dopamine transmission are believed to be implicated in catatonia.",
"title": "Pathogenesis"
},
{
"paragraph_id": 16,
"text": "Furthermore, it has also been hypothesized that pathways that connect the basal ganglia with the cortex and thalamus is involved in the development of catatonia.",
"title": "Pathogenesis"
},
{
"paragraph_id": 17,
"text": "There is not yet a definitive consensus regarding diagnostic criteria of catatonia. In the fifth edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-5, 2013) and the World Health Organization's eleventh edition of the International Classification of Diseases (ICD-11, 2022), the classification is more homogeneous than in earlier editions. Prominent researchers in the field have other suggestions for diagnostic criteria.",
"title": "Diagnosis"
},
{
"paragraph_id": 18,
"text": "DSM-5 classification",
"title": "Diagnosis"
},
{
"paragraph_id": 19,
"text": "The DSM-5 does not classify catatonia as an independent disorder, but rather it classifies it as catatonia associated with another mental disorder, due to another medical condition, or as unspecified catatonia.",
"title": "Diagnosis"
},
{
"paragraph_id": 20,
"text": "Catatonia is diagnosed by the presence of three or more of the following 12 psychomotor symptoms in association with a mental disorder, medical condition, or unspecified:",
"title": "Diagnosis"
},
{
"paragraph_id": 21,
"text": "Other disorders (additional code 293.89 [F06.1] to indicate the presence of the co-morbid catatonia):",
"title": "Diagnosis"
},
{
"paragraph_id": 22,
"text": "If catatonic symptoms are present but do not form the catatonic syndrome, a medication- or substance-induced aetiology should first be considered.",
"title": "Diagnosis"
},
{
"paragraph_id": 23,
"text": "ICD-11 classification",
"title": "Diagnosis"
},
{
"paragraph_id": 24,
"text": "In ICD-11 catatonia is defined as a syndrome of primarily psychomotor disturbances that is characterized by the simultaneous occurrence of several symptoms such as stupor; catalepsy; waxy flexibility; mutism; negativism; posturing; mannerisms; stereotypies; psychomotor agitation; grimacing; echolalia and echopraxia. Catatonia may occur in the context of specific mental disorders, including mood disorders, schizophrenia or other primary psychotic disorders, and Neurodevelopmental disorders, and may be induced by psychoactive substances, including medications. Catatonia may also be caused by a medical condition not classified under mental, behavioral, or neurodevelopmental disorders.",
"title": "Diagnosis"
},
{
"paragraph_id": 25,
"text": "Catatonia is often overlooked and under-diagnosed. Patients with catatonia most commonly have an underlying psychiatric disorder, for this reason, physicians may overlook signs of catatonia due to the severity of the psychosis the patient is presenting with. Furthermore, the patient may not be presenting with the common signs of catatonia such as mutism and posturing. Additionally, the motor abnormalities seen in catatonia are also present in psychiatric disorders. For example, a patient with mania will show increased motor activity that may progress to exciting catatonia. One way in which physicians can differentiate between the two is to observe the motor abnormality. Patients with mania present with increased goal-directed activity. On the other hand, the increased activity in catatonia is not goal-directed and often repetitive.",
"title": "Diagnosis"
},
{
"paragraph_id": 26,
"text": "Catatonia is a clinical diagnosis and there is no specific laboratory test to diagnose it. However, certain testing can help determine what is causing the catatonia. An EEG will likely show diffuse slowing. If seizure activity is driving the syndrome, then an EEG would also be helpful in detecting this. CT or MRI will not show catatonia; however, they might reveal abnormalities that might be leading to the syndrome. Metabolic screens, inflammatory markers, or autoantibodies may reveal reversible medical causes of catatonia.",
"title": "Diagnosis"
},
{
"paragraph_id": 27,
"text": "Vital signs should be frequently monitored as catatonia can progress to malignant catatonia which is life-threatening. Malignant catatonia is characterized by fever, hypertension, tachycardia, and tachypnea.",
"title": "Diagnosis"
},
{
"paragraph_id": 28,
"text": "Various rating scales for catatonia have been developed, however, their utility for clinical care has not been well established. The most commonly used scale is the Bush-Francis Catatonia Rating Scale (BFCRS) (external link is provided below). The scale is composed of 23 items with the first 14 items being used as the screening tool. If 2 of the 14 are positive, this prompts for further evaluation and completion of the remaining 9 items.",
"title": "Diagnosis"
},
{
"paragraph_id": 29,
"text": "A diagnosis can be supported by the lorazepam challenge or the zolpidem challenge. While proven useful in the past, barbiturates are no longer commonly used in psychiatry; thus the option of either benzodiazepines or ECT.",
"title": "Diagnosis"
},
{
"paragraph_id": 30,
"text": "The differential diagnosis of catatonia is extensive as signs and symptoms of catatonia may overlap significantly with those of other conditions. Therefore, a careful and detailed history, medication review, and physical exam are key to diagnosing catatonia and differentiating it from other conditions. Furthermore, some of these conditions can themselves lead to catatonia. The differential diagnosis is as follows:",
"title": "Diagnosis"
},
{
"paragraph_id": 31,
"text": "The initial treatment of catatonia is to stop medication that could be potentially leading to the syndrome. These may include steroids, stimulants, anticonvulsants, neuroleptics, dopamine blockers, etc. The next step is to provide a \"lorazepam challenge,\" in which patients are given 2 mg of IV lorazepam (or another benzodiazepine). Most patients with catatonia will respond significantly to this within the first 15–30 minutes. If no change is observed during the first dose, then a second dose is given and the patient is re-examined. If the patient responds to the lorazepam challenge, then lorazepam can be scheduled at interval doses until the catatonia resolves. The lorazepam must be tapered slowly, otherwise, the catatonia symptoms may return. The underlying cause of the catatonia should also be treated during this time. If within a week the catatonia is not resolved, then ECT can be used to reverse the symptoms. ECT in combination with benzodiazepines is used to treat malignant catatonia. In France, zolpidem has also been used in diagnosis, and response may occur within the same time period. Ultimately the underlying cause needs to be treated.",
"title": "Treatment"
},
{
"paragraph_id": 32,
"text": "Electroconvulsive therapy (ECT) is an effective treatment for catatonia that is well acknowledged. ECT has also shown favorable outcomes in patients with chronic catatonia. However, it has been pointed out that further high quality randomized controlled trials are needed to evaluate the efficacy, tolerance, and protocols of ECT in catatonia.",
"title": "Treatment"
},
{
"paragraph_id": 33,
"text": "Antipsychotics should be used with care as they can worsen catatonia and are the cause of neuroleptic malignant syndrome, a dangerous condition that can mimic catatonia and requires immediate discontinuation of the antipsychotic.",
"title": "Treatment"
},
{
"paragraph_id": 34,
"text": "There is evidence clozapine works better than other antipsychotics to treat catatonia, following a recent systematic review.",
"title": "Treatment"
},
{
"paragraph_id": 35,
"text": "Excessive glutamate activity is believed to be involved in catatonia; when first-line treatment options fail, NMDA antagonists such as amantadine or memantine may be used. Amantadine may have an increased incidence of tolerance with prolonged use and can cause psychosis, due to its additional effects on the dopamine system. Memantine has a more targeted pharmacological profile for the glutamate system, reduced incidence of psychosis and may therefore be preferred for individuals who cannot tolerate amantadine. Topiramate is another treatment option for resistant catatonia; it produces its therapeutic effects by producing glutamate antagonism via modulation of AMPA receptors.",
"title": "Treatment"
},
{
"paragraph_id": 36,
"text": "Patients who experience an episode of catatonia are more likely to experience another recurring episode. Treatment response for patients with catatonia is 50–70% and these patients have a good prognosis. However, failure to respond to medication is a very poor prognosis. Many of these patients will require long-term and continuous mental health care. For patients with catatonia with underlying schizophrenia, the prognosis is much poorer.",
"title": "Prognosis"
},
{
"paragraph_id": 37,
"text": "Catatonia has been mostly studied in acutely ill psychiatric patients. Catatonia frequently goes unrecognized, leading to the belief that the syndrome is rare; however, this is not true and prevalence has been reported to be as high as 10% in patients with acute psychiatric illnesses. One large population estimate has suggested that the incidence of catatonia is 10.6 episodes per 100 000 person-years. It occurs in males and females in approximately equal numbers. 21-46% of all catatonia cases can be attributed to a general medical condition.",
"title": "Epidemiology"
},
{
"paragraph_id": 38,
"text": "Reports of stupor-like and catatonia-like states abound in the history of psychiatry. After the middle of the 19th century there was an increase of interest in the motor disorders accompanying madness, culminating in the publication by Karl Ludwig Kahlbaum in 1874 of Die Katatonie oder das Spannungsirresein (Catatonia or Tension Insanity).",
"title": "History"
}
] | Catatonia is a complex neuropsychiatric behavioral syndrome that is characterized by abnormal movements, immobility, abnormal behaviors, and withdrawal. The onset of catatonia can be acute or subtle and symptoms can wax, wane, or change during episodes. It has historically been related to schizophrenia, but catatonia is most often seen in mood disorders. It is now known that catatonic symptoms are nonspecific and may be observed in other mental, neurological, and medical conditions. Catatonia is not a stand-alone diagnosis, and the term is used to describe a feature of the underlying disorder. There are several subtypes of catatonia: akinetic catatonia, excited catatonia, malignant catatonia, and delirious mania. Recognizing and treating catatonia is very important as failure to do so can lead to poor outcomes and can be potentially fatal. Treatment with benzodiazepines or ECT can lead to remission of catatonia. There is growing evidence of the effectiveness of the NMDA receptor antagonists amantadine and memantine for benzodiazepine-resistant catatonia. Antipsychotics are sometimes employed, but they can worsen symptoms and have serious adverse effects. | 2001-08-07T08:17:53Z | 2023-12-28T04:34:44Z | [
"Template:About",
"Template:Rp",
"Template:Reflist",
"Template:Page needed",
"Template:Medical condition classification and resources",
"Template:Distinguish",
"Template:Citation needed",
"Template:Lang-de",
"Template:Div col",
"Template:Mental and behavioral disorders",
"Template:Use dmy dates",
"Template:Cite journal",
"Template:Cite web",
"Template:Dead link",
"Template:Short description",
"Template:Infobox medical condition (new)",
"Template:Div col end",
"Template:Cite book",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Catatonia |
5,244 | Cipher | In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography.
Codes generally substitute strings of characters of different lengths in the output, while ciphers generally substitute the same number of characters as are input. A code maps one meaning to another. Words and phrases can be coded as letters or numbers. Codes typically map directly from the input to a codebook entry, and they primarily function to save time. Ciphers are algorithmic: the given input must be run through the cipher's process to be encrypted or solved. Ciphers are commonly used to encrypt written information.
Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates." When using a cipher the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it.
The operation of a cipher usually depends on a piece of auxiliary information, called a key (or, in traditional NSA parlance, a cryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message. Without knowledge of the key, it should be extremely difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext.
Most modern ciphers can be categorized in several ways: by whether they work on blocks of symbols of a fixed size (block ciphers) or on a continuous stream of symbols (stream ciphers), and by whether the same key is used for both encryption and decryption (symmetric key algorithms) or a different key is used for each (asymmetric key algorithms).
Originating from the Arabic word for zero صفر (sifr), the word "cipher" spread to Europe as part of the Arabic numeral system during the Middle Ages. The Roman numeral system lacked the concept of zero, and this limited advances in mathematics. In this transition, the word was adopted into Medieval Latin as cifra, and then into Middle French as cifre. This eventually led to the English word cipher (minority spelling cypher). One theory for how the term came to refer to encoding is that the concept of zero was confusing to Europeans, and so the term came to refer to a message or communication that was not easily understood.
The term cipher was later also used to refer to any Arabic digit, or to calculation using them, so encoding text in the form of Arabic numerals is literally converting the text to "ciphers".
In casual contexts, "code" and "cipher" can typically be used interchangeably; however, the technical usages of the words refer to different concepts. Codes contain meaning; words and phrases are assigned to numbers or symbols, creating a shorter message.
An example of this is the commercial telegraph code, which was used to shorten the long telegraph messages that resulted from entering into commercial contracts using exchanges of telegrams.
Another example is given by whole-word ciphers, which allow the user to replace an entire word with a symbol or character, much like the way written Japanese utilizes Kanji (Chinese characters) to supplement the native Japanese characters representing syllables. An example using the English language with Kanji could be to replace "The quick brown fox jumps over the lazy dog" with "The quick brown 狐 jumps 上 the lazy 犬". Stenographers sometimes use specific symbols to abbreviate whole words.
Ciphers, on the other hand, work at a lower level: the level of individual letters, small groups of letters, or, in modern schemes, individual bits and blocks of bits. Some systems used both codes and ciphers in one system, using superencipherment to increase the security. In some cases the terms codes and ciphers are used synonymously with substitution and transposition, respectively.
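As an illustration of ciphers operating on individual bits rather than letters, here is a minimal keystream-XOR sketch, the operation underlying stream ciphers (and, with a truly random, never-reused key as long as the message, the one-time pad). The random keystream here is generated only for the example and is not a statement about any particular cipher's key schedule.

```python
import secrets

def xor_bytes(data: bytes, keystream: bytes) -> bytes:
    """XOR each bit of the data with the corresponding bit of the keystream."""
    # The keystream must be at least as long as the data.
    return bytes(d ^ k for d, k in zip(data, keystream))

message = b"GOOD DOG"
keystream = secrets.token_bytes(len(message))  # random bytes, as long as the message

ciphertext = xor_bytes(message, keystream)
recovered = xor_bytes(ciphertext, keystream)   # XOR with the same keystream decrypts

print(ciphertext)  # unintelligible bytes without the keystream
print(recovered)   # b'GOOD DOG'
```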
Historically, cryptography was split into a dichotomy of codes and ciphers, while coding had its own terminology analogous to that of ciphers: "encoding, codetext, decoding" and so on.
However, codes have a variety of drawbacks, including susceptibility to cryptanalysis and the difficulty of managing a cumbersome codebook. Because of this, codes have fallen into disuse in modern cryptography, and ciphers are the dominant technique.
There are a variety of different types of encryption. Algorithms used earlier in the history of cryptography are substantially different from modern methods, and modern ciphers can be classified according to how they operate and whether they use one or two keys.
The Caesar cipher is one of the earliest known cryptographic systems. Julius Caesar used a cipher that shifts each letter of the alphabet three places, wrapping the final letters around to the front of the alphabet, to write to Marcus Tullius Cicero in approximately 50 BC.[11]
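As a concrete sketch of the shift just described, here is a minimal Caesar cipher in Python; the three-place shift matches the paragraph above, and the function names are illustrative only.

```python
import string

def caesar_encrypt(plaintext: str, shift: int = 3) -> str:
    """Shift each letter forward by `shift` places, wrapping around the alphabet."""
    result = []
    for ch in plaintext:
        if ch.isalpha():
            alphabet = string.ascii_uppercase if ch.isupper() else string.ascii_lowercase
            result.append(alphabet[(alphabet.index(ch) + shift) % 26])
        else:
            result.append(ch)  # leave spaces and punctuation unchanged
    return "".join(result)

def caesar_decrypt(ciphertext: str, shift: int = 3) -> str:
    """Reverse the shift."""
    return caesar_encrypt(ciphertext, -shift)

print(caesar_encrypt("ATTACK AT DAWN"))   # DWWDFN DW GDZQ
print(caesar_decrypt("DWWDFN DW GDZQ"))   # ATTACK AT DAWN
```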
Pen and paper ciphers used historically are sometimes known as classical ciphers. They include simple substitution ciphers (such as ROT13) and transposition ciphers (such as a Rail Fence Cipher). For example, "GOOD DOG" can be encrypted as "PLLX XLP", where "L" substitutes for "O", "P" for "G", and "X" for "D" in the message. Transposition of the letters of "GOOD DOG" can result in "DGOGDOO". These simple ciphers and examples are easy to crack, even without plaintext-ciphertext pairs.
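A minimal sketch of the monoalphabetic substitution in the example above, using only the partial letter mapping given in the text ("L" for "O", "P" for "G", "X" for "D"); a complete cipher would of course define a substitute for every letter of the alphabet.

```python
# Partial substitution table taken from the example in the text; a complete
# cipher would map every letter of the alphabet.
SUBSTITUTION = {"O": "L", "G": "P", "D": "X"}

def substitute(plaintext: str) -> str:
    """Replace each letter using the table, leaving unmapped characters as-is."""
    return "".join(SUBSTITUTION.get(ch, ch) for ch in plaintext)

print(substitute("GOOD DOG"))  # PLLX XLP
```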
William Shakespeare often used the concept of ciphers in his writing to symbolize nothingness. In Shakespeare's Henry V, he relates one of the accounting methods that brought the Arabic Numeral system and zero to Europe, to the human imagination. The actors who perform this play were not at the battles of Henry V's reign, so they represent absence. In another sense, ciphers are important to people who work with numbers, but they do not hold value. Shakespeare used this concept to outline how those who counted and identified the dead from the battles used that information as a political weapon, furthering class biases and xenophobia.
In the 1640s, the Parliamentarian commander, Edward Montagu, 2nd Earl of Manchester, developed ciphers to send coded messages to his allies during the English Civil War.
Simple ciphers were replaced by polyalphabetic substitution ciphers (such as the Vigenère), which changed the substitution alphabet for every letter. For example, "GOOD DOG" can be encrypted as "PLSX TWF", where "L", "S", and "W" each substitute for "O". With even a small amount of known or estimated plaintext, simple polyalphabetic substitution ciphers and letter transposition ciphers designed for pen and paper encryption are easy to crack. It is possible to create a secure pen and paper cipher based on a one-time pad, but the usual disadvantages of one-time pads apply.
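A minimal polyalphabetic (Vigenère-style) sketch; the keyword "KEY" is an arbitrary example and is not the key that produced the "PLSX TWF" ciphertext quoted above. If the key were instead a truly random sequence as long as the message and never reused, the scheme would amount to a one-time pad.

```python
import itertools
import string

ALPHABET = string.ascii_uppercase

def vigenere_encrypt(plaintext: str, key: str) -> str:
    """Shift each letter by the amount given by the corresponding key letter."""
    key_cycle = itertools.cycle(key.upper())
    out = []
    for ch in plaintext.upper():
        if ch in ALPHABET:
            shift = ALPHABET.index(next(key_cycle))
            out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        else:
            out.append(ch)  # spaces pass through; no key letter is consumed
    return "".join(out)

print(vigenere_encrypt("GOOD DOG", "KEY"))  # QSMN HMQ (repeated letters encrypt differently)
```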
During the early twentieth century, electro-mechanical machines were invented to do encryption and decryption using transposition, polyalphabetic substitution, and a kind of "additive" substitution. In rotor machines, several rotor disks provided polyalphabetic substitution, while plug boards provided another substitution. Keys were easily changed by changing the rotor disks and the plugboard wires. Although these encryption methods were more complex than previous schemes and required machines to encrypt and decrypt, other machines such as the British Bombe were invented to crack these encryption methods.
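A drastically simplified sketch of the rotor idea described above: a substitution alphabet that advances after every letter. Real rotor machines such as the Enigma used several wired rotors plus a plugboard; this toy version only illustrates why the substitution changes from letter to letter.

```python
import string

ALPHABET = string.ascii_uppercase

def rotor_encrypt(plaintext: str, start_shift: int = 0) -> str:
    """Encrypt with a shift that advances by one after every letter,
    loosely imitating how a rotor steps between key presses."""
    shift = start_shift
    out = []
    for ch in plaintext.upper():
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
            shift = (shift + 1) % 26  # the "rotor" advances one position
        else:
            out.append(ch)
    return "".join(out)

print(rotor_encrypt("GOOD DOG"))  # GPQG HTM
```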
Modern encryption methods can be divided by two criteria: by type of key used, and by type of input data.
By the type of key used, ciphers are divided into symmetric key algorithms (private-key cryptography), in which the same key is used for encryption and decryption, and asymmetric key algorithms (public-key cryptography), in which the enciphering key is different from, but closely related to, the deciphering key.
In a symmetric key algorithm (e.g., DES and AES), the sender and receiver must have a shared key set up in advance and kept secret from all other parties; the sender uses this key for encryption, and the receiver uses the same key for decryption. The design of AES (Advanced Encryption Standard) was beneficial because it aimed to overcome the flaws in the design of DES (Data Encryption Standard). AES's designers claim that the common means of modern cipher cryptanalytic attacks are ineffective against AES due to its design structure.[12]
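As a practical sketch of the shared-key workflow, the example below uses Fernet from the third-party Python cryptography package (an AES-based authenticated-encryption recipe); it assumes that package is installed and is meant only to illustrate that sender and receiver use the same secret key, not to describe AES internals.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# The sender and receiver must share this key in advance and keep it secret.
key = Fernet.generate_key()

ciphertext = Fernet(key).encrypt(b"Proceed to the following coordinates.")
plaintext = Fernet(key).decrypt(ciphertext)  # the receiver uses the same key

print(ciphertext)  # opaque token, unreadable without the key
print(plaintext)   # b'Proceed to the following coordinates.'
```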
Ciphers can be distinguished into two types by the type of input data: block ciphers, which encrypt blocks of symbols of a fixed size, and stream ciphers, which encrypt a continuous stream of symbols.
In a pure mathematical attack (i.e., one lacking any other information to help break a cipher), two factors above all count: the amount of computational power available to the attacker and the size of the key.
Since the desired effect is computational difficulty, in theory one would choose an algorithm and desired difficulty level, thus decide the key length accordingly.
An example of this process can be found at Key Length, which uses multiple reports to suggest that a symmetric cipher with 128-bit keys, an asymmetric cipher with 3072-bit keys, and an elliptic-curve cipher with 256-bit keys all have similar difficulty at present.
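A small arithmetic sketch of why key length matters for brute-force difficulty. The guess rate of one trillion keys per second is an arbitrary illustrative assumption, and the calculation counts symmetric key spaces only; asymmetric and elliptic-curve keys are attacked with better-than-brute-force methods, which is why their equivalent lengths in the comparison above are much larger.

```python
# Brute-force difficulty for a symmetric key grows exponentially with key length.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def brute_force_years(key_bits: int, guesses_per_second: float = 1e12) -> float:
    """Worst-case years needed to try every key at an assumed, illustrative guess rate."""
    return 2 ** key_bits / guesses_per_second / SECONDS_PER_YEAR

for bits in (56, 128, 256):  # DES-era, AES-128, and AES-256 key sizes
    print(f"{bits:3d}-bit key: about {brute_force_years(bits):.2e} years")
```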
Claude Shannon proved, using information theory considerations, that any theoretically unbreakable cipher must have keys which are at least as long as the plaintext, and used only once: one-time pad. | [
{
"paragraph_id": 0,
"text": "In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, \"cipher\" is synonymous with \"code\", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Codes generally substitute different length strings of characters in the output, while ciphers generally substitute the same number of characters as are input. A code maps one meaning with another. Words and phrases can be coded as letters or numbers. Codes typically have direct meaning from input to key. Codes primarily function to save time. Ciphers are algorithmic. The given input must follow the cipher's process to be solved. Ciphers are commonly used to encrypt written information.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, \"UQJHSE\" could be the code for \"Proceed to the following coordinates.\" When using a cipher the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The operation of a cipher usually depends on a piece of auxiliary information, called a key (or, in traditional NSA parlance, a cryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message. Without knowledge of the key, it should be extremely difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Most modern ciphers can be categorized in several ways",
"title": ""
},
{
"paragraph_id": 5,
"text": "Originating from the Arabic word for zero صفر (sifr), the word \"cipher\" spread to Europe as part of the Arabic numeral system during the Middle Ages. The Roman numeral system lacked the concept of zero, and this limited advances in mathematics. In this transition, the word was adopted into Medieval Latin as cifra, and then into Middle French as cifre. This eventually led to the English word cipher (minority spelling cypher). One theory for how the term came to refer to encoding is that the concept of zero was confusing to Europeans, and so the term came to refer to a message or communication that was not easily understood.",
"title": "Etymology"
},
{
"paragraph_id": 6,
"text": "The term cipher was later also used to refer to any Arabic digit, or to calculation using them, so encoding text in the form of Arabic numerals is literally converting the text to \"ciphers\".",
"title": "Etymology"
},
{
"paragraph_id": 7,
"text": "In casual contexts, \"code\" and \"cipher\" can typically be used interchangeably, however, the technical usages of the words refer to different concepts. Codes contain meaning; words and phrases are assigned to numbers or symbols, creating a shorter message.",
"title": "Versus codes"
},
{
"paragraph_id": 8,
"text": "An example of this is the commercial telegraph code which was used to shorten long telegraph messages which resulted from entering into commercial contracts using exchanges of telegrams.",
"title": "Versus codes"
},
{
"paragraph_id": 9,
"text": "Another example is given by whole word ciphers, which allow the user to replace an entire word with a symbol or character, much like the way written Japanese utilizes Kanji (meaning Chinese characters in Japanese) characters to supplement the native Japanese characters representing syllables. An example using English language with Kanji could be to replace \"The quick brown fox jumps over the lazy dog\" by \"The quick brown 狐 jumps 上 the lazy 犬\". Stenographers sometimes use specific symbols to abbreviate whole words.",
"title": "Versus codes"
},
{
"paragraph_id": 10,
"text": "Ciphers, on the other hand, work at a lower level: the level of individual letters, small groups of letters, or, in modern schemes, individual bits and blocks of bits. Some systems used both codes and ciphers in one system, using superencipherment to increase the security. In some cases the terms codes and ciphers are used synonymously with substitution and transposition, respectively.",
"title": "Versus codes"
},
{
"paragraph_id": 11,
"text": "Historically, cryptography was split into a dichotomy of codes and ciphers, while coding had its own terminology analogous to that of ciphers: \"encoding, codetext, decoding\" and so on.",
"title": "Versus codes"
},
{
"paragraph_id": 12,
"text": "However, codes have a variety of drawbacks, including susceptibility to cryptanalysis and the difficulty of managing a cumbersome codebook. Because of this, codes have fallen into disuse in modern cryptography, and ciphers are the dominant technique.",
"title": "Versus codes"
},
{
"paragraph_id": 13,
"text": "There are a variety of different types of encryption. Algorithms used earlier in the history of cryptography are substantially different from modern methods, and modern ciphers can be classified according to how they operate and whether they use one or two keys.",
"title": "Types"
},
{
"paragraph_id": 14,
"text": "The Caesar Cipher is one of the earliest known cryptographic systems. Julius Caesar used a cipher that shifts the letters in the alphabet in place by three and wrapping the remaining letters to the front to write to Marcus Tullius Cicero in approximately 50 BC.[11]",
"title": "Types"
},
{
"paragraph_id": 15,
"text": "Historical pen and paper ciphers used in the past are sometimes known as classical ciphers. They include simple substitution ciphers (such as ROT13) and transposition ciphers (such as a Rail Fence Cipher). For example, \"GOOD DOG\" can be encrypted as \"PLLX XLP\" where \"L\" substitutes for \"O\", \"P\" for \"G\", and \"X\" for \"D\" in the message. Transposition of the letters \"GOOD DOG\" can result in \"DGOGDOO\". These simple ciphers and examples are easy to crack, even without plaintext-ciphertext pairs.",
"title": "Types"
},
{
"paragraph_id": 16,
"text": "William Shakespeare often used the concept of ciphers in his writing to symbolize nothingness. In Shakespeare's Henry V, he relates one of the accounting methods that brought the Arabic Numeral system and zero to Europe, to the human imagination. The actors who perform this play were not at the battles of Henry V's reign, so they represent absence. In another sense, ciphers are important to people who work with numbers, but they do not hold value. Shakespeare used this concept to outline how those who counted and identified the dead from the battles used that information as a political weapon, furthering class biases and xenophobia.",
"title": "Types"
},
{
"paragraph_id": 17,
"text": "In the 1640s, the Parliamentarian commander, Edward Montagu, 2nd Earl of Manchester, developed ciphers to send coded messages to his allies during the English Civil War.",
"title": "Types"
},
{
"paragraph_id": 18,
"text": "Simple ciphers were replaced by polyalphabetic substitution ciphers (such as the Vigenère) which changed the substitution alphabet for every letter. For example, \"GOOD DOG\" can be encrypted as \"PLSX TWF\" where \"L\", \"S\", and \"W\" substitute for \"O\". With even a small amount of known or estimated plaintext, simple polyalphabetic substitution ciphers and letter transposition ciphers designed for pen and paper encryption are easy to crack. It is possible to create a secure pen and paper cipher based on a one-time pad though, but the usual disadvantages of one-time pads apply.",
"title": "Types"
},
{
"paragraph_id": 19,
"text": "During the early twentieth century, electro-mechanical machines were invented to do encryption and decryption using transposition, polyalphabetic substitution, and a kind of \"additive\" substitution. In rotor machines, several rotor disks provided polyalphabetic substitution, while plug boards provided another substitution. Keys were easily changed by changing the rotor disks and the plugboard wires. Although these encryption methods were more complex than previous schemes and required machines to encrypt and decrypt, other machines such as the British Bombe were invented to crack these encryption methods.",
"title": "Types"
},
{
"paragraph_id": 20,
"text": "Modern encryption methods can be divided by two criteria: by type of key used, and by type of input data.",
"title": "Types"
},
{
"paragraph_id": 21,
"text": "By type of key used ciphers are divided into:",
"title": "Types"
},
{
"paragraph_id": 22,
"text": "In a symmetric key algorithm (e.g., DES and AES), the sender and receiver must have a shared key set up in advance and kept secret from all other parties; the sender uses this key for encryption, and the receiver uses the same key for decryption. The design of AES (Advanced Encryption System) was beneficial because it aimed to overcome the flaws in the design of the DES (Data encryption standard). AES's designer's claim that the common means of modern cipher cryptanalytic attacks are ineffective against AES due to its design structure.[12]",
"title": "Types"
},
{
"paragraph_id": 23,
"text": "Ciphers can be distinguished into two types by the type of input data:",
"title": "Types"
},
{
"paragraph_id": 24,
"text": "In a pure mathematical attack, (i.e., lacking any other information to help break a cipher) two factors above all count:",
"title": "Key size and vulnerability"
},
{
"paragraph_id": 25,
"text": "Since the desired effect is computational difficulty, in theory one would choose an algorithm and desired difficulty level, thus decide the key length accordingly.",
"title": "Key size and vulnerability"
},
{
"paragraph_id": 26,
"text": "An example of this process can be found at Key Length which uses multiple reports to suggest that a symmetrical cipher with 128 bits, an asymmetric cipher with 3072 bit keys, and an elliptic curve cipher with 256 bits, all have similar difficulty at present.",
"title": "Key size and vulnerability"
},
{
"paragraph_id": 27,
"text": "Claude Shannon proved, using information theory considerations, that any theoretically unbreakable cipher must have keys which are at least as long as the plaintext, and used only once: one-time pad.",
"title": "Key size and vulnerability"
}
] | In cryptography, a cipher is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography. Codes generally substitute different length strings of characters in the output, while ciphers generally substitute the same number of characters as are input. A code maps one meaning with another. Words and phrases can be coded as letters or numbers. Codes typically have direct meaning from input to key. Codes primarily function to save time. Ciphers are algorithmic. The given input must follow the cipher's process to be solved. Ciphers are commonly used to encrypt written information. Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates." When using a cipher the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it. The operation of a cipher usually depends on a piece of auxiliary information, called a key. The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message. Without knowledge of the key, it should be extremely difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext. Most modern ciphers can be categorized in several ways By whether they work on blocks of symbols usually of a fixed size, or on a continuous stream of symbols.
By whether the same key is used for both encryption and decryption, or if a different key is used for each. If the algorithm is symmetric, the key must be known to the recipient and sender and to no one else. If the algorithm is an asymmetric one, the enciphering key is different from, but closely related to, the deciphering key. If one key cannot be deduced from the other, the asymmetric key algorithm has the public/private key property and one of the keys may be made public without loss of confidentiality. | 2001-09-21T01:09:20Z | 2023-11-06T15:43:44Z | [
"Template:Wiktionary",
"Template:Cryptography navbox",
"Template:Short description",
"Template:Other uses",
"Template:Cite journal",
"Template:Citation",
"Template:Doi",
"Template:Authority control",
"Template:Cn",
"Template:Reflist",
"Template:Cite book",
"Template:Cite web",
"Template:Main",
"Template:ISBN",
"Template:More footnotes",
"Template:Harvnb"
] | https://en.wikipedia.org/wiki/Cipher |
5,247 | Country music | Country (also called country and western) is a music genre originating in the Southern and Southwestern United States. First produced in the 1920s, country music primarily focuses on working class Americans and blue-collar American life.
Country music is known for its ballads and dance tunes (also known as "honky-tonk music") with simple form, folk lyrics, and harmonies generally accompanied by instruments such as banjos, fiddles, harmonicas, and many types of guitar (including acoustic, electric, steel, and resonator guitars). Though it is primarily rooted in various forms of American folk music, such as old-time music and Appalachian music, many other traditions, including, Mexican, Irish, and Hawaiian music, have also had a formative influence on the genre. Blues modes have been used extensively throughout its history as well.
The term country music gained popularity in the 1940s in preference to hillbilly music; it came to encompass western music, which evolved parallel to hillbilly music from similar roots, in the mid-20th century. Contemporary styles of western music include Texas country, red dirt, and Hispano- and Mexican American-led Tejano and New Mexico music, all extant alongside longstanding indigenous traditions.
In 2009, in the United States, country music was the most listened to rush hour radio genre during the evening commute, and second most popular in the morning commute.
The main components of the modern country music style date back to music traditions throughout the Southern United States and Southwestern United States, while its place in American popular music was established in the 1920s during the early days of music recording. According to country historian Bill C. Malone, country music was "introduced to the world as a Southern phenomenon."
Migration into the southern Appalachian Mountains, of the Southeastern United States, brought the folk music and instruments of Europe, Africa, and the Mediterranean Basin along with it for nearly 300 years, which developed into Appalachian music. As the country expanded westward, the Mississippi River and Louisiana became a crossroads for country music, giving rise to Cajun music. In the Southwestern United States, it was the Rocky Mountains, American frontier, and Rio Grande that acted as a similar backdrop for Native American, Mexican, and cowboy ballads, which resulted in New Mexico music and the development of western music, and its directly related Red Dirt, Texas country, and Tejano music styles. In the Asia-Pacific, the steel guitar sound of country music has its provenance in the music of Hawaii.
The U.S. Congress has formally recognized Bristol, Tennessee as the "Birthplace of Country Music", based on the historic Bristol recording sessions of 1927. Since 2014, the city has been home to the Birthplace of Country Music Museum. Historians have also noted the influence of the less-known Johnson City sessions of 1928 and 1929, and the Knoxville sessions of 1929 and 1930. In addition, the Mountain City Fiddlers Convention, held in 1925, helped to inspire modern country music. Before these, pioneer settlers, in the Great Smoky Mountains region, had developed a rich musical heritage.
The first generation emerged in the 1920s, with Atlanta's music scene playing a major role in launching country's earliest recording artists. James Gideon "Gid" Tanner (1885–1960) was an American old-time fiddler and one of the earliest stars of what would come to be known as country music. His band, the Skillet Lickers, was one of the most innovative and influential string bands of the 1920s and 1930s. Its most notable members were Clayton McMichen (fiddle and vocal), Dan Hornsby (vocals), Riley Puckett (guitar and vocal) and Robert Lee Sweat (guitar). New York City record label Okeh Records began issuing hillbilly music records by Fiddlin' John Carson as early as 1923, followed by Columbia Records (series 15000D "Old Familiar Tunes") (Samantha Bumgarner) in 1924, and RCA Victor Records in 1927 with the first famous pioneers of the genre Jimmie Rodgers, who is widely considered the "Father of Country Music", and the first family of country music the Carter Family. Many "hillbilly" musicians recorded blues songs throughout the 1920s.
During the second generation (1930s–1940s), radio became a popular source of entertainment, and "barn dance" shows featuring country music were started all over the South, as far north as Chicago, and as far west as California. The most important was the Grand Ole Opry, aired starting in 1925 by WSM in Nashville and continuing to the present day. During the 1930s and 1940s, cowboy songs, or western music, which had been recorded since the 1920s, were popularized by films made in Hollywood, many featuring Gene Autry, who was known as king of the "singing cowboys," and Hank Williams. Bob Wills was another country musician from the Lower Great Plains who had become very popular as the leader of a "hot string band," and who also appeared in Hollywood westerns. His mix of country and jazz, which started out as dance hall music, would become known as western swing. Wills was one of the first country musicians known to have added an electric guitar to his band, in 1938. Country musicians began recording boogie in 1939, shortly after it had been played at Carnegie Hall, when Johnny Barfield recorded "Boogie Woogie".
The third generation (1950s–1960s) started at the end of World War II with "mountaineer" string band music known as bluegrass, which emerged when Bill Monroe, along with Lester Flatt and Earl Scruggs, was introduced by Roy Acuff at the Grand Ole Opry. Gospel music remained a popular component of country music. The Native American, Hispano, and American frontier music of the Southwestern United States and Northern Mexico became popular among poor communities in New Mexico, Oklahoma, and Texas; the basic ensemble consisted of classical guitar, bass guitar, dobro or steel guitar, though some larger ensembles featured electric guitars, trumpets, keyboards (especially the honky-tonk piano, a type of tack piano), banjos, and drums. By the early 1950s it blended with rock and roll, becoming the rockabilly sound produced by Sam Phillips, Norman Petty, and Bob Keane. Musicians like Elvis Presley, Bo Diddley, Buddy Holly, Jerry Lee Lewis, Ritchie Valens, Carl Perkins, Roy Orbison, and Johnny Cash emerged as enduring representatives of the style. Beginning in the mid-1950s, and reaching its peak during the early 1960s, the Nashville sound turned country music into a multimillion-dollar industry centered in Nashville, Tennessee; Patsy Cline and Jim Reeves were two of the most broadly popular Nashville sound artists, and their deaths in separate plane crashes in the early 1960s were a factor in the genre's decline. From the 1950s to the mid-1960s, western singer-songwriters such as Michael Martin Murphey and Marty Robbins rose to prominence, as did others throughout western music traditions, such as New Mexico music's Al Hurricane. The late 1960s in American music produced a unique blend as a result of traditionalist backlash within separate genres. In the aftermath of the British Invasion, many desired a return to the "old values" of rock n' roll. At the same time there was a lack of enthusiasm in the country sector for Nashville-produced music. What resulted was a crossbred genre known as country rock.
Fourth generation (1970s–1980s) music included outlaw country with roots in the Bakersfield sound, and country pop with roots in the countrypolitan, folk music and soft rock. Between 1972 and 1975 singer/guitarist John Denver released a series of hugely successful songs blending country and folk-rock musical styles. By the mid-1970s, Texas country and Tejano music gained popularity with performers like Freddie Fender. During the early 1980s country artists continued to see their records perform well on the pop charts. In 1980 a style of "neocountry disco music" was popularized. During the mid-1980s a group of new artists began to emerge who rejected the more polished country-pop sound that had been prominent on radio and the charts in favor of more traditional "back-to-basics" production.
During the fifth generation (the 1990s), neotraditionalists and stadium country acts prospered.
The sixth generation (2000s–present) has seen a certain amount of diversification in regard to country music styles. It has also, however, seen a shift into patriotism and conservative politics since 9/11, though such themes are less prevalent in more modern trends. The influence of rock music in country has become more overt during the late 2000s and early 2010s. Most of the best-selling country songs of this era were those by Lady A, Florida Georgia Line, Carrie Underwood, and Taylor Swift. Hip hop also made its mark on country music with the emergence of country rap.
The first commercial recordings of what was considered instrumental music in the traditional country style were "Arkansas Traveler" and "Turkey in the Straw" by fiddlers Henry Gilliland & A.C. (Eck) Robertson on June 30, 1922, for Victor Records and released in April 1923. Columbia Records began issuing records with "hillbilly" music (series 15000D "Old Familiar Tunes") as early as 1924.
The first commercial recording of what is widely considered to be the first country song featuring vocals and lyrics was Fiddlin' John Carson with "Little Log Cabin in the Lane" for Okeh Records on June 14, 1923.
Vernon Dalhart was the first country singer to have a nationwide hit in May 1924 with "Wreck of the Old 97". The flip side of the record was "Lonesome Road Blues", which also became very popular. In April 1924, "Aunt" Samantha Bumgarner and Eva Davis became the first female musicians to record and release country songs. Many of the early country musicians, such as the yodeler Cliff Carlisle, recorded blues songs into the 1930s. Other important early recording artists were Riley Puckett, Don Richardson, Fiddlin' John Carson, Uncle Dave Macon, Al Hopkins, Ernest V. Stoneman, Blind Alfred Reed, Charlie Poole and the North Carolina Ramblers and the Skillet Lickers. The steel guitar entered country music as early as 1922, when Jimmie Tarlton met famed Hawaiian guitarist Frank Ferera on the West Coast.
Jimmie Rodgers and the Carter Family are widely considered to be important early country musicians. From Scott County, Virginia, the Carters had learned sight reading of hymnals and sheet music using solfege. Their songs were first captured at a historic recording session in Bristol, Tennessee, on August 1, 1927, where Ralph Peer was the talent scout and sound recordist. A scene in the movie O Brother, Where Art Thou? depicts a similar occurrence in the same timeframe.
Rodgers fused hillbilly country, gospel, jazz, blues, pop, cowboy, and folk, and many of his best songs were his compositions, including "Blue Yodel", which sold over a million records and established Rodgers as the premier singer of early country music. Beginning in 1927, and for the next 17 years, the Carters recorded some 300 old-time ballads, traditional tunes, country songs and gospel hymns, all representative of America's southeastern folklore and heritage. Maybelle Carter went on to continue the family tradition with her daughters as The Carter Sisters; her daughter June would marry (in succession) Carl Smith, Rip Nix and Johnny Cash, having children with each who would also become country singers.
Record sales declined during the Great Depression, but radio became a popular source of entertainment, and "barn dance" shows featuring country music were started by radio stations all over the South, as far north as Chicago, and as far west as California.
The most important was the Grand Ole Opry, aired starting in 1925 by WSM in Nashville and continuing to the present day. Some of the early stars on the Opry were Uncle Dave Macon, Roy Acuff and African American harmonica player DeFord Bailey. WSM's 50,000-watt signal (in 1934) could often be heard across the country. Many musicians performed and recorded songs in any number of styles. Moon Mullican, for example, played western swing but also recorded songs that can be called rockabilly. Between 1947 and 1949, country crooner Eddy Arnold placed eight songs in the top 10. From 1945 to 1955 Jenny Lou Carson was one of the most prolific songwriters in country music.
In the 1930s and 1940s, cowboy songs, or western music, which had been recorded since the 1920s, were popularized by films made in Hollywood. Some of the popular singing cowboys from the era were Gene Autry, the Sons of the Pioneers, and Roy Rogers. Country music and western music were frequently played together on the same radio stations, hence the term country and western music, despite country and western being two distinct genres.
Cowgirls contributed to the sound in various family groups. Patsy Montana opened the door for female artists with her history-making song "I Want To Be a Cowboy's Sweetheart". This would begin a movement toward opportunities for women to have successful solo careers. Bob Wills was another country musician from the Lower Great Plains who had become very popular as the leader of a "hot string band," and who also appeared in Hollywood westerns. His mix of country and jazz, which started out as dance hall music, would become known as western swing. Cliff Bruner, Moon Mullican, Milton Brown and Adolph Hofner were other early western swing pioneers. Spade Cooley and Tex Williams also had very popular bands and appeared in films. At its height, western swing rivaled the popularity of big band swing music.
Drums were scorned by early country musicians as being "too loud" and "not pure", but by 1935 western swing big band leader Bob Wills had added drums to the Texas Playboys. In the mid-1940s, the Grand Ole Opry did not want the Playboys' drummer to appear on stage. Although drums were commonly used by rockabilly groups by 1955, the Louisiana Hayride, though less conservative than the Grand Ole Opry, kept its infrequently used drummer backstage as late as 1956. By the early 1960s, however, it was rare for a country band not to have a drummer. Bob Wills was one of the first country musicians known to have added an electric guitar to his band, in 1938. A decade later (1948) Arthur Smith achieved top 10 US country chart success with his MGM Records recording of "Guitar Boogie", which crossed over to the US pop chart, introducing many people to the potential of the electric guitar. For several decades Nashville session players preferred the warm tones of the Gibson and Gretsch archtop electrics, but a "hot" Fender style, using guitars which became available beginning in the early 1950s, eventually prevailed as the signature guitar sound of country.
Country musicians began recording boogie in 1939, shortly after it had been played at Carnegie Hall, when Johnny Barfield recorded "Boogie Woogie". The trickle of what was initially called hillbilly boogie, or okie boogie (later to be renamed country boogie), became a flood beginning in late 1945. One notable release from this period was the Delmore Brothers' "Freight Train Boogie", considered to be part of the combined evolution of country music and blues towards rockabilly. In 1948, Arthur "Guitar Boogie" Smith achieved top ten US country chart success with his MGM Records recordings of "Guitar Boogie" and "Banjo Boogie", with the former crossing over to the US pop charts. Other country boogie artists included Moon Mullican, Merrill Moore and Tennessee Ernie Ford. The hillbilly boogie period lasted into the 1950s and remains one of many subgenres of country into the 21st century.
By the end of World War II, "mountaineer" string band music known as bluegrass had emerged when Bill Monroe joined with Lester Flatt and Earl Scruggs, introduced by Roy Acuff at the Grand Ole Opry. This marked the birth of bluegrass music and is how Bill Monroe came to be known as the "Father of Bluegrass." Gospel music, too, remained a popular component of bluegrass and other sorts of country music. Red Foley, the biggest country star following World War II, had one of the first million-selling gospel hits ("Peace in the Valley") and also sang boogie, blues and rockabilly. In the post-war period, country music was called "folk" in the trades, and "hillbilly" within the industry. In 1944, Billboard replaced the term "hillbilly" with "folk songs and blues," and switched to "country and western" in 1949.
Another type of stripped-down and raw music with a variety of moods and a basic ensemble of guitar, bass, dobro or steel guitar and (later) drums became popular, especially among rural residents of Texas, Oklahoma, and New Mexico. It became known as honky tonk and had its roots in western swing and the ranchera music of Mexico and the border states, particularly New Mexico and Texas, together with the blues of the American South. Bob Wills and His Texas Playboys personified this music, which has been described as "a little bit of this, and a little bit of that, a little bit of black and a little bit of white ... just loud enough to keep you from thinking too much and to go right on ordering the whiskey." East Texan Al Dexter had a hit with "Honky Tonk Blues" and, seven years later, "Pistol Packin' Mama". These "honky tonk" songs were associated with barrooms and were performed by the likes of Ernest Tubb, Kitty Wells (the first major female country solo singer), Ted Daffan, Floyd Tillman, the Maddox Brothers and Rose, Lefty Frizzell and Hank Williams; the music of these artists would later be called "traditional" country. Williams' influence in particular would prove to be enormous, inspiring many of the pioneers of rock and roll, such as Elvis Presley, Jerry Lee Lewis, Chuck Berry and Ike Turner, while providing a framework for emerging honky tonk talents like George Jones. Webb Pierce was the top-charting country artist of the 1950s, with 13 of his singles spending 113 weeks at number one. He charted 48 singles during the decade; 31 reached the top ten and 26 reached the top four.
By the early 1950s, a blend of western swing, country boogie, and honky tonk was played by most country bands, a mixture which followed in the footsteps of Gene Autry, Lydia Mendoza, Roy Rogers, and Patsy Montana. Western music, influenced by the cowboy ballads, New Mexico, Texas country and Tejano music rhythms of the Southwestern United States and Northern Mexico, reached its peak in popularity in the late 1950s, most notably with the song "El Paso", first recorded by Marty Robbins in September 1959. Western music's influence would continue to grow within the country music sphere; western musicians like Michael Martin Murphey, New Mexico music artists Al Hurricane and Antonia Apodaca, Tejano music performer Little Joe, and even folk revivalist John Denver all first rose to prominence during this time. This western influence largely kept the music of the folk revival and folk rock from shaping the country music genre, despite the similarity in instrumentation and origins (see, for instance, the Byrds' negative reception during their appearance on the Grand Ole Opry). The main concern was largely political: the folk revival was largely driven by progressive activists, a stark contrast to the culturally conservative audiences of country music. John Denver was perhaps the only musician to have major success in both the country and folk revival genres throughout his career; later, only a handful of artists such as Burl Ives and Canadian musician Gordon Lightfoot successfully made the crossover to country after folk revival fell out of fashion. During the mid-1950s a new style of country music became popular, eventually to be referred to as rockabilly.
In 1953, the first all-country radio station was established in Lubbock, Texas. The music of the 1960s and 1970s targeted the American working class, and truckers in particular. As country radio became more popular, trucking songs such as Dave Dudley's 1963 hit "Six Days on the Road" began to make up their own subgenre of country. These revamped songs sought to portray American truckers as a "new folk hero", marking a significant shift in sound from earlier country music. The song was written by actual truckers and contained numerous references to the trucker culture of the time, such as "ICC" for the Interstate Commerce Commission and "little white pills" as a reference to amphetamines. Starday Records in Nashville followed up on Dudley's initial success with the release of "Give Me 40 Acres" by the Willis Brothers.
Rockabilly was most popular with country fans in the 1950s; one of the first rock and roll superstars was former western yodeler Bill Haley, who repurposed his Four Aces of Western Swing into a rockabilly band in the early 1950s and renamed it the Comets. Bill Haley & His Comets are credited with two of the first successful rock and roll records, "Crazy Man, Crazy" of 1953 and "Rock Around the Clock" in 1954.
1956 could be called the year of rockabilly in country music. Rockabilly was an early form of rock and roll, an upbeat combination of blues and country music. The number two, three and four songs on Billboard's charts for that year were Elvis Presley's "Heartbreak Hotel", Johnny Cash's "I Walk the Line", and Carl Perkins' "Blue Suede Shoes". Reflecting this success, George Jones released a rockabilly record that year under the pseudonym "Thumper Jones", wanting to capitalize on the popularity of rockabilly without alienating his traditional country base. Cash and Presley placed songs in the top 5 in 1958, with Cash's "Guess Things Happen That Way/Come In, Stranger" at No. 3 and Presley's "Don't/I Beg of You" at No. 5. Presley acknowledged the influence of rhythm and blues artists on his style, saying "The colored folk been singin' and playin' it just the way I'm doin' it now, man for more years than I know." Within a few years, many rockabilly musicians returned to a more mainstream style or had defined their own unique style.
Country music gained national television exposure through Ozark Jubilee on ABC-TV and radio from 1955 to 1960 from Springfield, Missouri. The program showcased top stars including several rockabilly artists, some from the Ozarks. As Webb Pierce put it in 1956, "Once upon a time, it was almost impossible to sell country music in a place like New York City. Nowadays, television takes us everywhere, and country music records and sheet music sell as well in large cities as anywhere else."
The Country Music Association was founded in 1958, in part because numerous country musicians were appalled by the increased influence of rock and roll on country music.
Beginning in the mid-1950s, and reaching its peak during the early 1960s, the Nashville sound turned country music into a multimillion-dollar industry centered in Nashville, Tennessee. Under the direction of producers such as Chet Atkins, Bill Porter, Paul Cohen, Owen Bradley, Bob Ferguson, and later Billy Sherrill, the sound brought country music to a diverse audience and helped revive country as it emerged from a commercially fallow period. This subgenre was notable for borrowing from 1950s pop stylings: a prominent and smooth vocal, backed by a string section (violins and other orchestral strings) and vocal chorus. Instrumental soloing was de-emphasized in favor of trademark "licks". Leading artists in this genre included Jim Reeves, Skeeter Davis, Connie Smith, the Browns, Patsy Cline, and Eddy Arnold. The "slip note" piano style of session musician Floyd Cramer was an important component of this style. The Nashville Sound collapsed in mainstream popularity in 1964, a victim of both the British Invasion and the deaths of Reeves and Cline in separate airplane crashes. By the mid-1960s, the genre had developed into countrypolitan. Countrypolitan was aimed straight at mainstream markets, and it sold well throughout the later 1960s into the early 1970s. Top artists included Tammy Wynette, Lynn Anderson and Charlie Rich, as well as such former "hard country" artists as Ray Price and Marty Robbins. Despite the appeal of the Nashville sound, many traditional country artists emerged during this period and dominated the genre: Loretta Lynn, Merle Haggard, Buck Owens, Porter Wagoner, George Jones, and Sonny James among them.
In 1962, Ray Charles surprised the pop world by turning his attention to country and western music, topping the charts and rating number three for the year on Billboard's pop chart with the "I Can't Stop Loving You" single, and recording the landmark album Modern Sounds in Country and Western Music.
Another subgenre of country music grew out of hardcore honky tonk with elements of western swing and originated 112 miles (180 km) north-northwest of Los Angeles in Bakersfield, California, where many "Okies" and other Dust Bowl migrants had settled. Influenced by one-time West Coast residents Bob Wills and Lefty Frizzell, by 1966 it was known as the Bakersfield sound. It relied on electric instruments and amplification, in particular the Telecaster electric guitar, more than other subgenres of the country music of the era, and it can be described as having a sharp, hard, driving, no-frills, edgy flavor—hard guitars and honky-tonk harmonies. Leading practitioners of this style were Buck Owens, Merle Haggard, Tommy Collins, Dwight Yoakam, Gary Allan, and Wynn Stewart, each of whom had his own style.
Ken Nelson, who had produced Owens, Haggard, and Rose Maddox, became interested in the trucking song subgenre following the success of "Six Days on the Road" and asked Red Simpson to record an album of trucking songs. Haggard's "White Line Fever" was also part of the trucking subgenre.
The country music scene of the 1940s through the 1970s was largely dominated by western music influences, so much so that the genre began to be called "country and western". Even today, cowboy and frontier values continue to play a role in country music at large, with western wear, cowboy boots, and cowboy hats continuing to be in fashion for country artists.
West of the Mississippi River, many of these western genres continue to flourish, including the Red Dirt of Oklahoma, New Mexico music of New Mexico, and both Texas country music and Tejano music of Texas. From the 1950s until the early 1970s, the latter part of the western heyday in country music, many of these genres featured popular artists who continue to influence both their distinctive genres and country music at large. Red Dirt featured Bob Childers and Steve Ripley; New Mexico music featured Al Hurricane, Al Hurricane Jr., and Antonia Apodaca; and the Texas scenes featured Willie Nelson, Freddie Fender, Johnny Rodriguez, and Little Joe.
As outlaw country music emerged as a subgenre in its own right, Red Dirt, New Mexico, Texas country, and Tejano grew in popularity as part of the outlaw country movement. Originating in the bars, fiestas, and honky-tonks of Oklahoma, New Mexico, and Texas, their music supplemented outlaw country's singer-songwriter tradition as well as 21st-century rock-inspired alternative country and hip hop-inspired country rap artists.
Outlaw country was derived from the traditional western, including Red Dirt, New Mexico, Texas country, Tejano, and honky-tonk musical styles of the late 1950s and 1960s. Songs such as "Ring of Fire", popularized by Johnny Cash in 1963, show clear influences from the likes of Al Hurricane and Little Joe. This influence culminated with artists such as Ray Price (whose band, the "Cherokee Cowboys", included Willie Nelson and Roger Miller) and, mixed with the anger of an alienated subculture of the nation during the period, produced a collection of musicians that came to be known as the outlaw movement, which revolutionized the genre of country music in the early 1970s. "After I left Nashville (the early 70s), I wanted to relax and play the music that I wanted to play, and just stay around Texas, maybe Oklahoma. Waylon and I had that outlaw image going, and when it caught on at colleges and we started selling records, we were O.K. The whole outlaw thing, it had nothing to do with the music, it was something that got written in an article, and the young people said, 'Well, that's pretty cool.' And started listening." (Willie Nelson) The term outlaw country is traditionally associated with Willie Nelson, Jerry Jeff Walker, Hank Williams, Jr., Merle Haggard, Waylon Jennings and Joe Ely. It was encapsulated in the 1976 album Wanted! The Outlaws.
Though the outlaw movement as a cultural fad had died down after the late 1970s (with Jennings noting in 1978 that it had gotten out of hand and led to real-life legal scrutiny), many western and outlaw country music artists maintained their popularity during the 1980s by forming supergroups, such as The Highwaymen, Texas Tornados, and Bandido.
Country pop or soft pop, with roots in the countrypolitan sound, folk music, and soft rock, is a subgenre that first emerged in the 1970s. Although the term first referred to country music songs and artists that crossed over to top 40 radio, country pop acts are now more likely to cross over to adult contemporary music. It started with pop music singers like Glen Campbell, Bobbie Gentry, John Denver, Olivia Newton-John, Anne Murray, B. J. Thomas, the Bellamy Brothers, and Linda Ronstadt having hits on the country charts. Between 1972 and 1975, singer/guitarist John Denver released a series of hugely successful songs blending country and folk-rock musical styles ("Rocky Mountain High", "Sunshine on My Shoulders", "Annie's Song", "Thank God I'm a Country Boy", and "I'm Sorry"), and was named Country Music Entertainer of the Year in 1975. The year before, Olivia Newton-John, an Australian pop singer, won the "Best Female Country Vocal Performance" as well as the Country Music Association's most coveted award for females, "Female Vocalist of the Year". In response George Jones, Tammy Wynette, Jean Shepard and other traditional Nashville country artists dissatisfied with the new trend formed the short-lived "Association of Country Entertainers" in 1974; the ACE soon unraveled in the wake of Jones and Wynette's bitter divorce and Shepard's realization that most others in the industry lacked her passion for the movement.
During the mid-1970s, Dolly Parton, a successful mainstream country artist since the late 1960s, mounted a high-profile campaign to cross over to pop music, culminating in her 1977 hit "Here You Come Again", which topped the U.S. country singles chart and also reached No. 3 on the pop singles chart. Parton's male counterpart, Kenny Rogers, came from the opposite direction, aiming his music at the country charts after a successful career in pop, rock and folk music with the First Edition, achieving success the same year with "Lucille", which topped the country charts and reached No. 5 on the U.S. pop singles charts, as well as reaching No. 1 on the British all-genre chart. Parton and Rogers would both continue to have success on both country and pop charts simultaneously, well into the 1980s. Country music propelled Rogers' career, making him a three-time Grammy Award winner and six-time Country Music Association Awards winner. Having sold more than 50 million albums in the US, Rogers saw one of his songs, "The Gambler", inspire several TV films in which he played the main character. Artists like Crystal Gayle, Ronnie Milsap and Barbara Mandrell would also find success on the pop charts with their records. In 1975, author Paul Hemphill stated in the Saturday Evening Post, "Country music isn't really country anymore; it is a hybrid of nearly every form of popular music in America."
During the early 1980s, country artists continued to see their records perform well on the pop charts. Willie Nelson and Juice Newton each had two songs in the top 5 of the Billboard Hot 100 in the early eighties: Nelson charted "Always on My Mind" (#5, 1982) and "To All the Girls I've Loved Before" (#5, 1984, a duet with Julio Iglesias), and Newton achieved success with "Queen of Hearts" (#2, 1981) and "Angel of the Morning" (#4, 1981). Four country songs topped the Billboard Hot 100 in the 1980s: "Lady" by Kenny Rogers, from the late fall of 1980; "9 to 5" by Dolly Parton and "I Love a Rainy Night" by Eddie Rabbitt (these two back-to-back at the top in early 1981); and "Islands in the Stream", a duet by Dolly Parton and Kenny Rogers in 1983, a pop-country crossover hit written by Barry, Robin, and Maurice Gibb of the Bee Gees. Newton's "Queen of Hearts" almost reached No. 1, but was kept out of the spot by the pop ballad juggernaut "Endless Love" by Diana Ross and Lionel Richie. The move of country music toward neotraditional styles led to a marked decline in country/pop crossovers in the late 1980s, and only one song in that period, Roy Orbison's "You Got It" from 1989, made the top 10 of both the Billboard Hot Country Singles and Hot 100 charts, due largely to a revival of interest in Orbison after his sudden death. The only song with substantial country airplay to reach number one on the pop charts in the late 1980s was "At This Moment" by Billy Vera and the Beaters, an R&B song with slide guitar embellishment that appeared at number 42 on the country charts from minor crossover airplay. The record-setting, multi-platinum group Alabama was named Artist of the Decade for the 1980s by the Academy of Country Music.
Country rock is a genre that started in the 1960s but became prominent in the 1970s, born of the traditionalist backlash within both rock and country: in the aftermath of the British Invasion, many rock listeners desired a return to the "old values" of rock and roll, while many country listeners were unenthusiastic about Nashville-produced music. Early innovators in this new style of music in the 1960s and 1970s included Bob Dylan, who was the first to revert to country music with his 1967 album John Wesley Harding (and even more so with that album's follow-up, Nashville Skyline), followed by Gene Clark, Clark's former band the Byrds (with Gram Parsons on Sweetheart of the Rodeo) and its spin-off the Flying Burrito Brothers (also featuring Gram Parsons), guitarist Clarence White, Michael Nesmith (the Monkees and the First National Band), the Grateful Dead, Neil Young, Commander Cody, the Allman Brothers Band, Charlie Daniels, the Marshall Tucker Band, Poco, Buffalo Springfield, Stephen Stills' band Manassas and Eagles, among many, even the former folk music duo Ian & Sylvia, who formed Great Speckled Bird in 1969. The Eagles would become the most successful of these country rock acts, and their compilation album Their Greatest Hits (1971–1975) remains the second-best-selling album in the US with 29 million copies sold. The Rolling Stones also got into the act with songs like "Dead Flowers"; the original recording of "Honky Tonk Women" was performed in a country style, but it was subsequently re-recorded in a hard rock style for the single version, and the band's preferred country version was later released on the album Let It Bleed, under the title "Country Honk".
Described by AllMusic as the "father of country-rock", Gram Parsons' work in the early 1970s was acclaimed for its purity and for his appreciation for aspects of traditional country music. Though his career was cut tragically short by his death in 1973, his legacy was carried on by his protégé and duet partner Emmylou Harris; Harris would release her solo debut in 1975, an amalgamation of country, rock and roll, folk, blues and pop. Subsequent to the initial blending of the two polar opposite genres, other offspring soon resulted, including Southern rock, heartland rock and, in more recent years, alternative country. In the decades that followed, artists such as Juice Newton, Alabama, Hank Williams, Jr. (and, to an even greater extent, Hank Williams III), Gary Allan, Shania Twain, Brooks & Dunn, Faith Hill, Garth Brooks, Dwight Yoakam, Steve Earle, Dolly Parton, Rosanne Cash and Linda Ronstadt moved country further towards rock influence.
In 1980, a style of "neocountry disco music" was popularized by the film Urban Cowboy. It was during this time that a glut of pop-country crossover artists began appearing on the country charts: former pop stars Bill Medley (of the Righteous Brothers), "England Dan" Seals (of England Dan and John Ford Coley), Tom Jones, and Merrill Osmond (both alone and with some of his brothers; his younger sister Marie Osmond was already an established country star) all recorded significant country hits in the early 1980s. Sales in record stores rocketed to $250 million in 1981; by 1984, 900 radio stations began programming country or neocountry pop full-time. As with most sudden trends, however, by 1984 sales had dropped below 1979 figures.
Truck driving country music is a genre of country music and is a fusion of honky-tonk, country rock and the Bakersfield sound. It has the tempo of country rock and the emotion of honky-tonk, and its lyrics focus on a truck driver's lifestyle. Truck driving country songs often deal with the profession of trucking and love. Well-known artists who sing truck driving country include Dave Dudley, Red Sovine, Dick Curless, Red Simpson, Del Reeves, the Willis Brothers and Jerry Reed, with C. W. McCall and Cledus Maggard (pseudonyms of Bill Fries and Jay Huguely, respectively) being more humorous entries in the subgenre. Dudley is known as the father of truck driving country.
During the mid-1980s, a group of new artists began to emerge who rejected the more polished country-pop sound that had been prominent on radio and the charts, in favor of more traditional, "back-to-basics" production. Many of the artists during the latter half of the 1980s drew on traditional honky-tonk, bluegrass, folk and western swing. Artists who typified this sound included Travis Tritt, Reba McEntire, George Strait, Keith Whitley, Alan Jackson, John Anderson, Patty Loveless, Kathy Mattea, Randy Travis, Dwight Yoakam, Clint Black, Ricky Skaggs, and the Judds.
Country music was aided by the U.S. Federal Communications Commission's (FCC) Docket 80–90, which led to a significant expansion of FM radio in the 1980s by adding numerous higher-fidelity FM signals to rural and suburban areas. At this point, country music was mainly heard on rural AM radio stations; the expansion of FM was particularly helpful to country music, which migrated to FM from the AM band as AM became dominated by talk radio (the country music stations that stayed on AM developed the classic country format for the AM audience). At the same time, beautiful music stations already in rural areas began abandoning the format (leading to its effective demise) to adopt country music as well. This wider availability of country music led producers to polish their product for a wider audience. In 1990, Billboard, which had published a country music chart since the 1940s, changed the methodology it used to compile the chart: singles sales were removed from the methodology, and only airplay on country radio determined a song's place on the chart.
In the 1990s, country music became a worldwide phenomenon thanks to Garth Brooks, who enjoyed one of the most successful careers in popular music history, breaking records for both sales and concert attendance throughout the decade. The RIAA has certified his recordings at a combined 128× platinum, denoting roughly 113 million U.S. shipments. Other artists who experienced success during this time included Clint Black, John Michael Montgomery, Tracy Lawrence, Tim McGraw, Kenny Chesney, Travis Tritt, Alan Jackson and the newly formed duo of Brooks & Dunn; George Strait, whose career began in the 1980s, also continued to have widespread success in this decade and beyond. Toby Keith began his career as a more pop-oriented country singer in the 1990s, evolving into an outlaw persona in the early 2000s with Pull My Chain and its follow-up, Unleashed.
Female artists such as Reba McEntire, Patty Loveless, Faith Hill, Martina McBride, Deana Carter, LeAnn Rimes, Mindy McCready, Pam Tillis, Lorrie Morgan, Shania Twain, and Mary Chapin Carpenter all released platinum-selling albums in the 1990s. The Dixie Chicks became one of the most popular country bands in the 1990s and early 2000s. Their 1998 debut album Wide Open Spaces went on to become certified 12× platinum while their 1999 album Fly went on to become 10× platinum. After the release of their third album, Home, the band made political news in 2003 in part because of lead singer Natalie Maines's comments disparaging then-President George W. Bush while the band was overseas (Maines stated that she and her bandmates were ashamed to be from the same state as Bush, who had just commenced the Iraq War a few days prior). The comments caused a rift between the band and the country music scene, and the band's fourth album, 2006's Taking the Long Way, took a more rock-oriented direction; the album was commercially successful overall among non-country audiences but largely ignored among country audiences. After Taking the Long Way, the band broke up for a decade (with two of its members continuing as the Court Yard Hounds) before reuniting in 2016 and releasing new material in 2020.
Canadian artist Shania Twain became the best-selling female country artist of the decade. This was primarily due to the success of her breakthrough sophomore album, 1995's The Woman in Me, which was certified 12× platinum and sold over 20 million copies worldwide, and its follow-up, 1997's Come On Over, which was certified 20× platinum and sold over 40 million copies. The latter became a major worldwide phenomenon and was one of the world's best-selling albums for three years (1998, 1999 and 2000); it also went on to become the best-selling country album of all time.
Unlike the majority of her contemporaries, Twain enjoyed large international success that had been seen by very few country artists, before or after her. Critics have noted that Twain enjoyed much of her success due to breaking free of traditional country stereotypes and for incorporating elements of rock and pop into her music. In 2002, she released her successful fourth studio album, titled Up!, which was certified 11× platinum and sold over 15 million copies worldwide. Twain has been nominated eighteen times for Grammy Awards and has won five Grammys. She was the best-paid country music star in 2016 according to Forbes, with a net worth of $27.5 million. Twain has been credited with breaking international boundaries for country music, as well as inspiring many country artists to incorporate different genres into their music in order to attract a wider audience. She is also credited with changing the way in which many female country performers would market themselves; unlike many before her, she used fashion and her sex appeal to shed the stereotypical "honky-tonk" image that most country singers had, in order to distinguish herself from many female country artists of the time.
In the early-to-mid-1990s, country music was influenced by the popularity of line dancing. This influence was so great that Chet Atkins was quoted as saying, "The music has gotten pretty bad, I think. It's all that damn line dancing." By the end of the decade, however, at least one line dance choreographer complained that good country line dance music was no longer being released. In contrast, artists such as Don Williams and George Jones, who had enjoyed more or less consistent chart success through the 1970s and 1980s, suddenly saw their fortunes fall rapidly around 1991 when the new chart rules took effect.
Country influences combined with punk rock and alternative rock to forge the "cowpunk" scene in Southern California during the 1980s, which included bands such as the Long Ryders, Lone Justice and the Beat Farmers, as well as the established punk group X, whose music had begun to include country and rockabilly influences. Simultaneously, a generation of diverse country artists outside of California emerged that rejected the perceived cultural and musical conservatism associated with Nashville's mainstream country musicians in favor of more countercultural outlaw country and the folk singer-songwriter traditions of artists such as Woody Guthrie, Gram Parsons and Bob Dylan.
Artists from outside California who were associated with early alternative country included singer-songwriters such as Lucinda Williams, Lyle Lovett and Steve Earle, the Nashville country rock band Jason and the Scorchers, the Providence "cowboy pop" band Rubber Rodeo, and the British post-punk band the Mekons. Earle, in particular, was noted for his popularity with both country and college rock audiences: He promoted his 1986 debut album Guitar Town with a tour that saw him open for both country singer Dwight Yoakam and alternative rock band the Replacements. Yoakam also cultivated a fanbase spanning multiple genres through his stripped-down honky-tonk influenced sound, association with the cowpunk scene, and performances at Los Angeles punk rock clubs.
These early styles had coalesced into a genre by the time the Illinois group Uncle Tupelo released their influential debut album No Depression in 1990. The album is widely credited as being the first "alternative country" album, and inspired the name of No Depression magazine, which exclusively covered the new genre. Following Uncle Tupelo's disbanding in 1994, its members formed two significant bands in the genre: Wilco and Son Volt. Although Wilco's sound had moved away from country and towards indie rock by the time they released their critically acclaimed album Yankee Hotel Foxtrot in 2002, they have continued to be an influence on later alt-country artists.
Other acts who became prominent in the alt-country genre during the 1990s and 2000s included the Bottle Rockets, the Handsome Family, Blue Mountain, Robbie Fulks, Blood Oranges, Bright Eyes, Drive-By Truckers, Old 97's, Old Crow Medicine Show, Nickel Creek, Neko Case, and Whiskeytown, whose lead singer Ryan Adams later had a successful solo career. Alt-country, in its various iterations, overlapped with other genres, including Red Dirt country music (Cross Canadian Ragweed), jam bands (My Morning Jacket and the String Cheese Incident), and indie folk (the Avett Brothers).
Despite the genre's growing popularity in the 1980s, 1990s and 2000s, alternative country and neo-traditionalist artists saw minimal support from country radio in those decades, even with strong sales and critical acclaim for albums such as the soundtrack to the 2000 film O Brother, Where Art Thou?. In 1987, the Beat Farmers gained airplay on country music stations with their song "Make It Last", but the single was pulled from the format when station programmers decreed the band's music was too rock-oriented for their audience. However, some alt-country songs have been crossover hits on mainstream country radio in cover versions by established artists in the format; Lucinda Williams' "Passionate Kisses" was a hit for Mary Chapin Carpenter in 1993, Ryan Adams' "When the Stars Go Blue" was a hit for Tim McGraw in 2007, and Old Crow Medicine Show's "Wagon Wheel" was a hit for Darius Rucker (a member of Hootie & the Blowfish) in 2013.
In the 2010s, the alt-country genre saw an increase in its critical and commercial popularity, owing to the success of artists such as the Civil Wars, Chris Stapleton, Sturgill Simpson, Jason Isbell, Lydia Loveless and Margo Price. In 2019, Kacey Musgraves – a country artist who had gained a following with indie rock fans and music critics despite minimal airplay on country radio – won the Grammy Award for Album of the Year for her album Golden Hour.
The sixth generation of country music continued to be influenced by other genres such as pop, rock, and R&B. Richard Marx crossed over with his Days in Avalon album, which features five country songs and several singers and musicians. Alison Krauss sang background vocals to Marx's single "Straight from My Heart." Also, Bon Jovi had a hit single, "Who Says You Can't Go Home", with Jennifer Nettles of Sugarland. Kid Rock's collaboration with Sheryl Crow, "Picture," was a major crossover hit in 2001 and began Kid Rock's transition from hard rock to a country-rock hybrid that would later produce another major crossover hit, 2008's "All Summer Long." (Crow, whose music had often incorporated country elements, would also officially cross over into country with her hit "Easy" from her debut country album Feels like Home). Darius Rucker, frontman for the 1990s pop-rock band Hootie & the Blowfish, began a country solo career in the late 2000s, one that to date has produced five albums and several hits on both the country charts and the Billboard Hot 100. Singer-songwriter Unknown Hinson became famous for his appearance in the Charlotte television show Wild, Wild, South, after which Hinson started his own band and toured in southern states. Other rock stars who featured a country song on their albums were Don Henley (who released Cass County in 2015, an album which featured collaborations with numerous country artists) and Poison.
The latter half of the 2010s saw an increasing number of mainstream country acts collaborate with pop and R&B acts, and many of these songs achieved commercial success by appealing to fans across multiple genres; examples include collaborations between Kane Brown and Marshmello and between Maren Morris and Zedd. There has also been interest from pop singers in country music, including Beyoncé, Lady Gaga, Alicia Keys, Gwen Stefani, Justin Timberlake, Justin Bieber and Pink. Supporting this movement is a new generation of contemporary pop-country artists, including Taylor Swift, Miranda Lambert, Carrie Underwood, Kacey Musgraves, Miley Cyrus, Billy Ray Cyrus, Sam Hunt and Chris Young, who have introduced new themes into their work, touching on fundamental rights, feminism, and controversies over racism and the religious attitudes of older generations.
In 2005, country singer Carrie Underwood rose to fame as the winner of the fourth season of American Idol and has since become one of the most prominent recording artists in the genre, with worldwide sales of more than 65 million records and seven Grammy Awards. With her first single, "Inside Your Heaven", Underwood became the only solo country artist to have a number 1 hit on the Billboard Hot 100 chart in the 2000–2009 decade and also broke Billboard chart history as the first country music artist ever to debut at No. 1 on the Hot 100. Underwood's debut album, Some Hearts, became the best-selling solo female debut album in country music history, the fastest-selling debut country album in the history of the SoundScan era and the best-selling country album of the last 10 years, being ranked by Billboard as the number 1 Country Album of the 2000–2009 decade. She has also become the female country artist with the most number one hits on the Billboard Hot Country Songs chart in the Nielsen SoundScan era (1991–present), having 14 #1s and breaking her own Guinness Book record of ten. In 2007, Underwood won the Grammy Award for Best New Artist, becoming only the second Country artist in history (and the first in a decade) to win it. She also made history by becoming the seventh woman to win Entertainer of the Year at the Academy of Country Music Awards, and the first woman in history to win the award twice, as well as twice consecutively. Time has listed Underwood as one of the 100 most influential people in the world. In 2016, Underwood topped the Country Airplay chart for the 15th time, becoming the female artist with the most number ones on that chart.
Carrie Underwood was only one of several country stars produced by a television series in the 2000s. In addition to Underwood, American Idol launched the careers of Kellie Pickler, Josh Gracin, Bucky Covington, Kristy Lee Cook, Danny Gokey, Lauren Alaina and Scotty McCreery (as well as that of occasional country singer Kelly Clarkson) in the decade, and would continue to launch country careers in the 2010s. The series Nashville Star, while not nearly as successful as Idol, did manage to bring Miranda Lambert, Kacey Musgraves and Chris Young to mainstream success, also launching the careers of lower-profile musicians such as Buddy Jewell, Sean Patrick McGraw, and Canadian musician George Canyon. Can You Duet? produced the duos Steel Magnolia and Joey + Rory. Teen sitcoms also have influenced modern country music; in 2008, actress Jennette McCurdy (best known as the sidekick Sam on the teen sitcom iCarly) released her first single, "So Close", following that with the single "Generation Love" in 2011. Another teen sitcom star, Miley Cyrus (of Disney Channel's Hannah Montana), also had a crossover hit in the late 2000s with "The Climb" and another with a duet with her father, Billy Ray Cyrus, "Ready, Set, Don't Go." Jana Kramer, an actress in the teen drama One Tree Hill, released a country album in 2012 that produced two hit singles as of 2013. Actresses Hayden Panettiere and Connie Britton began recording country songs as part of their roles in the TV show Nashville, and Pretty Little Liars star Lucy Hale released her debut album Road Between in 2014.
In 2010, the group Lady Antebellum won five Grammys, including the coveted Song of the Year and Record of the Year for "Need You Now". A large number of duos and vocal groups emerged on the charts in the 2010s, many of which feature close harmony in the lead vocals. In addition to Lady A, groups such as Little Big Town, the Band Perry, Gloriana, Thompson Square, Eli Young Band, Zac Brown Band and British duo the Shires have emerged to occupy a large share of mainstream success alongside solo singers such as Kacey Musgraves and Miranda Lambert.
One of the most commercially successful country artists of the late 2000s and early 2010s has been singer-songwriter Taylor Swift. Swift first became widely known in 2006 when her debut single, "Tim McGraw", was released when she was only 16 years old. In 2006, Swift released her self-titled debut studio album, which spent 275 weeks on the Billboard 200, one of the longest runs of any album on that chart. In 2008, Swift released her second studio album, Fearless, which had one of the longest runs at number one on the Billboard 200 and became the second-best-selling album (behind Adele's 21) of the following five years. At the 2010 Grammys, the 20-year-old Swift won Album of the Year for Fearless, making her the youngest artist at that time to win the award. Swift has since received twelve Grammys.
Buoyed by her teen idol status among girls and a change in the methodology of compiling the Billboard charts to favor pop-crossover songs, Swift's 2012 single "We Are Never Ever Getting Back Together" spent more weeks at the top of Billboard's Hot 100 chart and Hot Country Songs chart than any song in nearly five decades. The song's long run at the top of the chart was somewhat controversial, as it is largely a pop song without much country influence, and its chart success was driven by a change to the chart's criteria to include airplay on non-country radio stations, prompting disputes over what constitutes a country song; many of Swift's later releases, such as the albums 1989 (2014), Reputation (2017), and Lover (2019), were released solely to pop audiences. Swift returned to country music in her folk-inspired releases Folklore (2020) and Evermore (2020), with songs like "Betty" and "No Body, No Crime".
In the mid-to-late 2010s, country music increasingly began to sound like modern-day pop music, with simpler and more repetitive lyrics, more electronic-based instrumentation, and experimentation with "talk-singing" and rap. As pop-country pulled farther away from the traditional sounds of country music, it received criticism from country music purists while gaining popularity with mainstream audiences. The topics addressed have also changed, turning to controversial subjects such as acceptance of the LGBT community, safe sex, recreational marijuana use, and the questioning of religious sentiment. Influence has also come from pop artists' interest in the country genre, including Justin Timberlake with the album Man of the Woods, Beyoncé's single "Daddy Lessons" from Lemonade, Gwen Stefani with "Nobody but You", Bruno Mars, Lady Gaga, Alicia Keys, Kelly Clarkson, and Pink.
The influence of rock music in country has become more overt during the late 2000s and early 2010s as artists like Eric Church, Jason Aldean, and Brantley Gilbert have had success; Aaron Lewis, former frontman for the rock group Staind, had a moderately successful entry into country music in 2011 and 2012, as did Dallas Smith, former frontman of the band Default.
Maren Morris's successful collaboration with EDM producer Zedd, "The Middle", is considered a representative example of the fusion of electro-pop with country music.
Lil Nas X's song "Old Town Road" spent 19 weeks atop the US Billboard Hot 100 chart, becoming the longest-running number-one song since the chart debuted in 1958 and winning Billboard Music Awards, MTV Video Music Awards and a Grammy Award. Sam Hunt's "Leave the Night On" peaked concurrently on the Hot Country Songs and Country Airplay charts, making Hunt the first country artist in 22 years, since Billy Ray Cyrus, to reach the top of three country charts simultaneously in the Nielsen SoundScan era. With the fusion genre of "country trap" (a fusion of country/western themes with a hip hop beat, but usually with fully sung lyrics) emerging in the late 2010s, line dancing country had a minor revival; examples of the phenomenon include "The Git Up" by Blanco Brown. Blanco Brown has gone on to make more traditional country soul songs such as "I Need Love" and a rendition of "Don't Take the Girl" with Tim McGraw, as well as collaborations like "Just the Way" with Parmalee. Another country trap artist, Breland, has seen success with "My Truck", "Throw It Back" with Keith Urban, and "Praise the Lord" featuring Thomas Rhett.
Emo rap musician Sueco released a cowpunk song, "Ride It Hard", in collaboration with country musician Warren Zeiders. Alex Melton, known for his music covers, blends pop punk with country music.
In the early 2010s, "bro-country", a genre noted primarily for its themes of drinking, partying, girls, and pickup trucks, became particularly popular. Notable artists associated with this genre are Luke Bryan, Jason Aldean, Blake Shelton, Jake Owen and Florida Georgia Line, whose song "Cruise" became the best-selling country song of all time. Research in the mid-2010s suggested that about 45 percent of country's best-selling songs could be considered bro-country, with the top two artists being Luke Bryan and Florida Georgia Line. Albums by bro-country singers also sold very well: in 2013, Luke Bryan's Crash My Party was the third best-selling of all albums in the United States, with Florida Georgia Line's Here's to the Good Times at sixth and Blake Shelton's Based on a True Story at ninth. It is also thought that the popularity of bro-country helped country music surpass classic rock as the most popular genre in the United States in 2012. The genre, however, is controversial, as it has been criticized by other country musicians and commentators over its themes and depiction of women, opening up a divide between the older generation of country singers and the younger bro-country singers that has been described as a "civil war" by musicians, critics, and journalists. In 2014, Maddie & Tae's "Girl in a Country Song", addressing many of the controversial bro-country themes, peaked at number one on the Billboard Country Airplay chart.
Bluegrass is a genre that contains songs about going through hard times, country loving, and telling stories. Newer artists like Billy Strings, the Grascals, Molly Tuttle, Tyler Childers and the Infamous Stringdusters have been increasing the popularity of the genre, alongside some of its more established stars who remain popular, including Rhonda Vincent, Alison Krauss and Union Station, Ricky Skaggs and Del McCoury. The genre has developed in the Northern Kentucky and Cincinnati area. Other artists include New South, Doc Watson, the Osborne Brothers, and many others.
In an effort to combat the over-reliance of mainstream country music on pop-infused artists, the sister genre of Americana began to gain popularity and increase in prominence, receiving eight Grammy categories of its own in 2009. Americana music incorporates elements of country music, bluegrass, folk, blues, gospel, rhythm and blues, roots rock and southern soul and is overseen by the Americana Music Association and the Americana Music Honors & Awards. As a result of an increasingly pop-leaning mainstream, many more traditional-sounding artists such as Tyler Childers, Zach Bryan and Old Crow Medicine Show began to associate themselves more with Americana and the alternative country scene where their sound was more celebrated. Similarly, many established country acts who no longer received commercial airplay, including Emmylou Harris and Lyle Lovett, began to flourish again.
Beginning in 1989, a confluence of events brought an unprecedented commercial boom to country music. New marketing strategies were used to engage fans, powered by technology that more accurately tracked the popularity of country music, and boosted by a political and economic climate that focused attention on the genre. Garth Brooks ("Friends in Low Places") in particular attracted fans with his fusion of neotraditionalist country and stadium rock. Other artists such as Brooks and Dunn ("Boot Scootin' Boogie") also combined conventional country with slick, rock elements, while Lorrie Morgan, Mary Chapin Carpenter, and Kathy Mattea updated neotraditionalist styles.
The roots of conservative country music can be traced to Lee Greenwood's "God Bless the USA". The September 11 attacks of 2001 and the economic recession helped move country music back into the spotlight. Many country artists, such as Alan Jackson with his ballad on the terrorist attacks, "Where Were You (When the World Stopped Turning)", wrote songs that celebrated the military, highlighted the gospel, and emphasized home and family values over wealth. Alt-country singer Ryan Adams' song "New York, New York" pays tribute to New York City, and its popular music video (which was shot four days before the attacks) shows Adams playing in front of the Manhattan skyline, along with several shots of the city. In contrast, more rock-oriented country singers took more direct aim at the attacks' perpetrators: Toby Keith's "Courtesy of the Red, White and Blue (The Angry American)" threatened to put "a boot in" the posterior of the enemy, while Charlie Daniels's "This Ain't No Rag, It's a Flag" promised to "hunt" the perpetrators "down like a mad dog hound." These songs gained such recognition that they put country music back into popular culture. Darryl Worley also recorded "Have You Forgotten". There have been numerous patriotic country songs throughout the years.
Some modern artists that primarily or entirely produce country pop music include Kacey Musgraves, Maren Morris, Kelsea Ballerini, Sam Hunt, Kane Brown, Chris Lane, and Dan + Shay. The singers who are part of this country movement are also defined as "Nashville's new generation of country".
Despite the changes made by the new generation, it has been recognized by major music awards associations and has achieved success on the Billboard and international charts. Golden Hour by Kacey Musgraves won Album of the Year at the 61st Annual Grammy Awards, the Academy of Country Music Awards, and the Country Music Association Awards, although it received widespread criticism from the more traditionalist public.
Australian country music has a long tradition. Influenced by US country music, it has developed a distinct style, shaped by British and Irish folk ballads and Australian bush balladeers like Henry Lawson and Banjo Paterson. Country instruments, including the guitar, banjo, fiddle and harmonica, create the distinctive sound of country music in Australia and accompany songs with strong storyline and memorable chorus.
Folk songs sung in Australia between the 1780s and 1920s, based around such themes as the struggle against government tyranny, or the lives of bushrangers, swagmen, drovers, stockmen and shearers, continue to influence the genre. This strain of Australian country, with lyrics focusing on Australian subjects, is generally known as "bush music" or "bush band music". "Waltzing Matilda", often regarded as Australia's unofficial national anthem, is a quintessential Australian country song, influenced more by British and Irish folk ballads than by US country and western music. The lyrics were composed by the poet Banjo Paterson in 1895. Other popular songs from this tradition include "The Wild Colonial Boy", "Click Go the Shears", "The Queensland Drover" and "The Dying Stockman". Later themes which endure to the present include the experiences of war, of droughts and flooding rains, of Aboriginality and of the railways and trucking routes which link Australia's vast distances.
Pioneers of a more Americanised popular country music in Australia included Tex Morton (known as "The Father of Australian Country Music") in the 1930s. Author Andrew Smith delivers a thoroughly researched and engaged view of Tex Morton's life and his impact on the country music scene in Australia in the 1930s and 1940s. Other early stars included Buddy Williams, Shirley Thoms and Smoky Dawson. Buddy Williams (1918–1986) was the first Australian-born artist to record country music in Australia, in the late 1930s, and was the pioneer of a distinctly Australian style of country music called the bush ballad that others such as Slim Dusty would make popular in later years. During the Second World War, many of Buddy Williams' recording sessions were done while he was on leave from the Army. At the end of the war, Williams went on to operate some of the largest travelling tent rodeo shows Australia has ever seen.
In 1952, Dawson began a radio show and went on to national stardom as a singing cowboy of radio, TV and film. Slim Dusty (1927–2003) was known as the "King of Australian Country Music" and helped to popularise the Australian bush ballad. His successful career spanned almost six decades, and his 1957 hit "A Pub with No Beer" was the biggest-selling record by an Australian to that time, and with over seven million record sales in Australia he is the most successful artist in Australian musical history. Dusty recorded and released his one-hundredth album in the year 2000 and was given the honour of singing "Waltzing Matilda" in the closing ceremony of the Sydney 2000 Olympic Games. Dusty's wife Joy McKean penned several of his most popular songs.
Chad Morgan, who began recording in the 1950s, has represented a vaudeville style of comic Australian country; Frank Ifield achieved considerable success in the early 1960s, especially on the UK Singles Chart; and Reg Lindsay was one of the first Australians to perform at Nashville's Grand Ole Opry, in 1974. Eric Bogle's 1972 folk lament to the Gallipoli Campaign, "And the Band Played Waltzing Matilda", recalled the British and Irish origins of Australian folk-country. Singer-songwriter Paul Kelly, whose music style straddles folk, rock and country, is often described as the poet laureate of Australian music.
By the 1990s, country music had attained crossover success in the pop charts, with artists like James Blundell and James Reyne singing "Way Out West", and country star Kasey Chambers winning the ARIA Award for Best Female Artist in three years (2000, 2002 and 2004), tying with pop stars Wendy Matthews and Sia for the most wins in that category. Furthermore, Chambers has gone on to win nine ARIA Awards for Best Country Album and, in 2018, became the youngest artist ever to be inducted into the ARIA Hall of Fame. The crossover influence of Australian country is also evident in the music of successful contemporary bands the Waifs and the John Butler Trio. Nick Cave has been heavily influenced by the country artist Johnny Cash. In 2000, Cash covered Cave's "The Mercy Seat" on the album American III: Solitary Man, seemingly repaying the compliment Cave had paid by covering Cash's "The Singer" (originally "The Folk Singer") on his Kicking Against the Pricks album. Subsequently, Cave cut a duet with Cash on a version of Hank Williams' "I'm So Lonesome I Could Cry" for Cash's American IV: The Man Comes Around album (2002).
Popular contemporary performers of Australian country music include John Williamson (who wrote the iconic "True Blue"), Lee Kernaghan (whose hits include "Boys from the Bush" and "The Outback Club"), Gina Jeffreys, Forever Road and Sara Storer. In the U.S., Olivia Newton-John, Sherrié Austin and Keith Urban have attained great success. During her time as a country singer in the 1970s, Newton-John became the first (and to date only) non-US winner of the Country Music Association Award for Female Vocalist of the Year, a decision many considered controversial; after starring in the rock-and-roll musical film Grease in 1978, Newton-John (mirroring the character she played in the film) shifted to pop music in the 1980s. Urban is arguably the most successful international Australian country star, winning nine CMA Awards, including three Male Vocalist of the Year wins and two wins of the CMA's top honour, Entertainer of the Year. Pop star Kylie Minogue found success with her 2018 country pop album Golden, which she recorded in Nashville and which reached number one in Scotland, the UK and her native Australia.
Country music has been a particularly popular form of musical expression among Indigenous Australians. Troy Cassar-Daley is among Australia's successful contemporary indigenous performers, and Kev Carmody and Archie Roach employ a combination of folk-rock and country music to sing about Aboriginal rights issues.
The Tamworth Country Music Festival began in 1973 and now attracts up to 100,000 visitors annually. Held in Tamworth, New South Wales (country music capital of Australia), it celebrates the culture and heritage of Australian country music. During the festival the CMAA holds the Country Music Awards of Australia ceremony awarding the Golden Guitar trophies. Other significant country music festivals include the Whittlesea Country Music Festival (near Melbourne) and the Mildura Country Music Festival for "independent" performers during October, and the Canberra Country Music Festival held in the national capital during November.
Country HQ showcases new talent on the rise in the country music scene down under. CMC (the Country Music Channel), a 24-hour music channel dedicated to non-stop country music, can be viewed on pay TV and features the Golden Guitar Awards, CMAs and CCMAs once a year, alongside international shows such as The Wilkinsons, The Road Hammers, and Country Music Across America.
Outside of the United States, Canada has the largest country music fan and artist base, something to be expected given the two countries' proximity and cultural parallels. Mainstream country music is culturally ingrained in the Prairie provinces, the British Columbia Interior, Northern Ontario, and Atlantic Canada. Celtic traditional music developed in Atlantic Canada in the form of the Scottish, Acadian and Irish folk music popular amongst Irish, French and Scottish immigrants to Canada's Atlantic Provinces (Newfoundland, Nova Scotia, New Brunswick, and Prince Edward Island). Like the southern United States and Appalachia, all four provinces are rural and of heavy British Isles stock; as such, the development of traditional music in the Maritimes somewhat mirrored the development of country music in the US South and Appalachia. Country and western music never really developed separately in Canada; after its introduction, following the spread of radio, it developed quite quickly out of the Atlantic Canadian traditional scene. While true Atlantic Canadian traditional music is very Celtic or "sea shanty" in nature, the lines have often been blurred, even today, and certain areas are often viewed as embracing one strain or the other more openly. For example, in Newfoundland the traditional music remains unique and Irish in nature, whereas traditional musicians in other parts of the region may play both genres interchangeably.
Don Messer's Jubilee was a Halifax, Nova Scotia-based country/folk variety television show that was broadcast nationally from 1957 to 1969. In Canada it out-performed The Ed Sullivan Show broadcast from the United States and became the top-rated television show throughout much of the 1960s. Don Messer's Jubilee followed a consistent format throughout its years, beginning with a tune named "Goin' to the Barndance Tonight", followed by fiddle tunes by Messer, songs from some of his "Islanders" including singers Marg Osburne and Charlie Chamberlain, the featured guest performance, and a closing hymn. It ended with "Till We Meet Again". The guest performance slot gave national exposure to numerous Canadian folk musicians, including Stompin' Tom Connors and Catherine McKinnon. Some Maritime country performers went on to further fame beyond Canada. Hank Snow, Wilf Carter (also known as Montana Slim), and Anne Murray are the three most notable. The cancellation of the show by the public broadcaster in 1969 caused a nationwide protest, including the raising of questions in the Parliament of Canada.
The Prairie provinces, due to their western cowboy and agrarian nature, are the true heartland of Canadian country music. While the Prairies never developed a traditional music culture anything like the Maritimes, the folk music of the Prairies often reflected the cultural origins of the settlers, who were a mix of Scottish, Ukrainian, German and others. For these reasons, polkas and western music were always popular in the region, and with the introduction of radio, mainstream country music flourished. As the culture of the region is western and frontier in nature, the specific genre of country and western is more popular today in the Prairies than in any other part of the country. No other area of the country embraces all aspects of the culture, from two-step dancing to cowboy dress, rodeos and the music itself, like the Prairies do. The Atlantic Provinces, on the other hand, produce far more traditional musicians, but they are usually not specifically country in nature, bordering more on the folk or Celtic genres.
Canadian country pop star Shania Twain is the best-selling female country artist of all time and one of the best-selling artists of all time in any genre. Furthermore, she is the only woman to have three consecutive albums be certified Diamond.
Country music artists from the U.S. have seen crossover with Latin American audiences, particularly in Mexico. Country music artists from throughout the U.S. have recorded renditions of Mexican folk songs, including "El Rey", which was performed on George Strait's Twang album and during Al Hurricane's tribute concert. American Latin pop crossover musicians such as Lorenzo Antonio, with "Ranchera Jam", have also combined Mexican songs with country songs in a New Mexico music style.
While Tejano and New Mexico music are typically thought of as Spanish-language genres, both have also had charting musicians focused on English-language music. During the 1970s, singer-songwriter Freddy Fender had two #1 country music singles that were popular throughout North America: "Before the Next Teardrop Falls" and "Wasted Days and Wasted Nights". Notable songs influenced by Hispanic and Latin culture as performed by US country music artists include Marty Robbins' "El Paso" trilogy, Willie Nelson and Merle Haggard's cover of the Townes Van Zandt song "Pancho and Lefty", "Toes" by Zac Brown Band, and "Sangria" by Blake Shelton.
Regional Mexican is a radio format featuring many of Mexico's versions of country music. It includes a number of different styles, usually named after their region of origin. One specific song style, the Canción Ranchera, or simply Ranchera, literally meaning "ranch song", found its origins in the Mexican countryside and was first popularized with Mariachi. It has since also become popular with Grupero, Banda, Norteño, Tierra Caliente, Duranguense and other regional Mexican styles. The Corrido, a different song style with a similar history, is also performed in many other regional styles, and is most closely related to the western style of the United States and Canada. Other song styles performed in regional Mexican music include ballads, cumbias and boleros, among others. Country en Español (Country in Spanish) is also popular in Mexico. Some Mexican artists began performing country songs in Spanish during the 1970s, and the genre became prominent mainly in the northern regions of the country during the 1980s. A Country en Español popularity boom also reached the central regions of Mexico during the 1990s. For most of its history, Country en Español mainly resembled Neotraditional country. However, in more modern times, some artists have incorporated influences from other country music subgenres.
In Brazil, Música Sertaneja is the most popular music genre in the country. It originated in the countryside of São Paulo state in the 1910s, predating the development of U.S. country music.
In Argentina, on the last weekend of September, the yearly San Pedro Country Music Festival takes place in the town of San Pedro, Buenos Aires. The festival features bands from different places in Argentina, as well as international artists from Brazil, Uruguay, Chile, Peru and the U.S.
Country music is popular in the United Kingdom, although somewhat less so than in other English-speaking countries. There are some British country music acts and publications. Although radio stations devoted to country are among the most popular in other Anglophone nations, none of the top ten most-listened-to stations in the UK are country stations, and national broadcaster BBC Radio does not offer a full-time country station (BBC Radio 2 Country, a "pop-up" station, operated four days each year between 2015 and 2017). The BBC does offer a country show on BBC Radio 2 each week hosted by Bob Harris.
The most successful British country music acts of the 21st century are Ward Thomas and the Shires. In 2015, the Shires' album Brave became the first album by a UK country act to chart in the Top 10 of the UK Albums Chart, and they became the first UK country act to receive an award from the American Country Music Association. In 2016, Ward Thomas became the first UK country act to hit number 1 on the UK Albums Chart with their album Cartwheels.
The C2C: Country to Country festival is held every year, and for many years Wembley Arena hosted the International Festivals of Country Music, promoted by Mervyn Conn and broadcast on the BBC, between 1969 and 1991. The shows were later taken into Europe and featured such stars as Johnny Cash, Dolly Parton, Tammy Wynette, David Allan Coe, Emmylou Harris, Boxcar Willie, Johnny Russell and Jerry Lee Lewis. A handful of country musicians had even greater success in mainstream British music than they did in the U.S., despite a certain amount of disdain from the music press. Britain's largest music festival, Glastonbury, has featured major US country acts in recent years, such as Kenny Rogers in 2013 and Dolly Parton in 2014.
Few country musicians from within the UK have achieved widespread mainstream success, and many British singers who performed the occasional country song are primarily associated with other genres. Tom Jones, near the end of his peak success as a pop singer, had a string of country hits in the late 1970s and early 1980s. The Bee Gees had some fleeting success in the genre, with one country hit as artists ("Rest Your Love on Me") and a major hit as songwriters ("Islands in the Stream"); Barry Gibb, the band's usual lead singer and last surviving member, acknowledged that country music was a major influence on the band's style. Singer Engelbert Humperdinck, while charting only once in the U.S. country top 40 with "After the Lovin'", achieved widespread success on both the U.S. and British pop charts with his covers of Nashville country ballads such as "Release Me", "Am I That Easy to Forget" and "There Goes My Everything". Welsh singer Bonnie Tyler started her career making country records, and in 1978 her single "It's a Heartache" reached number four on the UK Singles Chart. In 2013, Tyler returned to her roots, blending the country elements of her early work with the rock of her more successful later material on her album Rocks and Honey, which featured a duet with Vince Gill. The songwriting tandem of Roger Cook and Roger Greenaway wrote a number of country hits, in addition to their widespread success in pop songwriting; Cook is notable for being the only Briton inducted into the Nashville Songwriters Hall of Fame.
A niche country subgenre popular in the West Country is Scrumpy and Western, which consists mostly of novelty songs and comedy music recorded there (its name comes from scrumpy, an alcoholic beverage). Though primarily of local interest, the subgenre's largest hit, "The Combine Harvester", pioneered the genre and reached number one in both the UK and Ireland; Fred Wedlock had a number-six hit in 1981 with "The Oldest Swinger in Town". In 1975, comedian Billy Connolly topped the UK Singles Chart with "D.I.V.O.R.C.E.", a parody of the Tammy Wynette song "D-I-V-O-R-C-E".
The British Country Music Festival is an annual three-day festival held in the seaside resort of Blackpool. It uniquely promotes artists from the United Kingdom and Ireland to celebrate the impact that Celtic and British settlers to America had on the origins of country music. Past headline artists have included Amy Wadge, Ward Thomas, Tom Odell, Nathan Carter, Lisa McHugh, Catherine McGrath, Wildwood Kin, The Wandering Hearts and Henry Priestman.
In Ireland, Country and Irish is a music genre that combines traditional Irish folk music with US country music. Television channel TG4 began a quest for Ireland's next country star called Glór Tíre, translated as "Country Voice"; it is now in its sixth season and is one of TG4's most-watched TV shows. Over the past ten years, country and gospel recording artist James Kilbane has reached multi-platinum success with his mix of Christian and traditional country-influenced albums. Kilbane, like many other Irish artists, is today working more closely with Nashville. Daniel O'Donnell achieved international success with his brand of music crossing country, Irish folk and European easy listening, earning a strong following among older women both in the British Isles and in North America. A recent success in the Irish arena has been Crystal Swing.
In Japan, there are forms of J-country and J-western similar to other J-pop movements such as J-hip hop and J-rock. One of the first J-western acts was Biji Kuroda & The Chuck Wagon Boys; other vintage artists included Jimmie Tokita and His Mountain Playboys, The Blue Rangers, Wagon Aces, and Tomi Fujiyama. J-country continues to have a dedicated following in Japan, thanks to Charlie Nagatani, Katsuoshi Suga, J.T. Kanehira, Dicky Kitano, and Manami Sekiya. Country and western venues in Japan include the former annual Country Gold festival, which was put together by Charlie Nagatani, and the modern honky tonks at Little Texas in Tokyo and Armadillo in Nagoya.
In India, an annual concert festival called "Blazing Guitars", held in Chennai, brings together Anglo-Indian musicians from all over the country (including some who have emigrated to places like Australia). The year 2003 brought home-grown artist Bobby Cash to the forefront of country music culture in India when he became the country's first international country music artist to chart singles in Australia.
In the Philippines, country music has found its way into the Cordilleran way of life, which often compares the Igorot lifestyle to that of US cowboys. Baguio City has an FM station that caters to country music, DZWR 99.9 Country, which is part of the Catholic Media Network. Bombo Radyo Baguio has a segment in its Sunday slot for Igorot, Ilocano and country music, and more recently DWUB has occasionally played country music. Many country musicians tour the Philippines, and Original Pinoy Music also draws influences from country.
Tom Roland, from the Country Music Association International, explains country music's global popularity: "In this respect, at least, Country Music listeners around the globe have something in common with those in the United States. In Germany, for instance, Rohrbach identifies three general groups that gravitate to the genre: people intrigued with the US cowboy icon, middle-aged fans who seek an alternative to harder rock music and younger listeners drawn to the pop-influenced sound that underscores many current Country hits." One of the first US artists to perform country music abroad was George Hamilton IV. He was the first country musician to perform in the Soviet Union; he also toured in Australia and the Middle East. He was deemed the "International Ambassador of Country Music" for his contributions to the globalization of country music. Johnny Cash, Emmylou Harris, Keith Urban, and Dwight Yoakam have also made numerous international tours. The Country Music Association undertakes various initiatives to promote country music internationally.
In Iran, country music has appeared in recent years. According to Melody Music Magazine, the pioneer of country music in Iran is the English-language country band Dream Rovers, whose founder, singer and songwriter is Erfan Rezayatbakhsh (elf). The band was formed in Tehran in 2007 and has since been trying to introduce and popularize country music in Iran by releasing two studio albums and performing live, despite the difficulties that the Islamic regime creates for bands active in Western music.
Musician Toby Keith performed alongside Saudi Arabian folk musician Rabeh Sager in 2017. The concert was similar to the performances of the jazz ambassadors, who performed distinctively American music internationally.
In Sweden, Rednex rose to stardom combining country music with electro-pop in the 1990s. In 1994, the group had a worldwide hit with their version of the traditional Southern tune "Cotton-Eyed Joe". Artists popularizing more traditional country music in Sweden have been Ann-Louise Hanson, Hasse Andersson, Kikki Danielsson, Elisabeth Andreassen and Jill Johnson. In Poland an international country music festival, known as Piknik Country, has been organised in Mrągowo in Masuria since 1983. The number of country music artists in France has increased. Some of the most important are Liane Edwards, Annabel, Rockie Mountains, Tahiana, and Lili West. French rock and roll singer Eddy Mitchell is also inspired by Americana and country music.
In the Netherlands there are many artists producing popular country and Americana music, mostly in the English language, as well as Dutch country and country-like music in the Dutch language. The latter is mainly popular in the countryside of the northern and eastern parts of the Netherlands and is less associated with its US counterpart, although it sometimes sounds very similar. Well-known popular artists mainly performing in English are Waylon, Danny Vera, Ilse DeLange, Douwe Bob and Henk Wijngaard.
Several US television networks are at least partly devoted to the genre: Country Music Television (the first channel devoted to country music) and CMT Music (both owned by Paramount Global), RFD-TV and The Cowboy Channel (both owned by Rural Media Group), Heartland (owned by Get After It Media), Circle (a joint venture of the Grand Ole Opry and Gray Television), The Country Network (owned by TCN Country, LLC), and Country Music Channel (the country-oriented sister channel of California Music Channel).
The Nashville Network (TNN) launched in 1983, just two days after CMT, as a channel devoted to country music, and later added sports and outdoor lifestyle programming. In 2000, after TNN and CMT fell under the same corporate ownership, TNN was stripped of its country format and rebranded as The National Network, then Spike TV in 2003, Spike in 2006, and finally Paramount Network in 2018. TNN was later revived from 2012 to 2013 after Jim Owens Entertainment (the company responsible for prominent TNN hosts Crook & Chase) acquired the trademark and licensed it to Luken Communications; that channel renamed itself Heartland after Luken was embroiled in an unrelated dispute that left the company bankrupt.
Great American Country (GAC) was launched in 1995, also as a country music-oriented channel that would later add lifestyle programming pertaining to the American Heartland and South. In Spring 2021, GAC's then-owner, Discovery, Inc. divested the network to GAC Media, which also acquired the equestrian network Ride TV. Later, in the summer of that year, GAC Media relaunched Great American Country as GAC Family, a family-oriented general entertainment network, while Ride TV was relaunched as GAC Living, a network devoted to programming pertaining to lifestyles of the American South. The GAC acronym which once stood for "Great American Country" now stands for "Great American Channels".
Only one television channel was dedicated to country music in Canada: CMT, owned by Corus Entertainment (90%) and Viacom (10%). However, the lifting of strict genre licensing restrictions saw the network remove the last of its music programming at the end of August 2017 for a schedule of generic off-network family sitcoms, Cancom-compliant lifestyle programming, and reality programming. In the past, the current-day Cottage Life network had some country focus as Country Canada and later CBC Country Canada, before drifting into an alternate network for overflow CBC content as Bold. Stingray Music continues to maintain several country music audio-only channels on cable radio.
In the past, country music had an extensive presence, especially on the Canadian national broadcaster, CBC Television. The show Don Messer's Jubilee significantly affected country music in Canada; for instance, it was the program that launched Anne Murray's career. Gordie Tapp's Country Hoedown and its successor, The Tommy Hunter Show, ran for a combined 36 years on the CBC, from 1956 to 1992; in its last nine years on air, the U.S. cable network TNN carried Hunter's show.
The only network dedicated to country music in Australia was the Country Music Channel owned by Foxtel. It ceased operations in June 2020 and was replaced by CMT (owned by Network 10 parent company Paramount Networks UK & Australia).
One music video channel is now dedicated to country music in the United Kingdom: Spotlight TV, owned by Canis Media.
Computer science and music experts identified issues with algorithms on streaming services such as Spotify and Apple Music, specifically the categorical homogenization of music curation and metadata within larger genres such as country music. Musicians and songs from minority heritage styles, such as Appalachian, Cajun, New Mexico, and Tejano music, underperform on these platforms due to underrepresentation and miscategorization of these subgenres.
The Country Music Association has awarded the New Artist award to a black American only twice in 63 years, and never to a Hispanic musician. The broader modern Nashville-based country music industry has underrepresented significant black and Latino contributions within country music, including popular subgenres such as Cajun, Creole, Tejano, and New Mexico music. A 2021 CNN article states, "Some in country music have signaled that they are no longer content to be associated with a painful history of racism."
Black country-music artist Mickey Guyton was included among the nominees for the 2021 award, effectively creating a litmus test for the genre. Guyton has expressed bewilderment that, despite substantial coverage by online platforms like Spotify and Apple Music, her music is still effectively ignored by American broadcast country-music radio; the same is true of Valerie June, another black musician, who embraces aspects of country in her Appalachian- and gospel-tinged work and has been embraced by international music audiences. Guyton's 2021 album Remember Her Name in part references the case of black health-care professional Breonna Taylor, who was killed in her home by police.
In 2023, "Try That in a Small Town" by Jason Aldean became the subject of widespread controversy and media attention following the release of its music video. Tennessee state representative Justin Jones referred to the song as a "heinous vile racist song" which attempts to normalize "racist, violence, vigilantism and white nationalism". Others thought the lyrics were supportive of lynchings and sundown towns. Amanda Marie Martinez of NPR wrote that the song "builds on a lineage of anti-city songs in country music that place the rural and urban along not only a moral versus immoral binary, but an implicitly racialized one as well...selective availability of home loans in suburbs and racially restrictive housing covenants in cities furthered white flight, making cities synonymous with non-whiteness." She concluded by stating that such songs are "why country music continues to be a frightening space for marginalized communities". | [
"title": "History"
},
{
"paragraph_id": 44,
"text": "Country rock is a genre that started in the 1960s but became prominent in the 1970s. The late 1960s in American music produced a unique blend as a result of traditionalist backlash within separate genres. In the aftermath of the British Invasion, many desired a return to the \"old values\" of rock n' roll. At the same time there was a lack of enthusiasm in the country sector for Nashville-produced music. What resulted was a crossbred genre known as country rock. Early innovators in this new style of music in the 1960s and 1970s included Bob Dylan, who was the first to revert to country music with his 1967 album John Wesley Harding (and even more so with that album's follow-up, Nashville Skyline), followed by Gene Clark, Clark's former band the Byrds (with Gram Parsons on Sweetheart of the Rodeo) and its spin-off the Flying Burrito Brothers (also featuring Gram Parsons), guitarist Clarence White, Michael Nesmith (the Monkees and the First National Band), the Grateful Dead, Neil Young, Commander Cody, the Allman Brothers Band, Charlie Daniels, the Marshall Tucker Band, Poco, Buffalo Springfield, Stephen Stills' band Manassas and Eagles, among many, even the former folk music duo Ian & Sylvia, who formed Great Speckled Bird in 1969. The Eagles would become the most successful of these country rock acts, and their compilation album Their Greatest Hits (1971–1975) remains the second-best-selling album in the US with 29 million copies sold. The Rolling Stones also got into the act with songs like \"Dead Flowers\"; the original recording of \"Honky Tonk Women\" was performed in a country style, but it was subsequently re-recorded in a hard rock style for the single version, and the band's preferred country version was later released on the album Let It Bleed, under the title \"Country Honk\".",
"title": "History"
},
{
"paragraph_id": 45,
"text": "Described by AllMusic as the \"father of country-rock\", Gram Parsons' work in the early 1970s was acclaimed for its purity and for his appreciation for aspects of traditional country music. Though his career was cut tragically short by his 1973 death, his legacy was carried on by his protégé and duet partner Emmylou Harris; Harris would release her debut solo in 1975, an amalgamation of country, rock and roll, folk, blues and pop. Subsequent to the initial blending of the two polar opposite genres, other offspring soon resulted, including Southern rock, heartland rock and in more recent years, alternative country. In the decades that followed, artists such as Juice Newton, Alabama, Hank Williams, Jr. (and, to an even greater extent, Hank Williams III), Gary Allan, Shania Twain, Brooks & Dunn, Faith Hill, Garth Brooks, Dwight Yoakam, Steve Earle, Dolly Parton, Rosanne Cash and Linda Ronstadt moved country further towards rock influence.",
"title": "History"
},
{
"paragraph_id": 46,
"text": "In 1980, a style of \"neocountry disco music\" was popularized by the film Urban Cowboy. It was during this time that a glut of pop-country crossover artists began appearing on the country charts: former pop stars Bill Medley (of the Righteous Brothers), \"England Dan\" Seals (of England Dan and John Ford Coley), Tom Jones, and Merrill Osmond (both alone and with some of his brothers; his younger sister Marie Osmond was already an established country star) all recorded significant country hits in the early 1980s. Sales in record stores rocketed to $250 million in 1981; by 1984, 900 radio stations began programming country or neocountry pop full-time. As with most sudden trends, however, by 1984 sales had dropped below 1979 figures.",
"title": "History"
},
{
"paragraph_id": 47,
"text": "Truck driving country music is a genre of country music and is a fusion of honky-tonk, country rock and the Bakersfield sound. It has the tempo of country rock and the emotion of honky-tonk, and its lyrics focus on a truck driver's lifestyle. Truck driving country songs often deal with the profession of trucking and love. Well-known artists who sing truck driving country include Dave Dudley, Red Sovine, Dick Curless, Red Simpson, Del Reeves, the Willis Brothers and Jerry Reed, with C. W. McCall and Cledus Maggard (pseudonyms of Bill Fries and Jay Huguely, respectively) being more humorous entries in the subgenre. Dudley is known as the father of truck driving country.",
"title": "History"
},
{
"paragraph_id": 48,
"text": "During the mid-1980s, a group of new artists began to emerge who rejected the more polished country-pop sound that had been prominent on radio and the charts, in favor of more, traditional, \"back-to-basics\" production. Many of the artists during the latter half of the 1980s drew on traditional honky-tonk, bluegrass, folk and western swing. Artists who typified this sound included Travis Tritt, Reba McEntire, George Strait, Keith Whitley, Alan Jackson, John Anderson, Patty Loveless, Kathy Mattea, Randy Travis, Dwight Yoakam, Clint Black, Ricky Skaggs, and the Judds.",
"title": "History"
},
{
"paragraph_id": 49,
"text": "Country music was aided by the U.S. Federal Communications Commission's (FCC) Docket 80–90, which led to a significant expansion of FM radio in the 1980s by adding numerous higher-fidelity FM signals to rural and suburban areas. At this point, country music was mainly heard on rural AM radio stations; the expansion of FM was particularly helpful to country music, which migrated to FM from the AM band as AM became overcome by talk radio (the country music stations that stayed on AM developed the classic country format for the AM audience). At the same time, beautiful music stations already in rural areas began abandoning the format (leading to its effective demise) to adopt country music as well. This wider availability of country music led to producers seeking to polish their product for a wider audience. In 1990, Billboard, which had published a country music chart since the 1940s, changed the methodology it used to compile the chart: singles sales were removed from the methodology, and only airplay on country radio determined a song's place on the chart.",
"title": "History"
},
{
"paragraph_id": 50,
"text": "In the 1990s, country music became a worldwide phenomenon thanks to Garth Brooks, who enjoyed one of the most successful careers in popular music history, breaking records for both sales and concert attendance throughout the decade. The RIAA has certified his recordings at a combined (128× platinum), denoting roughly 113 million U.S. shipments. Other artists who experienced success during this time included Clint Black, John Michael Montgomery, Tracy Lawrence, Tim McGraw, Kenny Chesney, Travis Tritt, Alan Jackson and the newly formed duo of Brooks & Dunn; George Strait, whose career began in the 1980s, also continued to have widespread success in this decade and beyond. Toby Keith began his career as a more pop-oriented country singer in the 1990s, evolving into an outlaw persona in the early 2000s with Pull My Chain and its follow-up, Unleashed.",
"title": "History"
},
{
"paragraph_id": 51,
"text": "Female artists such as Reba McEntire, Patty Loveless, Faith Hill, Martina McBride, Deana Carter, LeAnn Rimes, Mindy McCready, Pam Tillis, Lorrie Morgan, Shania Twain, and Mary Chapin Carpenter all released platinum-selling albums in the 1990s. The Dixie Chicks became one of the most popular country bands in the 1990s and early 2000s. Their 1998 debut album Wide Open Spaces went on to become certified 12× platinum while their 1999 album Fly went on to become 10× platinum. After their third album, Home, was released in 2003, the band made political news in part because of lead singer Natalie Maines's comments disparaging then-President George W. Bush while the band was overseas (Maines stated that she and her bandmates were ashamed to be from the same state as Bush, who had just commenced the Iraq War a few days prior). The comments caused a rift between the band and the country music scene, and the band's fourth (and most recent) album, 2006's Taking the Long Way, took a more rock-oriented direction; the album was commercially successful overall among non-country audiences but largely ignored among country audiences. After Taking the Long Way, the band broke up for a decade (with two of its members continuing as the Court Yard Hounds) before reuniting in 2016 and releasing new material in 2020.",
"title": "History"
},
{
"paragraph_id": 52,
"text": "Canadian artist Shania Twain became the best selling female country artist of the decade. This was primarily due to the success of her breakthrough sophomore 1995 album, The Woman in Me, which was certified 12× platinum sold over 20 million copies worldwide and its follow-up, 1997's Come On Over, which was certified 20× platinum and sold over 40 million copies. The album became a major worldwide phenomenon and became one of the world's best selling albums for three years (1998, 1999 and 2000); it also went on to become the best selling country album of all time.",
"title": "History"
},
{
"paragraph_id": 53,
"text": "Unlike the majority of her contemporaries, Twain enjoyed large international success that had been seen by very few country artists, before or after her. Critics have noted that Twain enjoyed much of her success due to breaking free of traditional country stereotypes and for incorporating elements of rock and pop into her music. In 2002, she released her successful fourth studio album, titled Up!, which was certified 11× platinum and sold over 15 million copies worldwide. Shania Twain has been nominated eighteen times for Grammy Awards and won five Grammys. [] She was the best-paid country music star in 2016 according to Forbes, with a net worth of $27.5 million. []Twain has been credited with breaking international boundaries for country music, as well as inspiring many country artists to incorporate different genres into their music in order to attract a wider audience. She is also credited with changing the way in which many female country performers would market themselves, as unlike many before her she used fashion and her sex appeal to get rid of the stereotypical 'honky-tonk' image the majority of country singers had in order to distinguish herself from many female country artists of the time.",
"title": "History"
},
{
"paragraph_id": 54,
"text": "In the early-mid-1990s, country western music was influenced by the popularity of line dancing. This influence was so great that Chet Atkins was quoted as saying, \"The music has gotten pretty bad, I think. It's all that damn line dancing.\" By the end of the decade, however, at least one line dance choreographer complained that good country line dance music was no longer being released. In contrast, artists such as Don Williams and George Jones who had more or less had consistent chart success through the 1970s and 1980s suddenly had their fortunes fall rapidly around 1991 when the new chart rules took effect.",
"title": "History"
},
{
"paragraph_id": 55,
"text": "Country influences combined with Punk rock and alternative rock to forge the \"cowpunk\" scene in Southern California during the 1980s, which included bands such as the Long Ryders, Lone Justice and the Beat Farmers, as well as the established punk group X, whose music had begun to include country and rockabilly influences. Simultaneously, a generation of diverse country artists outside of California emerged that rejected the perceived cultural and musical conservatism associated with Nashville's mainstream country musicians in favor of more countercultural outlaw country and the folk singer-songwriter traditions of artists such as Woody Guthrie, Gram Parsons and Bob Dylan.",
"title": "History"
},
{
"paragraph_id": 56,
"text": "Artists from outside California who were associated with early alternative country included singer-songwriters such as Lucinda Williams, Lyle Lovett and Steve Earle, the Nashville country rock band Jason and the Scorchers, the Providence \"cowboy pop\" band Rubber Rodeo, and the British post-punk band the Mekons. Earle, in particular, was noted for his popularity with both country and college rock audiences: He promoted his 1986 debut album Guitar Town with a tour that saw him open for both country singer Dwight Yoakam and alternative rock band the Replacements. Yoakam also cultivated a fanbase spanning multiple genres through his stripped-down honky-tonk influenced sound, association with the cowpunk scene, and performances at Los Angeles punk rock clubs.",
"title": "History"
},
{
"paragraph_id": 57,
"text": "These early styles had coalesced into a genre by the time the Illinois group Uncle Tupelo released their influential debut album No Depression in 1990. The album is widely credited as being the first \"alternative country\" album, and inspired the name of No Depression magazine, which exclusively covered the new genre. Following Uncle Tupelo's disbanding in 1994, its members formed two significant bands in genre: Wilco and Son Volt. Although Wilco's sound had moved away from country and towards indie rock by the time they released their critically acclaimed album Yankee Hotel Foxtrot in 2002, they have continued to be an influence on later alt-country artists.",
"title": "History"
},
{
"paragraph_id": 58,
"text": "Other acts who became prominent in the alt-country genre during the 1990s and 2000s included the Bottle Rockets, the Handsome Family, Blue Mountain, Robbie Fulks, Blood Oranges, Bright Eyes, Drive-By Truckers, Old 97's, Old Crow Medicine Show, Nickel Creek, Neko Case, and Whiskeytown, whose lead singer Ryan Adams later had a successful solo-career. Alt-country, in various iterations overlapped with other genres, including Red Dirt country music (Cross Canadian Ragweed), jam bands (My Morning Jacket and the String Cheese Incident), and indie folk (the Avett Brothers).",
"title": "History"
},
{
"paragraph_id": 59,
"text": "Despite the genre's growing popularity in the 1980s, 1990s and 2000s, alternative country and neo-traditionalist artists saw minimal support from country radio in those decades, despite strong sales and critical acclaim for albums such as the soundtrack to the 2000 film O Brother, Where Art Thou?. In 1987, the Beat Farmers gained airplay on country music stations with their song \"Make It Last\", but the single was pulled from the format when station programmers decreed the band's music was too rock-oriented for their audience. However, some alt-country songs have been crossover hits to mainstream country radio in cover versions by established artists on the format; Lucinda Williams' \"Passionate Kisses\" was a hit for Mary Chapin Carpenter in 1993, Ryan Adams' \"When the Stars Go Blue\" was a hit for Tim McGraw in 2007, and Old Crow Medicine Show's \"Wagon Wheel\" was a hit for Darius Rucker (member of Hootie & The Blowfish) in 2013.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "In the 2010s, the alt-country genre saw an increase in its critical and commercial popularity, owing to the success of artists such as the Civil Wars, Chris Stapleton, Sturgill Simpson, Jason Isbell, Lydia Loveless and Margo Price. In 2019, Kacey Musgraves – a country artist who had gained a following with indie rock fans and music critics despite minimal airplay on country radio – won the Grammy Award for Album of the Year for her album Golden Hour.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "The sixth generation of country music continued to be influenced by other genres such as pop, rock, and R&B. Richard Marx crossed over with his Days in Avalon album, which features five country songs and several singers and musicians. Alison Krauss sang background vocals to Marx's single \"Straight from My Heart.\" Also, Bon Jovi had a hit single, \"Who Says You Can't Go Home\", with Jennifer Nettles of Sugarland. Kid Rock's collaboration with Sheryl Crow, \"Picture,\" was a major crossover hit in 2001 and began Kid Rock's transition from hard rock to a country-rock hybrid that would later produce another major crossover hit, 2008's \"All Summer Long.\" (Crow, whose music had often incorporated country elements, would also officially cross over into country with her hit \"Easy\" from her debut country album Feels like Home). Darius Rucker, frontman for the 1990s pop-rock band Hootie & the Blowfish, began a country solo career in the late 2000s, one that to date has produced five albums and several hits on both the country charts and the Billboard Hot 100. Singer-songwriter Unknown Hinson became famous for his appearance in the Charlotte television show Wild, Wild, South, after which Hinson started his own band and toured in southern states. Other rock stars who featured a country song on their albums were Don Henley (who released Cass County in 2015, an album which featured collaborations with numerous country artists) and Poison.",
"title": "History"
},
{
"paragraph_id": 62,
"text": "The back half of the 2010-2020 decade saw an increasing number of mainstream country acts collaborate with pop and R&B acts; many of these songs achieved commercial success by appealing to fans across multiple genres; examples include collaborations between Kane Brown and Marshmello and Maren Morris and Zedd. There has also been interest from pop singers in country music, including Beyoncé, Lady Gaga, Alicia Keys, Gwen Stefani, Justin Timberlake, Justin Bieber and Pink. Supporting this movement is the new generation of contemporary pop-country, including Taylor Swift, Miranda Lambert, Carrie Underwood, Kacey Musgraves, Miley Cyrus, Billy Ray Cyrus, Sam Hunt, Chris Young, who introduced new themes in their works, touching on fundamental rights, feminism, and controversies about racism and religion of the older generations.",
"title": "History"
},
{
"paragraph_id": 63,
"text": "In 2005, country singer Carrie Underwood rose to fame as the winner of the fourth season of American Idol and has since become one of the most prominent recording artists in the genre, with worldwide sales of more than 65 million records and seven Grammy Awards. With her first single, \"Inside Your Heaven\", Underwood became the only solo country artist to have a number 1 hit on the Billboard Hot 100 chart in the 2000–2009 decade and also broke Billboard chart history as the first country music artist ever to debut at No. 1 on the Hot 100. Underwood's debut album, Some Hearts, became the best-selling solo female debut album in country music history, the fastest-selling debut country album in the history of the SoundScan era and the best-selling country album of the last 10 years, being ranked by Billboard as the number 1 Country Album of the 2000–2009 decade. She has also become the female country artist with the most number one hits on the Billboard Hot Country Songs chart in the Nielsen SoundScan era (1991–present), having 14 #1s and breaking her own Guinness Book record of ten. In 2007, Underwood won the Grammy Award for Best New Artist, becoming only the second Country artist in history (and the first in a decade) to win it. She also made history by becoming the seventh woman to win Entertainer of the Year at the Academy of Country Music Awards, and the first woman in history to win the award twice, as well as twice consecutively. Time has listed Underwood as one of the 100 most influential people in the world. In 2016, Underwood topped the Country Airplay chart for the 15th time, becoming the female artist with the most number ones on that chart.",
"title": "History"
},
{
"paragraph_id": 64,
"text": "Carrie Underwood was only one of several country stars produced by a television series in the 2000s. In addition to Underwood, American Idol launched the careers of Kellie Pickler, Josh Gracin, Bucky Covington, Kristy Lee Cook, Danny Gokey, Lauren Alaina and Scotty McCreery (as well as that of occasional country singer Kelly Clarkson) in the decade, and would continue to launch country careers in the 2010s. The series Nashville Star, while not nearly as successful as Idol, did manage to bring Miranda Lambert, Kacey Musgraves and Chris Young to mainstream success, also launching the careers of lower-profile musicians such as Buddy Jewell, Sean Patrick McGraw, and Canadian musician George Canyon. Can You Duet? produced the duos Steel Magnolia and Joey + Rory. Teen sitcoms also have influenced modern country music; in 2008, actress Jennette McCurdy (best known as the sidekick Sam on the teen sitcom iCarly) released her first single, \"So Close\", following that with the single \"Generation Love\" in 2011. Another teen sitcom star, Miley Cyrus (of Disney Channel's Hannah Montana), also had a crossover hit in the late 2000s with \"The Climb\" and another with a duet with her father, Billy Ray Cyrus, with \"Ready, Set, Don't Go.\" Jana Kramer, an actress in the teen drama One Tree Hill, released a country album in 2012 that has produced two hit singles as of 2013. Actresses Hayden Panettiere and Connie Britton began recording country songs as part of their roles in the TV shows Nashville and Pretty Little Liars star Lucy Hale released her debut album Road Between in 2014.",
"title": "History"
},
{
"paragraph_id": 65,
"text": "In 2010, the group Lady Antebellum won five Grammys, including the coveted Song of the Year and Record of the Year for \"Need You Now\". A large number of duos and vocal groups emerged on the charts in the 2010s, many of which feature close harmony in the lead vocals. In addition to Lady A, groups such as Little Big Town, the Band Perry, Gloriana, Thompson Square, Eli Young Band, Zac Brown Band and British duo the Shires have emerged to occupy a large share of mainstream success alongside solo singers such as Kacey Musgraves and Miranda Lambert.",
"title": "History"
},
{
"paragraph_id": 66,
"text": "One of the most commercially successful country artists of the late 2000s and early 2010s has been singer-songwriter Taylor Swift. Swift first became widely known in 2006 when her debut single, \"Tim McGraw\", was released when Swift was only 16 years old. In 2006, Swift released her self-titled debut studio album, which spent 275 weeks on Billboard 200, one of the longest runs of any album on that chart. In 2008, Taylor Swift released her second studio album, Fearless, which made her the second longest number-one charted on Billboard 200 and the second best-selling album (just behind Adele's 21) within the past 5 years. At the 2010 Grammys, Taylor Swift was 20 and won Album of the Year for Fearless, which made her the youngest artist to win this award. Swift has received twelve Grammys already.",
"title": "History"
},
{
"paragraph_id": 67,
"text": "Buoyed by her teen idol status among girls and a change in the methodology of compiling the Billboard charts to favor pop-crossover songs, Swift's 2012 single \"We Are Never Ever Getting Back Together\" spent the most weeks at the top of Billboard's Hot 100 chart and Hot Country Songs chart of any song in nearly five decades. The song's long run at the top of the chart was somewhat controversial, as the song is largely a pop song without much country influence and its success on the charts driven by a change to the chart's criteria to include airplay on non-country radio stations, prompting disputes over what constitutes a country song; many of Swift's later releases, such as album 1989 (2014), Reputation (2017), and Lover (2019) were released solely to pop audiences. Swift returned to country music in her recent folk-inspired releases, Folklore (2020) and Evermore (2020), with songs like \"Betty\" and \"No Body, No Crime\".",
"title": "History"
},
{
"paragraph_id": 68,
"text": "In the mid to late 2010s, country music began to increasingly sound more like the style of modern-day Pop music, with more simple and repetitive lyrics, more electronic-based instrumentation, and experimentation with \"talk-singing\" and rap, pop-country pulled farther away from the traditional sounds of country music and received criticisms from country music purists while gaining in popularity with mainstream audiences. The topics addressed have also changed, turning controversial such as acceptance of the LGBT community, safe sex, recreational marijuana use, and questioning religious sentiment. Influences also come from some pop artists' interest in the country genre, including Justin Timberlake with the album Man of the Woods, Beyoncé's single \"Daddy Lessons\" from Lemonade, Gwen Stefani with \"Nobody but You\", Bruno Mars, Lady Gaga, Alicia Keys, Kelly Clarkson, and Pink.",
"title": "History"
},
{
"paragraph_id": 69,
"text": "The influence of rock music in country has become more overt during the late 2000s and early 2010s as artists like Eric Church, Jason Aldean, and Brantley Gilbert have had success; Aaron Lewis, former frontman for the rock group Staind, had a moderately successful entry into country music in 2011 and 2012, as did Dallas Smith, former frontman of the band Default.",
"title": "History"
},
{
"paragraph_id": 70,
"text": "Maren Morris success collaboration \"The Middle\" with EDM producer Zedd is considered one of the representations of the fusion of electro-pop with country music.",
"title": "History"
},
{
"paragraph_id": 71,
"text": "Lil Nas X song \"Old Town Road\" spent 19 weeks atop the US Billboard Hot 100 chart, becoming the longest-running number-one song since the chart debuted in 1958, winning Billboard Music Awards, MTV Video Music Awards and Grammy Award. Sam Hunt \"Leave the Night On\" peaked concurrently on the Hot Country Songs and Country Airplay charts, making Hunt the first country artist in 22 years, since Billy Ray Cyrus, to reach the top of three country charts simultaneously in the Nielsen SoundScan-era. With the fusion genre of \"country trap\"—a fusion of country/western themes to a hip hop beat, but usually with fully sung lyrics—emerging in the late 2010s, line dancing country had a minor revival, examples of the phenomenon include \"The Git Up\" by Blanco Brown. Blanco Brown has gone on to make more traditional country soul songs such as \"I Need Love\" and a rendition of \"Don't Take the Girl\" with Tim McGraw, and collaborations like \"Just the Way\" with Parmalee. Another country trap artist known as Breland has seen success with \"My Truck, \"Throw It Back\" with Keith Urban, and \"Praise the Lord\" featuring Thomas Rhett.",
"title": "History"
},
{
"paragraph_id": 72,
"text": "Emo rap musician Sueco, released a cowpunk song in collaboration is country musician Warren Zeiders titled \"Ride It Hard\". Alex Melton, known for his music covers, blends pop punk with country music.",
"title": "History"
},
{
"paragraph_id": 73,
"text": "In the early 2010s, \"bro-country\", a genre noted primarily for its themes on drinking and partying, girls, and pickup trucks became particularly popular. Notable artists associated with this genre are Luke Bryan, Jason Aldean, Blake Shelton, Jake Owen and Florida Georgia Line whose song \"Cruise\" became the best-selling country song of all time. Research in the mid-2010s suggested that about 45 percent of country's best-selling songs could be considered bro-country, with the top two artists being Luke Bryan and Florida Georgia Line. Albums by bro-country singers also sold very well—in 2013, Luke Bryan's Crash My Party was the third best-selling of all albums in the United States, with Florida Georgia Line's Here's to the Good Times at sixth, and Blake Shelton's Based on a True Story at ninth. It is also thought that the popularity of bro-country helped country music to surpass classic rock as the most popular genre in the American country in 2012. The genre however is controversial as it has been criticized by other country musicians and commentators over its themes and depiction of women, opening up a divide between the older generation of country singers and the younger bro country singers that was described as \"civil war\" by musicians, critics, and journalists.\" In 2014, Maddie & Tae's \"Girl in a Country Song\", addressing many of the controversial bro-country themes, peaked at number one on the Billboard Country Airplay chart.",
"title": "History"
},
{
"paragraph_id": 74,
"text": "is a genre that contain songs about going through hard times, country loving, and telling stories. Newer artists like Billy Strings, the Grascals, Molly Tuttle, Tyler Childers and the Infamous Stringdusters have been increasing the popularity of this genre, alongside some of the genres more established stars who still remain popular including Rhonda Vincent, Alison Krauss and Union Station, Ricky Skaggs and Del McCoury. The genre has developed in the Northern Kentucky and Cincinnati area. Other artists include New South (band), Doc Watson, Osborne Brothers, and many others.",
"title": "History"
},
{
"paragraph_id": 75,
"text": "In an effort to combat the over-reliance of mainstream country music on pop-infused artists, the sister genre of Americana began to gain popularity and increase in prominence, receiving eight Grammy categories of its own in 2009. Americana music incorporates elements of country music, bluegrass, folk, blues, gospel, rhythm and blues, roots rock and southern soul and is overseen by the Americana Music Association and the Americana Music Honors & Awards. As a result of an increasingly pop-leaning mainstream, many more traditional-sounding artists such as Tyler Childers, Zach Bryan and Old Crow Medicine Show began to associate themselves more with Americana and the alternative country scene where their sound was more celebrated. Similarly, many established country acts who no longer received commercial airplay, including Emmylou Harris and Lyle Lovett, began to flourish again.",
"title": "History"
},
{
"paragraph_id": 77,
"text": "Beginning in 1989, a confluence of events brought an unprecedented commercial boom to country music. New marketing strategies were used to engage fans, powered by technology that more accurately tracked the popularity of country music, and boosted by a political and economic climate that focused attention on the genre. Garth Brooks (\"Friends in Low Places\") in particular attracted fans with his fusion of neotraditionalist country and stadium rock. Other artists such as Brooks and Dunn (\"Boot Scootin' Boogie\") also combined conventional country with slick, rock elements, while Lorrie Morgan, Mary Chapin Carpenter, and Kathy Mattea updated neotraditionalist styles.",
"title": "History"
},
{
"paragraph_id": 78,
"text": "Roots of conservative country was Lee Greenwood's \"God Bless the USA\". The September 11 attacks of 2001 and the economic recession helped move country music back into the spotlight. Many country artists, such as Alan Jackson with his ballad on terrorist attacks, \"Where Were You (When the World Stopped Turning)\", wrote songs that celebrated the military, highlighted the gospel, and emphasized home and family values over wealth. Alt-Country singer Ryan Adams song \"New York, New York\" pays tribute to New York City, and its popular music video (which was shot 4 days before the attacks) shows Adams playing in front of the Manhattan skyline, Along with several shots of the city. In contrast, more rock-oriented country singers took more direct aim at the attacks' perpetrators; Toby Keith's \"Courtesy of the Red, White and Blue (The Angry American)\" threatened to \"a boot in\" the posterior of the enemy, while Charlie Daniels's \"This Ain't No Rag, It's a Flag\" promised to \"hunt\" the perpetrators \"down like a mad dog hound.\" These songs gained such recognition that it put country music back into popular culture. Darryl Worley recorded \"Have You Forgotten\" also. There have been numerous patriotic country songs throughout the years.",
"title": "History"
},
{
"paragraph_id": 79,
"text": "Some modern artists that primarily or entirely produce country pop music include Kacey Musgraves, Maren Morris, Kelsea Ballerini, Sam Hunt, Kane Brown, Chris Lane, and Dan + Shay. The singers who are part of this country movement are also defined as \"Nashville's new generation of country\".",
"title": "History"
},
{
"paragraph_id": 80,
"text": "Although the changes made by the new generation, it has been recognized by major music awards associations and successes in Billboard and international charts. Golden Hour by Kacey Musgraves won album of the year at 61st Annual Grammy Awards, Academy of Country Music Awards, Country Music Association Awards, although it has received widespread criticism from the more traditionalist public.",
"title": "History"
},
{
"paragraph_id": 81,
"text": "Australian country music has a long tradition. Influenced by US country music, it has developed a distinct style, shaped by British and Irish folk ballads and Australian bush balladeers like Henry Lawson and Banjo Paterson. Country instruments, including the guitar, banjo, fiddle and harmonica, create the distinctive sound of country music in Australia and accompany songs with strong storyline and memorable chorus.",
"title": "International"
},
{
"paragraph_id": 82,
"text": "Folk songs sung in Australia between the 1780s and 1920s, based around such themes as the struggle against government tyranny, or the lives of bushrangers, swagmen, drovers, stockmen and shearers, continue to influence the genre. This strain of Australian country, with lyrics focusing on Australian subjects, is generally known as \"bush music\" or \"bush band music\". \"Waltzing Matilda\", often regarded as Australia's unofficial national anthem, is a quintessential Australian country song, influenced more by British and Irish folk ballads than by US country and western music. The lyrics were composed by the poet Banjo Paterson in 1895. Other popular songs from this tradition include \"The Wild Colonial Boy\", \"Click Go the Shears\", \"The Queensland Drover\" and \"The Dying Stockman\". Later themes which endure to the present include the experiences of war, of droughts and flooding rains, of Aboriginality and of the railways and trucking routes which link Australia's vast distances.",
"title": "International"
},
{
"paragraph_id": 83,
"text": "Pioneers of a more Americanised popular country music in Australia included Tex Morton (known as \"The Father of Australian Country Music\") in the 1930s. Author Andrew Smith delivers a through research and engaged view of Tex Morton's life and his impact on the country music scene in Australia in the 1930s and 1940s. Other early stars included Buddy Williams, Shirley Thoms and Smoky Dawson. Buddy Williams (1918–1986) was the first Australian-born to record country music in Australia in the late 1930s and was the pioneer of a distinctly Australian style of country music called the bush ballad that others such as Slim Dusty would make popular in later years. During the Second World War, many of Buddy Williams recording sessions were done whilst on leave from the Army. At the end of the war, Williams would go on to operate some of the largest travelling tent rodeo shows Australia has ever seen.",
"title": "International"
},
{
"paragraph_id": 84,
"text": "In 1952, Dawson began a radio show and went on to national stardom as a singing cowboy of radio, TV and film. Slim Dusty (1927–2003) was known as the \"King of Australian Country Music\" and helped to popularise the Australian bush ballad. His successful career spanned almost six decades, and his 1957 hit \"A Pub with No Beer\" was the biggest-selling record by an Australian to that time, and with over seven million record sales in Australia he is the most successful artist in Australian musical history. Dusty recorded and released his one-hundredth album in the year 2000 and was given the honour of singing \"Waltzing Matilda\" in the closing ceremony of the Sydney 2000 Olympic Games. Dusty's wife Joy McKean penned several of his most popular songs.",
"title": "International"
},
{
"paragraph_id": 85,
"text": "Chad Morgan, who began recording in the 1950s, has represented a vaudeville style of comic Australian country; Frank Ifield achieved considerable success in the early 1960s, especially in the UK Singles Charts and Reg Lindsay was one of the first Australians to perform at Nashville's Grand Ole Opry in 1974. Eric Bogle's 1972 folk lament to the Gallipoli Campaign \"And the Band Played Waltzing Matilda\" recalled the British and Irish origins of Australian folk-country. Singer-songwriter Paul Kelly, whose music style straddles folk, rock and country, is often described as the poet laureate of Australian music.",
"title": "International"
},
{
"paragraph_id": 86,
"text": "By the 1990s, country music had attained crossover success in the pop charts, with artists like James Blundell and James Reyne singing \"Way Out West\", and country star Kasey Chambers winning the ARIA Award for Best Female Artist in three years (2000, 2002 and 2004), tying with pop stars Wendy Matthews and Sia for the most wins in that category. Furthermore, Chambers has gone on to win nine ARIA Awards for Best Country Album and, in 2018, became the youngest artist to ever be inducted into the ARIA Hall of Fame. The crossover influence of Australian country is also evident in the music of successful contemporary bands the Waifs and the John Butler Trio. Nick Cave has been heavily influenced by the country artist Johnny Cash. In 2000, Cash, covered Cave's \"The Mercy Seat\" on the album American III: Solitary Man, seemingly repaying Cave for the compliment he paid by covering Cash's \"The Singer\" (originally \"The Folk Singer\") on his Kicking Against the Pricks album. Subsequently, Cave cut a duet with Cash on a version of Hank Williams' \"I'm So Lonesome I Could Cry\" for Cash's American IV: The Man Comes Around album (2002).",
"title": "International"
},
{
"paragraph_id": 87,
"text": "Popular contemporary performers of Australian country music include John Williamson (who wrote the iconic \"True Blue\"), Lee Kernaghan (whose hits include \"Boys from the Bush\" and \"The Outback Club\"), Gina Jeffreys, Forever Road and Sara Storer. In the U.S., Olivia Newton-John, Sherrié Austin and Keith Urban have attained great success. During her time as a country singer in the 1970s, Newton-John became the first (and to date only) non-US winner of the Country Music Association Award for Female Vocalist of the Year which many considered a controversial decision by the CMA; after starring in the rock-and-roll musical film Grease in 1978, Newton-John (mirroring the character she played in the film) shifted to pop music in the 1980s. Urban is arguably considered the most successful international Australian country star, winning nine CMA Awards, including three Male Vocalist of the Year wins and two wins of the CMA's top honour Entertainer of the Year. Pop star Kylie Minogue found success with her 2018 country pop album Golden which she recorded in Nashville reaching number one in Scotland, the UK and her native Australia.",
"title": "International"
},
{
"paragraph_id": 88,
"text": "Country music has been a particularly popular form of musical expression among Indigenous Australians. Troy Cassar-Daley is among Australia's successful contemporary indigenous performers, and Kev Carmody and Archie Roach employ a combination of folk-rock and country music to sing about Aboriginal rights issues.",
"title": "International"
},
{
"paragraph_id": 89,
"text": "The Tamworth Country Music Festival began in 1973 and now attracts up to 100,000 visitors annually. Held in Tamworth, New South Wales (country music capital of Australia), it celebrates the culture and heritage of Australian country music. During the festival the CMAA holds the Country Music Awards of Australia ceremony awarding the Golden Guitar trophies. Other significant country music festivals include the Whittlesea Country Music Festival (near Melbourne) and the Mildura Country Music Festival for \"independent\" performers during October, and the Canberra Country Music Festival held in the national capital during November.",
"title": "International"
},
{
"paragraph_id": 90,
"text": "Country HQ showcases new talent on the rise in the country music scene down under. CMC (the Country Music Channel), a 24‑hour music channel dedicated to non-stop country music, can be viewed on pay TV and features once a year the Golden Guitar Awards, CMAs and CCMAs alongside international shows such as The Wilkinsons, The Road Hammers, and Country Music Across America.",
"title": "International"
},
{
"paragraph_id": 91,
"text": "Outside of the United States, Canada has the largest country music fan and artist base, something that is to be expected given the two countries' proximity and cultural parallels. Mainstream country music is culturally ingrained in the prairie provinces, the British Columbia Interior, Northern Ontario, and in Atlantic Canada. Celtic traditional music developed in Atlantic Canada in the form of Scottish, Acadian and Irish folk music popular amongst Irish, French and Scottish immigrants to Canada's Atlantic Provinces (Newfoundland, Nova Scotia, New Brunswick, and Prince Edward Island). Like the southern United States and Appalachia, all four regions are of heavy British Isles stock and rural; as such, the development of traditional music in the Maritimes somewhat mirrored the development of country music in the US South and Appalachia. Country and western music never really developed separately in Canada; however, after its introduction to Canada, following the spread of radio, it developed quite quickly out of the Atlantic Canadian traditional scene. While true Atlantic Canadian traditional music is very Celtic or \"sea shanty\" in nature, even today, the lines have often been blurred. Certain areas often are viewed as embracing one strain or the other more openly. For example, in Newfoundland the traditional music remains unique and Irish in nature, whereas traditional musicians in other parts of the region may play both genres interchangeably.",
"title": "International"
},
{
"paragraph_id": 92,
"text": "Don Messer's Jubilee was a Halifax, Nova Scotia-based country/folk variety television show that was broadcast nationally from 1957 to 1969. In Canada it out-performed The Ed Sullivan Show broadcast from the United States and became the top-rated television show throughout much of the 1960s. Don Messer's Jubilee followed a consistent format throughout its years, beginning with a tune named \"Goin' to the Barndance Tonight\", followed by fiddle tunes by Messer, songs from some of his \"Islanders\" including singers Marg Osburne and Charlie Chamberlain, the featured guest performance, and a closing hymn. It ended with \"Till We Meet Again\". The guest performance slot gave national exposure to numerous Canadian folk musicians, including Stompin' Tom Connors and Catherine McKinnon. Some Maritime country performers went on to further fame beyond Canada. Hank Snow, Wilf Carter (also known as Montana Slim), and Anne Murray are the three most notable. The cancellation of the show by the public broadcaster in 1969 caused a nationwide protest, including the raising of questions in the Parliament of Canada.",
"title": "International"
},
{
"paragraph_id": 93,
"text": "The Prairie provinces, due to their western cowboy and agrarian nature, are the true heartland of Canadian country music. While the Prairies never developed a traditional music culture anything like the Maritimes, the folk music of the Prairies often reflected the cultural origins of the settlers, who were a mix of Scottish, Ukrainian, German and others. For these reasons polkas and western music were always popular in the region, and with the introduction of the radio, mainstream country music flourished. As the culture of the region is western and frontier in nature, the specific genre of country and western is more popular today in the Prairies than in any other part of the country. No other area of the country embraces all aspects of the culture, from two-step dancing, to the cowboy dress, to rodeos, to the music itself, like the Prairies do. The Atlantic Provinces, on the other hand, produce far more traditional musicians, but they are not usually specifically country in nature, usually bordering more on the folk or Celtic genres.",
"title": "International"
},
{
"paragraph_id": 94,
"text": "Canadian country pop star Shania Twain is the best-selling female country artist of all time and one of the best-selling artists of all time in any genre. Furthermore, she is the only woman to have three consecutive albums be certified Diamond.",
"title": "International"
},
{
"paragraph_id": 95,
"text": "Country music artists from the U.S. have seen crossover with Latin American audiences, particularly in Mexico. Country music artists from throughout the U.S. have recorded renditions of Mexican folk songs, including \"El Rey\" which was performed on George Strait's Twang album and during Al Hurricane's tribute concert. American Latin pop crossover musicians, like Lorenzo Antonio's \"Ranchera Jam\" have also combined Mexican songs with country songs in a New Mexico music style.",
"title": "International"
},
{
"paragraph_id": 96,
"text": "While Tejano and New Mexico music is typically thought of as being Spanish language, the genres have also had charting musicians focused on English language music. During the 1970s, singer-songwriter Freddy Fender had two #1 country music singles, that were popular throughout North America, with \"Before the Next Teardrop Falls\" and \"Wasted Days and Wasted Nights\". Notable songs which have been influenced by Hispanic and Latin culture as performed by US country music artists include Marty Robbins' \"El Paso\" trilogy, Willie Nelson and Merle Haggard covering the Townes Van Zandt song \"Pancho and Lefty\", \"Toes\" by Zac Brown Band, and \"Sangria\" by Blake Shelton.",
"title": "International"
},
{
"paragraph_id": 97,
"text": "Regional Mexican is a radio format featuring many of Mexico's versions of country music. It includes a number of different styles, usually named after their region of origin. One specific song style, the Canción Ranchera, or simply Ranchera, literally meaning \"ranch song\", found its origins in the Mexican countryside and was first popularized with Mariachi. It has since also become popular with Grupero, Banda, Norteño, Tierra Caliente, Duranguense and other regional Mexican styles. The Corrido, a different song style with a similar history, is also performed in many other regional styles, and is most related to the western style of the United States and Canada. Other song styles performed in regional Mexican music include Ballads, Cumbias, Boleros, among others. Country en Español (Country in Spanish) is also popular in Mexico. Some Mexican artists began performing country songs in Spanish during the 1970s, and the genre became prominent mainly in the northern regions of the country during the 1980s. A Country en Español popularity boom also reached the central regions of Mexico during the 1990s. For most of its history, Country en Español mainly resembled Neotraditional country. However, in more modern times, some artists have incorporated influences from other country music subgenres.",
"title": "International"
},
{
"paragraph_id": 98,
"text": "In Brazil, there is Música Sertaneja, the most popular music genre in that country. It originated in the countryside of São Paulo state in the 1910s, before the development of U.S. country music.",
"title": "International"
},
{
"paragraph_id": 99,
"text": "In Argentina, on the last weekend of September, the yearly San Pedro Country Music Festival takes place in the town of San Pedro, Buenos Aires. The festival features bands from different places in Argentina, as well as international artists from Brazil, Uruguay, Chile, Peru and the U.S.",
"title": "International"
},
{
"paragraph_id": 100,
"text": "Country music is popular in the United Kingdom, although somewhat less so than in other English-speaking countries. There are some British country music acts and publications. Although radio stations devoted to country are among the most popular in other Anglophone nations, none of the top ten most-listened-to stations in the UK are country stations, and national broadcaster BBC Radio does not offer a full-time country station (BBC Radio 2 Country, a \"pop-up\" station, operated four days each year between 2015 and 2017). The BBC does offer a country show on BBC Radio 2 each week hosted by Bob Harris.",
"title": "International"
},
{
"paragraph_id": 101,
"text": "The most successful British country music act of the 21st century are Ward Thomas and the Shires. In 2015, the Shires' album Brave, became the first UK country act ever to chart in the Top 10 of the UK Albums Chart and they became the first UK country act to receive an award from the American Country Music Association. In 2016, Ward Thomas then became the first UK country act to hit number 1 in the UK Albums Chart with their album Cartwheels.",
"title": "International"
},
{
"paragraph_id": 102,
"text": "There is the C2C: Country to Country festival held every year, and for many years there was a festival at Wembley Arena, which was broadcast on the BBC, the International Festivals of Country Music, promoted by Mervyn Conn, held at the venue between 1969 and 1991. The shows were later taken into Europe, and featured such stars as Johnny Cash, Dolly Parton, Tammy Wynette, David Allan Coe, Emmylou Harris, Boxcar Willie, Johnny Russell and Jerry Lee Lewis. A handful of country musicians had even greater success in mainstream British music than they did in the U.S., despite a certain amount of disdain from the music press. Britain's largest music festival Glastonbury has featured major US country acts in recent years, such as Kenny Rogers in 2013 and Dolly Parton in 2014.",
"title": "International"
},
{
"paragraph_id": 103,
"text": "From within the UK, few country musicians achieved widespread mainstream success. Many British singers who performed the occasional country songs are of other genres. Tom Jones, by this point near the end of his peak success as a pop singer, had a string of country hits in the late 1970s and early 1980s. The Bee Gees had some fleeting success in the genre, with one country hit as artists (\"Rest Your Love on Me\") and a major hit as songwriters (\"Islands in the Stream\"); Barry Gibb, the band's usual lead singer and last surviving member, acknowledged that country music was a major influence on the band's style. Singer Engelbert Humperdinck, while charting only once in the U.S. country top 40 with \"After the Lovin'\", achieved widespread success on both the U.S. and British pop charts with his covers of Nashville country ballads such as \"Release Me\", \"Am I That Easy to Forget\" and \"There Goes My Everything\". Welsh singer Bonnie Tyler initially started her career making country records, and in 1978 her single \"It's a Heartache\" reached number four on the UK Singles Chart. In 2013, Tyler returned to her roots, blending the country elements of her early work with the rock of her successful material on her album Rocks and Honey which featured a duet with Vince Gill. The songwriting tandem of Roger Cook and Roger Greenaway wrote a number of country hits, in addition to their widespread success in pop songwriting; Cook is notable for being the only Briton to be inducted into the Nashville Songwriters Hall of Fame.",
"title": "International"
},
{
"paragraph_id": 104,
"text": "A niche country subgenre popular in the West Country is Scrumpy and Western, which consists mostly of novelty songs and comedy music recorded there (its name comes from scrumpy, an alcoholic beverage). A primarily local interest, the largest Scrumpy and Western hit in the UK and Ireland was \"The Combine Harvester\", which pioneered the genre and reached number one in both the UK and Ireland; Fred Wedlock had a number-six hit in 1981 with \"The Oldest Swinger in Town\". In 1975, comedian Billy Connolly topped the UK Singles Chart with \"D.I.V.O.R.C.E.\", a parody of the Tammy Wynette song \"D-I-V-O-R-C-E\".",
"title": "International"
},
{
"paragraph_id": 105,
"text": "The British Country Music Festival is an annual three-day festival held in the seaside resort of Blackpool. It uniquely promotes artists from the United Kingdom and Ireland to celebrate the impact that Celtic and British settlers to America had on the origins of country music. Past headline artists have included Amy Wadge, Ward Thomas, Tom Odell, Nathan Carter, Lisa McHugh, Catherine McGrath, Wildwood Kin, The Wandering Hearts and Henry Priestman.",
"title": "International"
},
{
"paragraph_id": 106,
"text": "In Ireland, Country and Irish is a music genre that combines traditional Irish folk music with US country music. Television channel TG4 began a quest for Ireland's next country star called Glór Tíre, translated as \"Country Voice\". It is now in its sixth season and is one of TG4's most-watched TV shows. Over the past ten years, country and gospel recording artist James Kilbane has reached multi-platinum success with his mix of Christian and traditional country influenced albums. James Kilbane like many other Irish artists is today working closer with Nashville. Daniel O'Donnell achieved international success with his brand of music crossing country, Irish folk and European easy listening, earning a strong following among older women both in the British Isles and in North America. A recent success in the Irish arena has been Crystal Swing.",
"title": "International"
},
{
"paragraph_id": 107,
"text": "In Japan, there are forms of J-country and J-western similar to other J-pop movements, J-hip hop and J-rock. One of the first J-western musicians was Biji Kuroda & The Chuck Wagon Boys, other vintage artists included Jimmie Tokita and His Mountain Playboys, The Blue Rangers, Wagon Aces, and Tomi Fujiyama. J-country continues to have a dedicated following in Japan, thanks to Charlie Nagatani, Katsuoshi Suga, J.T. Kanehira, Dicky Kitano, and Manami Sekiya. Country and western venues in Japan include the former annual Country Gold which were put together by Charlie Nagatani, and the modern honky tonks at Little Texas in Tokyo and Armadillo in Nagoya.",
"title": "International"
},
{
"paragraph_id": 108,
"text": "In India, there is an annual concert festival called \"Blazing Guitars\" held in Chennai brings together Anglo-Indian musicians from all over the country (including some who have emigrated to places like Australia). The year 2003 brought home-grown Indian, Bobby Cash to the forefront of the country music culture in India when he became India's first international country music artist to chart singles in Australia.",
"title": "International"
},
{
"paragraph_id": 109,
"text": "In the Philippines, country music has found their way into Cordilleran way of life, which often compares the Igorot lifestyle to that of US cowboys. Baguio City has an FM station that caters to country music, DZWR 99.9 Country, which is part of the Catholic Media Network. Bombo Radyo Baguio has a segment on its Sunday slot for Igorot, Ilocano and country music. And as of recently, DWUB occasionally plays country music. Many country music musicians tour the Philippines. Original Pinoy Music has influences from country.",
"title": "International"
},
{
"paragraph_id": 110,
"text": "Tom Roland, from the Country Music Association International, explains country music's global popularity: \"In this respect, at least, Country Music listeners around the globe have something in common with those in the United States. In Germany, for instance, Rohrbach identifies three general groups that gravitate to the genre: people intrigued with the US cowboy icon, middle-aged fans who seek an alternative to harder rock music and younger listeners drawn to the pop-influenced sound that underscores many current Country hits.\" One of the first US people to perform country music abroad was George Hamilton IV. He was the first country musician to perform in the Soviet Union; he also toured in Australia and the Middle East. He was deemed the \"International Ambassador of Country Music\" for his contributions to the globalization of country music. Johnny Cash, Emmylou Harris, Keith Urban, and Dwight Yoakam have also made numerous international tours. The Country Music Association undertakes various initiatives to promote country music internationally.",
"title": "International"
},
{
"paragraph_id": 111,
"text": "In Iran, country music has appeared in recent years. According to Melody Music Magazine, the pioneer of country music in Iran is the English-speaking country music band Dream Rovers, whose founder, singer and songwriter is Erfan Rezayatbakhsh (elf). The band was formed in 2007 in Tehran, and during this time they have been trying to introduce and popularize country music in Iran by releasing two studio albums and performing live at concerts, despite the difficulties that the Islamic regime in Iran makes for bands that are active in the western music field.",
"title": "International"
},
{
"paragraph_id": 112,
"text": "Musician Toby Keith performed alongside Saudi Arabian folk musician Rabeh Sager in 2017. This concert was similar to the performances of Jazz ambassadors that performed distinctively American style music internationally.",
"title": "International"
},
{
"paragraph_id": 113,
"text": "In Sweden, Rednex rose to stardom combining country music with electro-pop in the 1990s. In 1994, the group had a worldwide hit with their version of the traditional Southern tune \"Cotton-Eyed Joe\". Artists popularizing more traditional country music in Sweden have been Ann-Louise Hanson, Hasse Andersson, Kikki Danielsson, Elisabeth Andreassen and Jill Johnson. In Poland an international country music festival, known as Piknik Country, has been organised in Mrągowo in Masuria since 1983. The number of country music artists in France has increased. Some of the most important are Liane Edwards, Annabel, Rockie Mountains, Tahiana, and Lili West. French rock and roll singer Eddy Mitchell is also inspired by Americana and country music.",
"title": "International"
},
{
"paragraph_id": 114,
"text": "In the Netherlands there are many artists producing popular country and Americana music, which is mostly in the English language, as well as Dutch country and country-like music in the Dutch language. The latter is mainly popular on the countrysides in the northern and eastern parts of the Netherlands and is less associated with its US brethren, although it sounds sometimes very similar. Well-known popular artists mainly performing in English are Waylon, Danny Vera, Ilse DeLange, Douwe Bob and Henk Wijngaard.",
"title": "International"
},
{
"paragraph_id": 115,
"text": "Several US television networks are at least partly devoted to the genre: Country Music Television (the first channel devoted to country music) and CMT Music (both owned by Paramount Global), RFD-TV and The Cowboy Channel (both owned by Rural Media Group), Heartland (owned by Get After It Media), Circle (a joint venture of the Grand Ole Opry and Gray Television), The Country Network (owned by TCN Country, LLC), and Country Music Channel (the country-oriented sister channel of California Music Channel).",
"title": "Performers and shows"
},
{
"paragraph_id": 116,
"text": "The Nashville Network (TNN) was launched in 1983 as a channel devoted to country music, and later added sports and outdoor lifestyle programming. It actually launched just two days after CMT. In 2000, after TNN and CMT fell under the same corporate ownership, TNN was stripped of its country format and rebranded as The National Network, then Spike TV in 2003, Spike in 2006, and finally Paramount Network in 2018. TNN was later revived from 2012 to 2013 after Jim Owens Entertainment (the company responsible for prominent TNN hosts Crook & Chase) acquired the trademark and licensed it to Luken Communications; that channel renamed itself Heartland after Luken was embroiled in an unrelated dispute that left the company bankrupt.",
"title": "Performers and shows"
},
{
"paragraph_id": 117,
"text": "Great American Country (GAC) was launched in 1995, also as a country music-oriented channel that would later add lifestyle programming pertaining to the American Heartland and South. In Spring 2021, GAC's then-owner, Discovery, Inc. divested the network to GAC Media, which also acquired the equestrian network Ride TV. Later, in the summer of that year, GAC Media relaunched Great American Country as GAC Family, a family-oriented general entertainment network, while Ride TV was relaunched as GAC Living, a network devoted to programming pertaining to lifestyles of the American South. The GAC acronym which once stood for \"Great American Country\" now stands for \"Great American Channels\".",
"title": "Performers and shows"
},
{
"paragraph_id": 118,
"text": "Only one television channel was dedicated to country music in Canada: CMT owned by Corus Entertainment (90%) and Viacom (10%). However, the lifting of strict genre licensing restrictions saw the network remove the last of its music programming at the end of August 2017 for a schedule of generic off-network family sitcoms, Cancom-compliant lifestyle programming, and reality programming. In the past, the current-day Cottage Life network saw some country focus as Country Canada and later, CBC Country Canada before that network drifted into an alternate network for overflow CBC content as Bold. Stingray Music continues to maintain several country music audio-only channels on cable radio.",
"title": "Performers and shows"
},
{
"paragraph_id": 119,
"text": "In the past, country music had an extensive presence, especially on the Canadian national broadcaster, CBC Television. The show Don Messer's Jubilee significantly affected country music in Canada; for instance, it was the program that launched Anne Murray's career. Gordie Tapp's Country Hoedown and its successor, The Tommy Hunter Show, ran for a combined 36 years on the CBC, from 1956 to 1992; in its last nine years on air, the U.S. cable network TNN carried Hunter's show.",
"title": "Performers and shows"
},
{
"paragraph_id": 120,
"text": "The only network dedicated to country music in Australia was the Country Music Channel owned by Foxtel. It ceased operations in June 2020 and was replaced by CMT (owned by Network 10 parent company Paramount Networks UK & Australia).",
"title": "Performers and shows"
},
{
"paragraph_id": 121,
"text": "One music video channel is now dedicated to country music in the United Kingdom: Spotlight TV, owned by Canis Media.",
"title": "Performers and shows"
},
{
"paragraph_id": 122,
"text": "Computer science and music experts identified issues with algorithms on streaming services such as Spotify and Apple Music, specifically the categorical homogenization of music curation and metadata within larger genres such as country music. Musicians and songs from minority heritage styles, such as Appalachian, Cajun, New Mexico, and Tejano music, underperform on these platforms due to underrepresentation and miscategorization of these subgenres.",
"title": "Criticism"
},
{
"paragraph_id": 123,
"text": "The Country Music Association has awarded the New Artist award to a black American only twice in 63 years, and never to a Hispanic musician. The broader modern Nashville-based Country music industry has underrepresented significant black and Latino contributions within Country music, including popular subgenres such as Cajun, Creole, Tejano, and New Mexico music. A 2021 CNN article states, \"Some in country music have signaled that they are no longer content to be associated with a painful history of racism. \"",
"title": "Criticism"
},
{
"paragraph_id": 124,
"text": "Black country-music artist Mickey Guyton had been included among the nominees for the 2021 award, effectively creating a litmus-test for the genre. Guyton has expressed bewilderment that, despite substantial coverage by online platforms like Spotify and Apple Music, her music, like that of Valerie June, another black musician who embraces aspects of country in her Appalachian- and Gospel-tinged work and who has been embraced by international music audiences, is still effectively ignored by American broadcast country-music radio. Guyton's 2021 album Remember Her Name in part references the case of black health-care professional Breonna Taylor, who was killed in her home by police.",
"title": "Criticism"
},
{
"paragraph_id": 125,
"text": "In 2023, \"Try That in a Small Town\" by Jason Aldean became the subject of widespread controversy and media attention following the release of its music video. Tennessee state representative Justin Jones referred to the song as a \"heinous vile racist song\" which attempts to normalize \"racist, violence, vigilantism and white nationalism\". Others thought the lyrics were supportive of lynchings and sundown towns. Amanda Marie Martinez of NPR wrote that the song \"builds on a lineage of anti-city songs in country music that place the rural and urban along not only a moral versus immoral binary, but an implicitly racialized one as well...selective availability of home loans in suburbs and racially restrictive housing covenants in cities furthered white flight, making cities synonymous with non-whiteness.\" She concluded by stating that such songs are \"why country music continues to be a frightening space for marginalized communities\".",
"title": "Criticism"
}
] | Country is a music genre originating in the Southern and Southwestern United States. First produced in the 1920s, country music primarily focuses on working-class Americans and blue-collar American life. Country music is known for its ballads and dance tunes with simple form, folk lyrics, and harmonies generally accompanied by instruments such as banjos, fiddles, harmonicas, and many types of guitar. Though it is primarily rooted in various forms of American folk music, such as old-time music and Appalachian music, many other traditions, including Mexican, Irish, and Hawaiian music, have also had a formative influence on the genre. Blues modes have been used extensively throughout its history as well. The term country music gained popularity in the 1940s in preference to hillbilly music; it came to encompass western music, which evolved parallel to hillbilly music from similar roots, in the mid-20th century. Contemporary styles of western music include Texas country, red dirt, and Hispano- and Mexican American-led Tejano and New Mexico music, all extant alongside longstanding indigenous traditions. In 2009, in the United States, country music was the most listened to rush hour radio genre during the evening commute, and second most popular in the morning commute. | 2001-10-02T15:14:10Z | 2023-12-28T20:35:11Z | [
"Template:Thinsp",
"Template:Div col",
"Template:Refbegin",
"Template:More citations needed section",
"Template:Criticism section",
"Template:Cite magazine",
"Template:Cite journal",
"Template:ISBN",
"Template:Banjo",
"Template:About",
"Template:Infobox music genre",
"Template:Better source needed",
"Template:Cite book",
"Template:Refend",
"Template:Authority control",
"Template:More citations needed",
"Template:Use mdy dates",
"Template:Unreferenced section",
"Template:Div col end",
"Template:Cite web",
"Template:Wikiquote",
"Template:Americanrootsmusic",
"Template:Convert",
"Template:Cite encyclopedia",
"Template:Dead link",
"Template:Full citation needed",
"Template:Commons category",
"Template:See also",
"Template:Webarchive",
"Template:AllMusic",
"Template:Gilliland",
"Template:Country music",
"Template:Short description",
"Template:Main",
"Template:Citation needed",
"Template:According to whom",
"Template:Reflist",
"Template:Sfn",
"Template:Portal",
"Template:Cite news",
"Template:Rock music"
] | https://en.wikipedia.org/wiki/Country_music |
5,248 | Cold War (1948–1953) | The Cold War (1948–1953) is the period within the Cold War from the incapacitation of the Allied Control Council in 1948 to the conclusion of the Korean War in 1953.
The list of world leaders in these years is as follows:
After the Marshall Plan, the introduction of a new currency to Western Germany to replace the debased Reichsmark, and massive electoral losses for communist parties in 1946, the Soviet Union cut off surface road access to Berlin in June 1948.
On the day of the Berlin Blockade, a Soviet representative told the other occupying powers "We are warning both you and the population of Berlin that we shall apply economic and administrative sanctions that will lead to circulation in Berlin exclusively of the currency of the Soviet occupation zone."
Thereafter, street and water communications were severed, rail and barge traffic was stopped and the Soviets initially stopped supplying food to the civilian population in the non-Soviet sectors of Berlin. Because Berlin was located within the Soviet-occupied zone of Germany and the other occupying powers had previously relied on Soviet good will for access to Berlin, the only available methods of supplying the city were three limited air corridors.
By February 1948, because of massive post-war military cuts, the entire United States army had been reduced to 552,000 men. Military forces in non-Soviet Berlin sectors totaled only 8,973 Americans, 7,606 British and 6,100 French. Soviet military forces in the Soviet sector that surrounded Berlin totaled one and a half million men. The two United States regiments in Berlin would have provided little resistance against a Soviet attack. Believing that Britain, France and the United States had little option other than to acquiesce, the Soviet Military Administration in Germany celebrated the beginning of the blockade. Thereafter, a massive aerial supply campaign of food, water and other goods was initiated by the United States, Britain, France and other countries. The Soviets derided "the futile attempts of the Americans to save face and to maintain their untenable position in Berlin." The success of the airlift eventually caused the Soviets to lift their blockade in May 1949.
However, the Soviet Army was still capable of conquering Western Europe without much difficulty. In September 1948, US military intelligence experts estimated that the Soviets had about 485,000 troops in their German occupation zone and in Poland, and some 1.785 million troops in Europe in total. At the same time, the number of US troops in 1948 was about 140,000.
After disagreements between Yugoslavian leader Josip Broz Tito and the Soviet Union regarding Greece and the People's Republic of Albania, a Tito–Stalin Split occurred, followed by Yugoslavia being expelled from the Cominform in June 1948 and a brief failed Soviet putsch in Belgrade. The split created two separate communist forces in Europe. A vehement campaign against "Titoism" was immediately started in the Eastern Bloc, describing agents of both the West and Tito in all places engaging in subversive activity. This resulted in the persecution of many major party cadres, including those in East Germany.
was split up and dissolved in 1954 and 1975, also because of the détente between the West and Tito.
The United States joined Britain, France, Canada, Denmark, Portugal, Norway, Belgium, Iceland, Luxembourg, Italy, and the Netherlands in 1949 to form the North Atlantic Treaty Organization (NATO), the United States' first "entangling" European alliance in 170 years. West Germany, Spain, Greece, and Turkey would later join this alliance. The Eastern leaders retaliated against these steps by integrating the economies of their nations in Comecon, their version of the Marshall Plan; exploding the first Soviet atomic device in 1949; signing an alliance with People's Republic of China in February 1950; and forming the Warsaw Pact, Eastern Europe's counterpart to NATO, in 1955. The Soviet Union, Albania, Czechoslovakia, Hungary, East Germany, Bulgaria, Romania, and Poland founded this military alliance.
U.S. officials quickly moved to escalate and expand "containment." In a secret 1950 document, NSC 68, they proposed to strengthen their alliance systems, quadruple defense spending, and embark on an elaborate propaganda campaign to convince the U.S. public to fight this costly cold war. Truman ordered the development of a hydrogen bomb. In early 1950, the U.S. took its first efforts to oppose communist forces in Vietnam; planned to form a West German army, and prepared proposals for a peace treaty with Japan that would guarantee long-term U.S. military bases there.
The Cold War took place worldwide, but it had a partially different timing and trajectory outside Europe.
In Africa, decolonization took place first; it was largely accomplished in the 1950s. The main rivals then sought bases of support in the new national political alignments. In Latin America, the first major confrontation took place in Guatemala in 1954. When the new Castro government of Cuba turned to Soviet support in 1960, Cuba became the center of the anti-American Cold War forces, supported by the Soviet Union.
As Japan's empire collapsed in 1945 the civil war resumed in China between the Kuomintang (KMT) led by Generalissimo Chiang Kai-shek and the Chinese Communist Party led by Mao Zedong. The USSR had signed a Treaty of Friendship with the Kuomintang in 1945 and disavowed support for the Chinese Communists. The outcome was closely fought, with the Communists finally prevailing with superior military tactics. Although the Nationalists had an advantage in numbers of men and weapons, initially controlled a much larger territory and population than their adversaries, and enjoyed considerable international support, they were exhausted by the long war with Japan and the attendant internal responsibilities. In addition, the Chinese Communists were able to fill the political vacuum left in Manchuria after Soviet forces withdrew from the area and thus gained China's prime industrial base. The Chinese Communists were able to fight their way from the north and northeast, and virtually all of mainland China was taken by the end of 1949. On October 1, 1949, Mao Zedong proclaimed the People's Republic of China (PRC). Chiang Kai-shek and 600,000 Nationalist troops and 2 million refugees, predominantly from the government and business community, fled from the mainland to the island of Taiwan. In December 1949, Chiang proclaimed Taipei the temporary capital of the Republic of China (ROC) and continued to assert his government as the sole legitimate authority in China.
Hostility between the Communists on the mainland and the Nationalists on Taiwan continued throughout the Cold War. Though the United States refused to aid Chiang Kai-shek in his hope to "recover the mainland," it continued supporting the Republic of China with military supplies and expertise to prevent Taiwan from falling into PRC hands. Through the support of the Western bloc (most Western countries continued to recognize the ROC as the sole legitimate government of China), the Republic of China on Taiwan retained China's seat in the United Nations until 1971.
The Madiun Affair took place on September 18, 1948, in the city of Madiun, East Java. This rebellion was carried out by the Front Demokrasi Rakyat (FDR, People's Democratic Front), which united all socialist and communist groups in Indonesia. The rebellion ended three months later after its leaders were arrested and executed by the TNI.
This revolt began with the fall of the Amir Syarifuddin Cabinet due to the signing of the Renville Agreement which benefited the Dutch and was eventually replaced by the Hatta Cabinet which did not belong to the left wing. This led Amir Syarifuddin to declare opposition to the Hatta Cabinet government and to declare the formation of the People's Democratic Front.
Before this, in the PKI Politburo session on August 13–14, 1948, Musso, an Indonesian communist figure, introduced a political concept called "Jalan Baru". He also wanted a single Marxist party called the PKI (Communist Party of Indonesia), consisting of illegal communists, the Labour Party of Indonesia, and the Partai Sosialis (Socialist Party).
On September 18, 1948, the FDR declared the formation of the Republic of Soviet-Indonesia. In addition, the communists also carried out a rebellion in the Pati Residency and the kidnapping of groups who were considered to be against communists. Even this rebellion resulted in the murder of the Governor of East Java at the time, Raden Mas Tumenggung Ario Soerjo.
The crackdown operation against this movement began, led by A.H. Nasution. The Indonesian government also assigned Commander General Sudirman to Military Operations Movement I, in which General Sudirman ordered Colonel Gatot Soebroto and Colonel Sungkono to mobilize the TNI and police to crush the rebellion.
On September 30, 1948, Madiun was captured again by the Republic of Indonesia. Musso was shot dead on his escape in Sumoroto and Amir Syarifuddin was executed after being captured in Central Java. In early December 1948, the Madiun Affair crackdown was declared complete.
In early 1950, the United States made its first commitment to form a peace treaty with Japan that would guarantee long-term U.S. military bases. Some observers (including George Kennan) believed that the Japanese treaty led Stalin to approve a plan to invade U.S.-supported South Korea on June 25, 1950. Korea had been divided at the end of World War II along the 38th parallel into Soviet and U.S. occupation zones, in which a communist government was installed in the North by the Soviets, and an elected government in the South came to power after UN-supervised elections in 1948.
In June 1950, Kim Il Sung's North Korean People's Army invaded South Korea. Fearing that communist Korea under a Kim Il Sung dictatorship could threaten Japan and foster other communist movements in Asia, Truman committed U.S. forces and obtained help from the United Nations to counter the North Korean invasion. The Soviets boycotted UN Security Council meetings while protesting the Council's failure to seat the People's Republic of China and, thus, did not veto the Council's approval of UN action to oppose the North Korean invasion. A joint UN force of personnel from South Korea, the United States, Britain, Turkey, Canada, Australia, France, the Philippines, the Netherlands, Belgium, New Zealand and other countries joined to stop the invasion. After a Chinese invasion to assist the North Koreans, fighting stabilized along the 38th parallel, which had separated the Koreas. Truman faced a hostile China, a Sino-Soviet partnership, and a defense budget that had quadrupled in eighteen months.
The Korean Armistice Agreement was signed in July 1953 after the death of Stalin, who had been insisting that the North Koreans continue fighting. In North Korea, Kim Il Sung created a highly centralized and brutal dictatorship, according himself unlimited power and generating a formidable cult of personality.
A hydrogen bomb—which derives its explosive power from nuclear fusion rather than nuclear fission—was first tested by the United States in November 1952 and the Soviet Union in August 1953. Such bombs were first deployed in the 1960s.
Fear of a nuclear war spurred the production of public safety films by the United States federal government's Civil Defense branch that demonstrated ways of protecting oneself from a Soviet nuclear attack. The 1951 children's film Duck and Cover is a prime example.
George Orwell's classic dystopia Nineteen Eighty-Four was published in 1949. The novel explores life in an imagined future world where a totalitarian government has achieved terrifying levels of power and control. With Nineteen Eighty-Four, Orwell taps into the anti-communist fears that would continue to haunt so many in the West for decades to come. In a Cold War setting his descriptions could hardly fail to evoke comparison to Soviet communism and the seeming willingness of Stalin and his successors to control those within the Soviet bloc by whatever means necessary. Orwell's famous allegory of totalitarian rule, Animal Farm, published in 1945, provoked similar anti-communist sentiments. | [
{
"paragraph_id": 0,
"text": "The Cold War (1948–1953) is the period within the Cold War from the incapacitation of the Allied Control Council in 1948 to the conclusion of the Korean War in 1953.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The list of world leaders in these years is as follows:",
"title": ""
},
{
"paragraph_id": 2,
"text": "After the Marshall Plan, the introduction of a new currency to Western Germany to replace the debased Reichsmark and massive electoral losses for communist parties in 1946, in June 1948, the Soviet Union cut off surface road access to Berlin.",
"title": "Europe"
},
{
"paragraph_id": 3,
"text": "On the day of the Berlin Blockade, a Soviet representative told the other occupying powers \"We are warning both you and the population of Berlin that we shall apply economic and administrative sanctions that will lead to circulation in Berlin exclusively of the currency of the Soviet occupation zone.\"",
"title": "Europe"
},
{
"paragraph_id": 4,
"text": "Thereafter, street and water communications were severed, rail and barge traffic was stopped and the Soviets initially stopped supplying food to the civilian population in the non-Soviet sectors of Berlin. Because Berlin was located within the Soviet-occupied zone of Germany and the other occupying powers had previously relied on Soviet good will for access to Berlin, the only available methods of supplying the city were three limited air corridors.",
"title": "Europe"
},
{
"paragraph_id": 5,
"text": "By February 1948, because of massive post-war military cuts, the entire United States army had been reduced to 552,000 men. Military forces in non-Soviet Berlin sectors totaled only 8,973 Americans, 7,606 British and 6,100 French. Soviet military forces in the Soviet sector that surrounded Berlin totaled one and a half million men. The two United States regiments in Berlin would have provided little resistance against a Soviet attack. Believing that Britain, France and the United States had little option other than to acquiesce, the Soviet Military Administration in Germany celebrated the beginning of the blockade. Thereafter, a massive aerial supply campaign of food, water and other goods was initiated by the United States, Britain, France and other countries. The Soviets derided \"the futile attempts of the Americans to save face and to maintain their untenable position in Berlin.\" The success of the airlift eventually caused the Soviets to lift their blockade in May 1949.",
"title": "Europe"
},
{
"paragraph_id": 6,
"text": "However, the Soviet Army was still capable of conquering Western Europe without much difficulty. In September 1948, US military intelligence experts estimated that the Soviets had about 485,000 troops in their German occupation zone and in Poland, and some 1.785 million troops in Europe in total. At the same time, the number of US troops in 1948 was about 140,000.",
"title": "Europe"
},
{
"paragraph_id": 7,
"text": "After disagreements between Yugoslavian leader Josip Broz Tito and the Soviet Union regarding Greece and the People's Republic of Albania, a Tito–Stalin Split occurred, followed by Yugoslavia being expelled from the Cominform in June 1948 and a brief failed Soviet putsch in Belgrade. The split created two separate communist forces in Europe. A vehement campaign against \"Titoism\" was immediately started in the Eastern Bloc, describing agents of both the West and Tito in all places engaging in subversive activity. This resulted in the persecution of many major party cadres, including those in East Germany.",
"title": "Europe"
},
{
"paragraph_id": 8,
"text": "was split up and dissolved in 1954 and 1975, also because of the détente between the West and Tito.",
"title": "Europe"
},
{
"paragraph_id": 9,
"text": "The United States joined Britain, France, Canada, Denmark, Portugal, Norway, Belgium, Iceland, Luxembourg, Italy, and the Netherlands in 1949 to form the North Atlantic Treaty Organization (NATO), the United States' first \"entangling\" European alliance in 170 years. West Germany, Spain, Greece, and Turkey would later join this alliance. The Eastern leaders retaliated against these steps by integrating the economies of their nations in Comecon, their version of the Marshall Plan; exploding the first Soviet atomic device in 1949; signing an alliance with People's Republic of China in February 1950; and forming the Warsaw Pact, Eastern Europe's counterpart to NATO, in 1955. The Soviet Union, Albania, Czechoslovakia, Hungary, East Germany, Bulgaria, Romania, and Poland founded this military alliance.",
"title": "Europe"
},
{
"paragraph_id": 10,
"text": "U.S. officials quickly moved to escalate and expand \"containment.\" In a secret 1950 document, NSC 68, they proposed to strengthen their alliance systems, quadruple defense spending, and embark on an elaborate propaganda campaign to convince the U.S. public to fight this costly cold war. Truman ordered the development of a hydrogen bomb. In early 1950, the U.S. took its first efforts to oppose communist forces in Vietnam; planned to form a West German army, and prepared proposals for a peace treaty with Japan that would guarantee long-term U.S. military bases there.",
"title": "Europe"
},
{
"paragraph_id": 11,
"text": "The Cold War took place worldwide, but it had a partially different timing and trajectory outside Europe.",
"title": "Outside Europe"
},
{
"paragraph_id": 12,
"text": "In Africa, decolonization took place first; it was largely accomplished in the 1950s. The main rivals then sought bases of support in the new national political alignments. In Latin America, the first major confrontation took place in Guatemala in 1954. When the new Castro government of Cuba turned to Soviets support in 1960, Cuba became the center of the anti-American Cold War forces, supported by the Soviet Union.",
"title": "Outside Europe"
},
{
"paragraph_id": 13,
"text": "As Japan's empire collapsed in 1945 the civil war resumed in China between the Kuomintang (KMT) led by Generalissimo Chiang Kai-shek and the Chinese Communist Party led by Mao Zedong. The USSR had signed a Treaty of Friendship with the Kuomintang in 1945 and disavowed support for the Chinese Communists. The outcome was closely fought, with the Communists finally prevailing with superior military tactics. Although the Nationalists had an advantage in numbers of men and weapons, initially controlled a much larger territory and population than their adversaries, and enjoyed considerable international support, they were exhausted by the long war with Japan and the attendant internal responsibilities. In addition, the Chinese Communists were able to fill the political vacuum left in Manchuria after Soviet forces withdrew from the area and thus gained China's prime industrial base. The Chinese Communists were able to fight their way from the north and northeast, and virtually all of mainland China was taken by the end of 1949. On October 1, 1949, Mao Zedong proclaimed the People's Republic of China (PRC). Chiang Kai-shek and 600,000 Nationalist troops and 2 million refugees, predominantly from the government and business community, fled from the mainland to the island of Taiwan. In December 1949, Chiang proclaimed Taipei the temporary capital of the Republic of China (ROC) and continued to assert his government as the sole legitimate authority in China.",
"title": "Outside Europe"
},
{
"paragraph_id": 14,
"text": "The continued hostility between the Communists on the mainland and the Nationalists on Taiwan continued throughout the Cold War. Though the United States refused to aide Chiang Kai-shek in his hope to \"recover the mainland,\" it continued supporting the Republic of China with military supplies and expertise to prevent Taiwan from falling into PRC hands. Through the support of the Western bloc (most Western countries continued to recognize the ROC as the sole legitimate government of China), the Republic of China on Taiwan retained China's seat in the United Nations until 1971.",
"title": "Outside Europe"
},
{
"paragraph_id": 15,
"text": "Madiun Affair took place on September 18, 1948 in the city of Madiun, East Java. This rebellion was carried out by the Front Demokrasi Rakyat (FDR, People's Democratic Front) which united all socialist and communist groups in Indonesia. This rebellion ended 3 months later after its leaders were arrested and executed by the TNI.",
"title": "Outside Europe"
},
{
"paragraph_id": 16,
"text": "This revolt began with the fall of the Amir Syarifuddin Cabinet due to the signing of the Renville Agreement which benefited the Dutch and was eventually replaced by the Hatta Cabinet which did not belong to the left wing. This led Amir Syarifuddin to declare opposition to the Hatta Cabinet government and to declare the formation of the People's Democratic Front.",
"title": "Outside Europe"
},
{
"paragraph_id": 17,
"text": "Before it, In the PKI Politburo session on August 13–14, 1948, Musso, an Indonesian communist figure, introduced a political concept called \"Jalan Baru\". He also wanted a single Marxism party called the PKI (Communist Party of Indonesia) consisting of illegal communists, the Labour Party of Indonesia, and Partai Sosialis(Socialist Party).",
"title": "Outside Europe"
},
{
"paragraph_id": 18,
"text": "On September 18, 1948, the FDR declared the formation of the Republic of Soviet-Indonesia. In addition, the communists also carried out a rebellion in the Pati Residency and the kidnapping of groups who were considered to be against communists. Even this rebellion resulted in the murder of the Governor of East Java at the time, Raden Mas Tumenggung Ario Soerjo.",
"title": "Outside Europe"
},
{
"paragraph_id": 19,
"text": "The crackdown operation against this movement began. This operation was led by A.H. Nasution. The Indonesian government also applied Commander General Sudirman to the Military Operations Movement I where General Sudirman ordered Colonel Gatot Soebroto and Colonel Sungkono to mobilize the TNI and police to crush the rebellion.",
"title": "Outside Europe"
},
{
"paragraph_id": 20,
"text": "On September 30, 1948, Madiun was captured again by the Republic of Indonesia. Musso was shot dead on his escape in Sumoroto and Amir Syarifuddin was executed after being captured in Central Java. In early December 1948, the Madiun Affair crackdown was declared complete.",
"title": "Outside Europe"
},
{
"paragraph_id": 21,
"text": "In early 1950, the United States made its first commitment to form a peace treaty with Japan that would guarantee long-term U.S. military bases. Some observers (including George Kennan) believed that the Japanese treaty led Stalin to approve a plan to invade U.S.-supported South Korea on June 25, 1950. Korea had been divided at the end of World War II along the 38th parallel into Soviet and U.S. occupation zones, in which a communist government was installed in the North by the Soviets, and an elected government in the South came to power after UN-supervised elections in 1948.",
"title": "Korean War"
},
{
"paragraph_id": 22,
"text": "In June 1950, Kim Il Sung's North Korean People's Army invaded South Korea. Fearing that communist Korea under a Kim Il Sung dictatorship could threaten Japan and foster other communist movements in Asia, Truman committed U.S. forces and obtained help from the United Nations to counter the North Korean invasion. The Soviets boycotted UN Security Council meetings while protesting the Council's failure to seat the People's Republic of China and, thus, did not veto the Council's approval of UN action to oppose the North Korean invasion. A joint UN force of personnel from South Korea, the United States, Britain, Turkey, Canada, Australia, France, the Philippines, the Netherlands, Belgium, New Zealand and other countries joined to stop the invasion. After a Chinese invasion to assist the North Koreans, fighting stabilized along the 38th parallel, which had separated the Koreas. Truman faced a hostile China, a Sino-Soviet partnership, and a defense budget that had quadrupled in eighteen months.",
"title": "Korean War"
},
{
"paragraph_id": 23,
"text": "The Korean Armistice Agreement was signed in July 1953 after the death of Stalin, who had been insisting that the North Koreans continue fighting. In North Korea, Kim Il Sung created a highly centralized and brutal dictatorship, according himself unlimited power and generating a formidable cult of personality.",
"title": "Korean War"
},
{
"paragraph_id": 24,
"text": "A hydrogen bomb—which produced nuclear fusion instead of nuclear fission—was first tested by the United States in November 1952 and the Soviet Union in August 1953. Such bombs were first deployed in the 1960s.",
"title": "Hydrogen bomb"
},
{
"paragraph_id": 25,
"text": "Fear of a nuclear war spurred the production of public safety films by the United States federal government's Civil Defense branch that demonstrated ways on protecting oneself from a Soviet nuclear attack. The 1951 children's film Duck and Cover is a prime example.",
"title": "Culture and media"
},
{
"paragraph_id": 26,
"text": "George Orwell's classic dystopia Nineteen Eighty-Four was published in 1949. The novel explores life in an imagined future world where a totalitarian government has achieved terrifying levels of power and control. With Nineteen Eighty-Four, Orwell taps into the anti-communist fears that would continue to haunt so many in the West for decades to come. In a Cold War setting his descriptions could hardly fail to evoke comparison to Soviet communism and the seeming willingness of Stalin and his successors to control those within the Soviet bloc by whatever means necessary. Orwell's famous allegory of totalitarian rule, Animal Farm, published in 1945, provoked similar anti-communist sentiments.",
"title": "Culture and media"
}
] | The Cold War (1948–1953) is the period within the Cold War from the incapacitation of the Allied Control Council in 1948 to the conclusion of the Korean War in 1953. The list of world leaders in these years is as follows: 1948–49: Clement Attlee (UK); Harry Truman (US); Vincent Auriol (France); Joseph Stalin (USSR); Chiang Kai-shek (China)
1950–51: Clement Attlee (UK); Harry Truman (US); Vincent Auriol (France); Joseph Stalin (USSR); Mao Zedong (China)
1952–53: Winston Churchill (UK); Harry Truman (US); Vincent Auriol (France); Joseph Stalin (USSR); Mao Zedong (China) | 2001-07-04T15:32:59Z | 2023-12-28T22:24:00Z | [
"Template:Unreferenced section",
"Template:ISBN",
"Template:Citation",
"Template:History Of The Cold War",
"Template:Further",
"Template:Reflist",
"Template:Harvnb",
"Template:Cite book",
"Template:Cite encyclopedia",
"Template:Webarchive",
"Template:Cold War",
"Template:Short description",
"Template:Main"
] | https://en.wikipedia.org/wiki/Cold_War_(1948%E2%80%931953) |
5,249 | Crony capitalism | Crony capitalism, sometimes also called simply cronyism, is a pejorative term used in political discourse to describe a situation in which businesses profit from a close relationship with state power, either through an anti-competitive regulatory environment, direct government largesse, and/or corruption. Examples given for crony capitalism include obtainment of permits, government grants, tax breaks, or other undue influence from businesses over the state's deployment of public goods, for example, mining concessions for primary commodities or contracts for public works. In other words, it is used to describe a situation where businesses thrive not as a result of free enterprise, but rather collusion between a business class and the political class.
Money is then made not merely by making a profit in the market, but through profiteering by rent seeking using this monopoly or oligopoly. Entrepreneurship and innovative practices which seek to reward risk are stifled, since crony businesses add little value: hardly anything of significant value is created by them, with transactions taking the form of trading. Crony capitalism spills over into government, politics, and the media when this nexus distorts the economy and affects society to the extent that it corrupts public-serving economic, political, and social ideals.
The first extensive use of the term "crony capitalism" came about in the 1980s, to characterize the Philippine economy under the dictatorship of Ferdinand Marcos. Early uses of this term to describe the economic practices of the Marcos regime included that of Ricardo Manapat, who introduced it in his 1979 pamphlet "Some are Smarter than Others", which was later published in 1991; former Time magazine business editor George M. Taber, who used the term in a Time magazine article in 1980, and activist (and later Finance Minister) Jaime Ongpin, who used the term extensively in his writing and is sometimes credited for having coined it.
The term crony capitalism made a significant impact in the public as an explanation of the Asian financial crisis.
It is also used to describe governmental decisions favoring cronies of governmental officials.
The term is used largely interchangeably with the related term corporate welfare, although the latter is by definition specific to corporations.
Crony capitalism exists along a continuum. In its lightest form, crony capitalism consists of collusion among market players which is officially tolerated or encouraged by the government. While perhaps lightly competing against each other, they will present a unified front (sometimes called a trade association or industry trade group) to the government in requesting subsidies or aid or regulation. For instance, newcomers to a market then need to surmount significant barriers to entry in seeking loans, acquiring shelf space, or receiving official sanction. Some such systems are very formalized, such as sports leagues and the Medallion System of the taxicabs of New York City, but often the process is more subtle, such as expanding training and certification exams to make it more expensive for new entrants to enter a market and thereby limiting potential competition. In technological fields, there may evolve a system whereby new entrants may be accused of infringing on patents that the established competitors never assert against each other. In spite of this, some competitors may succeed when the legal barriers are light. The term crony capitalism is generally used when these practices either come to dominate the economy as a whole, or come to dominate the most valuable industries in an economy. Intentionally ambiguous laws and regulations are common in such systems. Taken strictly, such laws would greatly impede practically all business activity, but in practice they are only erratically enforced. The specter of having such laws suddenly brought down upon a business provides an incentive to stay in the good graces of political officials. Troublesome rivals who have overstepped their bounds can have these laws suddenly enforced against them, leading to fines or even jail time. Even in high-income democracies with well-established legal systems and freedom of the press in place, a larger state is generally associated with increased political corruption.
The term crony capitalism was initially applied to states involved in the 1997 Asian financial crisis such as Indonesia, South Korea and Thailand. In these cases, the term was used to point out how family members of the ruling leaders become extremely wealthy with no non-political justification. Southeast Asian nations, such as Hong Kong and Malaysia, still score very poorly in rankings measuring this. It was also used in this context as part of a broader liberal critique of economic dirigisme. The term has also been applied to the system of oligarchs in Russia. Other states to which the term has been applied include India, in particular the system after the 1990s liberalization, whereby land and other resources were given at throwaway prices in the name of public private partnerships, the more recent coal-gate scam and cheap allocation of land and resources to Adani SEZ under the Congress and BJP governments. Similar references to crony capitalism have been made to other countries such as Argentina and Greece. Wu Jinglian, one of China's leading economists and a longtime advocate of its transition to free markets, says that it faces two starkly contrasting futures, namely a market economy under the rule of law or crony capitalism. A dozen years later, prominent political scientist Pei Minxin had concluded that the latter course had become deeply embedded in China. The anti-corruption campaign under Xi Jinping (2012–) has seen more than 100,000 high- and low-ranking Chinese officials indicted and jailed.
Many prosperous nations have also had varying amounts of cronyism throughout their history, including the United Kingdom especially in the 1600s and 1700s, the United States and Japan.
The Economist benchmarks countries based on a crony-capitalism index calculated via how much economic activity occurs in industries prone to cronyism. Its 2014 Crony Capitalism Index ranking listed Hong Kong, Russia and Malaysia in the top three spots.
Crony capitalism in finance was found in the Second Bank of the United States. It was a private company, but its largest stockholder was the federal government, which owned 20%. It was an early bank regulator and grew to be one of the most powerful organizations in the country, due largely to being the depository of the government's revenue.
The Gramm–Leach–Bliley Act in 1999 completely removed Glass–Steagall’s separation between commercial banks and investment banks. After this repeal, commercial banks, investment banks and insurance companies combined their lobbying efforts. Critics claim this was instrumental in the passage of the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005.
More direct government involvement in a specific sector can also lead to specific areas of crony capitalism, even if the economy as a whole may be competitive. This is most common in natural resource sectors through the granting of mining or drilling concessions, but it is also possible through a process known as regulatory capture where the government agencies in charge of regulating an industry come to be controlled by that industry. Governments will often establish in good faith government agencies to regulate an industry. However, the members of an industry have a very strong interest in the actions of that regulatory body while the rest of the citizenry are only lightly affected. As a result, it is not uncommon for current industry players to gain control of the watchdog and to use it against competitors. This typically takes the form of making it very expensive for a new entrant to enter the market. An 1824 landmark United States Supreme Court ruling overturned a New York State-granted monopoly ("a veritable model of state munificence" facilitated by Robert R. Livingston, one of the Founding Fathers) for the then-revolutionary technology of steamboats. Leveraging the Supreme Court's establishment of Congressional supremacy over commerce, the Interstate Commerce Commission was established in 1887 with the intent of regulating railroad robber barons. President Grover Cleveland appointed Thomas M. Cooley, a railroad ally, as its first chairman and a permit system was used to deny access to new entrants and legalize price fixing.
The defense industry in the United States is often described as an example of crony capitalism in an industry. Connections with the Pentagon and lobbyists in Washington are described by critics as more important than actual competition due to the political and secretive nature of defense contracts. In the Airbus-Boeing WTO dispute, Airbus (which receives outright subsidies from European governments) has stated Boeing receives similar subsidies which are hidden as inefficient defense contracts. Other American defense companies were put under scrutiny for no-bid contracts for Iraq War and Hurricane Katrina related contracts purportedly due to having cronies in the Bush administration.
Gerald P. O'Driscoll, former vice president at the Federal Reserve Bank of Dallas, stated that Fannie Mae and Freddie Mac became examples of crony capitalism as government backing let Fannie and Freddie dominate mortgage underwriting, saying: "The politicians created the mortgage giants, which then returned some of the profits to the pols—sometimes directly, as campaign funds; sometimes as 'contributions' to favored constituents".
In its worst form, crony capitalism can devolve into simple corruption where any pretense of a free market is dispensed with, bribes to government officials are considered de rigueur and tax evasion is common. This is seen in many parts of Africa and is sometimes called plutocracy (rule by wealth) or kleptocracy (rule by theft). Kenyan economist David Ndii has repeatedly brought to light how this system has manifested over time, occasioned by the reign of Uhuru Kenyatta as president.
Corrupt governments may favor one set of business owners who have close ties to the government over others. This may also be done with religious or ethnic favoritism. For instance, Alawites in Syria have a disproportionate share of power in the government and business there (President Assad himself is an Alawite). This can be explained by considering personal relationships as a social network. As government and business leaders try to accomplish various things, they naturally turn to other powerful people for support in their endeavors. These people form hubs in the network. In a developing country those hubs may be very few, thus concentrating economic and political power in a small interlocking group.
Normally, this will be untenable to maintain in business as new entrants will affect the market. However, if business and government are entwined, then the government can maintain the small-hub network.
Raymond Vernon, specialist in economics and international affairs, wrote that the Industrial Revolution began in Great Britain because it was the first to successfully limit the power of veto groups (typically cronies of those with power in government) to block innovations, writing: "Unlike most other national environments, the British environment of the early 19th century contained relatively few threats to those who improved and applied existing inventions, whether from business competitors, labor, or the government itself. In other European countries, by contrast, the merchant guilds ... were a pervasive source of veto for many centuries. This power was typically bestowed upon them by government." For example, a Russian inventor produced a steam engine in 1766 and disappeared without a trace. Vernon further stated that "a steam powered horseless carriage produced in France in 1769 was officially suppressed." James Watt began experimenting with steam in 1763, got a patent in 1769 and began commercial production in 1775.
Raghuram Rajan, former governor of the Reserve Bank of India, has said: "One of the greatest dangers to the growth of developing countries is the middle income trap, where crony capitalism creates oligarchies that slow down growth. If the debate during the elections is any pointer, this is a very real concern of the public in India today". Tavleen Singh, columnist for The Indian Express, has disagreed. According to Singh, India's corporate success is not a product of crony capitalism, but because India is no longer under the influence of crony socialism.
While the problem is generally accepted across the political spectrum, ideology shades the view of the problem's causes and therefore its solutions. Political views mostly fall into two camps which might be called the socialist and capitalist critique. The socialist position is that crony capitalism is the inevitable result of any strictly capitalist system and thus broadly democratic government must regulate economic, or wealthy, interests to restrict monopoly. The capitalist position is that natural monopolies are rare, therefore governmental regulations generally abet established wealthy interests by restricting competition.
Critics of crony capitalism, including socialists and anti-capitalists, often assert that so-called crony capitalism is simply the inevitable result of any strictly capitalist system. Jane Jacobs described it as a natural consequence of collusion between those managing power and trade, while Noam Chomsky has argued that the word crony is superfluous when describing capitalism. Since businesses make money and money leads to political power, businesses will inevitably use that power to influence governments. Much of the impetus behind campaign finance reform in the United States and in other countries is an attempt to prevent economic power from being used to take political power.
Ravi Batra argues that "all official economic measures adopted since 1981 ... have devastated the middle class" and that the Occupy Wall Street movement should push for their repeal and thus end the influence of the super wealthy in the political process which he considers a manifestation of crony capitalism.
Socialist economists, such as Robin Hahnel, have criticized the term as an ideologically motivated attempt to cast what are, in their view, the fundamental problems of capitalism as avoidable irregularities. Socialist economists dismiss the term as an apologetic for the failures of neoliberal policy and, more fundamentally, for what they see as the weaknesses of market allocation.
Supporters of capitalism also generally oppose crony capitalism. Further, supporters such as classical liberals, neoliberals and right-libertarians consider it an aberration brought on by governmental favors incompatible with the free market. In the capitalist view, cronyism is the result of an excess of interference in the market, which will inevitably result in a toxic combination of corporations and government officials running sectors of the economy. For instance, the Financial Times observed that, in Vietnam during the 2010s, the primary beneficiaries of cronyism were Communist party officials, noting also the "common practice of employing only party members and their family members and associates to government jobs or to jobs in state-owned enterprises."
Conservative commentator Ben Shapiro prefers to equate this problem with terms such as corporatocracy or corporatism, considered "a modern form of mercantilism", to emphasize that the only way to run a profitable business in such a system is to have help from corrupt government officials. Likewise, Hernando de Soto said that mercantilism "is also known as 'crony' or 'noninclusive' capitalism".
Even if the initial regulation was well-intentioned (to curb actual abuses) and even if the initial lobbying by corporations was well-intentioned (to reduce illogical regulations), the mixture of business and government stifles competition, a collusive result called regulatory capture. Burton W. Folsom Jr. distinguishes those who engage in crony capitalism, whom he designates political entrepreneurs, from those who compete in the marketplace without special aid from government, whom he calls market entrepreneurs. Market entrepreneurs such as James J. Hill, Cornelius Vanderbilt and John D. Rockefeller succeeded by producing a quality product at a competitive price. Political entrepreneurs such as Edward Collins in steamships and the leaders of the Union Pacific Railroad, by contrast, were men who used the power of government to succeed. They tried to gain subsidies or in some way use government to stop competitors.
{
"paragraph_id": 0,
"text": "Crony capitalism, sometimes also called simply cronyism, is a pejorative term used in political discourse to describe a situation in which businesses profit from a close relationship with state power, either through an anti-competitive regulatory environment, direct government largesse, and/or corruption. Examples given for crony capitalism include obtainment of permits, government grants, tax breaks, or other undue influence from businesses over the state's deployment of public goods, for example, mining concessions for primary commodities or contracts for public works. In other words, it is used to describe a situation where businesses thrive not as a result of free enterprise, but rather collusion between a business class and the political class.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Money is then made not merely by making a profit in the market, but through profiteering by rent seeking using this monopoly or oligopoly. Entrepreneurship and innovative practices which seek to reward risk are stifled since the value-added is little by crony businesses, as hardly anything of significant value is created by them, with transactions taking the form of trading. Crony capitalism spills over into the government, the politics, and the media, when this nexus distorts the economy and affects society to an extent it corrupts public-serving economic, political, and social ideals.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The first extensive use of the term \"crony capitalism\" came about in the 1980s, to characterize the Philippine economy under the dictatorship of Ferdinand Marcos. Early uses of this term to describe the economic practices of the Marcos regime included that of Ricardo Manapat, who introduced it in his 1979 pamphlet \"Some are Smarter than Others\", which was later published in 1991; former Time magazine business editor George M. Taber, who used the term in a Time magazine article in 1980, and activist (and later Finance Minister) Jaime Ongpin, who used the term extensively in his writing and is sometimes credited for having coined it.",
"title": "Historical usage"
},
{
"paragraph_id": 3,
"text": "The term crony capitalism made a significant impact in the public as an explanation of the Asian financial crisis.",
"title": "Historical usage"
},
{
"paragraph_id": 4,
"text": "It is also used to describe governmental decisions favoring cronies of governmental officials.",
"title": "Historical usage"
},
{
"paragraph_id": 5,
"text": "The term is used largely interchangeably with the related term corporate welfare, although the latter is by definition specific to corporations.",
"title": "Historical usage"
},
{
"paragraph_id": 6,
"text": "Crony capitalism exists along a continuum. In its lightest form, crony capitalism consists of collusion among market players which is officially tolerated or encouraged by the government. While perhaps lightly competing against each other, they will present a unified front (sometimes called a trade association or industry trade group) to the government in requesting subsidies or aid or regulation. For instance, newcomers to a market then need to surmount significant barriers to entry in seeking loans, acquiring shelf space, or receiving official sanction. Some such systems are very formalized, such as sports leagues and the Medallion System of the taxicabs of New York City, but often the process is more subtle, such as expanding training and certification exams to make it more expensive for new entrants to enter a market and thereby limiting potential competition. In technological fields, there may evolve a system whereby new entrants may be accused of infringing on patents that the established competitors never assert against each other. In spite of this, some competitors may succeed when the legal barriers are light. The term crony capitalism is generally used when these practices either come to dominate the economy as a whole, or come to dominate the most valuable industries in an economy. Intentionally ambiguous laws and regulations are common in such systems. Taken strictly, such laws would greatly impede practically all business activity, but in practice they are only erratically enforced. The specter of having such laws suddenly brought down upon a business provides an incentive to stay in the good graces of political officials. Troublesome rivals who have overstepped their bounds can have these laws suddenly enforced against them, leading to fines or even jail time. Even in high-income democracies with well-established legal systems and freedom of the press in place, a larger state is generally associated with increased political corruption.",
"title": "In practice"
},
{
"paragraph_id": 7,
"text": "The term crony capitalism was initially applied to states involved in the 1997 Asian financial crisis such as Indonesia, South Korea and Thailand. In these cases, the term was used to point out how family members of the ruling leaders become extremely wealthy with no non-political justification. Southeast Asian nations, such as Hong Kong and Malaysia, still score very poorly in rankings measuring this. It was also used in this context as part of a broader liberal critique of economic dirigisme. The term has also been applied to the system of oligarchs in Russia. Other states to which the term has been applied include India, in particular the system after the 1990s liberalization, whereby land and other resources were given at throwaway prices in the name of public private partnerships, the more recent coal-gate scam and cheap allocation of land and resources to Adani SEZ under the Congress and BJP governments. Similar references to crony capitalism have been made to other countries such as Argentina and Greece. Wu Jinglian, one of China's leading economists and a longtime advocate of its transition to free markets, says that it faces two starkly contrasting futures, namely a market economy under the rule of law or crony capitalism. A dozen years later, prominent political scientist Pei Minxin had concluded that the latter course had become deeply embedded in China. The anti-corruption campaign under Xi Jinping (2012–) has seen more than 100,000 high- and low-ranking Chinese officials indicted and jailed.",
"title": "In practice"
},
{
"paragraph_id": 8,
"text": "Many prosperous nations have also had varying amounts of cronyism throughout their history, including the United Kingdom especially in the 1600s and 1700s, the United States and Japan.",
"title": "In practice"
},
{
"paragraph_id": 9,
"text": "The Economist benchmarks countries based on a crony-capitalism index calculated via how much economic activity occurs in industries prone to cronyism. Its 2014 Crony Capitalism Index ranking listed Hong Kong, Russia and Malaysia in the top three spots.",
"title": "In practice"
},
{
"paragraph_id": 10,
"text": "Crony capitalism in finance was found in the Second Bank of the United States. It was a private company, but its largest stockholder was the federal government which owned 20%. It was an early bank regulator and grew to be one being the most powerful organizations in the country due largely to being the depository of the government's revenue.",
"title": "In finance"
},
{
"paragraph_id": 11,
"text": "The Gramm–Leach–Bliley Act in 1999 completely removed Glass–Steagall’s separation between commercial banks and investment banks. After this repeal, commercial banks, investment banks and insurance companies combined their lobbying efforts. Critics claim this was instrumental in the passage of the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005.",
"title": "In finance"
},
{
"paragraph_id": 12,
"text": "More direct government involvement in a specific sector can also lead to specific areas of crony capitalism, even if the economy as a whole may be competitive. This is most common in natural resource sectors through the granting of mining or drilling concessions, but it is also possible through a process known as regulatory capture where the government agencies in charge of regulating an industry come to be controlled by that industry. Governments will often establish in good faith government agencies to regulate an industry. However, the members of an industry have a very strong interest in the actions of that regulatory body while the rest of the citizenry are only lightly affected. As a result, it is not uncommon for current industry players to gain control of the watchdog and to use it against competitors. This typically takes the form of making it very expensive for a new entrant to enter the market. An 1824 landmark United States Supreme Court ruling overturned a New York State-granted monopoly (\"a veritable model of state munificence\" facilitated by Robert R. Livingston, one of the Founding Fathers) for the then-revolutionary technology of steamboats. Leveraging the Supreme Court's establishment of Congressional supremacy over commerce, the Interstate Commerce Commission was established in 1887 with the intent of regulating railroad robber barons. President Grover Cleveland appointed Thomas M. Cooley, a railroad ally, as its first chairman and a permit system was used to deny access to new entrants and legalize price fixing.",
"title": "In sections of an economy"
},
{
"paragraph_id": 13,
"text": "The defense industry in the United States is often described as an example of crony capitalism in an industry. Connections with the Pentagon and lobbyists in Washington are described by critics as more important than actual competition due to the political and secretive nature of defense contracts. In the Airbus-Boeing WTO dispute, Airbus (which receives outright subsidies from European governments) has stated Boeing receives similar subsidies which are hidden as inefficient defense contracts. Other American defense companies were put under scrutiny for no-bid contracts for Iraq War and Hurricane Katrina related contracts purportedly due to having cronies in the Bush administration.",
"title": "In sections of an economy"
},
{
"paragraph_id": 14,
"text": "Gerald P. O'Driscoll, former vice president at the Federal Reserve Bank of Dallas, stated that Fannie Mae and Freddie Mac became examples of crony capitalism as government backing let Fannie and Freddie dominate mortgage underwriting, saying. \"The politicians created the mortgage giants, which then returned some of the profits to the pols—sometimes directly, as campaign funds; sometimes as \"contributions\" to favored constituents\".",
"title": "In sections of an economy"
},
{
"paragraph_id": 15,
"text": "In its worst form, crony capitalism can devolve into simple corruption where any pretense of a free market is dispensed with, bribes to government officials are considered de rigueur and tax evasion is common. This is seen in many parts of Africa and is sometimes called plutocracy (rule by wealth) or kleptocracy (rule by theft). Kenyan economist David Ndii has repeatedly brought to light how this system has manifested over time, occasioned by the reign of Uhuru Kenyatta as president.",
"title": "In developing economies"
},
{
"paragraph_id": 16,
"text": "Corrupt governments may favor one set of business owners who have close ties to the government over others. This may also be done with, religious, or ethnic favoritism. For instance, Alawites in Syria have a disproportionate share of power in the government and business there (President Assad himself is an Alawite). This can be explained by considering personal relationships as a social network. As government and business leaders try to accomplish various things, they naturally turn to other powerful people for support in their endeavors. These people form hubs in the network. In a developing country those hubs may be very few, thus concentrating economic and political power in a small interlocking group.",
"title": "In developing economies"
},
{
"paragraph_id": 17,
"text": "Normally, this will be untenable to maintain in business as new entrants will affect the market. However, if business and government are entwined, then the government can maintain the small-hub network.",
"title": "In developing economies"
},
{
"paragraph_id": 18,
"text": "Raymond Vernon, specialist in economics and international affairs, wrote that the Industrial Revolution began in Great Britain because they were the first to successfully limit the power of veto groups (typically cronies of those with power in government) to block innovations, writing: \"Unlike most other national environments, the British environment of the early 19th century contained relatively few threats to those who improved and applied existing inventions, whether from business competitors, labor, or the government itself. In other European countries, by contrast, the merchant guilds ... were a pervasive source of veto for many centuries. This power was typically bestowed upon them by government.\" For example, a Russian inventor produced a steam engine in 1766 and disappeared without a trace. Vermon further stated that \"a steam powered horseless carriage produced in France in 1769 was officially suppressed.\" James Watt began experimenting with steam in 1763, got a patent in 1769 and began commercial production in 1775.",
"title": "In developing economies"
},
{
"paragraph_id": 19,
"text": "Raghuram Rajan, former governor of the Reserve Bank of India, has said: \"One of the greatest dangers to the growth of developing countries is the middle income trap, where crony capitalism creates oligarchies that slow down growth. If the debate during the elections is any pointer, this is a very real concern of the public in India today\". Tavleen Singh, columnist for The Indian Express, has disagreed. According to Singh, India's corporate success is not a product of crony capitalism, but because India is no longer under the influence of crony socialism.",
"title": "In developing economies"
},
{
"paragraph_id": 20,
"text": "While the problem is generally accepted across the political spectrum, ideology shades the view of the problem's causes and therefore its solutions. Political views mostly fall into two camps which might be called the socialist and capitalist critique. The socialist position is that crony capitalism is the inevitable result of any strictly capitalist system and thus broadly democratic government must regulate economic, or wealthy, interests to restrict monopoly. The capitalist position is that natural monopolies are rare, therefore governmental regulations generally abet established wealthy interests by restricting competition.",
"title": "Political viewpoints"
},
{
"paragraph_id": 21,
"text": "Critics of crony capitalism including socialists and anti-capitalists often assert that so-called crony capitalism is simply the inevitable result of any strictly capitalist system. Jane Jacobs described it as a natural consequence of collusion between those managing power and trade while Noam Chomsky has argued that the word crony is superfluous when describing capitalism. Since businesses make money and money leads to political power, business will inevitably use their power to influence governments. Much of the impetus behind campaign finance reform in the United States and in other countries is an attempt to prevent economic power being used to take political power.",
"title": "Political viewpoints"
},
{
"paragraph_id": 22,
"text": "Ravi Batra argues that \"all official economic measures adopted since 1981 ... have devastated the middle class\" and that the Occupy Wall Street movement should push for their repeal and thus end the influence of the super wealthy in the political process which he considers a manifestation of crony capitalism.",
"title": "Political viewpoints"
},
{
"paragraph_id": 23,
"text": "Socialist economists, such as Robin Hahnel, have criticized the term as an ideologically motivated attempt to cast what is in their view the fundamental problems of capitalism as avoidable irregularities. Socialist economists dismiss the term as an apologetic for failures of neoliberal policy and more fundamentally their perception of the weaknesses of market allocation.",
"title": "Political viewpoints"
},
{
"paragraph_id": 24,
"text": "Supporters of capitalism also generally oppose crony capitalism. Further, supporters such as classical liberals, neoliberals and right-libertarians consider it an aberration brought on by governmental favors incompatible with free market.. In the capitalist view, cronyism is the result of an excess of interference in the market which inevitably will result in a toxic combination of corporations and government officials running sectors of the economy. For instance, the Financial Times observed that, in Vietnam during the 2010s, the primary beneficiaries of cronyism were Communist party officials, noting also the \"common practice of employing only party members and their family members and associates to government jobs or to jobs in state-owned enterprises.\"",
"title": "Political viewpoints"
},
{
"paragraph_id": 25,
"text": "Conservative commentator Ben Shapiro prefers to equate this problem with terms such as corporatocracy or corporatism, considered \"a modern form of mercantilism\", to emphasize that the only way to run a profitable business in such a system is to have help from corrupt government officials. Likewise, Hernando de Soto said that mercantilism \"is also known as 'crony' or 'noninclusive' capitalism\".",
"title": "Political viewpoints"
},
{
"paragraph_id": 26,
"text": "Even if the initial regulation was well-intentioned (to curb actual abuses) and even if the initial lobbying by corporations was well-intentioned (to reduce illogical regulations), the mixture of business and government stifles competition, a collusive result called regulatory capture. Burton W. Folsom Jr. distinguishes those that engage in crony capitalism—designated by him political entrepreneurs—from those who compete in the marketplace without special aid from government, whom he calls market entrepreneurs. The market entrepreneurs such as James J. Hill, Cornelius Vanderbilt and John D. Rockefeller succeeded by producing a quality product at a competitive price. For example, the political entrepreneurs such as Edward Collins in steamships and the leaders of the Union Pacific Railroad in railroads were men who used the power of government to succeed. They tried to gain subsidies or in some way use government to stop competitors.",
"title": "Political viewpoints"
}
] | Crony capitalism, sometimes also called simply cronyism, is a pejorative term used in political discourse to describe a situation in which businesses profit from a close relationship with state power, either through an anti-competitive regulatory environment, direct government largesse, and/or corruption. Examples given for crony capitalism include obtainment of permits, government grants, tax breaks, or other undue influence from businesses over the state's deployment of public goods, for example, mining concessions for primary commodities or contracts for public works. In other words, it is used to describe a situation where businesses thrive not as a result of free enterprise, but rather collusion between a business class and the political class. Money is then made not merely by making a profit in the market, but through profiteering by rent seeking using this monopoly or oligopoly. Entrepreneurship and innovative practices which seek to reward risk are stifled since the value-added is little by crony businesses, as hardly anything of significant value is created by them, with transactions taking the form of trading. Crony capitalism spills over into the government, the politics, and the media, when this nexus distorts the economy and affects society to an extent it corrupts public-serving economic, political, and social ideals. | 2001-03-11T15:34:30Z | 2023-12-10T21:04:54Z | [
"Template:Discrimination sidebar",
"Template:Clarification needed",
"Template:Clarify",
"Template:Use mdy dates",
"Template:Seealso",
"Template:Cbignore",
"Template:Short description",
"Template:See also",
"Template:Dead link",
"Template:Misleading",
"Template:Cite news",
"Template:Refbegin",
"Template:Refend",
"Template:Cn",
"Template:Cite journal",
"Template:Cite web",
"Template:Primary source inline",
"Template:Cols",
"Template:Colend",
"Template:Reflist",
"Template:Cite book",
"Template:Multiple issues",
"Template:Political corruption sidebar",
"Template:Sisterlinks",
"Template:Capitalism",
"Template:Citation needed",
"Template:Citation",
"Template:Corruption"
] | https://en.wikipedia.org/wiki/Crony_capitalism |
5,252 | Lists of universities and colleges | This is a list of lists of universities and colleges. | [
{
"paragraph_id": 0,
"text": "This is a list of lists of universities and colleges.",
"title": ""
}
] | This is a list of lists of universities and colleges. | 2023-04-09T12:43:10Z | [
"Template:Maincat",
"Template:Portal",
"Template:In title",
"Template:Look from",
"Template:List of lists",
"Template:Short description"
] | https://en.wikipedia.org/wiki/Lists_of_universities_and_colleges |
|
5,253 | Constitution | A constitution is the aggregate of fundamental principles or established precedents that constitute the legal basis of a polity, organization or other type of entity, and commonly determines how that entity is to be governed.
When these principles are written down into a single document or set of legal documents, those documents may be said to embody a written constitution; if they are encompassed in a single comprehensive document, it is said to embody a codified constitution. The Constitution of the United Kingdom is a notable example of an uncodified constitution; it is instead written in numerous fundamental Acts of a legislature, court cases, or treaties.
Constitutions concern different levels of organizations, from sovereign countries to companies and unincorporated associations. A treaty that establishes an international organization is also its constitution, in that it would define how that organization is constituted. Within states, a constitution defines the principles upon which the state is based, the procedure in which laws are made and by whom. Some constitutions, especially codified constitutions, also act as limiters of state power, by establishing lines which a state's rulers cannot cross, such as fundamental rights.
The Constitution of India is the longest written constitution of any country in the world, with 146,385 words in its English-language version, while the Constitution of Monaco is the shortest written constitution with 3,814 words. The Constitution of San Marino might be the world's oldest active written constitution, since some of its core documents have been in operation since 1600, while the Constitution of the United States is the oldest active codified constitution. The historical life expectancy of a constitution since 1789 is approximately 19 years.
The term constitution comes through French from the Latin word constitutio, used for regulations and orders, such as the imperial enactments (constitutiones principis: edicta, mandata, decreta, rescripta). Later, the term was widely used in canon law for an important determination, especially a decree issued by the Pope, now referred to as an apostolic constitution.
William Blackstone used the term for significant and egregious violations of public trust, of a nature and extent that the transgression would justify a revolutionary response. The term as used by Blackstone was not for a legal text, nor did he intend to include the later American concept of judicial review: "for that were to set the judicial power above that of the legislature, which would be subversive of all government".
Generally, every modern written constitution confers specific powers on an organization or institutional entity, established upon the primary condition that it abides by the constitution's limitations. According to Scott Gordon, a political organization is constitutional to the extent that it "contain[s] institutionalized mechanisms of power control for the protection of the interests and liberties of the citizenry, including those that may be in the minority".
Activities of officials within an organization or polity that fall within the constitutional or statutory authority of those officials are termed "within power" (or, in Latin, intra vires); if they do not, they are termed "beyond power" (or, in Latin, ultra vires). For example, a students' union may be prohibited as an organization from engaging in activities not concerning students; if the union becomes involved in non-student activities, these activities are considered to be ultra vires of the union's charter, and nobody would be compelled by the charter to follow them. An example from the constitutional law of sovereign states would be a provincial parliament in a federal state trying to legislate in an area that the constitution allocates exclusively to the federal parliament, such as ratifying a treaty. Action that appears to be beyond power may be judicially reviewed and, if found to be beyond power, must cease. Legislation that is found to be beyond power will be "invalid" and of no force; this applies to primary legislation, requiring constitutional authorization, and secondary legislation, ordinarily requiring statutory authorization. In this context, "within power", intra vires, "authorized" and "valid" have the same meaning; as do "beyond power", ultra vires, "not authorized" and "invalid".
In most but not all modern states the constitution has supremacy over ordinary statutory law (see Uncodified constitution below); in such states when an official act is unconstitutional, i.e. it is not a power granted to the government by the constitution, that act is null and void, and the nullification is ab initio, that is, from inception, not from the date of the finding. It was never "law", even though, if it had been a statute or statutory provision, it might have been adopted according to the procedures for adopting legislation. Sometimes the problem is not that a statute is unconstitutional, but that the application of it is, on a particular occasion, and a court may decide that while there are ways it could be applied that are constitutional, that instance was not allowed or legitimate. In such a case, only that application may be ruled unconstitutional. Historically, the remedies for such violations have been petitions for common law writs, such as quo warranto.
Scholars debate whether a constitution must necessarily be autochthonous, resulting from the nation's "spirit". Hegel said: "A constitution...is the work of centuries; it is the idea, the consciousness of rationality so far as that consciousness is developed in a particular nation."
Since 1789, beginning with the Constitution of the United States of America (U.S. Constitution), which is the oldest and shortest written constitution still in force, close to 800 constitutions have been adopted and subsequently amended around the world by independent states.
In the late 18th century, Thomas Jefferson predicted that a period of 20 years would be the optimal time for any constitution to be still in force, since "the earth belongs to the living, and not to the dead". Indeed, according to recent studies, the average life of any new written constitution is around 19 years. However, a great number of constitutions do not last more than 10 years, and around 10% do not last more than one year, as was the case of the French Constitution of 1791. By contrast, some constitutions, notably that of the United States, have remained in force for more than two centuries, often without major revision for long periods of time.
The most common reasons for these frequent changes are the political desire for an immediate outcome and the short time devoted to the constitutional drafting process. A study in 2009 showed that the average time taken to draft a constitution is around 16 months; however, some extreme cases have also been recorded. For example, Myanmar's 2008 Constitution was drafted in secret over more than 17 years, whereas at the other extreme, during the drafting of Japan's 1946 Constitution, the bureaucrats drafted everything in no more than a week. Japan has the oldest unamended constitution in the world. The record for the shortest overall process of drafting, adoption, and ratification of a national constitution belongs to Romania's 1938 constitution, which installed a royal dictatorship in less than a month. Studies have shown that the extreme cases, in which the constitution-making process either took very long or was extremely short, typically occurred in non-democracies.
In principle, constitutional rights are not a specific characteristic of democratic countries. Autocratic states have constitutions, such as that of North Korea, which officially grants every citizen, among other things, the freedom of expression. However, the extent to which governments abide by their own constitutional provisions varies. In North Korea, for example, the Ten Principles for the Establishment of a Monolithic Ideological System are said to have eclipsed the constitution in importance as a frame of government in practice. Developing a legal and political tradition of strict adherence to constitutional provisions is considered foundational to the rule of law.
Excavations in modern-day Iraq by Ernest de Sarzec in 1877 found evidence of the earliest known code of justice, issued by the Sumerian king Urukagina of Lagash c. 2300 BC. Perhaps the earliest prototype for a law of government, the document itself has not yet been discovered; however, it is known to have granted some rights to Urukagina's citizens. For example, it is known to have relieved widows and orphans of taxes and to have protected the poor from the usury of the rich.
After that, many governments ruled by special codes of written laws. The oldest such document still known to exist seems to be the Code of Ur-Nammu of Ur (c. 2050 BC). Some of the better-known ancient law codes are the code of Lipit-Ishtar of Isin, the code of Hammurabi of Babylonia, the Hittite code, the Assyrian code, and Mosaic law.
In 621 BC, a scribe named Draco codified the oral laws of the city-state of Athens; this code prescribed the death penalty for many offenses (thus creating the modern term "draconian" for very strict rules). In 594 BC, Solon, the ruler of Athens, created the new Solonian Constitution. It eased the burden of the workers, and determined that membership of the ruling class was to be based on wealth (plutocracy), rather than on birth (aristocracy). Cleisthenes again reformed the Athenian constitution and set it on a democratic footing in 508 BC.
Aristotle (c. 350 BC) was the first to make a formal distinction between ordinary law and constitutional law, establishing ideas of constitution and constitutionalism, and attempting to classify different forms of constitutional government. The most basic definition he used to describe a constitution in general terms was "the arrangement of the offices in a state". In his works Constitution of Athens, Politics, and Nicomachean Ethics, he explores different constitutions of his day, including those of Athens, Sparta, and Carthage. He classified both what he regarded as good and what he regarded as bad constitutions, and came to the conclusion that the best constitution was a mixed system including monarchic, aristocratic, and democratic elements. He also distinguished between citizens, who had the right to participate in the state, and non-citizens and slaves, who did not.
The Romans initially codified their constitution in 450 BC as the Twelve Tables. They operated under a series of laws that were added from time to time, but Roman law was not reorganized into a single code until the Codex Theodosianus (438 AD); later, in the Eastern Empire, the Codex repetitæ prælectionis (534) was highly influential throughout Europe. This was followed in the east by the Ecloga of Leo III the Isaurian (740) and the Basilica of Basil I (878).
The Edicts of Ashoka established constitutional principles for the 3rd century BC Maurya king's rule in India. For constitutional principles almost lost to antiquity, see the code of Manu.
Many of the Germanic peoples that filled the power vacuum left by the Western Roman Empire in the Early Middle Ages codified their laws. One of the first of these Germanic law codes to be written was the Visigothic Code of Euric (471 AD). This was followed by the Lex Burgundionum, applying separate codes for Germans and for Romans; the Pactus Alamannorum; and the Salic Law of the Franks, all written soon after 500. In 506, the Breviarum or "Lex Romana" of Alaric II, king of the Visigoths, adopted and consolidated the Codex Theodosianus together with assorted earlier Roman laws. Systems that appeared somewhat later include the Edictum Rothari of the Lombards (643), the Lex Visigothorum (654), the Lex Alamannorum (730), and the Lex Frisionum (c. 785). These continental codes were all composed in Latin, while Anglo-Saxon was used for those of England, beginning with the Code of Æthelberht of Kent (602). Around 893, Alfred the Great combined this and two other earlier Saxon codes, with various Mosaic and Christian precepts, to produce the Doom book code of laws for England.
Japan's Seventeen-article constitution written in 604, reportedly by Prince Shōtoku, is an early example of a constitution in Asian political history. Influenced by Buddhist teachings, the document focuses more on social morality than on institutions of government, and remains a notable early attempt at a government constitution.
The Constitution of Medina (Arabic: صحیفة المدینه, Ṣaḥīfat al-Madīna), also known as the Charter of Medina, was drafted by the Islamic prophet Muhammad after his flight (hijra) to Yathrib where he became political leader. It constituted a formal agreement between Muhammad and all of the significant tribes and families of Yathrib (later known as Medina), including Muslims, Jews, and pagans. The document was drawn up with the explicit concern of bringing to an end the bitter intertribal fighting between the clans of the Aws (Aus) and Khazraj within Medina. To this effect it instituted a number of rights and responsibilities for the Muslim, Jewish, and pagan communities of Medina bringing them within the fold of one community – the Ummah. The precise dating of the Constitution of Medina remains debated, but generally scholars agree it was written shortly after the Hijra (622).
In Wales, the Cyfraith Hywel (Law of Hywel) was codified by Hywel Dda c. 942–950.
The Pravda Yaroslava, originally combined by Yaroslav the Wise the Grand Prince of Kiev, was granted to Great Novgorod around 1017, and in 1054 was incorporated into the Russkaya Pravda; it became the law for all of Kievan Rus'. It survived only in later editions of the 15th century.
In England, Henry I's proclamation of the Charter of Liberties in 1100 bound the king for the first time in his treatment of the clergy and the nobility. This idea was extended and refined by the English barony when they forced King John to sign Magna Carta in 1215. The most important single article of the Magna Carta, related to "habeas corpus", provided that the king was not permitted to imprison, outlaw, exile or kill anyone at a whim – there must be due process of law first. This article, Article 39, of the Magna Carta read:
No free man shall be arrested, or imprisoned, or deprived of his property, or outlawed, or exiled, or in any way destroyed, nor shall we go against him or send against him, unless by legal judgement of his peers, or by the law of the land.
This provision became the cornerstone of English liberty after that point. The social contract in the original case was between the king and the nobility, but was gradually extended to all of the people. It led to the system of Constitutional Monarchy, with further reforms shifting the balance of power from the monarchy and nobility to the House of Commons.
The Nomocanon of Saint Sava (Serbian: Законоправило/Zakonopravilo) was the first Serbian constitution from 1219. St. Sava's Nomocanon was the compilation of civil law, based on Roman Law, and canon law, based on Ecumenical Councils. Its basic purpose was to organize the functioning of the young Serbian kingdom and the Serbian church. Saint Sava began the work on the Serbian Nomocanon in 1208 while he was at Mount Athos, using The Nomocanon in Fourteen Titles, Synopsis of Stefan the Efesian, Nomocanon of John Scholasticus, and Ecumenical Council documents, which he modified with the canonical commentaries of Aristinos and Joannes Zonaras, local church meetings, rules of the Holy Fathers, the law of Moses, the translation of Prohiron, and the Byzantine emperors' Novellae (most were taken from Justinian's Novellae). The Nomocanon was a completely new compilation of civil and canonical regulations, taken from Byzantine sources but completed and reformed by St. Sava to function properly in Serbia. Besides decrees that organized the life of church, there are various norms regarding civil life; most of these were taken from Prohiron. Legal transplants of Roman-Byzantine law became the basis of the Serbian medieval law. The essence of Zakonopravilo was based on Corpus Iuris Civilis.
Stefan Dušan, emperor of Serbs and Greeks, enacted Dušan's Code (Serbian: Душанов Законик/Dušanov Zakonik) in Serbia, in two state congresses: in 1349 in Skopje and in 1354 in Serres. It regulated all social spheres, so it was the second Serbian constitution, after St. Sava's Nomocanon (Zakonopravilo). The Code was based on Roman-Byzantine law. The legal transplanting within articles 171 and 172 of Dušan's Code, which regulated the juridical independence, is notable. They were taken from the Byzantine code Basilika (book VII, 1, 16–17).
In 1222, Hungarian King Andrew II issued the Golden Bull of 1222.
Between 1220 and 1230, a Saxon administrator, Eike von Repgow, composed the Sachsenspiegel, which became the supreme law used in parts of Germany as late as 1900.
Around 1240, the Coptic Egyptian Christian writer, 'Abul Fada'il Ibn al-'Assal, wrote the Fetha Negest in Arabic. 'Ibn al-Assal took his laws partly from apostolic writings and Mosaic law and partly from the former Byzantine codes. There are a few historical records claiming that this law code was translated into Ge'ez and entered Ethiopia around 1450 in the reign of Zara Yaqob. Even so, its first recorded use in the function of a constitution (supreme law of the land) is with Sarsa Dengel beginning in 1563. The Fetha Negest remained the supreme law in Ethiopia until 1931, when a modern-style Constitution was first granted by Emperor Haile Selassie I.
In the Principality of Catalonia, the Catalan constitutions were promulgated by the Court from 1283 (or even two centuries before, if Usatges of Barcelona is considered part of the compilation of Constitutions) until 1716, when Philip V of Spain issued the Nueva Planta decrees, putting an end to the historical laws of Catalonia. These Constitutions were usually made formally as a royal initiative, but required for their approval or repeal the favorable vote of the Catalan Courts, the medieval antecedent of the modern Parliaments. These laws, like other modern constitutions, had preeminence over other laws, and they could not be contradicted by mere decrees or edicts of the king.
The Kouroukan Founga was a 13th-century charter of the Mali Empire, reconstructed from oral tradition in 1988 by Siriman Kouyaté.
The Golden Bull of 1356 was a decree issued by a Reichstag in Nuremberg headed by Emperor Charles IV that fixed, for a period of more than four hundred years, an important aspect of the constitutional structure of the Holy Roman Empire.
In China, the Hongwu Emperor created and refined a document he called Ancestral Injunctions (first published in 1375, revised twice more before his death in 1398). These rules served as a constitution for the Ming Dynasty for the next 250 years.
The oldest written document still governing a sovereign nation today is that of San Marino. The Leges Statutae Republicae Sancti Marini was written in Latin and consists of six books. The first book, with 62 articles, establishes councils, courts, various executive officers, and the powers assigned to them. The remaining books cover criminal and civil law and judicial procedures and remedies. Written in 1600, the document was based upon the Statuti Comunali (Town Statute) of 1300, itself influenced by the Codex Justinianus, and it remains in force today.
In 1392 the Carta de Logu, the legal code of the Giudicato of Arborea, was promulgated by the giudicessa Eleanor. It was in force in Sardinia until it was superseded by the code of Charles Felix in April 1827. The Carta was a work of great importance in Sardinian history. It was an organic, coherent, and systematic work of legislation encompassing the civil and penal law.
The Gayanashagowa, the oral constitution of the Haudenosaunee nation also known as the Great Law of Peace, established a system of governance as far back as 1190 AD (though perhaps more recently, in 1451) in which the Sachems, or tribal chiefs, of the Iroquois League's member nations made decisions on the basis of universal consensus of all chiefs following discussions that were initiated by a single nation. The position of Sachem descends through families and is allocated by the senior female clan heads, though, before the position is filled, candidacy is ultimately decided democratically by the community itself.
In 1634 the Kingdom of Sweden adopted the 1634 Instrument of Government, drawn up under the Lord High Chancellor of Sweden Axel Oxenstierna after the death of King Gustavus Adolphus. It can be seen as the first written constitution adopted by a modern state.
In 1639, the Colony of Connecticut adopted the Fundamental Orders, which was the first North American constitution, and is the basis for every new Connecticut constitution since, and is also the reason for Connecticut's nickname, "the Constitution State".
The English Protectorate that was set up by Oliver Cromwell after the English Civil War promulgated the first detailed written constitution adopted by a modern state; it was called the Instrument of Government. This formed the basis of government for the short-lived republic from 1653 to 1657 by providing a legal rationale for the increasing power of Cromwell after Parliament consistently failed to govern effectively. Most of the concepts and ideas embedded into modern constitutional theory, especially bicameralism, separation of powers, the written constitution, and judicial review, can be traced back to the experiments of that period.
Drafted by Major-General John Lambert in 1653, the Instrument of Government included elements incorporated from an earlier document "Heads of Proposals", which had been agreed to by the Army Council in 1647, as a set of propositions intended to be a basis for a constitutional settlement after King Charles I was defeated in the First English Civil War. Charles had rejected the propositions, but before the start of the Second Civil War, the Grandees of the New Model Army had presented the Heads of Proposals as their alternative to the more radical Agreement of the People presented by the Agitators and their civilian supporters at the Putney Debates.
On 4 January 1649, the Rump Parliament declared "that the people are, under God, the original of all just power; that the Commons of England, being chosen by and representing the people, have the supreme power in this nation".
The Instrument of Government was adopted by Parliament on 15 December 1653, and Oliver Cromwell was installed as Lord Protector on the following day. The constitution set up a state council consisting of 21 members while executive authority was vested in the office of "Lord Protector of the Commonwealth." This position was designated as a non-hereditary life appointment. The Instrument also required the calling of triennial Parliaments, with each sitting for at least five months.
The Instrument of Government was replaced in May 1657 by England's second, and last, codified constitution, the Humble Petition and Advice, proposed by Sir Christopher Packe. The Petition offered hereditary monarchy to Oliver Cromwell, asserted Parliament's control over issuing new taxation, provided an independent council to advise the king and safeguarded "Triennial" meetings of Parliament. A modified version of the Humble Petition with the clause on kingship removed was ratified on 25 May. This finally met its demise in conjunction with the death of Cromwell and the Restoration of the monarchy.
Other examples of European constitutions of this era were the Corsican Constitution of 1755 and the Swedish Constitution of 1772.
All of the British colonies in North America that were to become the 13 original United States, adopted their own constitutions in 1776 and 1777, during the American Revolution (and before the later Articles of Confederation and United States Constitution), with the exceptions of Massachusetts, Connecticut and Rhode Island. The Commonwealth of Massachusetts adopted its Constitution in 1780, the oldest still-functioning constitution of any U.S. state; while Connecticut and Rhode Island officially continued to operate under their old colonial charters, until they adopted their first state constitutions in 1818 and 1843, respectively.
What is sometimes called the "enlightened constitution" model was developed by philosophers of the Age of Enlightenment such as Thomas Hobbes, Jean-Jacques Rousseau, and John Locke. The model proposed that constitutional governments should be stable, adaptable, accountable, open and should represent the people (i.e., support democracy).
Agreements and Constitutions of Laws and Freedoms of the Zaporizian Host was written in 1710 by Pylyp Orlyk, hetman of the Zaporozhian Host. It was written to establish a free Zaporozhian-Ukrainian Republic, with the support of Charles XII of Sweden. It is notable in that it established a democratic standard for the separation of powers in government between the legislative, executive, and judiciary branches, well before the publication of Montesquieu's Spirit of the Laws. This Constitution also limited the executive authority of the hetman, and established a democratically elected Cossack parliament called the General Council. However, Orlyk's project for an independent Ukrainian State never materialized, and his constitution, written in exile, never went into effect.
The Corsican Constitutions of 1755 and 1794 were inspired by Jean-Jacques Rousseau. The latter introduced universal suffrage for property owners.
The Swedish constitution of 1772 was enacted under King Gustavus III and was inspired by Montesquieu's separation of powers. The king also cherished other Enlightenment ideas (as an enlightened despot) and abolished torture, liberalized agricultural trade, diminished the use of the death penalty and instituted a form of religious freedom. The constitution was commended by Voltaire.
The United States Constitution, ratified 21 June 1788, was influenced by the writings of Polybius, Locke, Montesquieu, and others. The document became a benchmark for republicanism and codified constitutions written thereafter.
The Polish–Lithuanian Commonwealth Constitution was passed on 3 May 1791. Its draft was developed by the leading minds of the Enlightenment in Poland such as King Stanislaw August Poniatowski, Stanisław Staszic, Scipione Piattoli, Julian Ursyn Niemcewicz, Ignacy Potocki and Hugo Kołłątaj. It was adopted by the Great Sejm and is considered the first constitution of its kind in Europe and the world's second oldest one after the American Constitution.
Another landmark document was the French Constitution of 1791.
The 1811 Constitution of Venezuela was the first Constitution of Venezuela and of Latin America, drafted by Cristóbal Mendoza and Juan Germán Roscio and promulgated in Caracas. It established a federal government but was repealed one year later.
On 19 March 1812, the Spanish Constitution of 1812 was ratified by a parliament gathered in Cadiz, the only Spanish continental city which was safe from French occupation. The Spanish Constitution served as a model for other liberal constitutions of several South European and Latin American nations, for example, the Portuguese Constitution of 1822, constitutions of various Italian states during Carbonari revolts (i.e., in the Kingdom of the Two Sicilies), the Norwegian constitution of 1814, or the Mexican Constitution of 1824.
In Brazil, the Constitution of 1824 expressed the option for the monarchy as political system after Brazilian Independence. The leader of the national emancipation process was the Portuguese prince Pedro I, elder son of the king of Portugal. Pedro was crowned in 1822 as first emperor of Brazil. The country was ruled by Constitutional monarchy until 1889, when it adopted the Republican model.
In Denmark, as a result of the Napoleonic Wars, the absolute monarchy lost its personal possession of Norway to Sweden. Sweden had already enacted its 1809 Instrument of Government, which saw the division of power between the Riksdag, the king and the judiciary. However the Norwegians managed to infuse a radically democratic and liberal constitution in 1814, adopting many facets from the American constitution and the revolutionary French ones, but maintaining a hereditary monarch limited by the constitution, like the Spanish one.
The first Swiss Federal Constitution was put in force in September 1848 (with official revisions in 1878, 1891, 1949, 1971, 1982 and 1999).
The Serbian revolution initially led to a proclamation of a proto-constitution in 1811; the full-fledged Constitution of Serbia followed few decades later, in 1835. The first Serbian constitution (Sretenjski ustav) was adopted at the national assembly in Kragujevac on 15 February 1835.
The Constitution of Canada came into force on 1 July 1867, as the British North America Act, an act of the British Parliament. Over a century later, the BNA Act was patriated to the Canadian Parliament and augmented with the Canadian Charter of Rights and Freedoms. Apart from the Constitution Acts, 1867 to 1982, Canada's constitution also has unwritten elements based in common law and convention.
After tribal people first began to live in cities and establish nations, many of these functioned according to unwritten customs, while some developed autocratic, even tyrannical monarchs, who ruled by decree, or mere personal whim. Such rule led some thinkers to take the position that what mattered was not the design of governmental institutions and operations, as much as the character of the rulers. This view can be seen in Plato, who called for rule by "philosopher-kings". Later writers, such as Aristotle, Cicero and Plutarch, would examine designs for government from a legal and historical standpoint.
The Renaissance brought a series of political philosophers who wrote implied criticisms of the practices of monarchs and sought to identify principles of constitutional design that would be likely to yield more effective and just governance from their viewpoints. This began with revival of the Roman law of nations concept and its application to the relations among nations, and they sought to establish customary "laws of war and peace" to ameliorate wars and make them less likely. This led to considerations of what authority monarchs or other officials have and don't have, from where that authority derives, and the remedies for the abuse of such authority.
A seminal juncture in this line of discourse arose in England from the Civil War, the Cromwellian Protectorate, the writings of Thomas Hobbes, Samuel Rutherford, the Levellers, John Milton, and James Harrington, leading to the debate between Robert Filmer, arguing for the divine right of monarchs, on the one side, and on the other, Henry Neville, James Tyrrell, Algernon Sidney, and John Locke. What arose from the latter was a concept of government being erected on the foundations of first, a state of nature governed by natural laws, then a state of society, established by a social contract or compact, which bring underlying natural or social laws, before governments are formally established on them as foundations.
Along the way several writers examined how the design of government was important, even if the government were headed by a monarch. They also classified various historical examples of governmental designs, typically into democracies, aristocracies, or monarchies, and considered how just and effective each tended to be and why, and how the advantages of each might be obtained by combining elements of each into a more complex design that balanced competing tendencies. Some, such as Montesquieu, also examined how the functions of government, such as legislative, executive, and judicial, might appropriately be separated into branches. The prevailing theme among these writers was that the design of constitutions is not completely arbitrary or a matter of taste. They generally held that there are underlying principles of design that constrain all constitutions for every polity or organization. Each built on the ideas of those before concerning what those principles might be.
The later writings of Orestes Brownson would try to explain what constitutional designers were trying to do. According to Brownson there are, in a sense, three "constitutions" involved: the first is the constitution of nature, which includes all of what was called "natural law"; the second is the constitution of society, an unwritten and commonly understood set of rules for the society formed by a social contract before it establishes a government, by which it establishes the third, a constitution of government. The second would include such elements as the making of decisions by public conventions called by public notice and conducted by established rules of procedure. Each constitution must be consistent with, and derive its authority from, the ones before it, as well as from a historical act of society formation or constitutional ratification. Brownson argued that a state is a society with effective dominion over a well-defined territory, that consent to a well-designed constitution of government arises from presence on that territory, and that it is possible for provisions of a written constitution of government to be "unconstitutional" if they are inconsistent with the constitutions of nature or society. He further argued that it is not ratification alone that makes a written constitution of government legitimate, but that it must also be competently designed and applied.
Other writers have argued that such considerations apply not only to all national constitutions of government, but also to the constitutions of private organizations: it is not an accident that the constitutions that tend to satisfy their members contain certain elements as a minimum, or that their provisions tend to become very similar as they are amended after experience with their use. Provisions that give rise to certain kinds of questions are seen to need additional provisions for how to resolve those questions, and provisions that offer no course of action may best be omitted and left to policy decisions. Provisions that conflict with what Brownson and others discern to be the underlying "constitutions" of nature and society tend to be difficult or impossible to execute, or to lead to unresolvable disputes.
Constitutional design has been treated as a kind of metagame in which play consists of finding the best design and provisions for a written constitution that will be the rules for the game of government, and that will be most likely to optimize a balance of the utilities of justice, liberty, and security. An example is the metagame Nomic.
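To make the metagame framing concrete, the following is a minimal, purely illustrative sketch in Python (the class names, rule numbers and thresholds are hypothetical and not drawn from Nomic's actual published rules): the rules are data, and one rule sets the threshold that any proposed rule change must meet, so amending that rule changes how the game itself is played thereafter.

```python
# Minimal, hypothetical sketch of a Nomic-style self-amending rule set.
# Class and method names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Rule:
    number: int
    text: str


@dataclass
class RuleBook:
    rules: dict = field(default_factory=dict)          # rule number -> Rule
    amendment_threshold: float = 0.5                    # share of votes a change must exceed

    def propose_change(self, number, new_text, votes_for, votes_total):
        """Adopt a change only if it clears the threshold set by the current rules."""
        if votes_total == 0 or votes_for / votes_total <= self.amendment_threshold:
            return False
        self.rules[number] = Rule(number, new_text)
        if number == 0:                                  # rule 0 is the meta-rule in this sketch
            self.amendment_threshold = float(new_text)   # changing it changes the game itself
        return True


book = RuleBook(rules={0: Rule(0, "0.5"), 1: Rule(1, "Players take turns proposing rule changes.")})
book.propose_change(0, "0.66", votes_for=3, votes_total=4)  # raise the bar for future amendments
print(book.amendment_threshold)                             # 0.66
```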
Political economy theory regards constitutions as coordination devices that help citizens to prevent rulers from abusing power. If the citizenry can coordinate a response to police government officials in the face of a constitutional fault, then the government has an incentive to honor the rights that the constitution guarantees. An alternative view holds that constitutions are not enforced by the citizens at large, but rather by the administrative powers of the state. Because rulers cannot themselves implement their policies, they need to rely on a set of organizations (armies, courts, police agencies, tax collectors) to implement them. These organizations, in turn, can directly sanction the government by refusing to cooperate, disabling the authority of the rulers. Constitutions could therefore be characterized as self-enforcing equilibria between the rulers and powerful administrators.
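As a rough illustration of this coordination argument, consider a stylised two-citizen game (the payoffs below are invented for illustration, not taken from the literature): resisting a transgression pays off only if the other citizen resists too, so both universal resistance and universal acquiescence are equilibria, and a publicly known constitution can act as the focal point that selects the first.

```python
# Toy 2x2 coordination game between two citizens after a constitutional violation.
# Payoff numbers are purely illustrative.
from itertools import product

ACTIONS = ("resist", "acquiesce")

# payoffs[(a1, a2)] = (payoff to citizen 1, payoff to citizen 2)
payoffs = {
    ("resist", "resist"):       (2, 2),   # coordinated resistance deters the ruler
    ("resist", "acquiesce"):    (-1, 0),  # a lone resister is punished
    ("acquiesce", "resist"):    (0, -1),
    ("acquiesce", "acquiesce"): (0, 0),   # the transgression stands
}

def is_nash(a1, a2):
    """Neither citizen can gain by unilaterally switching actions."""
    u1, u2 = payoffs[(a1, a2)]
    return (all(u1 >= payoffs[(alt, a2)][0] for alt in ACTIONS) and
            all(u2 >= payoffs[(a1, alt)][1] for alt in ACTIONS))

print([profile for profile in product(ACTIONS, repeat=2) if is_nash(*profile)])
# [('resist', 'resist'), ('acquiesce', 'acquiesce')] -- two equilibria; the constitution
# serves as the shared focal point that coordinates citizens on the first one.
```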
Most commonly, the term constitution refers to a set of rules and principles that define the nature and extent of government. Most constitutions seek to regulate the relationship between institutions of the state, in a basic sense the relationship between the executive, legislature and the judiciary, but also the relationship of institutions within those branches. For example, executive branches can be divided into a head of government, government departments/ministries, executive agencies and a civil service/administration. Most constitutions also attempt to define the relationship between individuals and the state, and to establish the broad rights of individual citizens. It is thus the most basic law of a territory from which all the other laws and rules are hierarchically derived; in some territories it is in fact called "Basic Law".
A fundamental classification is codification or lack of codification. A codified constitution is one that is contained in a single document, which is the single source of constitutional law in a state. An uncodified constitution is one that is not contained in a single document, consisting of several different sources, which may be written or unwritten; see constitutional convention.
Most states in the world have codified constitutions.
Codified constitutions are often the product of some dramatic political change, such as a revolution. The process by which a country adopts a constitution is closely tied to the historical and political context driving this fundamental change. The legitimacy, and often the longevity, of codified constitutions is frequently tied to the process by which they are initially adopted, and some scholars have pointed out that high constitutional turnover within a given country may itself be detrimental to separation of powers and the rule of law.
States that have codified constitutions normally give the constitution supremacy over ordinary statute law. That is, if there is any conflict between a legal statute and the codified constitution, all or part of the statute can be declared ultra vires by a court and struck down as unconstitutional. In addition, exceptional procedures are often required to amend a constitution. These procedures may include the convocation of a special constituent assembly or constitutional convention, a supermajority of legislators' votes, approval in two terms of parliament, the consent of regional legislatures, a referendum, or other procedures that make amending a constitution more difficult than passing a simple law.
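As a purely illustrative sketch (the thresholds and field names below are hypothetical and not taken from any particular constitution), these special procedures can be thought of as a series of hurdles that an amendment must clear on top of an ordinary legislative majority:

```python
# Hypothetical check of whether a proposed amendment clears the special procedures
# a codified constitution might impose; an ordinary statute would need only a simple majority.
from dataclasses import dataclass


@dataclass
class AmendmentVote:
    legislators_for: int
    legislators_total: int
    regions_consenting: int
    regions_total: int
    referendum_passed: bool


def amendment_adopted(v, supermajority=2 / 3, regional_share=0.75, needs_referendum=True):
    if v.legislators_for / v.legislators_total < supermajority:
        return False                                   # fails the supermajority requirement
    if v.regions_consenting / v.regions_total < regional_share:
        return False                                   # fails consent of regional legislatures
    if needs_referendum and not v.referendum_passed:
        return False                                   # fails the referendum requirement
    return True


vote = AmendmentVote(legislators_for=290, legislators_total=400,
                     regions_consenting=8, regions_total=10, referendum_passed=True)
print(amendment_adopted(vote))  # True: 72.5% of legislators, 8 of 10 regions, referendum passed
```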
Constitutions may also provide that their most basic principles can never be abolished, even by amendment. Where a formally valid amendment infringes these principles, which are protected against any amendment, it may constitute so-called unconstitutional constitutional law.
Codified constitutions normally consist of a ceremonial preamble, which sets forth the goals of the state and the motivation for the constitution, and several articles containing the substantive provisions. The preamble, which is omitted in some constitutions, may contain a reference to God and/or to fundamental values of the state such as liberty, democracy or human rights. In ethnic nation-states such as Estonia, the mission of the state can be defined as preserving a specific nation, language and culture.
As of 2017, only two sovereign states, New Zealand and the United Kingdom, have wholly uncodified constitutions. The Basic Laws of Israel have, since 1950, been intended to be the basis for a constitution, but as of 2017 such a constitution had not been drafted. The various Basic Laws are considered to have precedence over other laws, and give the procedure by which they can be amended, typically by a simple majority of members of the Knesset (parliament).
Uncodified constitutions are the product of an "evolution" of laws and conventions over centuries (such as in the Westminster System that developed in Britain). By contrast to codified constitutions, uncodified constitutions include both written sources – e.g. constitutional statutes enacted by the Parliament – and unwritten sources – constitutional conventions, observation of precedents, royal prerogatives, customs and traditions, such as holding general elections on Thursdays; together these constitute British constitutional law.
Some constitutions are largely, but not wholly, codified. For example, in the Constitution of Australia, most of the fundamental political principles and regulations concerning the relationship between branches of government, and concerning the government and the individual, are codified in a single document, the Constitution of the Commonwealth of Australia. However, the presence of statutes with constitutional significance, namely the Statute of Westminster, as adopted by the Commonwealth in the Statute of Westminster Adoption Act 1942, and the Australia Act 1986, means that Australia's constitution is not contained in a single constitutional document. The Constitution of Australia is therefore not fully codified; it also includes constitutional conventions and is thus partially unwritten.
The Constitution of Canada resulted from the passage of several British North America Acts from 1867 to the Canada Act 1982, the act that formally severed the British Parliament's ability to amend the Canadian constitution. The Canadian constitution includes specific legislative acts as mentioned in section 52(2) of the Constitution Act, 1982. However, some documents not explicitly listed in section 52(2), such as the Proclamation of 1763, are also considered constitutional documents in Canada, entrenched via reference. Although Canada's constitution includes a number of different statutes, amendments, and references, some constitutional rules that exist in Canada are derived from unwritten sources and constitutional conventions.
The terms written constitution and codified constitution are often used interchangeably, as are unwritten constitution and uncodified constitution, although this usage is technically inaccurate. A codified constitution is a single document; states that do not have such a document have uncodified, but not entirely unwritten, constitutions, since much of an uncodified constitution is usually written in laws such as the Basic Laws of Israel and the Parliament Acts of the United Kingdom. Uncodified constitutions largely lack protection against amendment by the government of the time. For example, the U.K. Fixed-term Parliaments Act 2011 legislated by simple majority for strictly fixed-term parliaments; until then the ruling party could call a general election at any convenient time up to the maximum term of five years. This change would require a constitutional amendment in most nations.
A constitutional amendment is a modification of the constitution of a polity, organization or other type of entity. Amendments are often interwoven into the relevant sections of an existing constitution, directly altering the text. Conversely, they can be appended to the constitution as supplemental additions (codicils), thus changing the frame of government without altering the existing text of the document.
Most constitutions require that amendments cannot be enacted unless they have passed a special procedure that is more stringent than that required of ordinary legislation.
In some countries, more than one of these procedures may be available as alternatives.
An entrenched clause or entrenchment clause of a basic law or constitution is a provision that makes certain amendments either more difficult or impossible to pass. Overriding an entrenched clause may require a supermajority, a referendum, or the consent of the minority party. For example, the U.S. Constitution has an entrenched clause that prohibits abolishing the equal suffrage of the states within the Senate without their consent. The term eternity clause is used in a similar manner in the constitutions of the Czech Republic, Germany, Turkey, Greece, Italy, Morocco, the Islamic Republic of Iran, Brazil and Norway. India's constitution does not contain specific provisions on entrenched clauses, but the basic structure doctrine makes it impossible for certain basic features of the Constitution to be altered or destroyed by the Parliament of India through an amendment. The Constitution of Colombia also lacks explicit entrenched clauses, but has a similar substantive limit on amending its fundamental principles through judicial interpretations.
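In the same hypothetical spirit as the amendment-procedure sketch above, an entrenched or eternity clause can be modelled as a check that blocks amendment of certain provisions outright, however large the majority in favour (the provision labels below are invented for illustration):

```python
# Hypothetical illustration of entrenched ("eternity") clauses: entrenched provisions
# cannot be amended regardless of the majority achieved.
ENTRENCHED = {
    "equal suffrage of the states in the upper chamber",    # illustrative label only
    "the republican form of government",                     # illustrative label only
}


def can_amend(provision, special_procedure_met):
    if provision in ENTRENCHED:
        return False                  # inadmissible even with unanimous support
    return special_procedure_met      # otherwise the ordinary special procedure decides


print(can_amend("equal suffrage of the states in the upper chamber", special_procedure_met=True))  # False
print(can_amend("length of legislative terms", special_procedure_met=True))                        # True
```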
Constitutions also typically set out various rights and duties.
Constitutions usually explicitly divide power between various branches of government. The standard model, described by the Baron de Montesquieu, involves three branches of government: executive, legislative and judicial. Some constitutions include additional branches, such as an auditing branch. Constitutions vary extensively as to the degree of separation of powers between these branches.
In presidential and semi-presidential systems of government, department secretaries/ministers are accountable to the president, who has patronage powers to appoint and dismiss ministers. The president is accountable to the people in an election.
In parliamentary systems, cabinet ministers are accountable to parliament, but it is the prime minister who appoints and dismisses them. In the case of the United Kingdom and other countries with a monarchy, it is the monarch who appoints and dismisses ministers, on the advice of the prime minister. In turn, the prime minister will resign if the government loses the confidence of the parliament (or a part of it). Confidence can be lost if the government loses a vote of no confidence or, depending on the country, loses a particularly important vote in parliament, such as a vote on the budget. When a government loses confidence, it stays in office until a new government is formed, something which normally, but not necessarily, requires the holding of a general election.
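The sequence described above can be summarised, in a deliberately simplified and hypothetical form, as a small state machine: losing confidence does not vacate office immediately, and the transition to a new government normally, but not necessarily, passes through a general election.

```python
# Simplified, illustrative state machine for the parliamentary confidence convention.
from enum import Enum, auto


class Government(Enum):
    IN_OFFICE = auto()
    CARETAKER = auto()   # has lost confidence but stays on until a successor is formed
    REPLACED = auto()


def lose_confidence(state):
    """A lost no-confidence vote (or lost budget vote) does not vacate office immediately."""
    return Government.CARETAKER if state is Government.IN_OFFICE else state


def form_new_government(state):
    """Normally, but not necessarily, preceded by a general election."""
    return Government.REPLACED if state is Government.CARETAKER else state


state = Government.IN_OFFICE
state = lose_confidence(state)        # -> CARETAKER: remains in office for the time being
state = form_new_government(state)    # -> REPLACED, typically after a general election
print(state)                          # Government.REPLACED
```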
Other independent institutions which some constitutions have set out include a central bank, an anti-corruption commission, an electoral commission, a judicial oversight body, a human rights commission, a media commission, an ombudsman, and a truth and reconciliation commission.
Constitutions also establish where sovereignty is located in the state. There are three basic types of distribution of sovereignty according to the degree of centralisation of power: unitary, federal, and confederal. The distinction is not absolute.
In a unitary state, sovereignty resides in the state itself, and the constitution determines this. The territory of the state may be divided into regions, but they are not sovereign and are subordinate to the state. In the UK, the constitutional doctrine of Parliamentary sovereignty dictates that sovereignty is ultimately contained at the centre. Some powers have been devolved to Northern Ireland, Scotland, and Wales (but not England). Some unitary states (Spain is an example) devolve more and more power to sub-national governments until the state functions in practice much like a federal state.
A federal state has a central structure, with at most a small amount of territory of its own mainly containing the institutions of the federal government, and several regions (called states, provinces, etc.) which compose the territory of the whole state. Sovereignty is divided between the centre and the constituent regions. The constitutions of Canada and the United States establish federal states, with power divided between the federal government and the provinces or states. Each of the regions may in turn have its own constitution (of unitary nature).
A confederal state again comprises several regions, but the central structure has only limited coordinating power, and sovereignty is located in the regions. Confederal constitutions are rare, and there is often dispute as to whether so-called "confederal" states are actually federal.
To some extent a group of states which do not constitute a federation as such may by treaties and accords give up parts of their sovereignty to a supranational entity. For example, the countries constituting the European Union have agreed to abide by some Union-wide measures which restrict their absolute sovereignty in some ways, e.g., the use of the metric system of measurement instead of national units previously used.
Many constitutions allow the declaration under exceptional circumstances of some form of state of emergency during which some rights and guarantees are suspended. This provision can be and has been abused to allow a government to suppress dissent without regard for human rights – see the article on state of emergency.
Italian political theorist Giovanni Sartori noted the existence of national constitutions which are a facade for authoritarian sources of power. While such documents may express respect for human rights or establish an independent judiciary, they may be ignored when the government feels threatened, or never put into practice. An extreme example was the Constitution of the Soviet Union that on paper supported freedom of assembly and freedom of speech; however, citizens who transgressed unwritten limits were summarily imprisoned. The example demonstrates that the protections and benefits of a constitution are ultimately provided not through its written terms but through deference by government and society to its principles. A constitution may change from being real to a facade and back again as democratic and autocratic governments succeed each other.
Constitutions are often, but by no means always, protected by a legal body whose job it is to interpret those constitutions and, where applicable, declare void executive and legislative acts which infringe the constitution. In some countries, such as Germany, this function is carried out by a dedicated constitutional court which performs this (and only this) function. In other countries, such as Ireland, the ordinary courts may perform this function in addition to their other responsibilities, while elsewhere, as in the United Kingdom, the concept of declaring an act to be unconstitutional does not exist.
A constitutional violation is an action or legislative act that is judged by a constitutional court to be contrary to the constitution, that is, unconstitutional. An example of constitutional violation by the executive could be a public office holder who acts outside the powers granted to that office by a constitution. An example of constitutional violation by the legislature is an attempt to pass a law that would contradict the constitution, without first going through the proper constitutional amendment process.
Some countries, mainly those with uncodified constitutions, have no such courts at all. For example, the United Kingdom has traditionally operated under the principle of parliamentary sovereignty under which the laws passed by United Kingdom Parliament could not be questioned by the courts.
Judicial philosophies of constitutional interpretation (note: generally specific to United States constitutional law) | [
{
"paragraph_id": 0,
"text": "A constitution is the aggregate of fundamental principles or established precedents that constitute the legal basis of a polity, organization or other type of entity, and commonly determines how that entity is to be governed.",
"title": ""
},
{
"paragraph_id": 1,
"text": "When these principles are written down into a single document or set of legal documents, those documents may be said to embody a written constitution; if they are encompassed in a single comprehensive document, it is said to embody a codified constitution. The Constitution of the United Kingdom is a notable example of an uncodified constitution; it is instead written in numerous fundamental Acts of a legislature, court cases, or treaties.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Constitutions concern different levels of organizations, from sovereign countries to companies and unincorporated associations. A treaty that establishes an international organization is also its constitution, in that it would define how that organization is constituted. Within states, a constitution defines the principles upon which the state is based, the procedure in which laws are made and by whom. Some constitutions, especially codified constitutions, also act as limiters of state power, by establishing lines which a state's rulers cannot cross, such as fundamental rights.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The Constitution of India is the longest written constitution of any country in the world, with 146,385 words in its English-language version, while the Constitution of Monaco is the shortest written constitution with 3,814 words. The Constitution of San Marino might be the world's oldest active written constitution, since some of its core documents have been in operation since 1600, while the Constitution of the United States is the oldest active codified constitution. The historical life expectancy of a constitution since 1789 is approximately 19 years.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The term constitution comes through French from the Latin word constitutio, used for regulations and orders, such as the imperial enactments (constitutiones principis: edicta, mandata, decreta, rescripta). Later, the term was widely used in canon law for an important determination, especially a decree issued by the Pope, now referred to as an apostolic constitution.",
"title": "Etymology"
},
{
"paragraph_id": 5,
"text": "William Blackstone used the term for significant and egregious violations of public trust, of a nature and extent that the transgression would justify a revolutionary response. The term as used by Blackstone was not for a legal text, nor did he intend to include the later American concept of judicial review: \"for that were to set the judicial power above that of the legislature, which would be subversive of all government\".",
"title": "Etymology"
},
{
"paragraph_id": 6,
"text": "Generally, every modern written constitution confers specific powers on an organization or institutional entity, established upon the primary condition that it abides by the constitution's limitations. According to Scott Gordon, a political organization is constitutional to the extent that it \"contain[s] institutionalized mechanisms of power control for the protection of the interests and liberties of the citizenry, including those that may be in the minority\".",
"title": "General features"
},
{
"paragraph_id": 7,
"text": "Activities of officials within an organization or polity that fall within the constitutional or statutory authority of those officials are termed \"within power\" (or, in Latin, intra vires); if they do not, they are termed \"beyond power\" (or, in Latin, ultra vires). For example, a students' union may be prohibited as an organization from engaging in activities not concerning students; if the union becomes involved in non-student activities, these activities are considered to be ultra vires of the union's charter, and nobody would be compelled by the charter to follow them. An example from the constitutional law of sovereign states would be a provincial parliament in a federal state trying to legislate in an area that the constitution allocates exclusively to the federal parliament, such as ratifying a treaty. Action that appears to be beyond power may be judicially reviewed and, if found to be beyond power, must cease. Legislation that is found to be beyond power will be \"invalid\" and of no force; this applies to primary legislation, requiring constitutional authorization, and secondary legislation, ordinarily requiring statutory authorization. In this context, \"within power\", intra vires, \"authorized\" and \"valid\" have the same meaning; as do \"beyond power\", ultra vires, \"not authorized\" and \"invalid\".",
"title": "General features"
},
{
"paragraph_id": 8,
"text": "In most but not all modern states the constitution has supremacy over ordinary statutory law (see Uncodified constitution below); in such states when an official act is unconstitutional, i.e. it is not a power granted to the government by the constitution, that act is null and void, and the nullification is ab initio, that is, from inception, not from the date of the finding. It was never \"law\", even though, if it had been a statute or statutory provision, it might have been adopted according to the procedures for adopting legislation. Sometimes the problem is not that a statute is unconstitutional, but that the application of it is, on a particular occasion, and a court may decide that while there are ways it could be applied that are constitutional, that instance was not allowed or legitimate. In such a case, only that application may be ruled unconstitutional. Historically, the remedies for such violations have been petitions for common law writs, such as quo warranto.",
"title": "General features"
},
{
"paragraph_id": 9,
"text": "Scholars debate whether a constitution must necessarily be autochthonous, resulting from the nations \"spirit\". Hegel said \"A constitution...is the work of centuries; it is the idea, the consciousness of rationality so far as that consciousness is developed in a particular nation.\"",
"title": "General features"
},
{
"paragraph_id": 10,
"text": "Since 1789, along with the Constitution of the United States of America (U.S. Constitution), which is the oldest and shortest written constitution still in force, close to 800 constitutions have been adopted and subsequently amended around the world by independent states.",
"title": "History and development"
},
{
"paragraph_id": 11,
"text": "In the late 18th century, Thomas Jefferson predicted that a period of 20 years would be the optimal time for any constitution to be still in force, since \"the earth belongs to the living, and not to the dead\". Indeed, according to recent studies, the average life of any new written constitution is around 19 years. However, a great number of constitutions do not last more than 10 years, and around 10% do not last more than one year, as was the case of the French Constitution of 1791. By contrast, some constitutions, notably that of the United States, have remained in force for several centuries, often without major revision for long periods of time.",
"title": "History and development"
},
{
"paragraph_id": 12,
"text": "The most common reasons for these frequent changes are the political desire for an immediate outcome and the short time devoted to the constitutional drafting process. A study in 2009 showed that the average time taken to draft a constitution is around 16 months, however there were also some extreme cases registered. For example, the Myanmar 2008 Constitution was being secretly drafted for more than 17 years, whereas at the other extreme, during the drafting of Japan's 1946 Constitution, the bureaucrats drafted everything in no more than a week. Japan has the oldest unamended constitution in the world. The record for the shortest overall process of drafting, adoption, and ratification of a national constitution belongs to the Romania's 1938 constitution, which installed a royal dictatorship in less than a month. Studies showed that typically extreme cases where the constitution-making process either takes too long or is extremely short were non-democracies.",
"title": "History and development"
},
{
"paragraph_id": 13,
"text": "In principle, constitutional rights are not a specific characteristic of democratic countries. Autocratic states have constitutions, such as that of North Korea, which officially grants every citizen, among other things, the freedom of expression. However, the extent to which governments abide by their own constitutional provisions varies. In North Korea, for example, the Ten Principles for the Establishment of a Monolithic Ideological System are said to have eclipsed the constitution in importance as a frame of government in practice. Developing a legal and political tradition of strict adherence to constitutional provisions is considered foundational to the rule of law.",
"title": "History and development"
},
{
"paragraph_id": 14,
"text": "Excavations in modern-day Iraq by Ernest de Sarzec in 1877 found evidence of the earliest known code of justice, issued by the Sumerian king Urukagina of Lagash c. 2300 BC. Perhaps the earliest prototype for a law of government, this document itself has not yet been discovered; however it is known that it allowed some rights to his citizens. For example, it is known that it relieved tax for widows and orphans, and protected the poor from the usury of the rich.",
"title": "History and development"
},
{
"paragraph_id": 15,
"text": "After that, many governments ruled by special codes of written laws. The oldest such document still known to exist seems to be the Code of Ur-Nammu of Ur (c. 2050 BC). Some of the better-known ancient law codes are the code of Lipit-Ishtar of Isin, the code of Hammurabi of Babylonia, the Hittite code, the Assyrian code, and Mosaic law.",
"title": "History and development"
},
{
"paragraph_id": 16,
"text": "In 621 BC, a scribe named Draco codified the oral laws of the city-state of Athens; this code prescribed the death penalty for many offenses (thus creating the modern term \"draconian\" for very strict rules). In 594 BC, Solon, the ruler of Athens, created the new Solonian Constitution. It eased the burden of the workers, and determined that membership of the ruling class was to be based on wealth (plutocracy), rather than on birth (aristocracy). Cleisthenes again reformed the Athenian constitution and set it on a democratic footing in 508 BC.",
"title": "History and development"
},
{
"paragraph_id": 17,
"text": "Aristotle (c. 350 BC) was the first to make a formal distinction between ordinary law and constitutional law, establishing ideas of constitution and constitutionalism, and attempting to classify different forms of constitutional government. The most basic definition he used to describe a constitution in general terms was \"the arrangement of the offices in a state\". In his works Constitution of Athens, Politics, and Nicomachean Ethics, he explores different constitutions of his day, including those of Athens, Sparta, and Carthage. He classified both what he regarded as good and what he regarded as bad constitutions, and came to the conclusion that the best constitution was a mixed system including monarchic, aristocratic, and democratic elements. He also distinguished between citizens, who had the right to participate in the state, and non-citizens and slaves, who did not.",
"title": "History and development"
},
{
"paragraph_id": 18,
"text": "The Romans initially codified their constitution in 450 BC as the Twelve Tables. They operated under a series of laws that were added from time to time, but Roman law was not reorganized into a single code until the Codex Theodosianus (438 AD); later, in the Eastern Empire, the Codex repetitæ prælectionis (534) was highly influential throughout Europe. This was followed in the east by the Ecloga of Leo III the Isaurian (740) and the Basilica of Basil I (878).",
"title": "History and development"
},
{
"paragraph_id": 19,
"text": "The Edicts of Ashoka established constitutional principles for the 3rd century BC Maurya king's rule in India. For constitutional principles almost lost to antiquity, see the code of Manu.",
"title": "History and development"
},
{
"paragraph_id": 20,
"text": "Many of the Germanic peoples that filled the power vacuum left by the Western Roman Empire in the Early Middle Ages codified their laws. One of the first of these Germanic law codes to be written was the Visigothic Code of Euric (471 AD). This was followed by the Lex Burgundionum, applying separate codes for Germans and for Romans; the Pactus Alamannorum; and the Salic Law of the Franks, all written soon after 500. In 506, the Breviarum or \"Lex Romana\" of Alaric II, king of the Visigoths, adopted and consolidated the Codex Theodosianus together with assorted earlier Roman laws. Systems that appeared somewhat later include the Edictum Rothari of the Lombards (643), the Lex Visigothorum (654), the Lex Alamannorum (730), and the Lex Frisionum (c. 785). These continental codes were all composed in Latin, while Anglo-Saxon was used for those of England, beginning with the Code of Æthelberht of Kent (602). Around 893, Alfred the Great combined this and two other earlier Saxon codes, with various Mosaic and Christian precepts, to produce the Doom book code of laws for England.",
"title": "History and development"
},
{
"paragraph_id": 21,
"text": "Japan's Seventeen-article constitution written in 604, reportedly by Prince Shōtoku, is an early example of a constitution in Asian political history. Influenced by Buddhist teachings, the document focuses more on social morality than on institutions of government, and remains a notable early attempt at a government constitution.",
"title": "History and development"
},
{
"paragraph_id": 22,
"text": "The Constitution of Medina (Arabic: صحیفة المدینه, Ṣaḥīfat al-Madīna), also known as the Charter of Medina, was drafted by the Islamic prophet Muhammad after his flight (hijra) to Yathrib where he became political leader. It constituted a formal agreement between Muhammad and all of the significant tribes and families of Yathrib (later known as Medina), including Muslims, Jews, and pagans. The document was drawn up with the explicit concern of bringing to an end the bitter intertribal fighting between the clans of the Aws (Aus) and Khazraj within Medina. To this effect it instituted a number of rights and responsibilities for the Muslim, Jewish, and pagan communities of Medina bringing them within the fold of one community – the Ummah. The precise dating of the Constitution of Medina remains debated, but generally scholars agree it was written shortly after the Hijra (622).",
"title": "History and development"
},
{
"paragraph_id": 23,
"text": "In Wales, the Cyfraith Hywel (Law of Hywel) was codified by Hywel Dda c. 942–950.",
"title": "History and development"
},
{
"paragraph_id": 24,
"text": "The Pravda Yaroslava, originally combined by Yaroslav the Wise the Grand Prince of Kiev, was granted to Great Novgorod around 1017, and in 1054 was incorporated into the Russkaya Pravda; it became the law for all of Kievan Rus'. It survived only in later editions of the 15th century.",
"title": "History and development"
},
{
"paragraph_id": 25,
"text": "In England, Henry I's proclamation of the Charter of Liberties in 1100 bound the king for the first time in his treatment of the clergy and the nobility. This idea was extended and refined by the English barony when they forced King John to sign Magna Carta in 1215. The most important single article of the Magna Carta, related to \"habeas corpus\", provided that the king was not permitted to imprison, outlaw, exile or kill anyone at a whim – there must be due process of law first. This article, Article 39, of the Magna Carta read:",
"title": "History and development"
},
{
"paragraph_id": 26,
"text": "No free man shall be arrested, or imprisoned, or deprived of his property, or outlawed, or exiled, or in any way destroyed, nor shall we go against him or send against him, unless by legal judgement of his peers, or by the law of the land.",
"title": "History and development"
},
{
"paragraph_id": 27,
"text": "This provision became the cornerstone of English liberty after that point. The social contract in the original case was between the king and the nobility, but was gradually extended to all of the people. It led to the system of Constitutional Monarchy, with further reforms shifting the balance of power from the monarchy and nobility to the House of Commons.",
"title": "History and development"
},
{
"paragraph_id": 28,
"text": "The Nomocanon of Saint Sava (Serbian: Законоправило/Zakonopravilo) was the first Serbian constitution from 1219. St. Sava's Nomocanon was the compilation of civil law, based on Roman Law, and canon law, based on Ecumenical Councils. Its basic purpose was to organize the functioning of the young Serbian kingdom and the Serbian church. Saint Sava began the work on the Serbian Nomocanon in 1208 while he was at Mount Athos, using The Nomocanon in Fourteen Titles, Synopsis of Stefan the Efesian, Nomocanon of John Scholasticus, and Ecumenical Council documents, which he modified with the canonical commentaries of Aristinos and Joannes Zonaras, local church meetings, rules of the Holy Fathers, the law of Moses, the translation of Prohiron, and the Byzantine emperors' Novellae (most were taken from Justinian's Novellae). The Nomocanon was a completely new compilation of civil and canonical regulations, taken from Byzantine sources but completed and reformed by St. Sava to function properly in Serbia. Besides decrees that organized the life of church, there are various norms regarding civil life; most of these were taken from Prohiron. Legal transplants of Roman-Byzantine law became the basis of the Serbian medieval law. The essence of Zakonopravilo was based on Corpus Iuris Civilis.",
"title": "History and development"
},
{
"paragraph_id": 29,
"text": "Stefan Dušan, emperor of Serbs and Greeks, enacted Dušan's Code (Serbian: Душанов Законик/Dušanov Zakonik) in Serbia, in two state congresses: in 1349 in Skopje and in 1354 in Serres. It regulated all social spheres, so it was the second Serbian constitution, after St. Sava's Nomocanon (Zakonopravilo). The Code was based on Roman-Byzantine law. The legal transplanting within articles 171 and 172 of Dušan's Code, which regulated the juridical independence, is notable. They were taken from the Byzantine code Basilika (book VII, 1, 16–17).",
"title": "History and development"
},
{
"paragraph_id": 30,
"text": "In 1222, Hungarian King Andrew II issued the Golden Bull of 1222.",
"title": "History and development"
},
{
"paragraph_id": 31,
"text": "Between 1220 and 1230, a Saxon administrator, Eike von Repgow, composed the Sachsenspiegel, which became the supreme law used in parts of Germany as late as 1900.",
"title": "History and development"
},
{
"paragraph_id": 32,
"text": "Around 1240, the Coptic Egyptian Christian writer, 'Abul Fada'il Ibn al-'Assal, wrote the Fetha Negest in Arabic. 'Ibn al-Assal took his laws partly from apostolic writings and Mosaic law and partly from the former Byzantine codes. There are a few historical records claiming that this law code was translated into Ge'ez and entered Ethiopia around 1450 in the reign of Zara Yaqob. Even so, its first recorded use in the function of a constitution (supreme law of the land) is with Sarsa Dengel beginning in 1563. The Fetha Negest remained the supreme law in Ethiopia until 1931, when a modern-style Constitution was first granted by Emperor Haile Selassie I.",
"title": "History and development"
},
{
"paragraph_id": 33,
"text": "In the Principality of Catalonia, the Catalan constitutions were promulgated by the Court from 1283 (or even two centuries before, if Usatges of Barcelona is considered part of the compilation of Constitutions) until 1716, when Philip V of Spain gave the Nueva Planta decrees, finishing with the historical laws of Catalonia. These Constitutions were usually made formally as a royal initiative, but required for its approval or repeal the favorable vote of the Catalan Courts, the medieval antecedent of the modern Parliaments. These laws, like other modern constitutions, had preeminence over other laws, and they could not be contradicted by mere decrees or edicts of the king.",
"title": "History and development"
},
{
"paragraph_id": 34,
"text": "The Kouroukan Founga was a 13th-century charter of the Mali Empire, reconstructed from oral tradition in 1988 by Siriman Kouyaté.",
"title": "History and development"
},
{
"paragraph_id": 35,
"text": "The Golden Bull of 1356 was a decree issued by a Reichstag in Nuremberg headed by Emperor Charles IV that fixed, for a period of more than four hundred years, an important aspect of the constitutional structure of the Holy Roman Empire.",
"title": "History and development"
},
{
"paragraph_id": 36,
"text": "In China, the Hongwu Emperor created and refined a document he called Ancestral Injunctions (first published in 1375, revised twice more before his death in 1398). These rules served as a constitution for the Ming Dynasty for the next 250 years.",
"title": "History and development"
},
{
"paragraph_id": 37,
"text": "The oldest written document still governing a sovereign nation today is that of San Marino. The Leges Statutae Republicae Sancti Marini was written in Latin and consists of six books. The first book, with 62 articles, establishes councils, courts, various executive officers, and the powers assigned to them. The remaining books cover criminal and civil law and judicial procedures and remedies. Written in 1600, the document was based upon the Statuti Comunali (Town Statute) of 1300, itself influenced by the Codex Justinianus, and it remains in force today.",
"title": "History and development"
},
{
"paragraph_id": 38,
"text": "In 1392 the Carta de Logu was legal code of the Giudicato of Arborea promulgated by the giudicessa Eleanor. It was in force in Sardinia until it was superseded by the code of Charles Felix in April 1827. The Carta was a work of great importance in Sardinian history. It was an organic, coherent, and systematic work of legislation encompassing the civil and penal law.",
"title": "History and development"
},
{
"paragraph_id": 39,
"text": "The Gayanashagowa, the oral constitution of the Haudenosaunee nation also known as the Great Law of Peace, established a system of governance as far back as 1190 AD (though perhaps more recently at 1451) in which the Sachems, or tribal chiefs, of the Iroquois League's member nations made decisions on the basis of universal consensus of all chiefs following discussions that were initiated by a single nation. The position of Sachem descends through families and are allocated by the senior female clan heads, though, prior to the filling of the position, candidacy is ultimately democratically decided by the community itself.",
"title": "History and development"
},
{
"paragraph_id": 40,
"text": "In 1634 the Kingdom of Sweden adopted the 1634 Instrument of Government, drawn up under the Lord High Chancellor of Sweden Axel Oxenstierna after the death of king Gustavus Adolphus, it can be seen as the first written constitution adopted by a modern state.",
"title": "History and development"
},
{
"paragraph_id": 41,
"text": "In 1639, the Colony of Connecticut adopted the Fundamental Orders, which was the first North American constitution, and is the basis for every new Connecticut constitution since, and is also the reason for Connecticut's nickname, \"the Constitution State\".",
"title": "History and development"
},
{
"paragraph_id": 42,
"text": "The English Protectorate that was set up by Oliver Cromwell after the English Civil War promulgated the first detailed written constitution adopted by a modern state; it was called the Instrument of Government. This formed the basis of government for the short-lived republic from 1653 to 1657 by providing a legal rationale for the increasing power of Cromwell after Parliament consistently failed to govern effectively. Most of the concepts and ideas embedded into modern constitutional theory, especially bicameralism, separation of powers, the written constitution, and judicial review, can be traced back to the experiments of that period.",
"title": "History and development"
},
{
"paragraph_id": 43,
"text": "Drafted by Major-General John Lambert in 1653, the Instrument of Government included elements incorporated from an earlier document \"Heads of Proposals\", which had been agreed to by the Army Council in 1647, as a set of propositions intended to be a basis for a constitutional settlement after King Charles I was defeated in the First English Civil War. Charles had rejected the propositions, but before the start of the Second Civil War, the Grandees of the New Model Army had presented the Heads of Proposals as their alternative to the more radical Agreement of the People presented by the Agitators and their civilian supporters at the Putney Debates.",
"title": "History and development"
},
{
"paragraph_id": 44,
"text": "On 4 January 1649, the Rump Parliament declared \"that the people are, under God, the original of all just power; that the Commons of England, being chosen by and representing the people, have the supreme power in this nation\".",
"title": "History and development"
},
{
"paragraph_id": 45,
"text": "The Instrument of Government was adopted by Parliament on 15 December 1653, and Oliver Cromwell was installed as Lord Protector on the following day. The constitution set up a state council consisting of 21 members while executive authority was vested in the office of \"Lord Protector of the Commonwealth.\" This position was designated as a non-hereditary life appointment. The Instrument also required the calling of triennial Parliaments, with each sitting for at least five months.",
"title": "History and development"
},
{
"paragraph_id": 46,
"text": "The Instrument of Government was replaced in May 1657 by England's second, and last, codified constitution, the Humble Petition and Advice, proposed by Sir Christopher Packe. The Petition offered hereditary monarchy to Oliver Cromwell, asserted Parliament's control over issuing new taxation, provided an independent council to advise the king and safeguarded \"Triennial\" meetings of Parliament. A modified version of the Humble Petition with the clause on kingship removed was ratified on 25 May. This finally met its demise in conjunction with the death of Cromwell and the Restoration of the monarchy.",
"title": "History and development"
},
{
"paragraph_id": 47,
"text": "Other examples of European constitutions of this era were the Corsican Constitution of 1755 and the Swedish Constitution of 1772.",
"title": "History and development"
},
{
"paragraph_id": 48,
"text": "All of the British colonies in North America that were to become the 13 original United States, adopted their own constitutions in 1776 and 1777, during the American Revolution (and before the later Articles of Confederation and United States Constitution), with the exceptions of Massachusetts, Connecticut and Rhode Island. The Commonwealth of Massachusetts adopted its Constitution in 1780, the oldest still-functioning constitution of any U.S. state; while Connecticut and Rhode Island officially continued to operate under their old colonial charters, until they adopted their first state constitutions in 1818 and 1843, respectively.",
"title": "History and development"
},
{
"paragraph_id": 49,
"text": "What is sometimes called the \"enlightened constitution\" model was developed by philosophers of the Age of Enlightenment such as Thomas Hobbes, Jean-Jacques Rousseau, and John Locke. The model proposed that constitutional governments should be stable, adaptable, accountable, open and should represent the people (i.e., support democracy).",
"title": "History and development"
},
{
"paragraph_id": 50,
"text": "Agreements and Constitutions of Laws and Freedoms of the Zaporizian Host was written in 1710 by Pylyp Orlyk, hetman of the Zaporozhian Host. It was written to establish a free Zaporozhian-Ukrainian Republic, with the support of Charles XII of Sweden. It is notable in that it established a democratic standard for the separation of powers in government between the legislative, executive, and judiciary branches, well before the publication of Montesquieu's Spirit of the Laws. This Constitution also limited the executive authority of the hetman, and established a democratically elected Cossack parliament called the General Council. However, Orlyk's project for an independent Ukrainian State never materialized, and his constitution, written in exile, never went into effect.",
"title": "History and development"
},
{
"paragraph_id": 51,
"text": "Corsican Constitutions of 1755 and 1794 were inspired by Jean-Jacques Rousseau. The latter introduced universal suffrage for property owners.",
"title": "History and development"
},
{
"paragraph_id": 52,
"text": "The Swedish constitution of 1772 was enacted under King Gustavus III and was inspired by the separation of powers by Montesquieu. The king also cherished other enlightenment ideas (as an enlighted despot) and repealed torture, liberated agricultural trade, diminished the use of the death penalty and instituted a form of religious freedom. The constitution was commended by Voltaire.",
"title": "History and development"
},
{
"paragraph_id": 53,
"text": "The United States Constitution, ratified 21 June 1788, was influenced by the writings of Polybius, Locke, Montesquieu, and others. The document became a benchmark for republicanism and codified constitutions written thereafter.",
"title": "History and development"
},
{
"paragraph_id": 54,
"text": "The Polish–Lithuanian Commonwealth Constitution was passed on 3 May 1791. Its draft was developed by the leading minds of the Enlightenment in Poland such as King Stanislaw August Poniatowski, Stanisław Staszic, Scipione Piattoli, Julian Ursyn Niemcewicz, Ignacy Potocki and Hugo Kołłątaj. It was adopted by the Great Sejm and is considered the first constitution of its kind in Europe and the world's second oldest one after the American Constitution.",
"title": "History and development"
},
{
"paragraph_id": 55,
"text": "Another landmark document was the French Constitution of 1791.",
"title": "History and development"
},
{
"paragraph_id": 56,
"text": "The 1811 Constitution of Venezuela was the first Constitution of Venezuela and Latin America, promulgated and drafted by Cristóbal Mendoza and Juan Germán Roscio and in Caracas. It established a federal government but was repealed one year later.",
"title": "History and development"
},
{
"paragraph_id": 57,
"text": "On 19 March 1812, the Spanish Constitution of 1812 was ratified by a parliament gathered in Cadiz, the only Spanish continental city which was safe from French occupation. The Spanish Constitution served as a model for other liberal constitutions of several South European and Latin American nations, for example, the Portuguese Constitution of 1822, constitutions of various Italian states during Carbonari revolts (i.e., in the Kingdom of the Two Sicilies), the Norwegian constitution of 1814, or the Mexican Constitution of 1824.",
"title": "History and development"
},
{
"paragraph_id": 58,
"text": "In Brazil, the Constitution of 1824 expressed the option for the monarchy as political system after Brazilian Independence. The leader of the national emancipation process was the Portuguese prince Pedro I, elder son of the king of Portugal. Pedro was crowned in 1822 as first emperor of Brazil. The country was ruled by Constitutional monarchy until 1889, when it adopted the Republican model.",
"title": "History and development"
},
{
"paragraph_id": 59,
"text": "In Denmark, as a result of the Napoleonic Wars, the absolute monarchy lost its personal possession of Norway to Sweden. Sweden had already enacted its 1809 Instrument of Government, which saw the division of power between the Riksdag, the king and the judiciary. However the Norwegians managed to infuse a radically democratic and liberal constitution in 1814, adopting many facets from the American constitution and the revolutionary French ones, but maintaining a hereditary monarch limited by the constitution, like the Spanish one.",
"title": "History and development"
},
{
"paragraph_id": 60,
"text": "The first Swiss Federal Constitution was put in force in September 1848 (with official revisions in 1878, 1891, 1949, 1971, 1982 and 1999).",
"title": "History and development"
},
{
"paragraph_id": 61,
"text": "The Serbian revolution initially led to a proclamation of a proto-constitution in 1811; the full-fledged Constitution of Serbia followed few decades later, in 1835. The first Serbian constitution (Sretenjski ustav) was adopted at the national assembly in Kragujevac on 15 February 1835.",
"title": "History and development"
},
{
"paragraph_id": 62,
"text": "The Constitution of Canada came into force on 1 July 1867, as the British North America Act, an act of the British Parliament. Over a century later, the BNA Act was patriated to the Canadian Parliament and augmented with the Canadian Charter of Rights and Freedoms. Apart from the Constitution Acts, 1867 to 1982, Canada's constitution also has unwritten elements based in common law and convention.",
"title": "History and development"
},
{
"paragraph_id": 63,
"text": "After tribal people first began to live in cities and establish nations, many of these functioned according to unwritten customs, while some developed autocratic, even tyrannical monarchs, who ruled by decree, or mere personal whim. Such rule led some thinkers to take the position that what mattered was not the design of governmental institutions and operations, as much as the character of the rulers. This view can be seen in Plato, who called for rule by \"philosopher-kings\". Later writers, such as Aristotle, Cicero and Plutarch, would examine designs for government from a legal and historical standpoint.",
"title": "Principles of constitutional design"
},
{
"paragraph_id": 64,
"text": "The Renaissance brought a series of political philosophers who wrote implied criticisms of the practices of monarchs and sought to identify principles of constitutional design that would be likely to yield more effective and just governance from their viewpoints. This began with revival of the Roman law of nations concept and its application to the relations among nations, and they sought to establish customary \"laws of war and peace\" to ameliorate wars and make them less likely. This led to considerations of what authority monarchs or other officials have and don't have, from where that authority derives, and the remedies for the abuse of such authority.",
"title": "Principles of constitutional design"
},
{
"paragraph_id": 65,
"text": "A seminal juncture in this line of discourse arose in England from the Civil War, the Cromwellian Protectorate, the writings of Thomas Hobbes, Samuel Rutherford, the Levellers, John Milton, and James Harrington, leading to the debate between Robert Filmer, arguing for the divine right of monarchs, on the one side, and on the other, Henry Neville, James Tyrrell, Algernon Sidney, and John Locke. What arose from the latter was a concept of government being erected on the foundations of first, a state of nature governed by natural laws, then a state of society, established by a social contract or compact, which bring underlying natural or social laws, before governments are formally established on them as foundations.",
"title": "Principles of constitutional design"
},
{
"paragraph_id": 66,
"text": "Along the way several writers examined how the design of government was important, even if the government were headed by a monarch. They also classified various historical examples of governmental designs, typically into democracies, aristocracies, or monarchies, and considered how just and effective each tended to be and why, and how the advantages of each might be obtained by combining elements of each into a more complex design that balanced competing tendencies. Some, such as Montesquieu, also examined how the functions of government, such as legislative, executive, and judicial, might appropriately be separated into branches. The prevailing theme among these writers was that the design of constitutions is not completely arbitrary or a matter of taste. They generally held that there are underlying principles of design that constrain all constitutions for every polity or organization. Each built on the ideas of those before concerning what those principles might be.",
"title": "Principles of constitutional design"
},
{
"paragraph_id": 67,
"text": "The later writings of Orestes Brownson would try to explain what constitutional designers were trying to do. According to Brownson there are, in a sense, three \"constitutions\" involved: The first the constitution of nature that includes all of what was called \"natural law\". The second is the constitution of society, an unwritten and commonly understood set of rules for the society formed by a social contract before it establishes a government, by which it establishes the third, a constitution of government. The second would include such elements as the making of decisions by public conventions called by public notice and conducted by established rules of procedure. Each constitution must be consistent with, and derive its authority from, the ones before it, as well as from a historical act of society formation or constitutional ratification. Brownson argued that a state is a society with effective dominion over a well-defined territory, that consent to a well-designed constitution of government arises from presence on that territory, and that it is possible for provisions of a written constitution of government to be \"unconstitutional\" if they are inconsistent with the constitutions of nature or society. Brownson argued that it is not ratification alone that makes a written constitution of government legitimate, but that it must also be competently designed and applied.",
"title": "Principles of constitutional design"
},
{
"paragraph_id": 68,
"text": "Other writers have argued that such considerations apply not only to all national constitutions of government, but also to the constitutions of private organizations, that it is not an accident that the constitutions that tend to satisfy their members contain certain elements, as a minimum, or that their provisions tend to become very similar as they are amended after experience with their use. Provisions that give rise to certain kinds of questions are seen to need additional provisions for how to resolve those questions, and provisions that offer no course of action may best be omitted and left to policy decisions. Provisions that conflict with what Brownson and others can discern are the underlying \"constitutions\" of nature and society tend to be difficult or impossible to execute, or to lead to unresolvable disputes.",
"title": "Principles of constitutional design"
},
{
"paragraph_id": 69,
"text": "Constitutional design has been treated as a kind of metagame in which play consists of finding the best design and provisions for a written constitution that will be the rules for the game of government, and that will be most likely to optimize a balance of the utilities of justice, liberty, and security. An example is the metagame Nomic.",
"title": "Principles of constitutional design"
},
{
"paragraph_id": 70,
"text": "Political economy theory regards constitutions as coordination devices that help citizens to prevent rulers from abusing power. If the citizenry can coordinate a response to police government officials in the face of a constitutional fault, then the government have the incentives to honor the rights that the constitution guarantees. An alternative view considers that constitutions are not enforced by the citizens at-large, but rather by the administrative powers of the state. Because rulers cannot themselves implement their policies, they need to rely on a set of organizations (armies, courts, police agencies, tax collectors) to implement it. In this position, they can directly sanction the government by refusing to cooperate, disabling the authority of the rulers. Therefore, constitutions could be characterized by a self-enforcing equilibria between the rulers and powerful administrators.",
"title": "Principles of constitutional design"
},
{
"paragraph_id": 71,
"text": "Most commonly, the term constitution refers to a set of rules and principles that define the nature and extent of government. Most constitutions seek to regulate the relationship between institutions of the state, in a basic sense the relationship between the executive, legislature and the judiciary, but also the relationship of institutions within those branches. For example, executive branches can be divided into a head of government, government departments/ministries, executive agencies and a civil service/administration. Most constitutions also attempt to define the relationship between individuals and the state, and to establish the broad rights of individual citizens. It is thus the most basic law of a territory from which all the other laws and rules are hierarchically derived; in some territories it is in fact called \"Basic Law\".",
"title": "Key features"
},
{
"paragraph_id": 72,
"text": "A fundamental classification is codification or lack of codification. A codified constitution is one that is contained in a single document, which is the single source of constitutional law in a state. An uncodified constitution is one that is not contained in a single document, consisting of several different sources, which may be written or unwritten; see constitutional convention.",
"title": "Key features"
},
{
"paragraph_id": 73,
"text": "Most states in the world have codified constitutions.",
"title": "Key features"
},
{
"paragraph_id": 74,
"text": "Codified constitutions are often the product of some dramatic political change, such as a revolution. The process by which a country adopts a constitution is closely tied to the historical and political context driving this fundamental change. The legitimacy (and often the longevity) of codified constitutions has often been tied to the process by which they are initially adopted and some scholars have pointed out that high constitutional turnover within a given country may itself be detrimental to separation of powers and the rule of law.",
"title": "Key features"
},
{
"paragraph_id": 75,
"text": "States that have codified constitutions normally give the constitution supremacy over ordinary statute law. That is, if there is any conflict between a legal statute and the codified constitution, all or part of the statute can be declared ultra vires by a court, and struck down as unconstitutional. In addition, exceptional procedures are often required to amend a constitution. These procedures may include: convocation of a special constituent assembly or constitutional convention, requiring a supermajority of legislators' votes, approval in two terms of parliament, the consent of regional legislatures, a referendum process, and/or other procedures that make amending a constitution more difficult than passing a simple law.",
"title": "Key features"
},
{
"paragraph_id": 76,
"text": "Constitutions may also provide that their most basic principles can never be abolished, even by amendment. In case a formally valid amendment of a constitution infringes these principles protected against any amendment, it may constitute a so-called unconstitutional constitutional law.",
"title": "Key features"
},
{
"paragraph_id": 77,
"text": "Codified constitutions normally consist of a ceremonial preamble, which sets forth the goals of the state and the motivation for the constitution, and several articles containing the substantive provisions. The preamble, which is omitted in some constitutions, may contain a reference to God and/or to fundamental values of the state such as liberty, democracy or human rights. In ethnic nation-states such as Estonia, the mission of the state can be defined as preserving a specific nation, language and culture.",
"title": "Key features"
},
{
"paragraph_id": 78,
"text": "As of 2017 only two sovereign states, New Zealand and the United Kingdom, have wholly uncodified constitutions. The Basic Laws of Israel have since 1950 been intended to be the basis for a constitution, but as of 2017 it had not been drafted. The various Laws are considered to have precedence over other laws, and give the procedure by which they can be amended, typically by a simple majority of members of the Knesset (parliament).",
"title": "Key features"
},
{
"paragraph_id": 79,
"text": "Uncodified constitutions are the product of an \"evolution\" of laws and conventions over centuries (such as in the Westminster System that developed in Britain). By contrast to codified constitutions, uncodified constitutions include both written sources – e.g. constitutional statutes enacted by the Parliament – and unwritten sources – constitutional conventions, observation of precedents, royal prerogatives, customs and traditions, such as holding general elections on Thursdays; together these constitute British constitutional law.",
"title": "Key features"
},
{
"paragraph_id": 80,
"text": "Some constitutions are largely, but not wholly, codified. For example, in the Constitution of Australia, most of its fundamental political principles and regulations concerning the relationship between branches of government, and concerning the government and the individual are codified in a single document, the Constitution of the Commonwealth of Australia. However, the presence of statutes with constitutional significance, namely the Statute of Westminster, as adopted by the Commonwealth in the Statute of Westminster Adoption Act 1942, and the Australia Act 1986 means that Australia's constitution is not contained in a single constitutional document. It means the Constitution of Australia is uncodified, it also contains constitutional conventions, thus is partially unwritten.",
"title": "Key features"
},
{
"paragraph_id": 81,
"text": "The Constitution of Canada resulted from the passage of several British North America Acts from 1867 to the Canada Act 1982, the act that formally severed British Parliament's ability to amend the Canadian constitution. The Canadian constitution includes specific legislative acts as mentioned in section 52(2) of the Constitution Act, 1982. However, some documents not explicitly listed in section 52(2) are also considered constitutional documents in Canada, entrenched via reference; such as the Proclamation of 1763. Although Canada's constitution includes a number of different statutes, amendments, and references, some constitutional rules that exist in Canada is derived from unwritten sources and constitutional conventions.",
"title": "Key features"
},
{
"paragraph_id": 82,
"text": "The terms written constitution and codified constitution are often used interchangeably, as are unwritten constitution and uncodified constitution, although this usage is technically inaccurate. A codified constitution is a single document; states that do not have such a document have uncodified, but not entirely unwritten, constitutions, since much of an uncodified constitution is usually written in laws such as the Basic Laws of Israel and the Parliament Acts of the United Kingdom. Uncodified constitutions largely lack protection against amendment by the government of the time. For example, the U.K. Fixed-term Parliaments Act 2011 legislated by simple majority for strictly fixed-term parliaments; until then the ruling party could call a general election at any convenient time up to the maximum term of five years. This change would require a constitutional amendment in most nations.",
"title": "Key features"
},
{
"paragraph_id": 83,
"text": "A constitutional amendment is a modification of the constitution of a polity, organization or other type of entity. Amendments are often interwoven into the relevant sections of an existing constitution, directly altering the text. Conversely, they can be appended to the constitution as supplemental additions (codicils), thus changing the frame of government without altering the existing text of the document.",
"title": "Key features"
},
{
"paragraph_id": 84,
"text": "Most constitutions require that amendments cannot be enacted unless they have passed a special procedure that is more stringent than that required of ordinary legislation.",
"title": "Key features"
},
{
"paragraph_id": 85,
"text": "Some countries are listed under more than one method because alternative procedures may be used.",
"title": "Key features"
},
{
"paragraph_id": 86,
"text": "An entrenched clause or entrenchment clause of a basic law or constitution is a provision that makes certain amendments either more difficult or impossible to pass, making such amendments inadmissible. Overriding an entrenched clause may require a supermajority, a referendum, or the consent of the minority party. For example, the U.S. Constitution has an entrenched clause that prohibits abolishing equal suffrage of the States within the Senate without their consent. The term eternity clause is used in a similar manner in the constitutions of the Czech Republic, Germany, Turkey, Greece, Italy, Morocco, the Islamic Republic of Iran, Brazil and Norway. India's constitution does not contain specific provisions on entrenched clauses but the basic structure doctrine makes it impossible for certain basic features of the Constitution to be altered or destroyed by the Parliament of India through an amendment. The Constitution of Colombia also lacks explicit entrenched clauses, but has a similar substantive limit on amending its fundamental principles through judicial interpretations.",
"title": "Key features"
},
{
"paragraph_id": 87,
"text": "Constitutions include various rights and duties. These include the following:",
"title": "Key features"
},
{
"paragraph_id": 88,
"text": "Constitutions usually explicitly divide power between various branches of government. The standard model, described by the Baron de Montesquieu, involves three branches of government: executive, legislative and judicial. Some constitutions include additional branches, such as an auditory branch. Constitutions vary extensively as to the degree of separation of powers between these branches.",
"title": "Key features"
},
{
"paragraph_id": 89,
"text": "In presidential and semi-presidential systems of government, department secretaries/ministers are accountable to the president, who has patronage powers to appoint and dismiss ministers. The president is accountable to the people in an election.",
"title": "Key features"
},
{
"paragraph_id": 90,
"text": "In parliamentary systems, Cabinet Ministers are accountable to Parliament, but it is the prime minister who appoints and dismisses them. In the case of the United Kingdom and other countries with a monarchy, it is the monarch who appoints and dismisses ministers, on the advice of the prime minister. In turn the prime minister will resign if the government loses the confidence of the parliament (or a part of it). Confidence can be lost if the government loses a vote of no confidence or, depending on the country, loses a particularly important vote in parliament, such as vote on the budget. When a government loses confidence, it stays in office until a new government is formed; something which normally but not necessarily required the holding of a general election.",
"title": "Key features"
},
{
"paragraph_id": 91,
"text": "Other independent institutions which some constitutions have set out include a central bank, an anti-corruption commission, an electoral commission, a judicial oversight body, a human rights commission, a media commission, an ombudsman, and a truth and reconciliation commission.",
"title": "Key features"
},
{
"paragraph_id": 92,
"text": "Constitutions also establish where sovereignty is located in the state. There are three basic types of distribution of sovereignty according to the degree of centralisation of power: unitary, federal, and confederal. The distinction is not absolute.",
"title": "Key features"
},
{
"paragraph_id": 93,
"text": "In a unitary state, sovereignty resides in the state itself, and the constitution determines this. The territory of the state may be divided into regions, but they are not sovereign and are subordinate to the state. In the UK, the constitutional doctrine of Parliamentary sovereignty dictates that sovereignty is ultimately contained at the centre. Some powers have been devolved to Northern Ireland, Scotland, and Wales (but not England). Some unitary states (Spain is an example) devolve more and more power to sub-national governments until the state functions in practice much like a federal state.",
"title": "Key features"
},
{
"paragraph_id": 94,
"text": "A federal state has a central structure with at most a small amount of territory mainly containing the institutions of the federal government, and several regions (called states, provinces, etc.) which compose the territory of the whole state. Sovereignty is divided between the centre and the constituent regions. The constitutions of Canada and the United States establish federal states, with power divided between the federal government and the provinces or states. Each of the regions may in turn have its own constitution (of unitary nature).",
"title": "Key features"
},
{
"paragraph_id": 95,
"text": "A confederal state comprises again several regions, but the central structure has only limited coordinating power, and sovereignty is located in the regions. Confederal constitutions are rare, and there is often dispute to whether so-called \"confederal\" states are actually federal.",
"title": "Key features"
},
{
"paragraph_id": 96,
"text": "To some extent a group of states which do not constitute a federation as such may by treaties and accords give up parts of their sovereignty to a supranational entity. For example, the countries constituting the European Union have agreed to abide by some Union-wide measures which restrict their absolute sovereignty in some ways, e.g., the use of the metric system of measurement instead of national units previously used.",
"title": "Key features"
},
{
"paragraph_id": 97,
"text": "Many constitutions allow the declaration under exceptional circumstances of some form of state of emergency during which some rights and guarantees are suspended. This provision can be and has been abused to allow a government to suppress dissent without regard for human rights – see the article on state of emergency.",
"title": "Key features"
},
{
"paragraph_id": 98,
"text": "Italian political theorist Giovanni Sartori noted the existence of national constitutions which are a facade for authoritarian sources of power. While such documents may express respect for human rights or establish an independent judiciary, they may be ignored when the government feels threatened, or never put into practice. An extreme example was the Constitution of the Soviet Union that on paper supported freedom of assembly and freedom of speech; however, citizens who transgressed unwritten limits were summarily imprisoned. The example demonstrates that the protections and benefits of a constitution are ultimately provided not through its written terms but through deference by government and society to its principles. A constitution may change from being real to a facade and back again as democratic and autocratic governments succeed each other.",
"title": "Key features"
},
{
"paragraph_id": 99,
"text": "Constitutions are often, but by no means always, protected by a legal body whose job it is to interpret those constitutions and, where applicable, declare void executive and legislative acts which infringe the constitution. In some countries, such as Germany, this function is carried out by a dedicated constitutional court which performs this (and only this) function. In other countries, such as Ireland, the ordinary courts may perform this function in addition to their other responsibilities. While elsewhere, like in the United Kingdom, the concept of declaring an act to be unconstitutional does not exist.",
"title": "Constitutional courts"
},
{
"paragraph_id": 100,
"text": "A constitutional violation is an action or legislative act that is judged by a constitutional court to be contrary to the constitution, that is, unconstitutional. An example of constitutional violation by the executive could be a public office holder who acts outside the powers granted to that office by a constitution. An example of constitutional violation by the legislature is an attempt to pass a law that would contradict the constitution, without first going through the proper constitutional amendment process.",
"title": "Constitutional courts"
},
{
"paragraph_id": 101,
"text": "Some countries, mainly those with uncodified constitutions, have no such courts at all. For example, the United Kingdom has traditionally operated under the principle of parliamentary sovereignty under which the laws passed by United Kingdom Parliament could not be questioned by the courts.",
"title": "Constitutional courts"
},
{
"paragraph_id": 102,
"text": "Judicial philosophies of constitutional interpretation (note: generally specific to United States constitutional law)",
"title": "See also"
}
] | A constitution is the aggregate of fundamental principles or established precedents that constitute the legal basis of a polity, organization or other type of entity, and commonly determines how that entity is to be governed. When these principles are written down into a single document or set of legal documents, those documents may be said to embody a written constitution; if they are encompassed in a single comprehensive document, it is said to embody a codified constitution. The Constitution of the United Kingdom is a notable example of an uncodified constitution; it is instead written in numerous fundamental Acts of a legislature, court cases, or treaties. Constitutions concern different levels of organizations, from sovereign countries to companies and unincorporated associations. A treaty that establishes an international organization is also its constitution, in that it would define how that organization is constituted. Within states, a constitution defines the principles upon which the state is based, the procedure in which laws are made and by whom. Some constitutions, especially codified constitutions, also act as limiters of state power, by establishing lines which a state's rulers cannot cross, such as fundamental rights. The Constitution of India is the longest written constitution of any country in the world, with 146,385 words in its English-language version, while the Constitution of Monaco is the shortest written constitution with 3,814 words. The Constitution of San Marino might be the world's oldest active written constitution, since some of its core documents have been in operation since 1600, while the Constitution of the United States is the oldest active codified constitution. The historical life expectancy of a constitution since 1789 is approximately 19 years. | 2001-10-01T02:23:51Z | 2023-12-31T08:17:07Z | [
"Template:Lang",
"Template:Clarify",
"Template:Lang-sr",
"Template:Dubious",
"Template:Webarchive",
"Template:Sister project links",
"Template:Other uses",
"Template:Circa",
"Template:Cite book",
"Template:Cite web",
"Template:Citation",
"Template:Cite news",
"Template:Law",
"Template:Use mdy dates",
"Template:Multiple issues",
"Template:Citation needed",
"Template:ISBN",
"Template:Cite journal",
"Template:Authority control",
"Template:Blockquote",
"Template:Main",
"Template:Government",
"Template:Short description",
"Template:Pp-vandalism",
"Template:Wikisource-inline",
"Template:Reflist",
"Template:Lang-ar",
"Template:As of",
"Template:See also",
"Template:Further"
] | https://en.wikipedia.org/wiki/Constitution |
5,254 | Common law | In law, common law (also known as judicial precedent, judge-made law, or case law) is the body of law created by judges and similar quasi-judicial tribunals by virtue of being stated in written opinions.
The defining characteristic of common law is that it arises as precedent. Common law courts look to the past decisions of courts to synthesize the legal principles of past cases. Stare decisis, the principle that cases should be decided according to consistent principled rules so that similar facts will yield similar results, lies at the heart of all common law systems. If a court finds that a similar dispute to the present one has been resolved in the past, the court is generally bound to follow the reasoning used in the prior decision. If, however, the court finds that the current dispute is fundamentally distinct from all previous cases (a "matter of first impression"), and legislative statutes (also called "positive law") are either silent or ambiguous on the question, judges have the authority and duty to resolve the issue. The opinion that a common law judge gives agglomerates with past decisions as precedent to bind future judges and litigants, unless overturned by further developments in the law or by subsequent statutory law.
The common law, so named because it was "common" to all the king's courts across England, originated in the practices of the courts of the English kings in the centuries following the Norman Conquest in 1066. The British Empire later spread the English legal system to its colonies, many of which retain the common law system today. These common law systems are legal systems that give great weight to judicial precedent, and to the style of reasoning inherited from the English legal system.
The term "common law", referring to the body of law made by the judiciary, is often distinguished from statutory law and regulations, which are laws adopted by the legislature and executive respectively. In legal systems that follow the common law, judicial precedent stands in contrast to and on equal footing with statutes. The other major legal system used by countries is the civil law, which codifies its legal principles into legal codes and does not treat judicial opinions as binding.
Today, one-third of the world's population lives in common law jurisdictions or in mixed legal systems that combine the common law with the civil law, including Antigua and Barbuda, Australia, Bahamas, Bangladesh, Barbados, Belize, Botswana, Burma, Cameroon, Canada (both the federal system and all its provinces except Quebec), Cyprus, Dominica, Fiji, Ghana, Grenada, Guyana, Hong Kong, India, Ireland, Israel, Jamaica, Kenya, Liberia, Malaysia, Malta, Marshall Islands, Micronesia, Namibia, Nauru, New Zealand, Nigeria, Pakistan, Palau, Papua New Guinea, Philippines, Sierra Leone, Singapore, South Africa, Sri Lanka, Trinidad and Tobago, the United Kingdom (including its overseas territories such as Gibraltar), the United States (both the federal system and 49 of its 50 states), and Zimbabwe.
The term common law has many connotations. The first three set out here are the most common usages within the legal community. Other connotations from past centuries are sometimes seen and are sometimes heard in everyday speech.
The first definition of "common law" given in Black's Law Dictionary, 10th edition, 2014, is "The body of law derived from judicial decisions, rather than from statutes or constitutions; [synonym] CASELAW, [contrast] STATUTORY LAW". This usage is given as the first definition in modern legal dictionaries, is characterized as the "most common" usage among legal professionals, and is the usage frequently seen in decisions of courts. In this connotation, "common law" distinguishes the authority that promulgated a law. For example, the law in most Anglo-American jurisdictions includes "statutory law" enacted by a legislature, "regulatory law" (in the U.S.) or "delegated legislation" (in the UK) promulgated by executive branch agencies pursuant to delegation of rule-making authority from the legislature, and common law or "case law", i.e., decisions issued by courts (or quasi-judicial tribunals within agencies). This first connotation can be further differentiated into:
Publication of decisions, and indexing, is essential to the development of common law, and thus governments and private publishers publish law reports. While all decisions in common law jurisdictions are precedent (at varying levels and scope, as discussed throughout the article on precedent), some become "leading cases" or "landmark decisions" that are cited especially often.
Black's Law Dictionary, 10th ed., definition 2, differentiates "common law" jurisdictions and legal systems from "civil law" or "code" jurisdictions. Common law systems place great weight on court decisions, which are considered "law" with the same force of law as statutes—for nearly a millennium, common law courts have had the authority to make law where no legislative statute exists, and statutes mean what courts interpret them to mean.
By contrast, in civil law jurisdictions (the legal tradition that prevails, or is combined with common law, in Europe and most non-Islamic, non-common law countries), courts lack authority to act if there is no statute. Civil law judges tend to give less weight to judicial precedent, which means a civil law judge deciding a given case has more freedom to interpret the text of a statute independently (compared to a common law judge in the same circumstances), and therefore less predictably. For example, the Napoleonic Code expressly forbade French judges to pronounce general principles of law. The role of providing overarching principles, which in common law jurisdictions is provided in judicial opinions, in civil law jurisdictions is filled by giving greater weight to scholarly literature, as explained below.
Common law systems trace their history to England, while civil law systems trace their history through the Napoleonic Code back to the Corpus Juris Civilis of Roman law. A few Western countries use other legal traditions, such as Roman-Dutch law or Scots law, for example.
Black's Law Dictionary, 10th ed., definition 4, differentiates "common law" (or just "law") from "equity". Before 1873, England had two complementary court systems: courts of "law" which could only award money damages and recognized only the legal owner of property, and courts of "equity" (courts of chancery) that could issue injunctive relief (that is, a court order to a party to do something, give something to someone, or stop doing something) and recognized trusts of property. This split propagated to many of the colonies, including the United States. The states of Delaware, Mississippi, South Carolina, and Tennessee continue to have divided Courts of Law and Courts of Chancery. In New Jersey, the appellate courts are unified, but the trial courts are organized into a Chancery Division and a Law Division. There is a difference of opinion in Commonwealth countries as to whether equity and common law have been fused or are merely administered by the same court, with the orthodox view that they have not (expressed as rejecting the "fusion fallacy") prevailing in Australia, while support for fusion has been expressed by the New Zealand Court of Appeal.
For most purposes, the U.S. federal system and most states have merged the two courts. Additionally, even before the separate courts were merged, most courts were permitted to apply both law and equity, though under potentially different procedural law. Nonetheless, the historical distinction between "law" and "equity" remains important today when the case involves issues such as the following:
Courts of equity rely on common law (in the sense of this first connotation) principles of binding precedent.
In addition, there are several historical (but now archaic) uses of the term that, while no longer current, provide background context that assists in understanding the meaning of "common law" today.
In one usage that is now archaic, but that gives insight into the history of the common law, "common law" referred to the pre-Christian system of law, imported by the pre-literate Saxons to England and upheld into their historical times until 1066, when the Norman conquest overthrew the last Saxon king—i.e., before (it was supposed) there was any consistent, written law to be applied.
"Common law" as the term is used today in common law countries contrasts with ius commune. While historically the ius commune became a secure point of reference in continental European legal systems, in England it was not a point of reference at all.
The English Court of Common Pleas dealt with lawsuits in which the monarch had no interest, i.e., between commoners.
Black's Law Dictionary, 10th ed., definition 3 is "General law common to a country as a whole, as opposed to special law that has only local application." From at least the 11th century and continuing for several centuries, there were several different circuits in the royal court system, served by itinerant judges who would travel from town to town dispensing the king's justice in "assizes". The term "common law" was used to describe the law held in common between the circuits and the different stops in each circuit. The more widely a particular law was recognized, the more weight it held, whereas purely local customs were generally subordinate to law recognized in a plurality of jurisdictions.
As used by non-lawyers in popular culture, the term "common law" connotes law based on ancient and unwritten universal custom of the people. The "ancient unwritten universal custom" view was the foundation of the first treatises by Blackstone and Coke, and was universal among lawyers and judges from the earliest times to the mid-19th century. However, for 100 years, lawyers and judges have recognized that the "ancient unwritten universal custom" view does not accord with the facts of the origin and growth of the law, and it is not held within the legal profession today.
Under the modern view, "common law" is not grounded in "custom" or "ancient usage", but rather acquires force of law instantly (without the delay implied by the term "custom" or "ancient") when pronounced by a higher court, because and to the extent the proposition is stated in judicial opinion. From the earliest times through the late 19th century, the dominant theory was that the common law was a pre-existent law or system of rules, a social standard of justice that existed in the habits, customs, and thoughts of the people. Under this older view, the legal profession considered it no part of a judge's duty to make new or change existing law, but only to expound and apply the old. By the early 20th century, largely at the urging of Oliver Wendell Holmes (as discussed throughout this article), this view had fallen into the minority view: Holmes pointed out that the older view worked undesirable and unjust results, and hampered a proper development of the law. In the century since Holmes, the dominant understanding has been that common law "decisions are themselves law, or rather the rules which the courts lay down in making the decisions constitute law". Holmes wrote in a 1917 opinion, "The common law is not a brooding omnipresence in the sky, but the articulate voice of some sovereign or quasi-sovereign that can be identified." Among legal professionals (lawyers and judges), the change in understanding occurred in the late 19th and early 20th centuries (as explained later in this article), though lay (non-legal) dictionaries were decades behind in recognizing the change.
The reality of the modern view, and the implausibility of the old "ancient unwritten universal custom" view, can be seen in practical operation: under the pre-1870 view, (a) the "common law" should have been absolutely static over centuries (but it evolved), (b) jurisdictions could not logically diverge from each other (but nonetheless did and do today), (c) a new decision logically needed to operate retroactively (but did not), and (d) there was no standard to decide which English medieval customs should be "law" and which should not. All four tensions resolve under the modern view: (a) the common law evolved to meet the needs of the times (e.g., trial by combat passed out of the law by the 15th century), (b) the common law in different jurisdictions may diverge, (c) new decisions may (but need not) have retroactive operation, and (d) court decisions are effective immediately as they are issued, not years later, or after they become "custom", and questions of what "custom" might have been at some "ancient" time are simply irrelevant.
People using pseudolegal tactics and arguments have frequently claimed to base their arguments on common law; notably, the radical anti-government sovereign citizens and freemen on the land movements, who deny the legitimacy of their countries' legal systems, base their beliefs on idiosyncratic interpretations of common law. "Common law" has also been used as an alibi by groups such as the far-right American Patriot movement for setting up kangaroo courts in order to conduct vigilante actions or intimidate their opponents.
In a common law jurisdiction several stages of research and analysis are required to determine "what the law is" in a given situation. First, one must ascertain the facts. Then, one must locate any relevant statutes and cases. Then one must extract the principles, analogies and statements by various courts of what they consider important to determine how the next court is likely to rule on the facts of the present case. More recent decisions, and decisions of higher courts or legislatures carry more weight than earlier cases and those of lower courts. Finally, one integrates all the lines drawn and reasons given, and determines "what the law is". Then, one applies that law to the facts.
In practice, common law systems are considerably more complicated than the simplified system described above. The decisions of a court are binding only in a particular jurisdiction, and even within a given jurisdiction, some courts have more power than others. For example, in most jurisdictions, decisions by appellate courts are binding on lower courts in the same jurisdiction, and on future decisions of the same appellate court, but decisions of lower courts are only non-binding persuasive authority. Interactions between common law, constitutional law, statutory law and regulatory law also give rise to considerable complexity.
Oliver Wendell Holmes Jr. cautioned that "the proper derivation of general principles in both common and constitutional law ... arise gradually, in the emergence of a consensus from a multitude of particularized prior decisions". Justice Cardozo noted the "common law does not work from pre-established truths of universal and inflexible validity to conclusions derived from them deductively", but "[i]ts method is inductive, and it draws its generalizations from particulars".
The common law is more malleable than statutory law. First, common law courts are not absolutely bound by precedent, but can (when extraordinarily good reason is shown) reinterpret and revise the law, without legislative intervention, to adapt to new trends in political, legal and social philosophy. Second, the common law evolves through a series of gradual steps that work out the details over time, so that over a decade or more the law can change substantially but without a sharp break, thereby reducing disruptive effects. In contrast to common law incrementalism, the legislative process is very difficult to get started, as legislatures tend to delay action until a situation is intolerable. For these reasons, legislative changes tend to be large, jarring and disruptive (sometimes positively, sometimes negatively, and sometimes with unintended consequences).
One example of the gradual change that typifies evolution of the common law is the gradual change in liability for negligence. The traditional common law rule through most of the 19th century was that a plaintiff could not recover for a defendant's negligent production or distribution of a harmful instrumentality unless the two were parties to a contract (privity of contract). Thus, only the immediate purchaser could recover for a product defect, and if a part was built up out of parts from parts manufacturers, the ultimate buyer could not recover for injury caused by a defect in the part. In an 1842 English case, Winterbottom v Wright, the postal service had contracted with Wright to maintain its coaches. Winterbottom was a driver for the post. When the coach failed and injured Winterbottom, he sued Wright. The Winterbottom court recognized that there would be "absurd and outrageous consequences" if an injured person could sue any person peripherally involved, and knew it had to draw a line somewhere, a limit on the causal connection between the negligent conduct and the injury. The court looked to the contractual relationships, and held that liability would only flow as far as the person in immediate contract ("privity") with the negligent party.
A first exception to this rule arose in 1852, in the case of Thomas v. Winchester, when New York's highest court held that mislabeling a poison as an innocuous herb, and then selling the mislabeled poison through a dealer who would be expected to resell it, put "human life in imminent danger". Thomas relied on this reason to create an exception to the "privity" rule. In 1909, New York held in Statler v. Ray Mfg. Co. that a coffee urn manufacturer was liable to a person injured when the urn exploded, because the urn "was of such a character inherently that, when applied to the purposes for which it was designed, it was liable to become a source of great danger to many people if not carefully and properly constructed".
Yet the privity rule survived. In Cadillac Motor Car Co. v. Johnson (decided in 1915 by the federal appeals court for New York and several neighboring states), the court held that a car owner could not recover for injuries from a defective wheel, when the automobile owner had a contract only with the automobile dealer and not with the manufacturer, even though there was "no question that the wheel was made of dead and 'dozy' wood, quite insufficient for its purposes". The Cadillac court was willing to acknowledge that the case law supported exceptions for "an article dangerous in its nature or likely to become so in the course of the ordinary usage to be contemplated by the vendor". However, held the Cadillac court, "one who manufactures articles dangerous only if defectively made, or installed, e.g., tables, chairs, pictures or mirrors hung on the walls, carriages, automobiles, and so on, is not liable to third parties for injuries caused by them, except in case of willful injury or fraud".
Finally, in the famous case of MacPherson v. Buick Motor Co., in 1916, Judge Benjamin Cardozo for New York's highest court pulled a broader principle out of these predecessor cases. The facts were almost identical to Cadillac a year earlier: a wheel from a wheel manufacturer was sold to Buick, to a dealer, to MacPherson, and the wheel failed, injuring MacPherson. Judge Cardozo held:
It may be that Statler v. Ray Mfg. Co. have extended the rule of Thomas v. Winchester. If so, this court is committed to the extension. The defendant argues that things imminently dangerous to life are poisons, explosives, deadly weapons—things whose normal function it is to injure or destroy. But whatever the rule in Thomas v. Winchester may once have been, it has no longer that restricted meaning. A scaffold (Devlin v. Smith, supra) is not inherently a destructive instrument. It becomes destructive only if imperfectly constructed. A large coffee urn (Statler v. Ray Mfg. Co., supra) may have within itself, if negligently made, the potency of danger, yet no one thinks of it as an implement whose normal function is destruction. What is true of the coffee urn is equally true of bottles of aerated water (Torgesen v. Schultz, 192 N. Y. 156). We have mentioned only cases in this court. But the rule has received a like extension in our courts of intermediate appeal. In Burke v. Ireland (26 App. Div. 487), in an opinion by CULLEN, J., it was applied to a builder who constructed a defective building; in Kahner v. Otis Elevator Co. (96 App. Div. 169) to the manufacturer of an elevator; in Davies v. Pelham Hod Elevating Co. (65 Hun, 573; affirmed in this court without opinion, 146 N. Y. 363) to a contractor who furnished a defective rope with knowledge of the purpose for which the rope was to be used. We are not required at this time either to approve or to disapprove the application of the rule that was made in these cases. It is enough that they help to characterize the trend of judicial thought. We hold, then, that the principle of Thomas v. Winchester is not limited to poisons, explosives, and things of like nature, to things which in their normal operation are implements of destruction. If the nature of a thing is such that it is reasonably certain to place life and limb in peril when negligently made, it is then a thing of danger. Its nature gives warning of the consequences to be expected. If to the element of danger there is added knowledge that the thing will be used by persons other than the purchaser, and used without new tests then, irrespective of contract, the manufacturer of this thing of danger is under a duty to make it carefully. ... There must be knowledge of a danger, not merely possible, but probable.
Cardozo's new "rule" exists in no prior case, but is inferrable as a synthesis of the "thing of danger" principle stated in them, merely extending it to "foreseeable danger" even if "the purposes for which it was designed" were not themselves "a source of great danger". MacPherson takes some care to present itself as foreseeable progression, not a wild departure. Cardozo continues to adhere to the original principle of Winterbottom, that "absurd and outrageous consequences" must be avoided, and he does so by drawing a new line in the last sentence quoted above: "There must be knowledge of a danger, not merely possible, but probable." But while adhering to the underlying principle that some boundary is necessary, MacPherson overruled the prior common law by rendering the formerly dominant factor in the boundary, that is, the privity formality arising out of a contractual relationship between persons, totally irrelevant. Rather, the most important factor in the boundary would be the nature of the thing sold and the foreseeable uses that downstream purchasers would make of the thing.
The example of the evolution of the law of negligence in the preceding paragraphs illustrates two crucial principles: (a) The common law evolves, this evolution is in the hands of judges, and judges have "made law" for hundreds of years. (b) The reasons given for a decision are often more important in the long run than the outcome in a particular case. This is the reason that judicial opinions are usually quite long, and give rationales and policies that can be balanced with judgment in future cases, rather than the bright-line rules usually embodied in statutes.
All law systems rely on written publication of the law, so that it is accessible to all. Common law decisions are published in law reports for use by lawyers, courts and the general public.
After the American Revolution, Massachusetts became the first state to establish an official Reporter of Decisions. As newer states needed law, they often looked first to the Massachusetts Reports for authoritative precedents as a basis for their own common law. The United States federal courts relied on private publishers until after the Civil War, and only began publishing as a government function in 1874. West Publishing in Minnesota is the largest private-sector publisher of law reports in the United States. Government publishers typically issue only decisions "in the raw", while private sector publishers often add indexing, including references to the key principles of the common law involved, editorial analysis, and similar finding aids.
In common law legal systems, the common law is crucial to understanding almost all important areas of law. For example, in England and Wales, in English Canada, and in most states of the United States, the basic law of contracts, torts and property does not exist in statute, but only in common law (though there may be isolated modifications enacted by statute). As another example, the Supreme Court of the United States held in 1877 that a Michigan statute that established rules for solemnization of marriages did not abolish pre-existing common-law marriage, because the statute did not affirmatively require statutory solemnization and was silent as to preexisting common law.
In almost all areas of the law (even those where there is a statutory framework, such as contracts for the sale of goods, or the criminal law), legislature-enacted statutes or agency-promulgated regulations generally give only terse statements of general principle, and the fine boundaries and definitions exist only in the interstitial common law. To find out what the precise law is that applies to a particular set of facts, one has to locate precedential decisions on the topic, and reason from those decisions by analogy.
In common law jurisdictions (in the sense opposed to "civil law"), legislatures operate under the assumption that statutes will be interpreted against the backdrop of the pre-existing common law. As the United States Supreme Court explained in United States v Texas, 507 U.S. 529 (1993):
Just as longstanding is the principle that "[s]tatutes which invade the common law ... are to be read with a presumption favoring the retention of long-established and familiar principles, except when a statutory purpose to the contrary is evident. Isbrandtsen Co. v. Johnson, 343 U.S. 779, 783 (1952); Astoria Federal Savings & Loan Assn. v. Solimino, 501 U.S. 104, 108 (1991). In such cases, Congress does not write upon a clean slate. Astoria, 501 U.S. at 108. In order to abrogate a common-law principle, the statute must "speak directly" to the question addressed by the common law. Mobil Oil Corp. v. Higginbotham, 436 U. S. 618, 625 (1978); Milwaukee v. Illinois, 451 U. S. 304, 315 (1981).
For example, in most U.S. states, the criminal statutes are primarily codification of pre-existing common law. (Codification is the process of enacting a statute that collects and restates pre-existing law in a single document—when that pre-existing law is common law, the common law remains relevant to the interpretation of these statutes.) In reliance on this assumption, modern statutes often leave a number of terms and fine distinctions unstated—for example, a statute might be very brief, leaving the precise definition of terms unstated, under the assumption that these fine distinctions would be resolved in the future by the courts based upon what they then understand to be the pre-existing common law. (For this reason, many modern American law schools teach the common law of crime as it stood in England in 1789, because that centuries-old English common law is a necessary foundation to interpreting modern criminal statutes.)
With the transition from English law, which had common law crimes, to the new legal system under the U.S. Constitution, which prohibited ex post facto laws at both the federal and state level, the question was raised whether there could be common law crimes in the United States. It was settled in the case of United States v. Hudson, which decided that federal courts had no jurisdiction to define new common law crimes, and that there must always be a (constitutionally valid) statute defining the offense and the penalty for it.
Still, many states retain selected common law crimes. For example, in Virginia, the definition of the conduct that constitutes the crime of robbery exists only in the common law, and the robbery statute only sets the punishment. Virginia Code section 1-200 establishes the continued existence and vitality of common law principles and provides that "The common law of England, insofar as it is not repugnant to the principles of the Bill of Rights and Constitution of this Commonwealth, shall continue in full force within the same, and be the rule of decision, except as altered by the General Assembly."
By contrast to statutory codification of common law, some statutes displace common law, for example to create a new cause of action that did not exist in the common law, or to legislatively overrule the common law. An example is the tort of wrongful death, which allows certain persons, usually a spouse, child or estate, to sue for damages on behalf of the deceased. There is no such tort in English common law; thus, any jurisdiction that lacks a wrongful death statute will not allow a lawsuit for the wrongful death of a loved one. Where a wrongful death statute exists, the compensation or other remedy available is limited to the remedy specified in the statute (typically, an upper limit on the amount of damages). Courts generally interpret statutes that create new causes of action narrowly—that is, limited to their precise terms—because the courts generally recognize the legislature as being supreme in deciding the reach of judge-made law unless such statute should violate some "second order" constitutional law provision (cf. judicial activism). This principle is applied more strongly in fields of commercial law (contracts and the like) where predictability is of relatively higher value, and less in torts, where courts recognize a greater responsibility to "do justice".
Where a tort is rooted in common law, all traditionally recognized damages for that tort may be sued for, whether or not there is mention of those damages in the current statutory law. For instance, a person who sustains bodily injury through the negligence of another may sue for medical costs, pain, suffering, loss of earnings or earning capacity, mental and/or emotional distress, loss of quality of life, disfigurement and more. These damages need not be set forth in statute as they already exist in the tradition of common law. However, without a wrongful death statute, most of them are extinguished upon death.
In the United States, the power of the federal judiciary to review and invalidate unconstitutional acts of the federal executive branch is stated in the constitution, Article III sections 1 and 2: "The judicial Power of the United States, shall be vested in one supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish. ... The judicial Power shall extend to all Cases, in Law and Equity, arising under this Constitution, the Laws of the United States, and Treaties made, or which shall be made, under their Authority". The first landmark decision on "the judicial power" was Marbury v. Madison, 5 U.S. (1 Cranch) 137 (1803). Later cases interpreted the "judicial power" of Article III to establish the power of federal courts to consider or overturn any action of Congress or of any state that conflicts with the Constitution.
The interactions between decisions of different courts are discussed further in the article on precedent. Interactions between common law and either statute or regulation are discussed in the articles on Skidmore deference, Chevron deference, and Auer deference.
The United States federal courts are divided into twelve regional circuits, each with a circuit court of appeals (plus a thirteenth, the Court of Appeals for the Federal Circuit, which hears appeals in patent cases and cases against the federal government, without geographic limitation). Decisions of one circuit court are binding on the district courts within the circuit and on the circuit court itself, but are only persuasive authority on sister circuits. District court decisions are not binding precedent at all, only persuasive.
Most of the U.S. federal courts of appeal have adopted a rule under which, in the event of any conflict in decisions of panels (most of the courts of appeal almost always sit in panels of three), the earlier panel decision is controlling, and a panel decision may only be overruled by the court of appeals sitting en banc (that is, all active judges of the court) or by a higher court. In these courts, the older decision remains controlling when an issue comes up the third time.
Other courts, for example, the Court of Customs and Patent Appeals and the Supreme Court, always sit en banc, and thus the later decision controls. These courts essentially overrule all previous cases in each new case, and older cases survive only to the extent they do not conflict with newer cases. The interpretations of these courts—for example, Supreme Court interpretations of the constitution or federal statutes—are stable only so long as the older interpretation maintains the support of a majority of the court. Older decisions persist through some combination of belief that the old decision is right, and that it is not sufficiently wrong to be overruled.
In the jurisdictions of England and Wales and of Northern Ireland, since 2009, the Supreme Court of the United Kingdom has the authority to overrule and unify criminal law decisions of lower courts; it is the final court of appeal for civil law cases in all three of the UK jurisdictions, but not for criminal law cases in Scotland, where the High Court of Justiciary has this power instead (except on questions of law relating to reserved matters such as devolution and human rights). From 1966 to 2009, this power lay with the House of Lords, granted by the Practice Statement of 1966.
Canada's federal system, described below, avoids regional variability of federal law by giving national jurisdiction to both layers of appellate courts.
The reliance on judicial opinion is a strength of common law systems, and is a significant contributor to the robust commercial systems in the United Kingdom and United States. Because there is reasonably precise guidance on almost every issue, parties (especially commercial parties) can predict whether a proposed course of action is likely to be lawful or unlawful, and have some assurance of consistency. As Justice Brandeis famously expressed it, "in most matters it is more important that the applicable rule of law be settled than that it be settled right." This ability to predict gives more freedom to come close to the boundaries of the law. For example, many commercial contracts are more economically efficient, and create greater wealth, because the parties know ahead of time that the proposed arrangement, though perhaps close to the line, is almost certainly legal. Newspapers, taxpayer-funded entities with some religious affiliation, and political parties can obtain fairly clear guidance on the boundaries within which their freedom of expression rights apply.
In contrast, in jurisdictions with very weak respect for precedent, fine questions of law are redetermined anew each time they arise, making consistency and prediction more difficult, and procedures far more protracted than necessary because parties cannot rely on written statements of law as reliable guides. In jurisdictions that do not have a strong allegiance to a large body of precedent, parties have less a priori guidance (unless the written law is very clear and kept updated) and must often leave a bigger "safety margin" of unexploited opportunities, and final determinations are reached only after far larger expenditures on legal fees by the parties.
This is the reason for the frequent choice of the law of the State of New York in commercial contracts, even when neither entity has extensive contacts with New York—and remarkably often even when neither party has contacts with the United States. Commercial contracts almost always include a "choice of law clause" to reduce uncertainty. Somewhat surprisingly, contracts throughout the world (for example, contracts involving parties in Japan, France and Germany, and from most of the other states of the United States) often choose the law of New York, even where the relationship of the parties and transaction to New York is quite attenuated. Because of its history as the United States' commercial center, New York common law has a depth and predictability not (yet) available in any other jurisdictions of the United States. Similarly, American corporations are often formed under Delaware corporate law, and American contracts relating to corporate law issues (merger and acquisitions of companies, rights of shareholders, and so on) include a Delaware choice of law clause, because of the deep body of law in Delaware on these issues. On the other hand, some other jurisdictions have sufficiently developed bodies of law so that parties have no real motivation to choose the law of a foreign jurisdiction (for example, England and Wales, and the state of California), but not yet so fully developed that parties with no relationship to the jurisdiction choose that law. Outside the United States, parties that are in different jurisdictions from each other often choose the law of England and Wales, particularly when the parties are each in former British colonies and members of the Commonwealth. The common theme in all cases is that commercial parties seek predictability and simplicity in their contractual relations, and frequently choose the law of a common law jurisdiction with a well-developed body of common law to achieve that result.
Likewise, for litigation of commercial disputes arising out of unpredictable torts (as opposed to the prospective choice of law clauses in contracts discussed in the previous paragraph), certain jurisdictions attract an unusually high fraction of cases, because of the predictability afforded by the depth of decided cases. For example, London is considered the pre-eminent centre for litigation of admiralty cases.
This is not to say that common law is better in every situation. For example, civil law can be clearer than case law when the legislature has had the foresight and diligence to address the precise set of facts applicable to a particular situation. For that reason, civil law statutes tend to be somewhat more detailed than statutes written by common law legislatures—but, conversely, that tends to make the statute more difficult to read (the United States tax code is an example).
The common law—so named because it was "common" to all the king's courts across England—originated in the practices of the courts of the English kings in the centuries following the Norman Conquest in 1066. Prior to the Norman Conquest, much of England's legal business took place in the local folk courts of its various shires and hundreds. A variety of other individual courts also existed across the land: urban boroughs and merchant fairs held their own courts, and large landholders also held their own manorial and seigniorial courts as needed. The degree to which common law drew from earlier Anglo-Saxon traditions such as the jury, ordeals, the penalty of outlawry, and writs – all of which were incorporated into the Norman common law – is still a subject of much discussion. Additionally, the Catholic Church operated its own court system that adjudicated issues of canon law.
The main sources for the history of the common law in the Middle Ages are the plea rolls and the Year Books. The plea rolls, which were the official court records for the Courts of Common Pleas and King's Bench, were written in Latin. The rolls were made up in bundles by law term: Hilary, Easter, Trinity, and Michaelmas, or winter, spring, summer, and autumn. They are currently deposited in the UK National Archives, by whose permission images of the rolls for the Courts of Common Pleas, King's Bench, and Exchequer of Pleas, from the 13th century to the 17th, can be viewed online at the Anglo-American Legal Tradition site (The O'Quinn Law Library of the University of Houston Law Center).
The doctrine of precedent developed during the 12th and 13th centuries, as the collective body of judicial decisions, grounded in tradition and custom, came to be treated as binding.
The form of reasoning used in common law is known as casuistry or case-based reasoning. The common law, as applied in civil cases (as distinct from criminal cases), was devised as a means of compensating someone for wrongful acts known as torts, including both intentional torts and torts caused by negligence, and as a means of developing the body of law recognizing and regulating contracts. The type of procedure practiced in common law courts is known as the adversarial system; this is also a development of the common law.
In 1154, Henry II became the first Plantagenet king. Among many achievements, Henry institutionalized common law by creating a unified system of law "common" to the country through incorporating and elevating local custom to the national, ending local control and peculiarities, eliminating arbitrary remedies and reinstating a jury system—citizens sworn on oath to investigate reliable criminal accusations and civil claims. The jury reached its verdict through evaluating common local knowledge, not necessarily through the presentation of evidence, a distinguishing factor from today's civil and criminal court systems.
At the time, royal government centered on the Curia Regis (king's court), the body of aristocrats and prelates who assisted in the administration of the realm and the ancestor of Parliament, the Star Chamber, and Privy Council. Henry II developed the practice of sending judges (numbering around 20 to 30 in the 1180s) from his Curia Regis to hear the various disputes throughout the country, and return to the court thereafter. The king's itinerant justices would generally receive a writ or commission under the great seal. They would then resolve disputes on an ad hoc basis according to what they interpreted the customs to be. The king's judges would then return to London and often discuss their cases and the decisions they made with the other judges. These decisions would be recorded and filed. In time, a rule, known as stare decisis (also commonly known as precedent) developed, whereby a judge would be bound to follow the decision of an earlier judge; he was required to adopt the earlier judge's interpretation of the law and apply the same principles promulgated by that earlier judge if the two cases had similar facts to one another. Once judges began to regard each other's decisions to be binding precedent, the pre-Norman system of local customs and law varying in each locality was replaced by a system that was (at least in theory, though not always in practice) common throughout the whole country, hence the name "common law".
The king's object was to preserve public order, but providing law and order was also extremely profitable: cases on forest use, as well as fines and forfeitures, could generate "great treasure" for the government. Eyres (a Norman French word for judicial circuit, originating from Latin iter) were more than just courts; they supervised local government, raised revenue, investigated crimes, and enforced the feudal rights of the king. There were complaints of the eyre of 1198 reducing the kingdom to poverty and of Cornishmen fleeing to escape the eyre of 1233.
Henry II's creation of a powerful and unified court system, which curbed somewhat the power of canonical (church) courts, brought him (and England) into conflict with the church, most famously with Thomas Becket, the Archbishop of Canterbury. The murder of the Archbishop gave rise to a wave of popular outrage against the King. International pressure on Henry grew, and in May 1172 he negotiated a settlement with the papacy in which the King swore to go on crusade and effectively overturned the more controversial clauses of the Constitutions of Clarendon. Henry nevertheless continued to exert influence in any ecclesiastical case that interested him, and royal power was exercised more subtly, with considerable success.
The English Court of Common Pleas was established after Magna Carta to try lawsuits between commoners in which the monarch had no interest. Its judges sat in open court in the Great Hall of the king's Palace of Westminster, permanently except in the vacations between the four terms of the Legal year.
Judge-made common law operated as the primary source of law for several hundred years, before Parliament acquired legislative powers to create statutory law. It is important to understand that common law is the older and more traditional source of law, and legislative power is simply a layer applied on top of the older common law foundation. Since the 12th century, courts have had parallel and co-equal authority to make law—"legislating from the bench" is a traditional and essential function of courts, which was carried over into the U.S. system as an essential component of the "judicial power" specified by Article III of the U.S. Constitution. Justice Oliver Wendell Holmes Jr. summarized centuries of history in 1917, "judges do and must legislate." In the United States, state courts continue to exercise full common law powers, and create both general common law and interstitial common law. In U.S. federal courts, after Erie R. Co. v. Tompkins, 304 U.S. 64, 78 (1938), the general dividing line is that federal courts can only "interpret" to create interstitial common law not exercise general common law powers. However, that authority to "interpret" can be an expansive power to "make law," especially on Constitutional issues where the Constitutional text is so terse. There are legitimate debates on how the powers of courts and legislatures should be balanced around "interpretation." However, the view that courts lack law-making power is historically inaccurate and constitutionally unsupportable.
In England, judges have devised a number of rules as to how to deal with precedent decisions. The early development of case-law in the thirteenth century has been traced to Bracton's On the Laws and Customs of England and led to the yearly compilations of court cases known as Year Books, of which the first extant was published in 1268, the same year that Bracton died. The Year Books are known as the law reports of medieval England, and are a principal source for knowledge of the developing legal doctrines, concepts, and methods in the period from the 13th to the 16th centuries, when the common law developed into recognizable form.
The term "common law" is often used as a contrast to Roman-derived "civil law", and the fundamental processes and forms of reasoning in the two are quite different. Nonetheless, there has been considerable cross-fertilization of ideas, while the two traditions and sets of foundational principles remain distinct.
By the time of the rediscovery of the Roman law in Europe in the 12th and 13th centuries, the common law had already developed far enough to prevent a Roman law reception as it occurred on the continent. However, the first common law scholars, most notably Glanvill and Bracton, as well as the early royal common law judges, had been well acquainted with Roman law. Often, they were clerics trained in the Roman canon law. One of the first, and throughout its history one of the most significant, treatises of the common law, Bracton's De Legibus et Consuetudinibus Angliae (On the Laws and Customs of England), was heavily influenced by the division of the law in Justinian's Institutes. The impact of Roman law had decreased sharply after the age of Bracton, but the Roman divisions of actions into in rem (typically, actions against a thing or property for the purpose of gaining title to that property; must be filed in a court where the property is located) and in personam (typically, actions directed against a person; these can affect a person's rights and, since a person often owns things, his property too) used by Bracton had a lasting effect and laid the groundwork for a return of Roman law structural concepts in the 18th and 19th centuries. Signs of this can be found in Blackstone's Commentaries on the Laws of England, and Roman law ideas regained importance with the revival of academic law schools in the 19th century. As a result, today, the main systematic divisions of the law into property, contract, and tort (and to some extent unjust enrichment) can be found in the civil law as well as in the common law.
The first attempt at a comprehensive compilation of centuries of common law was by Lord Chief Justice Edward Coke, in his treatise, Institutes of the Lawes of England in the 17th century.
The next definitive historical treatise on the common law is Commentaries on the Laws of England, written by Sir William Blackstone and first published in 1765–1769.
A reception statute is a statutory law adopted as a former British colony becomes independent, by which the new nation adopts (i.e. receives) pre-independence common law, to the extent not explicitly rejected by the legislative body or constitution of the new nation. Reception statutes generally consider the English common law dating prior to independence, and the precedent originating from it, as the default law, because of the importance of using an extensive and predictable body of law to govern the conduct of citizens and businesses in a new state. All U.S. states, with the partial exception of Louisiana, have either implemented reception statutes or adopted the common law by judicial opinion.
Other examples of reception statutes in the United States, the states of the U.S., Canada and its provinces, and Hong Kong, are discussed in the reception statute article.
Yet, adoption of the common law in the newly independent nation was not a foregone conclusion, and was controversial. Immediately after the American Revolution, there was widespread distrust and hostility to anything British, and the common law was no exception. Jeffersonians decried lawyers and their common law tradition as threats to the new republic. The Jeffersonians preferred a legislatively enacted civil law under the control of the political process, rather than the common law developed by judges that—by design—were insulated from the political process. The Federalists believed that the common law was the birthright of Independence: after all, the natural rights to "life, liberty, and the pursuit of happiness" were the rights protected by common law. Even advocates for the common law approach noted that it was not an ideal fit for the newly independent colonies: judges and lawyers alike were severely hindered by a lack of printed legal materials. Before Independence, the most comprehensive law libraries had been maintained by Tory lawyers, and those libraries vanished with the loyalist expatriation, and the ability to print books was limited. Lawyer (later President) John Adams complained that he "suffered very much for the want of books". To bootstrap this most basic need of a common law system—knowable, written law—in 1803, lawyers in Massachusetts donated their books to found a law library. A Jeffersonian newspaper criticized the library, as it would carry forward "all the old authorities practiced in England for centuries back ... whereby a new system of jurisprudence [will be founded] on the high monarchical system [to] become the Common Law of this Commonwealth... [The library] may hereafter have a very unsocial purpose."
For several decades after independence, English law still exerted influence over American common law—for example, with Byrne v Boadle (1863), which first applied the res ipsa loquitur doctrine.
Well into the 19th century, ancient maxims played a large role in common law adjudication. Many of these maxims had originated in Roman law, migrated to England before the introduction of Christianity to the British Isles, and were typically stated in Latin even in English decisions. Many examples are familiar in everyday speech even today: "One cannot be a judge in one's own cause" (see Dr. Bonham's Case), rights are reciprocal to obligations, and the like. Judicial decisions and treatises of the 17th and 18th centuries, such as those of Lord Chief Justice Edward Coke, presented the common law as a collection of such maxims.
Reliance on old maxims and rigid adherence to precedent, no matter how old or ill-considered, came under critical discussion in the late 19th century, starting in the United States. Oliver Wendell Holmes Jr. in his famous article, "The Path of the Law", commented, "It is revolting to have no better reason for a rule of law than that so it was laid down in the time of Henry IV. It is still more revolting if the grounds upon which it was laid down have vanished long since, and the rule simply persists from blind imitation of the past." Justice Holmes noted that study of maxims might be sufficient for "the man of the present", but "the man of the future is the man of statistics and the master of economics". In an 1880 lecture at Harvard, he wrote:
The life of the law has not been logic; it has been experience. The felt necessities of the time, the prevalent moral and political theories, intuitions of public policy, avowed or unconscious, even the prejudices which judges share with their fellow men, have had a good deal more to do than the syllogism in determining the rules by which men should be governed. The law embodies the story of a nation's development through many centuries, and it cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics.
In the early 20th century, Louis Brandeis, later appointed to the United States Supreme Court, became noted for his use of policy-driving facts and economics in his briefs, and extensive appendices presenting facts that lead a judge to the advocate's conclusion. By this time, briefs relied more on facts than on Latin maxims.
Reliance on old maxims is now deprecated. Common law decisions today reflect both precedent and policy judgment drawn from economics, the social sciences, business, decisions of foreign courts, and the like. The degree to which these external factors should influence adjudication is the subject of active debate, but it is indisputable that judges do draw on experience and learning from everyday life, from other fields, and from other jurisdictions.
As early as the 15th century, it became the practice that litigants who felt they had been cheated by the common law system would petition the King in person. For example, they might argue that an award of damages (at common law, as opposed to equity) was not sufficient redress for a trespasser occupying their land, and instead request that the trespasser be evicted. From this developed the system of equity, administered by the Lord Chancellor, in the courts of chancery. By their nature, equity and law were frequently in conflict, and litigation would often continue for years as one court countermanded the other, even though it was established by the 17th century that equity should prevail.
In England, courts of law (as opposed to equity) were merged with courts of equity by the Judicature Acts of 1873 and 1875, with equity prevailing in case of conflict.
In the United States, parallel systems of law (providing money damages, with cases heard by a jury upon either party's request) and equity (fashioning a remedy to fit the situation, including injunctive relief, heard by a judge) survived well into the 20th century. The United States federal courts procedurally separated law and equity: the same judges could hear either kind of case, but a given case could only pursue causes in law or in equity, and the two kinds of cases proceeded under different procedural rules. This became problematic when a given case required both money damages and injunctive relief. In 1938, the new Federal Rules of Civil Procedure combined law and equity into one form of action, the "civil action". Fed.R.Civ.P. 2. The distinction survives to the extent that issues that were "common law (as opposed to equity)" as of 1791 (the date of adoption of the Seventh Amendment) are still subject to the right of either party to request a jury, and "equity" issues are decided by a judge.
The states of Delaware, Illinois, Mississippi, South Carolina, and Tennessee continue to have divided courts of law and courts of chancery, for example, the Delaware Court of Chancery. In New Jersey, the appellate courts are unified, but the trial courts are organized into a Chancery Division and a Law Division.
For centuries, through to the 19th century, the common law acknowledged only specific forms of action, and required very careful drafting of the opening pleading (called a writ) to slot into exactly one of them: debt, detinue, covenant, special assumpsit, general assumpsit, trespass, trover, replevin, case (or trespass on the case), and ejectment. To initiate a lawsuit, a pleading had to be drafted to meet myriad technical requirements: correctly categorizing the case into the correct legal pigeonhole (pleading in the alternative was not permitted), and using specific legal terms and phrases that had been traditional for centuries. Under the old common law pleading standards, a suit by a pro se ("for oneself", without a lawyer) party was all but impossible, and there was often considerable procedural jousting at the outset of a case over minor wording issues.
One of the major reforms of the late 19th century and early 20th century was the abolition of common law pleading requirements. A plaintiff can initiate a case by giving the defendant "a short and plain statement" of facts that constitute an alleged wrong. This reform moved the attention of courts from technical scrutiny of words to a more rational consideration of the facts, and opened access to justice far more broadly.
The main alternative to the common law system is the civil law system, which is used in Continental Europe, and most of Central and South America.
The primary contrast between the two systems is the role of written decisions and precedent.
In common law jurisdictions, nearly every case that presents a bona fide disagreement on the law is resolved in a written opinion. The legal reasoning for the decision, known as ratio decidendi, not only determines the court's judgment between the parties, but also stands as precedent for resolving future disputes. In contrast, civil law decisions typically do not include explanatory opinions, and thus no precedent flows from one decision to the next. In common law systems, a single decided case is binding common law (connotation 1) to the same extent as statute or regulation, under the principle of stare decisis. In contrast, in civil law systems, individual decisions have only advisory, not binding effect. In civil law systems, case law only acquires weight when a long series of cases use consistent reasoning, called jurisprudence constante. Civil law lawyers consult case law to obtain their best prediction of how a court will rule, but comparatively, civil law judges are less bound to follow it.
For that reason, statutes in civil law systems are more comprehensive, detailed, and continuously updated, covering all matters capable of being brought before a court.
Common law systems tend to give more weight to separation of powers between the judicial branch and the executive branch. In contrast, civil law systems are typically more tolerant of allowing individual officials to exercise both powers. One example of this contrast is the difference between the two systems in allocation of responsibility between prosecutor and adjudicator.
Common law courts usually use an adversarial system, in which two sides present their cases to a neutral judge. For example, in criminal cases, in adversarial systems, the prosecutor and adjudicator are two separate people. The prosecutor is lodged in the executive branch, and conducts the investigation to locate evidence. That prosecutor presents the evidence to a neutral adjudicator, who makes a decision.
In contrast, in civil law systems, criminal proceedings proceed under an inquisitorial system in which an examining magistrate serves two roles by first developing the evidence and arguments for one side and then the other during the investigation phase. The examining magistrate then presents the dossier detailing his or her findings to the president of the bench that will adjudicate on the case where it has been decided that a trial shall be conducted. Therefore, the president of the bench's view of the case is not neutral and may be biased while conducting the trial after the reading of the dossier. Unlike the common law proceedings, the president of the bench in the inquisitorial system is not merely an umpire and is entitled to directly interview the witnesses or express comments during the trial, as long as he or she does not express his or her view on the guilt of the accused.
The proceeding in the inquisitorial system is conducted essentially in writing. Most of the witnesses will have given evidence in the investigation phase, and such evidence will be contained in the dossier in the form of police reports. In the same way, the accused will already have put his or her case at the investigation phase but will be free to change his or her evidence at trial. Whether the accused pleads guilty or not, a trial will be conducted. Unlike the adversarial system, the conviction and sentence to be served (if any) are delivered by the trial jury together with the president of the trial bench, following their common deliberation.
In contrast, in an adversarial system, on issues of fact, the onus of framing the case rests on the parties, and judges generally decide the case presented to them, rather than acting as active investigators, or actively reframing the issues presented. "In our adversary system, in both civil and criminal cases, in the first instance and on appeal, we follow the principle of party presentation. That is, we rely on the parties to frame the issues for decision and assign to courts the role of neutral arbiter of matters the parties present." This principle applies with force in all issues in criminal matters, and to factual issues: courts seldom engage in fact gathering on their own initiative, but decide facts on the evidence presented (even here, there are exceptions, for "legislative facts" as opposed to "adjudicative facts").
On the other hand, on issues of law, common law courts regularly raise new issues (such as matters of jurisdiction or standing), perform independent research, and reformulate the legal grounds on which to analyze the facts presented to them. The United States Supreme Court regularly decides based on issues raised only in amicus briefs from non-parties. One of the most notable such cases was Erie Railroad v. Tompkins, a 1938 case in which neither party questioned the ruling from the 1842 case Swift v. Tyson that served as the foundation for their arguments, but which led the Supreme Court to overturn Swift during their deliberations. To avoid lack of notice, courts may invite briefing on an issue to ensure adequate notice. However, there are limits—an appeals court may not introduce a theory that contradicts the party's own contentions.
There are many exceptions in both directions. For example, most proceedings before U.S. federal and state agencies are inquisitorial in nature, at least in the initial stages (e.g., a patent examiner, a social security hearing officer, and so on), even though the law to be applied is developed through common law processes.
The role of the legal academy presents a significant "cultural" difference between common law (connotation 2) and civil law jurisdictions. In both systems, treatises compile decisions and state overarching principles that (in the author's opinion) explain the results of the cases. In neither system are treatises considered "law", but the weight given them is nonetheless quite different.
In common law jurisdictions, lawyers and judges tend to use these treatises as only "finding aids" to locate the relevant cases. In common law jurisdictions, scholarly work is seldom cited as authority for what the law is. Chief Justice Roberts noted the "great disconnect between the academy and the profession." When common law courts rely on scholarly work, it is almost always only for factual findings, policy justification, or the history and evolution of the law, but the court's legal conclusion is reached through analysis of relevant statutes and common law, seldom scholarly commentary.
In contrast, in civil law jurisdictions, courts give the writings of law professors significant weight, partly because civil law decisions traditionally were very brief, sometimes no more than a paragraph stating who wins and who loses. The rationale had to come from somewhere else: the academy often filled that role.
The contrast between civil law and common law legal systems has become increasingly blurred, with the growing importance of jurisprudence (similar to case law but not binding) in civil law countries, and the growing importance of statute law and codes in common law countries.
Examples of common law being replaced by statute or codified rule in the United States include criminal law (since 1812, U.S. federal courts and most but not all of the states have held that criminal law must be embodied in statute if the public is to have fair notice), commercial law (the Uniform Commercial Code in the early 1960s), and procedure (the Federal Rules of Civil Procedure in the 1930s and the Federal Rules of Evidence in the 1970s). In each case, however, the statute sets the general principles, while the interstitial common law process determines the scope and application of the statute.
An example of convergence from the other direction is shown in the 1982 decision Srl CILFIT and Lanificio di Gavardo SpA v Ministry of Health (ECLI:EU:C:1982:335), in which the European Court of Justice held that questions it has already answered need not be resubmitted. This showed how a historically distinctly common law principle is used by a court composed of judges (at that time) of essentially civil law jurisdiction.
The former Soviet Bloc and other socialist countries used a socialist law system, although there is controversy as to whether socialist law ever constituted a separate legal system or not.
Much of the Muslim world uses legal systems based on Sharia (also called Islamic law).
Many churches use a system of canon law. The canon law of the Catholic Church influenced the common law during the medieval period through its preservation of Roman law doctrine such as the presumption of innocence.
The common law constitutes the basis of the legal systems of many generally English-speaking countries and Commonwealth countries (except Scotland, which is bijuridical, and Malta). Essentially, every country that was colonised at some time by England, Great Britain, or the United Kingdom uses common law except those that were formerly colonised by other nations, such as Quebec (which follows in part the bijuridical law or civil code of France), South Africa, and Sri Lanka (which follow Roman Dutch law), where the prior civil law system was retained to respect the civil rights of the local colonists. Guyana and Saint Lucia have mixed common law and civil law systems.
The remainder of this section discusses jurisdiction-specific variants, arranged chronologically.
Scotland is often said to use the civil law system, but it has a unique system that combines elements of an uncodified civil law dating back to the Corpus Juris Civilis with an element of its own common law long predating the Treaty of Union with England in 1707 (see Legal institutions of Scotland in the High Middle Ages), founded on the customary laws of the tribes residing there. Historically, Scottish common law differed in that the use of precedent was subject to the courts' seeking to discover the principle that justifies a law rather than searching for an example as a precedent, and principles of natural justice and fairness have always played a role in Scots Law. From the 19th century, the Scottish approach to precedent developed into a stare decisis akin to that already established in England thereby reflecting a narrower, more modern approach to the application of case law in subsequent instances. This is not to say that the substantive rules of the common laws of both countries are the same, but in many matters (particularly those of UK-wide interest), they are similar.
Scotland shares the Supreme Court with England, Wales and Northern Ireland for civil cases; the court's decisions are binding on the jurisdiction from which a case arises but only influential on similar cases arising in Scotland. This has had the effect of converging the law in certain areas. For instance, the modern UK law of negligence is based on Donoghue v Stevenson, a case originating in Paisley, Scotland.
Scotland maintains a separate criminal law system from the rest of the UK, with the High Court of Justiciary being the final court for criminal appeals. The highest court of appeal in civil cases brought in Scotland is now the Supreme Court of the United Kingdom (before October 2009, final appellate jurisdiction lay with the House of Lords).
The centuries-old authority of the common law courts in England to develop law case by case and to apply statute law—"legislating from the bench"—is a traditional function of courts, which was carried over into the U.S. system as an essential component of the judicial power for states. Justice Oliver Wendell Holmes Jr. summarized centuries of history in 1917, "judges do and must legislate" (in the federal courts, only interstitially; in state courts, to the full limits of common law adjudicatory authority).
The original colony of New Netherland was settled by the Dutch and the law was also Dutch. When the English captured pre-existing colonies they continued to allow the local settlers to keep their civil law. However, the Dutch settlers revolted against the English and the colony was recaptured by the Dutch. In 1664, the colony of New York had two distinct legal systems: on Manhattan Island and along the Hudson River, sophisticated courts modeled on those of the Netherlands were resolving disputes learnedly in accordance with Dutch customary law. On Long Island, Staten Island, and in Westchester, on the other hand, English courts were administering a crude, untechnical variant of the common law carried from Puritan New England and practiced without the intercession of lawyers. When the English finally regained control of New Netherland they imposed common law upon all the colonists, including the Dutch. This was problematic, as the patroon system of land holding, based on the feudal system and civil law, continued to operate in the colony until it was abolished in the mid-19th century. New York began a codification of its law in the 19th century. The only part of this codification process that was considered complete is known as the Field Code applying to civil procedure. The influence of Roman-Dutch law continued in the colony well into the late 19th century. The codification of a law of general obligations shows how remnants of the civil law tradition in New York continued on from the Dutch days.
Under Louisiana's codified system, the Louisiana Civil Code, private law—that is, substantive law between private sector parties—is based on principles of law from continental Europe, with some common law influences. These principles derive ultimately from Roman law, transmitted through French law and Spanish law, as the state's current territory intersects the area of North America colonized by Spain and by France. Contrary to popular belief, the Louisiana code does not directly derive from the Napoleonic Code, as the latter was enacted in 1804, one year after the Louisiana Purchase. However, the two codes are similar in many respects due to common roots.
Louisiana's criminal law largely rests on English common law. Louisiana's administrative law is generally similar to the administrative law of the U.S. federal government and other U.S. states. Louisiana's procedural law is generally in line with that of other U.S. states, which in turn is generally based on the U.S. Federal Rules of Civil Procedure.
Historically notable among the Louisiana code's differences from common law is the role of property rights among women, particularly in inheritance gained by widows.
The U.S. state of California has a system based on common law, but it has codified the law in the manner of civil law jurisdictions. The reason for the enactment of the California Codes in the 19th century was to replace a pre-existing system based on Spanish civil law with a system based on common law, similar to that in most other states. California and a number of other Western states, however, have retained the concept of community property derived from civil law. The California courts have treated portions of the codes as an extension of the common-law tradition, subject to judicial development in the same manner as judge-made common law. (Most notably, in the case Li v. Yellow Cab Co., 13 Cal.3d 804 (1975), the California Supreme Court adopted the principle of comparative negligence in the face of a California Civil Code provision codifying the traditional common-law doctrine of contributory negligence.)
The United States federal government (as opposed to the states) has a variant on a common law system. United States federal courts only act as interpreters of statutes and the constitution by elaborating and precisely defining broad statutory language (connotation 1(b) above), but, unlike state courts, do not generally act as an independent source of common law.
Before 1938, the federal courts, like almost all other common law courts, decided the law on any issue where the relevant legislature (either the U.S. Congress or state legislature, depending on the issue) had not acted, by looking to courts in the same system, that is, other federal courts, even on issues of state law, and even where there was no express grant of authority from Congress or the Constitution.
In 1938, the U.S. Supreme Court in Erie Railroad Co. v. Tompkins 304 U.S. 64, 78 (1938), overruled earlier precedent, and held "There is no federal general common law," thus confining the federal courts to act only as interstitial interpreters of law originating elsewhere. E.g., Texas Industries v. Radcliff, 451 U.S. 630 (1981) (without an express grant of statutory authority, federal courts cannot create rules of intuitive justice, for example, a right to contribution from co-conspirators). Post-1938, federal courts deciding issues that arise under state law are required to defer to state court interpretations of state statutes, or reason what a state's highest court would rule if presented with the issue, or to certify the question to the state's highest court for resolution.
Later courts have limited Erie slightly, to create a few situations where United States federal courts are permitted to create federal common law rules without express statutory authority, for example, where a federal rule of decision is necessary to protect uniquely federal interests, such as foreign affairs, or financial instruments issued by the federal government. See, e.g., Clearfield Trust Co. v. United States, 318 U.S. 363 (1943) (giving federal courts the authority to fashion common law rules with respect to issues of federal power, in this case negotiable instruments backed by the federal government); see also International News Service v. Associated Press, 248 U.S. 215 (1918) (creating a cause of action for misappropriation of "hot news" that lacks any statutory grounding); but see National Basketball Association v. Motorola, Inc., 105 F.3d 841, 843–44, 853 (2d Cir. 1997) (noting continued vitality of INS "hot news" tort under New York state law, but leaving open the question of whether it survives under federal law). Except on Constitutional issues, Congress is free to legislatively overrule federal courts' common law.
Most executive branch agencies in the United States federal government have some adjudicatory authority. To a greater or lesser extent, agencies honor their own precedent to ensure consistent results. Agency decision making is governed by the Administrative Procedure Act of 1946.
For example, the National Labor Relations Board issues relatively few regulations, but instead promulgates most of its substantive rules through common law (connotation 1).
The laws of India, Pakistan, and Bangladesh are largely based on English common law because of the long period of British colonial influence during the period of the British Raj.
Ancient India represented a distinct tradition of law, and had a historically independent school of legal theory and practice. The Arthashastra, dating from 400 BCE, and the Manusmriti, from 100 CE, were influential treatises in India, texts that were considered authoritative legal guidance. Manu's central philosophy was tolerance and pluralism, and was cited across Southeast Asia. Early in this period, which finally culminated in the creation of the Gupta Empire, relations with ancient Greece and Rome were not infrequent. The appearance of similar fundamental institutions of international law in various parts of the world shows that they are inherent in international society, irrespective of culture and tradition. Inter-state relations in the pre-Islamic period resulted in clear-cut rules of warfare of a high humanitarian standard, in rules of neutrality, of treaty law, of customary law embodied in religious charters, and in the exchange of embassies of a temporary or semi-permanent character.
When India became part of the British Empire, there was a break in tradition, and Hindu and Islamic law were supplanted by the common law. After the failed rebellion against the British in 1857, the British Parliament took over control of India from the British East India Company, and British India came under the direct rule of the Crown. The British Parliament passed the Government of India Act 1858 to this effect, which set up the structure of British government in India. It established in Britain the office of the Secretary of State for India through whom the Parliament would exercise its rule, along with a Council of India to aid him. It also established the office of the Governor-General of India along with an Executive Council in India, which consisted of high officials of the British Government. As a result, the present judicial system of the country derives largely from the British system and has little correlation to the institutions of the pre-British era.
Post-partition, India retained its common law system. Much of contemporary Indian law shows substantial European and American influence. Legislation first introduced by the British is still in effect in modified form today. During the drafting of the Indian Constitution, laws from Ireland, the United States, Britain, and France were all synthesized to produce a refined set of Indian laws. Indian laws also adhere to the United Nations guidelines on human rights law and environmental law. Certain international trade laws, such as those on intellectual property, are also enforced in India.
Post-partition, Pakistan retained its common law system.
Post-partition, Bangladesh retained its common law system.
Canada has separate federal and provincial legal systems.
Each province and territory is considered a separate jurisdiction with respect to case law. Each has its own procedural law in civil matters, statutorily created provincial courts and superior trial courts with inherent jurisdiction culminating in the Court of Appeal of the province. These Courts of Appeal are then subject to the Supreme Court of Canada in terms of appeal of their decisions.
All but one of the provinces of Canada use a common law system for civil matters (the exception being Quebec, which uses a French-heritage civil law system for issues arising within provincial jurisdiction, such as property ownership and contracts).
Canadian Federal Courts operate under a separate system throughout Canada and deal with a narrower range of subject matter than the superior courts in each province and territory. They hear cases only on subjects assigned to them by federal statutes, such as immigration, intellectual property, judicial review of federal government decisions, and admiralty. The Federal Court of Appeal is the appellate court for federal courts and hears cases in multiple cities; unlike the United States, Canada does not divide its Federal Court of Appeal into appellate circuits.
Canadian federal statutes must use the terminology of both the common law and civil law for civil matters; this is referred to as legislative bijuralism.
Criminal law is uniform throughout Canada. It is based on the federal statutory Criminal Code, which in addition to substance also details procedural law. The administration of justice is the responsibility of the provinces. Canadian criminal law uses a common law system regardless of the province in which a case proceeds.
Nicaragua's legal system is also a mixture of English common law and civil law. This situation was brought about through the influence of British administration of the eastern half of the Mosquito Coast from the mid-17th century until about 1894, the William Walker period from about 1855 through 1857, US interventions/occupations during the period from 1909 to 1933, the influence of US institutions during the Somoza family administrations (1933 through 1979), and the considerable importation between 1979 and the present of US culture and institutions.
Israel has no formal written constitution. Its basic principles are inherited from the law of the British Mandate of Palestine and thus resemble those of British and American law, namely: the role of courts in creating the body of law and the authority of the supreme court in reviewing and if necessary overturning legislative and executive decisions, as well as employing the adversarial system. However, because Israel has no written constitution, basic laws can be changed by a vote of 61 out of 120 votes in the parliament. One of the primary reasons that the Israeli constitution remains unwritten is the fear by whatever party holds power that creating a written constitution, combined with the common-law elements, would severely limit the powers of the Knesset (which, following the doctrine of parliamentary sovereignty, holds near-unlimited power).
Roman Dutch common law is a bijuridical or mixed system of law similar to the common law system in Scotland and Louisiana. Roman Dutch common law jurisdictions include South Africa, Botswana, Lesotho, Namibia, Swaziland, Sri Lanka, and Zimbabwe. Many of these jurisdictions recognise customary law, and in some, such as South Africa, the Constitution requires that the common law be developed in accordance with the Bill of Rights. Roman Dutch common law is a development of Roman Dutch law by courts in the Roman Dutch common law jurisdictions. During the Napoleonic wars the Kingdom of the Netherlands adopted the French code civil in 1809; however, the Dutch colonies in the Cape of Good Hope and Sri Lanka, at the time called Ceylon, were seized by the British to prevent them being used as bases by the French Navy. The system was developed by the courts and spread with the expansion of British colonies in Southern Africa. Roman Dutch common law relies on legal principles set out in Roman law sources such as Justinian's Institutes and Digest, and also on the writing of Dutch jurists of the 17th century such as Grotius and Voet. In practice, the majority of decisions rely on recent precedent.
Ghana follows the English common law tradition which was inherited from the British during her colonisation. Consequently, the laws of Ghana are, for the most part, a modified version of imported law that is continuously adapting to changing socio-economic and political realities of the country. The Bond of 1844 marked the period when the people of Ghana (then Gold Coast) ceded their independence to the British and gave the British judicial authority. Later, the Supreme Court Ordinance of 1876 formally introduced British law, be it the common law or statutory law, in the Gold Coast. Section 14 of the Ordinance formalised the application of the common-law tradition in the country.
Ghana, after independence, did not do away with the common law system inherited from the British, and today it has been enshrined in the 1992 Constitution of the country. Chapter four of Ghana's Constitution, entitled "The Laws of Ghana", has in Article 11(1) the list of laws applicable in the state. This comprises (a) the Constitution; (b) enactments made by or under the authority of the Parliament established by the Constitution; (c) any Orders, Rules and Regulations made by any person or authority under a power conferred by the Constitution; (d) the existing law; and (e) the common law. Thus, the modern-day Constitution of Ghana, like those before it, embraced the English common law by entrenching it in its provisions. The doctrine of judicial precedent, which is based on the principle of stare decisis as applied in England and other pure common law countries, also applies in Ghana.
Edward Coke, a 17th-century Lord Chief Justice of the English Court of Common Pleas and a Member of Parliament (MP), wrote several legal texts that collected and integrated centuries of case law. Lawyers in both England and America learned the law from his Institutes and Reports until the end of the 18th century. His works are still cited by common law courts around the world.
The next definitive historical treatise on the common law is Commentaries on the Laws of England, written by Sir William Blackstone and first published in 1765–1769. Since 1979, a facsimile edition of that first edition has been available in four paper-bound volumes. Today it has been superseded in the English part of the United Kingdom by Halsbury's Laws of England that covers both common and statutory English law.
While he was still on the Massachusetts Supreme Judicial Court, and before being named to the U.S. Supreme Court, Justice Oliver Wendell Holmes Jr. published a short volume called The Common Law, which remains a classic in the field. Unlike Blackstone and the Restatements, Holmes' book only briefly discusses what the law is; rather, Holmes describes the common law process. Law professor John Chipman Gray's The Nature and Sources of the Law, an examination and survey of the common law, is also still commonly read in U.S. law schools.
In the United States, Restatements of various subject matter areas (Contracts, Torts, Judgments, and so on.), edited by the American Law Institute, collect the common law for the area. The ALI Restatements are often cited by American courts and lawyers for propositions of uncodified common law, and are considered highly persuasive authority, just below binding precedential decisions. The Corpus Juris Secundum is an encyclopedia whose main content is a compendium of the common law and its variations throughout the various state jurisdictions.
Scots common law covers matters including murder and theft, and has sources in custom, in legal writings and previous court decisions. The legal writings used are called Institutional Texts and come mostly from the 17th, 18th and 19th centuries. Examples include Craig, Jus Feudale (1655) and Stair, The Institutions of the Law of Scotland (1681).
"title": "Definitions"
},
{
"paragraph_id": 13,
"text": "Courts of equity rely on common law (in the sense of this first connotation) principles of binding precedent.",
"title": "Definitions"
},
{
"paragraph_id": 14,
"text": "In addition, there are several historical (but now archaic) uses of the term that, while no longer current, provide background context that assists in understanding the meaning of \"common law\" today.",
"title": "Definitions"
},
{
"paragraph_id": 15,
"text": "In one usage that is now archaic, but that gives insight into the history of the common law, \"common law\" referred to the pre-Christian system of law, imported by the pre-literate Saxons to England and upheld into their historical times until 1066, when the Norman conquest overthrew the last Saxon king—i.e., before (it was supposed) there was any consistent, written law to be applied.",
"title": "Definitions"
},
{
"paragraph_id": 16,
"text": "\"Common law\" as the term is used today in common law countries contrasts with ius commune. While historically the ius commune became a secure point of reference in continental European legal systems, in England it was not a point of reference at all.",
"title": "Definitions"
},
{
"paragraph_id": 17,
"text": "The English Court of Common Pleas dealt with lawsuits in which the monarch had no interest, i.e., between commoners.",
"title": "Definitions"
},
{
"paragraph_id": 18,
"text": "Black's Law Dictionary, 10th ed., definition 3 is \"General law common to a country as a whole, as opposed to special law that has only local application.\" From at least the 11th century and continuing for several centuries, there were several different circuits in the royal court system, served by itinerant judges who would travel from town to town dispensing the king's justice in \"assizes\". The term \"common law\" was used to describe the law held in common between the circuits and the different stops in each circuit. The more widely a particular law was recognized, the more weight it held, whereas purely local customs were generally subordinate to law recognized in a plurality of jurisdictions.",
"title": "Definitions"
},
{
"paragraph_id": 19,
"text": "As used by non-lawyers in popular culture, the term \"common law\" connotes law based on ancient and unwritten universal custom of the people. The \"ancient unwritten universal custom\" view was the foundation of the first treatises by Blackstone and Coke, and was universal among lawyers and judges from the earliest times to the mid-19th century. However, for 100 years, lawyers and judges have recognized that the \"ancient unwritten universal custom\" view does not accord with the facts of the origin and growth of the law, and it is not held within the legal profession today.",
"title": "Definitions"
},
{
"paragraph_id": 20,
"text": "Under the modern view, \"common law\" is not grounded in \"custom\" or \"ancient usage\", but rather acquires force of law instantly (without the delay implied by the term \"custom\" or \"ancient\") when pronounced by a higher court, because and to the extent the proposition is stated in judicial opinion. From the earliest times through the late 19th century, the dominant theory was that the common law was a pre-existent law or system of rules, a social standard of justice that existed in the habits, customs, and thoughts of the people. Under this older view, the legal profession considered it no part of a judge's duty to make new or change existing law, but only to expound and apply the old. By the early 20th century, largely at the urging of Oliver Wendell Holmes (as discussed throughout this article), this view had fallen into the minority view: Holmes pointed out that the older view worked undesirable and unjust results, and hampered a proper development of the law. In the century since Holmes, the dominant understanding has been that common law \"decisions are themselves law, or rather the rules which the courts lay down in making the decisions constitute law\". Holmes wrote in a 1917 opinion, \"The common law is not a brooding omnipresence in the sky, but the articulate voice of some sovereign or quasi-sovereign that can be identified.\" Among legal professionals (lawyers and judges), the change in understanding occurred in the late 19th and early 20th centuries (as explained later in this article), though lay (non-legal) dictionaries were decades behind in recognizing the change.",
"title": "Definitions"
},
{
"paragraph_id": 21,
"text": "The reality of the modern view, and implausibility of the old \"ancient unwritten universal custom\" view, can be seen in practical operation: under the pre-1870 view, (a) the \"common law\" should have been absolutely static over centuries (but it evolved), (b) jurisdictions could not logically diverge from each other (but nonetheless did and do today), (c) a new decision logically needed to operate retroactively (but did not), and (d) there was no standard to decide which English medieval customs should be \"law\" and which should not. All five tensions resolve under the modern view: (a) the common law evolved to meet the needs of the times (e.g., trial by combat passed out of the law by the 15th century), (b) the common law in different jurisdictions may diverge, (c) new decisions may (but need not) have retroactive operation, and (d) court decisions are effective immediately as they are issued, not years later, or after they become \"custom\", and questions of what \"custom\" might have been at some \"ancient\" time are simply irrelevant.",
"title": "Definitions"
},
{
"paragraph_id": 22,
"text": "People using pseudolegal tactics and arguments have frequently claimed to base their arguments on common law; notably, the radical anti-government sovereign citizens and freemen on the land movements, who deny the legitimacy of their countries' legal systems, base their beliefs on idiosyncratic interpretations of common law. \"Common law\" has also been used as an alibi by groups such as the far-right American Patriot movement for setting up kangaroo courts in order to conduct vigilante actions or intimidate their opponents.",
"title": "Definitions"
},
{
"paragraph_id": 23,
"text": "In a common law jurisdiction several stages of research and analysis are required to determine \"what the law is\" in a given situation. First, one must ascertain the facts. Then, one must locate any relevant statutes and cases. Then one must extract the principles, analogies and statements by various courts of what they consider important to determine how the next court is likely to rule on the facts of the present case. More recent decisions, and decisions of higher courts or legislatures carry more weight than earlier cases and those of lower courts. Finally, one integrates all the lines drawn and reasons given, and determines \"what the law is\". Then, one applies that law to the facts.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 24,
"text": "In practice, common law systems are considerably more complicated than the simplified system described above. The decisions of a court are binding only in a particular jurisdiction, and even within a given jurisdiction, some courts have more power than others. For example, in most jurisdictions, decisions by appellate courts are binding on lower courts in the same jurisdiction, and on future decisions of the same appellate court, but decisions of lower courts are only non-binding persuasive authority. Interactions between common law, constitutional law, statutory law and regulatory law also give rise to considerable complexity.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 25,
"text": "Oliver Wendell Holmes Jr. cautioned that \"the proper derivation of general principles in both common and constitutional law ... arise gradually, in the emergence of a consensus from a multitude of particularized prior decisions\". Justice Cardozo noted the \"common law does not work from pre-established truths of universal and inflexible validity to conclusions derived from them deductively\", but \"[i]ts method is inductive, and it draws its generalizations from particulars\".",
"title": "Basic principles of common law"
},
{
"paragraph_id": 26,
"text": "The common law is more malleable than statutory law. First, common law courts are not absolutely bound by precedent, but can (when extraordinarily good reason is shown) reinterpret and revise the law, without legislative intervention, to adapt to new trends in political, legal and social philosophy. Second, the common law evolves through a series of gradual steps, that gradually works out all the details, so that over a decade or more, the law can change substantially but without a sharp break, thereby reducing disruptive effects. In contrast to common law incrementalism, the legislative process is very difficult to get started, as legislatures tend to delay action until a situation is intolerable. For these reasons, legislative changes tend to be large, jarring and disruptive (sometimes positively, sometimes negatively, and sometimes with unintended consequences).",
"title": "Basic principles of common law"
},
{
"paragraph_id": 27,
"text": "One example of the gradual change that typifies evolution of the common law is the gradual change in liability for negligence. The traditional common law rule through most of the 19th century was that a plaintiff could not recover for a defendant's negligent production or distribution of a harmful instrumentality unless the two were parties to a contract (privity of contract). Thus, only the immediate purchaser could recover for a product defect, and if a part was built up out of parts from parts manufacturers, the ultimate buyer could not recover for injury caused by a defect in the part. In an 1842 English case, Winterbottom v Wright, the postal service had contracted with Wright to maintain its coaches. Winterbottom was a driver for the post. When the coach failed and injured Winterbottom, he sued Wright. The Winterbottom court recognized that there would be \"absurd and outrageous consequences\" if an injured person could sue any person peripherally involved, and knew it had to draw a line somewhere, a limit on the causal connection between the negligent conduct and the injury. The court looked to the contractual relationships, and held that liability would only flow as far as the person in immediate contract (\"privity\") with the negligent party.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 28,
"text": "A first exception to this rule arose in 1852, in the case of Thomas v. Winchester, when New York's highest court held that mislabeling a poison as an innocuous herb, and then selling the mislabeled poison through a dealer who would be expected to resell it, put \"human life in imminent danger\". Thomas relied on this reason to create an exception to the \"privity\" rule. In 1909, New York held in Statler v. Ray Mfg. Co. that a coffee urn manufacturer was liable to a person injured when the urn exploded, because the urn \"was of such a character inherently that, when applied to the purposes for which it was designed, it was liable to become a source of great danger to many people if not carefully and properly constructed\".",
"title": "Basic principles of common law"
},
{
"paragraph_id": 29,
"text": "Yet the privity rule survived. In Cadillac Motor Car Co. v. Johnson (decided in 1915 by the federal appeals court for New York and several neighboring states), the court held that a car owner could not recover for injuries from a defective wheel, when the automobile owner had a contract only with the automobile dealer and not with the manufacturer, even though there was \"no question that the wheel was made of dead and 'dozy' wood, quite insufficient for its purposes\". The Cadillac court was willing to acknowledge that the case law supported exceptions for \"an article dangerous in its nature or likely to become so in the course of the ordinary usage to be contemplated by the vendor\". However, held the Cadillac court, \"one who manufactures articles dangerous only if defectively made, or installed, e.g., tables, chairs, pictures or mirrors hung on the walls, carriages, automobiles, and so on, is not liable to third parties for injuries caused by them, except in case of willful injury or fraud\".",
"title": "Basic principles of common law"
},
{
"paragraph_id": 30,
"text": "Finally, in the famous case of MacPherson v. Buick Motor Co., in 1916, Judge Benjamin Cardozo for New York's highest court pulled a broader principle out of these predecessor cases. The facts were almost identical to Cadillac a year earlier: a wheel from a wheel manufacturer was sold to Buick, to a dealer, to MacPherson, and the wheel failed, injuring MacPherson. Judge Cardozo held:",
"title": "Basic principles of common law"
},
{
"paragraph_id": 31,
"text": "It may be that Statler v. Ray Mfg. Co. have extended the rule of Thomas v. Winchester. If so, this court is committed to the extension. The defendant argues that things imminently dangerous to life are poisons, explosives, deadly weapons—things whose normal function it is to injure or destroy. But whatever the rule in Thomas v. Winchester may once have been, it has no longer that restricted meaning. A scaffold (Devlin v. Smith, supra) is not inherently a destructive instrument. It becomes destructive only if imperfectly constructed. A large coffee urn (Statler v. Ray Mfg. Co., supra) may have within itself, if negligently made, the potency of danger, yet no one thinks of it as an implement whose normal function is destruction. What is true of the coffee urn is equally true of bottles of aerated water (Torgesen v. Schultz, 192 N. Y. 156). We have mentioned only cases in this court. But the rule has received a like extension in our courts of intermediate appeal. In Burke v. Ireland (26 App. Div. 487), in an opinion by CULLEN, J., it was applied to a builder who constructed a defective building; in Kahner v. Otis Elevator Co. (96 App. Div. 169) to the manufacturer of an elevator; in Davies v. Pelham Hod Elevating Co. (65 Hun, 573; affirmed in this court without opinion, 146 N. Y. 363) to a contractor who furnished a defective rope with knowledge of the purpose for which the rope was to be used. We are not required at this time either to approve or to disapprove the application of the rule that was made in these cases. It is enough that they help to characterize the trend of judicial thought. We hold, then, that the principle of Thomas v. Winchester is not limited to poisons, explosives, and things of like nature, to things which in their normal operation are implements of destruction. If the nature of a thing is such that it is reasonably certain to place life and limb in peril when negligently made, it is then a thing of danger. Its nature gives warning of the consequences to be expected. If to the element of danger there is added knowledge that the thing will be used by persons other than the purchaser, and used without new tests then, irrespective of contract, the manufacturer of this thing of danger is under a duty to make it carefully. ... There must be knowledge of a danger, not merely possible, but probable.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 32,
"text": "Cardozo's new \"rule\" exists in no prior case, but is inferrable as a synthesis of the \"thing of danger\" principle stated in them, merely extending it to \"foreseeable danger\" even if \"the purposes for which it was designed\" were not themselves \"a source of great danger\". MacPherson takes some care to present itself as foreseeable progression, not a wild departure. Cardozo continues to adhere to the original principle of Winterbottom, that \"absurd and outrageous consequences\" must be avoided, and he does so by drawing a new line in the last sentence quoted above: \"There must be knowledge of a danger, not merely possible, but probable.\" But while adhering to the underlying principle that some boundary is necessary, MacPherson overruled the prior common law by rendering the formerly dominant factor in the boundary, that is, the privity formality arising out of a contractual relationship between persons, totally irrelevant. Rather, the most important factor in the boundary would be the nature of the thing sold and the foreseeable uses that downstream purchasers would make of the thing.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 33,
"text": "The example of the evolution of the law of negligence in the preceding paragraphs illustrates two crucial principles: (a) The common law evolves, this evolution is in the hands of judges, and judges have \"made law\" for hundreds of years. (b) The reasons given for a decision are often more important in the long run than the outcome in a particular case. This is the reason that judicial opinions are usually quite long, and give rationales and policies that can be balanced with judgment in future cases, rather than the bright-line rules usually embodied in statutes.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 34,
"text": "All law systems rely on written publication of the law, so that it is accessible to all. Common law decisions are published in law reports for use by lawyers, courts and the general public.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 35,
"text": "After the American Revolution, Massachusetts became the first state to establish an official Reporter of Decisions. As newer states needed law, they often looked first to the Massachusetts Reports for authoritative precedents as a basis for their own common law. The United States federal courts relied on private publishers until after the Civil War, and only began publishing as a government function in 1874. West Publishing in Minnesota is the largest private-sector publisher of law reports in the United States. Government publishers typically issue only decisions \"in the raw\", while private sector publishers often add indexing, including references to the key principles of the common law involved, editorial analysis, and similar finding aids.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 36,
"text": "In common law legal systems, the common law is crucial to understanding almost all important areas of law. For example, in England and Wales, in English Canada, and in most states of the United States, the basic law of contracts, torts and property do not exist in statute, but only in common law (though there may be isolated modifications enacted by statute). As another example, the Supreme Court of the United States in 1877, held that a Michigan statute that established rules for solemnization of marriages did not abolish pre-existing common-law marriage, because the statute did not affirmatively require statutory solemnization and was silent as to preexisting common law.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 37,
"text": "In almost all areas of the law (even those where there is a statutory framework, such as contracts for the sale of goods, or the criminal law), legislature-enacted statutes or agency-promulgated regulations generally give only terse statements of general principle, and the fine boundaries and definitions exist only in the interstitial common law. To find out what the precise law is that applies to a particular set of facts, one has to locate precedential decisions on the topic, and reason from those decisions by analogy.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 38,
"text": "In common law jurisdictions (in the sense opposed to \"civil law\"), legislatures operate under the assumption that statutes will be interpreted against the backdrop of the pre-existing common law. As the United States Supreme Court explained in United States v Texas, 507 U.S. 529 (1993):",
"title": "Basic principles of common law"
},
{
"paragraph_id": 39,
"text": "Just as longstanding is the principle that \"[s]tatutes which invade the common law ... are to be read with a presumption favoring the retention of long-established and familiar principles, except when a statutory purpose to the contrary is evident. Isbrandtsen Co. v. Johnson, 343 U.S. 779, 783 (1952); Astoria Federal Savings & Loan Assn. v. Solimino, 501 U.S. 104, 108 (1991). In such cases, Congress does not write upon a clean slate. Astoria, 501 U.S. at 108. In order to abrogate a common-law principle, the statute must \"speak directly\" to the question addressed by the common law. Mobil Oil Corp. v. Higginbotham, 436 U. S. 618, 625 (1978); Milwaukee v. Illinois, 451 U. S. 304, 315 (1981).",
"title": "Basic principles of common law"
},
{
"paragraph_id": 40,
"text": "For example, in most U.S. states, the criminal statutes are primarily codification of pre-existing common law. (Codification is the process of enacting a statute that collects and restates pre-existing law in a single document—when that pre-existing law is common law, the common law remains relevant to the interpretation of these statutes.) In reliance on this assumption, modern statutes often leave a number of terms and fine distinctions unstated—for example, a statute might be very brief, leaving the precise definition of terms unstated, under the assumption that these fine distinctions would be resolved in the future by the courts based upon what they then understand to be the pre-existing common law. (For this reason, many modern American law schools teach the common law of crime as it stood in England in 1789, because that centuries-old English common law is a necessary foundation to interpreting modern criminal statutes.)",
"title": "Basic principles of common law"
},
{
"paragraph_id": 41,
"text": "With the transition from English law, which had common law crimes, to the new legal system under the U.S. Constitution, which prohibited ex post facto laws at both the federal and state level, the question was raised whether there could be common law crimes in the United States. It was settled in the case of United States v. Hudson, which decided that federal courts had no jurisdiction to define new common law crimes, and that there must always be a (constitutionally valid) statute defining the offense and the penalty for it.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 42,
"text": "Still, many states retain selected common law crimes. For example, in Virginia, the definition of the conduct that constitutes the crime of robbery exists only in the common law, and the robbery statute only sets the punishment. Virginia Code section 1-200 establishes the continued existence and vitality of common law principles and provides that \"The common law of England, insofar as it is not repugnant to the principles of the Bill of Rights and Constitution of this Commonwealth, shall continue in full force within the same, and be the rule of decision, except as altered by the General Assembly.\"",
"title": "Basic principles of common law"
},
{
"paragraph_id": 43,
"text": "By contrast to statutory codification of common law, some statutes displace common law, for example to create a new cause of action that did not exist in the common law, or to legislatively overrule the common law. An example is the tort of wrongful death, which allows certain persons, usually a spouse, child or estate, to sue for damages on behalf of the deceased. There is no such tort in English common law; thus, any jurisdiction that lacks a wrongful death statute will not allow a lawsuit for the wrongful death of a loved one. Where a wrongful death statute exists, the compensation or other remedy available is limited to the remedy specified in the statute (typically, an upper limit on the amount of damages). Courts generally interpret statutes that create new causes of action narrowly—that is, limited to their precise terms—because the courts generally recognize the legislature as being supreme in deciding the reach of judge-made law unless such statute should violate some \"second order\" constitutional law provision (cf. judicial activism). This principle is applied more strongly in fields of commercial law (contracts and the like) where predictability is of relatively higher value, and less in torts, where courts recognize a greater responsibility to \"do justice\".",
"title": "Basic principles of common law"
},
{
"paragraph_id": 44,
"text": "Where a tort is rooted in common law, all traditionally recognized damages for that tort may be sued for, whether or not there is mention of those damages in the current statutory law. For instance, a person who sustains bodily injury through the negligence of another may sue for medical costs, pain, suffering, loss of earnings or earning capacity, mental and/or emotional distress, loss of quality of life, disfigurement and more. These damages need not be set forth in statute as they already exist in the tradition of common law. However, without a wrongful death statute, most of them are extinguished upon death.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 45,
"text": "In the United States, the power of the federal judiciary to review and invalidate unconstitutional acts of the federal executive branch is stated in the constitution, Article III sections 1 and 2: \"The judicial Power of the United States, shall be vested in one supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish. ... The judicial Power shall extend to all Cases, in Law and Equity, arising under this Constitution, the Laws of the United States, and Treaties made, or which shall be made, under their Authority\". The first landmark decision on \"the judicial power\" was Marbury v. Madison, 5 U.S. (1 Cranch) 137 (1803). Later cases interpreted the \"judicial power\" of Article III to establish the power of federal courts to consider or overturn any action of Congress or of any state that conflicts with the Constitution.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 46,
"text": "The interactions between decisions of different courts is discussed further in the article on precedent. Further interactions between common law and either statute or regulation are discussed further in the articles on Skidmore deference, Chevron deference, and Auer deference.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 47,
"text": "The United States federal courts are divided into twelve regional circuits, each with a circuit court of appeals (plus a thirteenth, the Court of Appeals for the Federal Circuit, which hears appeals in patent cases and cases against the federal government, without geographic limitation). Decisions of one circuit court are binding on the district courts within the circuit and on the circuit court itself, but are only persuasive authority on sister circuits. District court decisions are not binding precedent at all, only persuasive.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 48,
"text": "Most of the U.S. federal courts of appeal have adopted a rule under which, in the event of any conflict in decisions of panels (most of the courts of appeal almost always sit in panels of three), the earlier panel decision is controlling, and a panel decision may only be overruled by the court of appeals sitting en banc (that is, all active judges of the court) or by a higher court. In these courts, the older decision remains controlling when an issue comes up the third time.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 49,
"text": "Other courts, for example, the Court of Customs and Patent Appeals and the Supreme Court, always sit en banc, and thus the later decision controls. These courts essentially overrule all previous cases in each new case, and older cases survive only to the extent they do not conflict with newer cases. The interpretations of these courts—for example, Supreme Court interpretations of the constitution or federal statutes—are stable only so long as the older interpretation maintains the support of a majority of the court. Older decisions persist through some combination of belief that the old decision is right, and that it is not sufficiently wrong to be overruled.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 50,
"text": "In the jurisdictions of England and Wales and of Northern Ireland, since 2009, the Supreme Court of the United Kingdom has the authority to overrule and unify criminal law decisions of lower courts; it is the final court of appeal for civil law cases in all three of the UK jurisdictions, but not for criminal law cases in Scotland, where the High Court of Justiciary has this power instead (except on questions of law relating to reserved matters such as devolution and human rights). From 1966 to 2009, this power lay with the House of Lords, granted by the Practice Statement of 1966.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 51,
"text": "Canada's federal system, described below, avoids regional variability of federal law by giving national jurisdiction to both layers of appellate courts.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 52,
"text": "The reliance on judicial opinion is a strength of common law systems, and is a significant contributor to the robust commercial systems in the United Kingdom and United States. Because there is reasonably precise guidance on almost every issue, parties (especially commercial parties) can predict whether a proposed course of action is likely to be lawful or unlawful, and have some assurance of consistency. As Justice Brandeis famously expressed it, \"in most matters it is more important that the applicable rule of law be settled than that it be settled right.\" This ability to predict gives more freedom to come close to the boundaries of the law. For example, many commercial contracts are more economically efficient, and create greater wealth, because the parties know ahead of time that the proposed arrangement, though perhaps close to the line, is almost certainly legal. Newspapers, taxpayer-funded entities with some religious affiliation, and political parties can obtain fairly clear guidance on the boundaries within which their freedom of expression rights apply.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 53,
"text": "In contrast, in jurisdictions with very weak respect for precedent, fine questions of law are redetermined anew each time they arise, making consistency and prediction more difficult, and procedures far more protracted than necessary because parties cannot rely on written statements of law as reliable guides. In jurisdictions that do not have a strong allegiance to a large body of precedent, parties have less a priori guidance (unless the written law is very clear and kept updated) and must often leave a bigger \"safety margin\" of unexploited opportunities, and final determinations are reached only after far larger expenditures on legal fees by the parties.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 54,
"text": "This is the reason for the frequent choice of the law of the State of New York in commercial contracts, even when neither entity has extensive contacts with New York—and remarkably often even when neither party has contacts with the United States. Commercial contracts almost always include a \"choice of law clause\" to reduce uncertainty. Somewhat surprisingly, contracts throughout the world (for example, contracts involving parties in Japan, France and Germany, and from most of the other states of the United States) often choose the law of New York, even where the relationship of the parties and transaction to New York is quite attenuated. Because of its history as the United States' commercial center, New York common law has a depth and predictability not (yet) available in any other jurisdictions of the United States. Similarly, American corporations are often formed under Delaware corporate law, and American contracts relating to corporate law issues (merger and acquisitions of companies, rights of shareholders, and so on) include a Delaware choice of law clause, because of the deep body of law in Delaware on these issues. On the other hand, some other jurisdictions have sufficiently developed bodies of law so that parties have no real motivation to choose the law of a foreign jurisdiction (for example, England and Wales, and the state of California), but not yet so fully developed that parties with no relationship to the jurisdiction choose that law. Outside the United States, parties that are in different jurisdictions from each other often choose the law of England and Wales, particularly when the parties are each in former British colonies and members of the Commonwealth. The common theme in all cases is that commercial parties seek predictability and simplicity in their contractual relations, and frequently choose the law of a common law jurisdiction with a well-developed body of common law to achieve that result.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 55,
"text": "Likewise, for litigation of commercial disputes arising out of unpredictable torts (as opposed to the prospective choice of law clauses in contracts discussed in the previous paragraph), certain jurisdictions attract an unusually high fraction of cases, because of the predictability afforded by the depth of decided cases. For example, London is considered the pre-eminent centre for litigation of admiralty cases.",
"title": "Basic principles of common law"
},
{
"paragraph_id": 56,
"text": "This is not to say that common law is better in every situation. For example, civil law can be clearer than case law when the legislature has had the foresight and diligence to address the precise set of facts applicable to a particular situation. For that reason, civil law statutes tend to be somewhat more detailed than statutes written by common law legislatures—but, conversely, that tends to make the statute more difficult to read (the United States tax code is an example).",
"title": "Basic principles of common law"
},
{
"paragraph_id": 57,
"text": "The common law—so named because it was \"common\" to all the king's courts across England—originated in the practices of the courts of the English kings in the centuries following the Norman Conquest in 1066. Prior to the Norman Conquest, much of England's legal business took place in the local folk courts of its various shires and hundreds. A variety of other individual courts also existed across the land: urban boroughs and merchant fairs held their own courts, and large landholders also held their own manorial and seigniorial courts as needed. The degree to which common law drew from earlier Anglo-Saxon traditions such as the jury, ordeals, the penalty of outlawry, and writs – all of which were incorporated into the Norman common law – is still a subject of much discussion. Additionally, the Catholic Church operated its own court system that adjudicated issues of canon law.",
"title": "History"
},
{
"paragraph_id": 58,
"text": "The main sources for the history of the common law in the Middle Ages are the plea rolls and the Year Books. The plea rolls, which were the official court records for the Courts of Common Pleas and King's Bench, were written in Latin. The rolls were made up in bundles by law term: Hilary, Easter, Trinity, and Michaelmas, or winter, spring, summer, and autumn. They are currently deposited in the UK National Archives, by whose permission images of the rolls for the Courts of Common Pleas, King's Bench, and Exchequer of Pleas, from the 13th century to the 17th, can be viewed online at the Anglo-American Legal Tradition site (The O'Quinn Law Library of the University of Houston Law Center).",
"title": "History"
},
{
"paragraph_id": 59,
"text": "The doctrine of precedent developed during the 12th and 13th centuries, as the collective judicial decisions that were based in tradition, custom and precedent.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "The form of reasoning used in common law is known as casuistry or case-based reasoning. The common law, as applied in civil cases (as distinct from criminal cases), was devised as a means of compensating someone for wrongful acts known as torts, including both intentional torts and torts caused by negligence, and as developing the body of law recognizing and regulating contracts. The type of procedure practiced in common law courts is known as the adversarial system; this is also a development of the common law.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "In 1154, Henry II became the first Plantagenet king. Among many achievements, Henry institutionalized common law by creating a unified system of law \"common\" to the country through incorporating and elevating local custom to the national, ending local control and peculiarities, eliminating arbitrary remedies and reinstating a jury system—citizens sworn on oath to investigate reliable criminal accusations and civil claims. The jury reached its verdict through evaluating common local knowledge, not necessarily through the presentation of evidence, a distinguishing factor from today's civil and criminal court systems.",
"title": "History"
},
{
"paragraph_id": 62,
"text": "At the time, royal government centered on the Curia Regis (king's court), the body of aristocrats and prelates who assisted in the administration of the realm and the ancestor of Parliament, the Star Chamber, and Privy Council. Henry II developed the practice of sending judges (numbering around 20 to 30 in the 1180s) from his Curia Regis to hear the various disputes throughout the country, and return to the court thereafter. The king's itinerant justices would generally receive a writ or commission under the great seal. They would then resolve disputes on an ad hoc basis according to what they interpreted the customs to be. The king's judges would then return to London and often discuss their cases and the decisions they made with the other judges. These decisions would be recorded and filed. In time, a rule, known as stare decisis (also commonly known as precedent) developed, whereby a judge would be bound to follow the decision of an earlier judge; he was required to adopt the earlier judge's interpretation of the law and apply the same principles promulgated by that earlier judge if the two cases had similar facts to one another. Once judges began to regard each other's decisions to be binding precedent, the pre-Norman system of local customs and law varying in each locality was replaced by a system that was (at least in theory, though not always in practice) common throughout the whole country, hence the name \"common law\".",
"title": "History"
},
{
"paragraph_id": 63,
"text": "The king's object was to preserve public order, but providing law and order was also extremely profitable–cases on forest use as well as fines and forfeitures can generate \"great treasure\" for the government. Eyres (a Norman French word for judicial circuit, originating from Latin iter) are more than just courts; they would supervise local government, raise revenue, investigate crimes, and enforce feudal rights of the king. There were complaints of the eyre of 1198 reducing the kingdom to poverty and Cornishmen fleeing to escape the eyre of 1233.",
"title": "History"
},
{
"paragraph_id": 64,
"text": "Henry II's creation of a powerful and unified court system, which curbed somewhat the power of canonical (church) courts, brought him (and England) into conflict with the church, most famously with Thomas Becket, the Archbishop of Canterbury. The murder of the Archbishop gave rise to a wave of popular outrage against the King. International pressure on Henry grew, and in May 1172 he negotiated a settlement with the papacy in which the King swore to go on crusade as well as effectively overturned the more controversial clauses of the Constitutions of Clarendon. Henry nevertheless continued to exert influence in any ecclesiastical case which interested him and royal power was exercised more subtly with considerable success.",
"title": "History"
},
{
"paragraph_id": 65,
"text": "The English Court of Common Pleas was established after Magna Carta to try lawsuits between commoners in which the monarch had no interest. Its judges sat in open court in the Great Hall of the king's Palace of Westminster, permanently except in the vacations between the four terms of the Legal year.",
"title": "History"
},
{
"paragraph_id": 66,
"text": "Judge-made common law operated as the primary source of law for several hundred years, before Parliament acquired legislative powers to create statutory law. It is important to understand that common law is the older and more traditional source of law, and legislative power is simply a layer applied on top of the older common law foundation. Since the 12th century, courts have had parallel and co-equal authority to make law—\"legislating from the bench\" is a traditional and essential function of courts, which was carried over into the U.S. system as an essential component of the \"judicial power\" specified by Article III of the U.S. Constitution. Justice Oliver Wendell Holmes Jr. summarized centuries of history in 1917, \"judges do and must legislate.\" In the United States, state courts continue to exercise full common law powers, and create both general common law and interstitial common law. In U.S. federal courts, after Erie R. Co. v. Tompkins, 304 U.S. 64, 78 (1938), the general dividing line is that federal courts can only \"interpret\" to create interstitial common law not exercise general common law powers. However, that authority to \"interpret\" can be an expansive power to \"make law,\" especially on Constitutional issues where the Constitutional text is so terse. There are legitimate debates on how the powers of courts and legislatures should be balanced around \"interpretation.\" However, the view that courts lack law-making power is historically inaccurate and constitutionally unsupportable.",
"title": "History"
},
{
"paragraph_id": 67,
"text": "In England, judges have devised a number of rules as to how to deal with precedent decisions. The early development of case-law in the thirteenth century has been traced to Bracton's On the Laws and Customs of England and led to the yearly compilations of court cases known as Year Books, of which the first extant was published in 1268, the same year that Bracton died. The Year Books are known as the law reports of medieval England, and are a principal source for knowledge of the developing legal doctrines, concepts, and methods in the period from the 13th to the 16th centuries, when the common law developed into recognizable form.",
"title": "History"
},
{
"paragraph_id": 68,
"text": "The term \"common law\" is often used as a contrast to Roman-derived \"civil law\", and the fundamental processes and forms of reasoning in the two are quite different. Nonetheless, there has been considerable cross-fertilization of ideas, while the two traditions and sets of foundational principles remain distinct.",
"title": "History"
},
{
"paragraph_id": 69,
"text": "By the time of the rediscovery of the Roman law in Europe in the 12th and 13th centuries, the common law had already developed far enough to prevent a Roman law reception as it occurred on the continent. However, the first common law scholars, most notably Glanvill and Bracton, as well as the early royal common law judges, had been well accustomed with Roman law. Often, they were clerics trained in the Roman canon law. One of the first and throughout its history one of the most significant treatises of the common law, Bracton's De Legibus et Consuetudinibus Angliae (On the Laws and Customs of England), was heavily influenced by the division of the law in Justinian's Institutes. The impact of Roman law had decreased sharply after the age of Bracton, but the Roman divisions of actions into in rem (typically, actions against a thing or property for the purpose of gaining title to that property; must be filed in a court where the property is located) and in personam (typically, actions directed against a person; these can affect a person's rights and, since a person often owns things, his property too) used by Bracton had a lasting effect and laid the groundwork for a return of Roman law structural concepts in the 18th and 19th centuries. Signs of this can be found in Blackstone's Commentaries on the Laws of England, and Roman law ideas regained importance with the revival of academic law schools in the 19th century. As a result, today, the main systematic divisions of the law into property, contract, and tort (and to some extent unjust enrichment) can be found in the civil law as well as in the common law.",
"title": "History"
},
{
"paragraph_id": 70,
"text": "The first attempt at a comprehensive compilation of centuries of common law was by Lord Chief Justice Edward Coke, in his treatise, Institutes of the Lawes of England in the 17th century.",
"title": "History"
},
{
"paragraph_id": 71,
"text": "The next definitive historical treatise on the common law is Commentaries on the Laws of England, written by Sir William Blackstone and first published in 1765–1769.",
"title": "History"
},
{
"paragraph_id": 72,
"text": "A reception statute is a statutory law adopted as a former British colony becomes independent, by which the new nation adopts (i.e. receives) pre-independence common law, to the extent not explicitly rejected by the legislative body or constitution of the new nation. Reception statutes generally consider the English common law dating prior to independence, and the precedent originating from it, as the default law, because of the importance of using an extensive and predictable body of law to govern the conduct of citizens and businesses in a new state. All U.S. states, with the partial exception of Louisiana, have either implemented reception statutes or adopted the common law by judicial opinion.",
"title": "History"
},
{
"paragraph_id": 73,
"text": "Other examples of reception statutes in the United States, the states of the U.S., Canada and its provinces, and Hong Kong, are discussed in the reception statute article.",
"title": "History"
},
{
"paragraph_id": 74,
"text": "Yet, adoption of the common law in the newly independent nation was not a foregone conclusion, and was controversial. Immediately after the American Revolution, there was widespread distrust and hostility to anything British, and the common law was no exception. Jeffersonians decried lawyers and their common law tradition as threats to the new republic. The Jeffersonians preferred a legislatively enacted civil law under the control of the political process, rather than the common law developed by judges that—by design—were insulated from the political process. The Federalists believed that the common law was the birthright of Independence: after all, the natural rights to \"life, liberty, and the pursuit of happiness\" were the rights protected by common law. Even advocates for the common law approach noted that it was not an ideal fit for the newly independent colonies: judges and lawyers alike were severely hindered by a lack of printed legal materials. Before Independence, the most comprehensive law libraries had been maintained by Tory lawyers, and those libraries vanished with the loyalist expatriation, and the ability to print books was limited. Lawyer (later President) John Adams complained that he \"suffered very much for the want of books\". To bootstrap this most basic need of a common law system—knowable, written law—in 1803, lawyers in Massachusetts donated their books to found a law library. A Jeffersonian newspaper criticized the library, as it would carry forward \"all the old authorities practiced in England for centuries back ... whereby a new system of jurisprudence [will be founded] on the high monarchical system [to] become the Common Law of this Commonwealth... [The library] may hereafter have a very unsocial purpose.\"",
"title": "History"
},
{
"paragraph_id": 75,
"text": "For several decades after independence, English law still exerted influence over American common law—for example, with Byrne v Boadle (1863), which first applied the res ipsa loquitur doctrine.",
"title": "History"
},
{
"paragraph_id": 76,
"text": "Well into the 19th century, ancient maxims played a large role in common law adjudication. Many of these maxims had originated in Roman Law, migrated to England before the introduction of Christianity to the British Isles, and were typically stated in Latin even in English decisions. Many examples are familiar in everyday speech even today, \"One cannot be a judge in one's own cause\" (see Dr. Bonham's Case), rights are reciprocal to obligations, and the like. Judicial decisions and treatises of the 17th and 18th centuries, such at those of Lord Chief Justice Edward Coke, presented the common law as a collection of such maxims.",
"title": "History"
},
{
"paragraph_id": 77,
"text": "Reliance on old maxims and rigid adherence to precedent, no matter how old or ill-considered, came under critical discussion in the late 19th century, starting in the United States. Oliver Wendell Holmes Jr. in his famous article, \"The Path of the Law\", commented, \"It is revolting to have no better reason for a rule of law than that so it was laid down in the time of Henry IV. It is still more revolting if the grounds upon which it was laid down have vanished long since, and the rule simply persists from blind imitation of the past.\" Justice Holmes noted that study of maxims might be sufficient for \"the man of the present\", but \"the man of the future is the man of statistics and the master of economics\". In an 1880 lecture at Harvard, he wrote:",
"title": "History"
},
{
"paragraph_id": 78,
"text": "The life of the law has not been logic; it has been experience. The felt necessities of the time, the prevalent moral and political theories, intuitions of public policy, avowed or unconscious, even the prejudices which judges share with their fellow men, have had a good deal more to do than the syllogism in determining the rules by which men should be governed. The law embodies the story of a nation's development through many centuries, and it cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics.",
"title": "History"
},
{
"paragraph_id": 79,
"text": "In the early 20th century, Louis Brandeis, later appointed to the United States Supreme Court, became noted for his use of policy-driving facts and economics in his briefs, and extensive appendices presenting facts that lead a judge to the advocate's conclusion. By this time, briefs relied more on facts than on Latin maxims.",
"title": "History"
},
{
"paragraph_id": 80,
"text": "Reliance on old maxims is now deprecated. Common law decisions today reflect both precedent and policy judgment drawn from economics, the social sciences, business, decisions of foreign courts, and the like. The degree to which these external factors should influence adjudication is the subject of active debate, but it is indisputable that judges do draw on experience and learning from everyday life, from other fields, and from other jurisdictions.",
"title": "History"
},
{
"paragraph_id": 81,
"text": "As early as the 15th century, it became the practice that litigants who felt they had been cheated by the common law system would petition the King in person. For example, they might argue that an award of damages (at common law (as opposed to equity)) was not sufficient redress for a trespasser occupying their land, and instead request that the trespasser be evicted. From this developed the system of equity, administered by the Lord Chancellor, in the courts of chancery. By their nature, equity and law were frequently in conflict and litigation would frequently continue for years as one court countermanded the other, even though it was established by the 17th century that equity should prevail.",
"title": "History"
},
{
"paragraph_id": 82,
"text": "In England, courts of law (as opposed to equity) were merged with courts of equity by the Judicature Acts of 1873 and 1875, with equity prevailing in case of conflict.",
"title": "History"
},
{
"paragraph_id": 83,
"text": "In the United States, parallel systems of law (providing money damages, with cases heard by a jury upon either party's request) and equity (fashioning a remedy to fit the situation, including injunctive relief, heard by a judge) survived well into the 20th century. The United States federal courts procedurally separated law and equity: the same judges could hear either kind of case, but a given case could only pursue causes in law or in equity, and the two kinds of cases proceeded under different procedural rules. This became problematic when a given case required both money damages and injunctive relief. In 1937, the new Federal Rules of Civil Procedure combined law and equity into one form of action, the \"civil action\". Fed.R.Civ.P. 2. The distinction survives to the extent that issues that were \"common law (as opposed to equity)\" as of 1791 (the date of adoption of the Seventh Amendment) are still subject to the right of either party to request a jury, and \"equity\" issues are decided by a judge.",
"title": "History"
},
{
"paragraph_id": 84,
"text": "The states of Delaware, Illinois, Mississippi, South Carolina, and Tennessee continue to have divided courts of law and courts of chancery, for example, the Delaware Court of Chancery. In New Jersey, the appellate courts are unified, but the trial courts are organized into a Chancery Division and a Law Division.",
"title": "History"
},
{
"paragraph_id": 85,
"text": "For centuries, through to the 19th century, the common law acknowledged only specific forms of action, and required very careful drafting of the opening pleading (called a writ) to slot into exactly one of them: debt, detinue, covenant, special assumpsit, general assumpsit, trespass, trover, replevin, case (or trespass on the case), and ejectment. To initiate a lawsuit, a pleading had to be drafted to meet myriad technical requirements: correctly categorizing the case into the correct legal pigeonhole (pleading in the alternative was not permitted), and using specific legal terms and phrases that had been traditional for centuries. Under the old common law pleading standards, a suit by a pro se (\"for oneself\", without a lawyer) party was all but impossible, and there was often considerable procedural jousting at the outset of a case over minor wording issues.",
"title": "History"
},
{
"paragraph_id": 86,
"text": "One of the major reforms of the late 19th century and early 20th century was the abolition of common law pleading requirements. A plaintiff can initiate a case by giving the defendant \"a short and plain statement\" of facts that constitute an alleged wrong. This reform moved the attention of courts from technical scrutiny of words to a more rational consideration of the facts, and opened access to justice far more broadly.",
"title": "History"
},
{
"paragraph_id": 87,
"text": "The main alternative to the common law system is the civil law system, which is used in Continental Europe, and most of Central and South America.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 88,
"text": "The primary contrast between the two systems is the role of written decisions and precedent.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 89,
"text": "In common law jurisdictions, nearly every case that presents a bona fide disagreement on the law is resolved in a written opinion. The legal reasoning for the decision, known as ratio decidendi, not only determines the court's judgment between the parties, but also stands as precedent for resolving future disputes. In contrast, civil law decisions typically do not include explanatory opinions, and thus no precedent flows from one decision to the next. In common law systems, a single decided case is binding common law (connotation 1) to the same extent as statute or regulation, under the principle of stare decisis. In contrast, in civil law systems, individual decisions have only advisory, not binding effect. In civil law systems, case law only acquires weight when a long series of cases use consistent reasoning, called jurisprudence constante. Civil law lawyers consult case law to obtain their best prediction of how a court will rule, but comparatively, civil law judges are less bound to follow it.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 90,
"text": "For that reason, statutes in civil law systems are more comprehensive, detailed, and continuously updated, covering all matters capable of being brought before a court.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 91,
"text": "Common law systems tend to give more weight to separation of powers between the judicial branch and the executive branch. In contrast, civil law systems are typically more tolerant of allowing individual officials to exercise both powers. One example of this contrast is the difference between the two systems in allocation of responsibility between prosecutor and adjudicator.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 92,
"text": "Common law courts usually use an adversarial system, in which two sides present their cases to a neutral judge. For example, in criminal cases, in adversarial systems, the prosecutor and adjudicator are two separate people. The prosecutor is lodged in the executive branch, and conducts the investigation to locate evidence. That prosecutor presents the evidence to a neutral adjudicator, who makes a decision.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 93,
"text": "In contrast, in civil law systems, criminal proceedings proceed under an inquisitorial system in which an examining magistrate serves two roles by first developing the evidence and arguments for one side and then the other during the investigation phase. The examining magistrate then presents the dossier detailing his or her findings to the president of the bench that will adjudicate on the case where it has been decided that a trial shall be conducted. Therefore, the president of the bench's view of the case is not neutral and may be biased while conducting the trial after the reading of the dossier. Unlike the common law proceedings, the president of the bench in the inquisitorial system is not merely an umpire and is entitled to directly interview the witnesses or express comments during the trial, as long as he or she does not express his or her view on the guilt of the accused.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 94,
"text": "The proceeding in the inquisitorial system is essentially by writing. Most of the witnesses would have given evidence in the investigation phase and such evidence will be contained in the dossier under the form of police reports. In the same way, the accused would have already put his or her case at the investigation phase but he or she will be free to change his or her evidence at trial. Whether the accused pleads guilty or not, a trial will be conducted. Unlike the adversarial system, the conviction and sentence to be served (if any) will be released by the trial jury together with the president of the trial bench, following their common deliberation.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 95,
"text": "In contrast, in an adversarial system, on issues of fact, the onus of framing the case rests on the parties, and judges generally decide the case presented to them, rather than acting as active investigators, or actively reframing the issues presented. \"In our adversary system, in both civil and criminal cases, in the first instance and on appeal, we follow the principle of party presentation. That is, we rely on the parties to frame the issues for decision and assign to courts the role of neutral arbiter of matters the parties present.\" This principle applies with force in all issues in criminal matters, and to factual issues: courts seldom engage in fact gathering on their own initiative, but decide facts on the evidence presented (even here, there are exceptions, for \"legislative facts\" as opposed to \"adjudicative facts\").",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 96,
"text": "On the other hand, on issues of law, common law courts regularly raise new issues (such as matters of jurisdiction or standing), perform independent research, and reformulate the legal grounds on which to analyze the facts presented to them. The United States Supreme Court regularly decides based on issues raised only in amicus briefs from non-parties. One of the most notable such cases was Erie Railroad v. Tompkins, a 1938 case in which neither party questioned the ruling from the 1842 case Swift v. Tyson that served as the foundation for their arguments, but which led the Supreme Court to overturn Swift during their deliberations. To avoid lack of notice, courts may invite briefing on an issue to ensure adequate notice. However, there are limits—an appeals court may not introduce a theory that contradicts the party's own contentions.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 97,
"text": "There are many exceptions in both directions. For example, most proceedings before U.S. federal and state agencies are inquisitorial in nature, at least the initial stages (e.g., a patent examiner, a social security hearing officer, and so on), even though the law to be applied is developed through common law processes.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 98,
"text": "The role of the legal academy presents a significant \"cultural\" difference between common law (connotation 2) and civil law jurisdictions. In both systems, treatises compile decisions and state overarching principles that (in the author's opinion) explain the results of the cases. In neither system are treatises considered \"law\", but the weight given them is nonetheless quite different.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 99,
"text": "In common law jurisdictions, lawyers and judges tend to use these treatises as only \"finding aids\" to locate the relevant cases. In common law jurisdictions, scholarly work is seldom cited as authority for what the law is. Chief Justice Roberts noted the \"great disconnect between the academy and the profession.\" When common law courts rely on scholarly work, it is almost always only for factual findings, policy justification, or the history and evolution of the law, but the court's legal conclusion is reached through analysis of relevant statutes and common law, seldom scholarly commentary.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 100,
"text": "In contrast, in civil law jurisdictions, courts give the writings of law professors significant weight, partly because civil law decisions traditionally were very brief, sometimes no more than a paragraph stating who wins and who loses. The rationale had to come from somewhere else: the academy often filled that role.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 101,
"text": "The contrast between civil law and common law legal systems has become increasingly blurred, with the growing importance of jurisprudence (similar to case law but not binding) in civil law countries, and the growing importance of statute law and codes in common law countries.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 102,
"text": "Examples of common law being replaced by statute or codified rule in the United States include criminal law (since 1812, U.S. federal courts and most but not all of the states have held that criminal law must be embodied in statute if the public is to have fair notice), commercial law (the Uniform Commercial Code in the early 1960s) and procedure (the Federal Rules of Civil Procedure in the 1930s and the Federal Rules of Evidence in the 1970s). But in each case, the statute sets the general principles, but the interstitial common law process determines the scope and application of the statute.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 103,
"text": "An example of convergence from the other direction is shown in the 1982 decision Srl CILFIT and Lanificio di Gavardo SpA v Ministry of Health (ECLI:EU:C:1982:335), in which the European Court of Justice held that questions it has already answered need not be resubmitted. This showed how a historically distinctly common law principle is used by a court composed of judges (at that time) of essentially civil law jurisdiction.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 104,
"text": "The former Soviet Bloc and other socialist countries used a socialist law system, although there is controversy as to whether socialist law ever constituted a separate legal system or not.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 105,
"text": "Much of the Muslim world uses legal systems based on Sharia (also called Islamic law).",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 106,
"text": "Many churches use a system of canon law. The canon law of the Catholic Church influenced the common law during the medieval period through its preservation of Roman law doctrine such as the presumption of innocence.",
"title": "Alternatives to common law systems"
},
{
"paragraph_id": 107,
"text": "The common law constitutes the basis of the legal systems of:",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 108,
"text": "and many other generally English-speaking countries or Commonwealth countries (except Scotland, which is bijuridicial, and Malta). Essentially, every country that was colonised at some time by England, Great Britain, or the United Kingdom uses common law except those that were formerly colonised by other nations, such as Quebec (which follows the bijuridicial law or civil code of France in part), South Africa and Sri Lanka (which follow Roman Dutch law), where the prior civil law system was retained to respect the civil rights of the local colonists. Guyana and Saint Lucia have mixed common law and civil law systems.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 109,
"text": "The remainder of this section discusses jurisdiction-specific variants, arranged chronologically.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 110,
"text": "Scotland is often said to use the civil law system, but it has a unique system that combines elements of an uncodified civil law dating back to the Corpus Juris Civilis with an element of its own common law long predating the Treaty of Union with England in 1707 (see Legal institutions of Scotland in the High Middle Ages), founded on the customary laws of the tribes residing there. Historically, Scottish common law differed in that the use of precedent was subject to the courts' seeking to discover the principle that justifies a law rather than searching for an example as a precedent, and principles of natural justice and fairness have always played a role in Scots Law. From the 19th century, the Scottish approach to precedent developed into a stare decisis akin to that already established in England thereby reflecting a narrower, more modern approach to the application of case law in subsequent instances. This is not to say that the substantive rules of the common laws of both countries are the same, but in many matters (particularly those of UK-wide interest), they are similar.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 111,
"text": "Scotland shares the Supreme Court with England, Wales and Northern Ireland for civil cases; the court's decisions are binding on the jurisdiction from which a case arises but only influential on similar cases arising in Scotland. This has had the effect of converging the law in certain areas. For instance, the modern UK law of negligence is based on Donoghue v Stevenson, a case originating in Paisley, Scotland.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 112,
"text": "Scotland maintains a separate criminal law system from the rest of the UK, with the High Court of Justiciary being the final court for criminal appeals. The highest court of appeal in civil cases brought in Scotland is now the Supreme Court of the United Kingdom (before October 2009, final appellate jurisdiction lay with the House of Lords).",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 113,
"text": "The centuries-old authority of the common law courts in England to develop law case by case and to apply statute law—\"legislating from the bench\"—is a traditional function of courts, which was carried over into the U.S. system as an essential component of the judicial power for states. Justice Oliver Wendell Holmes Jr. summarized centuries of history in 1917, \"judges do and must legislate\" (in the federal courts, only interstitially, in state courts, to the full limits of common law adjudicatory authority).",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 114,
"text": "The original colony of New Netherland was settled by the Dutch and the law was also Dutch. When the English captured pre-existing colonies they continued to allow the local settlers to keep their civil law. However, the Dutch settlers revolted against the English and the colony was recaptured by the Dutch. In 1664, the colony of New York had two distinct legal systems: on Manhattan Island and along the Hudson River, sophisticated courts modeled on those of the Netherlands were resolving disputes learnedly in accordance with Dutch customary law. On Long Island, Staten Island, and in Westchester, on the other hand, English courts were administering a crude, untechnical variant of the common law carried from Puritan New England and practiced without the intercession of lawyers. When the English finally regained control of New Netherland they imposed common law upon all the colonists, including the Dutch. This was problematic, as the patroon system of land holding, based on the feudal system and civil law, continued to operate in the colony until it was abolished in the mid-19th century. New York began a codification of its law in the 19th century. The only part of this codification process that was considered complete is known as the Field Code applying to civil procedure. The influence of Roman-Dutch law continued in the colony well into the late 19th century. The codification of a law of general obligations shows how remnants of the civil law tradition in New York continued on from the Dutch days.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 115,
"text": "Under Louisiana's codified system, the Louisiana Civil Code, private law—that is, substantive law between private sector parties—is based on principles of law from continental Europe, with some common law influences. These principles derive ultimately from Roman law, transmitted through French law and Spanish law, as the state's current territory intersects the area of North America colonized by Spain and by France. Contrary to popular belief, the Louisiana code does not directly derive from the Napoleonic Code, as the latter was enacted in 1804, one year after the Louisiana Purchase. However, the two codes are similar in many respects due to common roots.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 116,
"text": "Louisiana's criminal law largely rests on English common law. Louisiana's administrative law is generally similar to the administrative law of the U.S. federal government and other U.S. states. Louisiana's procedural law is generally in line with that of other U.S. states, which in turn is generally based on the U.S. Federal Rules of Civil Procedure.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 117,
"text": "Historically notable among the Louisiana code's differences from common law is the role of property rights among women, particularly in inheritance gained by widows.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 118,
"text": "The U.S. state of California has a system based on common law, but it has codified the law in the manner of civil law jurisdictions. The reason for the enactment of the California Codes in the 19th century was to replace a pre-existing system based on Spanish civil law with a system based on common law, similar to that in most other states. California and a number of other Western states, however, have retained the concept of community property derived from civil law. The California courts have treated portions of the codes as an extension of the common-law tradition, subject to judicial development in the same manner as judge-made common law. (Most notably, in the case Li v. Yellow Cab Co., 13 Cal.3d 804 (1975), the California Supreme Court adopted the principle of comparative negligence in the face of a California Civil Code provision codifying the traditional common-law doctrine of contributory negligence.)",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 119,
"text": "The United States federal government (as opposed to the states) has a variant on a common law system. United States federal courts only act as interpreters of statutes and the constitution by elaborating and precisely defining broad statutory language (connotation 1(b) above), but, unlike state courts, do not generally act as an independent source of common law.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 120,
"text": "Before 1938, the federal courts, like almost all other common law courts, decided the law on any issue where the relevant legislature (either the U.S. Congress or state legislature, depending on the issue) had not acted, by looking to courts in the same system, that is, other federal courts, even on issues of state law, and even where there was no express grant of authority from Congress or the Constitution.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 121,
"text": "In 1938, the U.S. Supreme Court in Erie Railroad Co. v. Tompkins 304 U.S. 64, 78 (1938), overruled earlier precedent, and held \"There is no federal general common law,\" thus confining the federal courts to act only as interstitial interpreters of law originating elsewhere. E.g., Texas Industries v. Radcliff, 451 U.S. 630 (1981) (without an express grant of statutory authority, federal courts cannot create rules of intuitive justice, for example, a right to contribution from co-conspirators). Post-1938, federal courts deciding issues that arise under state law are required to defer to state court interpretations of state statutes, or reason what a state's highest court would rule if presented with the issue, or to certify the question to the state's highest court for resolution.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 122,
"text": "Later courts have limited Erie slightly, to create a few situations where United States federal courts are permitted to create federal common law rules without express statutory authority, for example, where a federal rule of decision is necessary to protect uniquely federal interests, such as foreign affairs, or financial instruments issued by the federal government. See, e.g., Clearfield Trust Co. v. United States, 318 U.S. 363 (1943) (giving federal courts the authority to fashion common law rules with respect to issues of federal power, in this case negotiable instruments backed by the federal government); see also International News Service v. Associated Press, 248 U.S. 215 (1918) (creating a cause of action for misappropriation of \"hot news\" that lacks any statutory grounding); but see National Basketball Association v. Motorola, Inc., 105 F.3d 841, 843–44, 853 (2d Cir. 1997) (noting continued vitality of INS \"hot news\" tort under New York state law, but leaving open the question of whether it survives under federal law). Except on Constitutional issues, Congress is free to legislatively overrule federal courts' common law.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 123,
"text": "Most executive branch agencies in the United States federal government have some adjudicatory authority. To greater or lesser extent, agencies honor their own precedent to ensure consistent results. Agency decision making is governed by the Administrative Procedure Act of 1946.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 124,
"text": "For example, the National Labor Relations Board issues relatively few regulations, but instead promulgates most of its substantive rules through common law (connotation 1).",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 125,
"text": "The law of India, Pakistan, and Bangladesh are largely based on English common law because of the long period of British colonial influence during the period of the British Raj.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 126,
"text": "Ancient India represented a distinct tradition of law, and had a historically independent school of legal theory and practice. The Arthashastra, dating from 400 BCE and the Manusmriti, from 100 CE, were influential treatises in India, texts that were considered authoritative legal guidance. Manu's central philosophy was tolerance and pluralism, and was cited across Southeast Asia. Early in this period, which finally culminated in the creation of the Gupta Empire, relations with ancient Greece and Rome were not infrequent. The appearance of similar fundamental institutions of international law in various parts of the world show that they are inherent in international society, irrespective of culture and tradition. Inter-State relations in the pre-Islamic period resulted in clear-cut rules of warfare of a high humanitarian standard, in rules of neutrality, of treaty law, of customary law embodied in religious charters, in exchange of embassies of a temporary or semi-permanent character.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 127,
"text": "When India became part of the British Empire, there was a break in tradition, and Hindu and Islamic law were supplanted by the common law. After the failed rebellion against the British in 1857, the British Parliament took over control of India from the British East India Company, and British India came under the direct rule of the Crown. The British Parliament passed the Government of India Act 1858 to this effect, which set up the structure of British government in India. It established in Britain the office of the Secretary of State for India through whom the Parliament would exercise its rule, along with a Council of India to aid him. It also established the office of the Governor-General of India along with an Executive Council in India, which consisted of high officials of the British Government. As a result, the present judicial system of the country derives largely from the British system and has little correlation to the institutions of the pre-British era.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 128,
"text": "Post-partition, India retained its common law system. Much of contemporary Indian law shows substantial European and American influence. Legislation first introduced by the British is still in effect in modified form today. During the drafting of the Indian Constitution, laws from Ireland, the United States, Britain, and France were all synthesized to produce a refined set of Indian laws. Indian laws also adhere to the United Nations guidelines on human rights law and environmental law. Certain international trade laws, such as those on intellectual property, are also enforced in India.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 129,
"text": "Post-partition, Pakistan retained its common law system.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 130,
"text": "Post-partition, Bangladesh retained its common law system.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 131,
"text": "Canada has separate federal and provincial legal systems.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 132,
"text": "Each province and territory is considered a separate jurisdiction with respect to case law. Each has its own procedural law in civil matters, statutorily created provincial courts and superior trial courts with inherent jurisdiction culminating in the Court of Appeal of the province. These Courts of Appeal are then subject to the Supreme Court of Canada in terms of appeal of their decisions.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 133,
"text": "All but one of the provinces of Canada use a common law system for civil matters (the exception being Quebec, which uses a French-heritage civil law system for issues arising within provincial jurisdiction, such as property ownership and contracts).",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 134,
"text": "Canadian Federal Courts operate under a separate system throughout Canada and deal with narrower range of subject matter than superior courts in each province and territory. They only hear cases on subjects assigned to them by federal statutes, such as immigration, intellectual property, judicial review of federal government decisions, and admiralty. The Federal Court of Appeal is the appellate court for federal courts and hears cases in multiple cities; unlike the United States, the Canadian Federal Court of Appeal is not divided into appellate circuits.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 135,
"text": "Canadian federal statutes must use the terminology of both the common law and civil law for civil matters; this is referred to as legislative bijuralism.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 136,
"text": "Criminal law is uniform throughout Canada. It is based on the federal statutory Criminal Code, which in addition to substance also details procedural law. The administration of justice are the responsibilities of the provinces. Canadian criminal law uses a common law system no matter which province a case proceeds.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 137,
"text": "Nicaragua's legal system is also a mixture of the English Common Law and Civil Law. This situation was brought through the influence of British administration of the Eastern half of the Mosquito Coast from the mid-17th century until about 1894, the William Walker period from about 1855 through 1857, US interventions/occupations during the period from 1909 to 1933, the influence of US institutions during the Somoza family administrations (1933 through 1979) and the considerable importation between 1979 and the present of US culture and institutions.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 138,
"text": "Israel has no formal written constitution. Its basic principles are inherited from the law of the British Mandate of Palestine and thus resemble those of British and American law, namely: the role of courts in creating the body of law and the authority of the supreme court in reviewing and if necessary overturning legislative and executive decisions, as well as employing the adversarial system. However, because Israel has no written constitution, basic laws can be changed by a vote of 61 out of 120 votes in the parliament. One of the primary reasons that the Israeli constitution remains unwritten is the fear by whatever party holds power that creating a written constitution, combined with the common-law elements, would severely limit the powers of the Knesset (which, following the doctrine of parliamentary sovereignty, holds near-unlimited power).",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 139,
"text": "Roman Dutch common law is a bijuridical or mixed system of law similar to the common law system in Scotland and Louisiana. Roman Dutch common law jurisdictions include South Africa, Botswana, Lesotho, Namibia, Swaziland, Sri Lanka and Zimbabwe. Many of these jurisdictions recognise customary law, and in some, such as South Africa the Constitution requires that the common law be developed in accordance with the Bill of Rights. Roman Dutch common law is a development of Roman Dutch law by courts in the Roman Dutch common law jurisdictions. During the Napoleonic wars the Kingdom of the Netherlands adopted the French code civil in 1809, however the Dutch colonies in the Cape of Good Hope and Sri Lanka, at the time called Ceylon, were seized by the British to prevent them being used as bases by the French Navy. The system was developed by the courts and spread with the expansion of British colonies in Southern Africa. Roman Dutch common law relies on legal principles set out in Roman law sources such as Justinian's Institutes and Digest, and also on the writing of Dutch jurists of the 17th century such as Grotius and Voet. In practice, the majority of decisions rely on recent precedent.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 140,
"text": "Ghana follows the English common law tradition which was inherited from the British during her colonisation. Consequently, the laws of Ghana are, for the most part, a modified version of imported law that is continuously adapting to changing socio-economic and political realities of the country. The Bond of 1844 marked the period when the people of Ghana (then Gold Coast) ceded their independence to the British and gave the British judicial authority. Later, the Supreme Court Ordinance of 1876 formally introduced British law, be it the common law or statutory law, in the Gold Coast. Section 14 of the Ordinance formalised the application of the common-law tradition in the country.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 141,
"text": "Ghana, after independence, did not do away with the common law system inherited from the British, and today it has been enshrined in the 1992 Constitution of the country. Chapter four of Ghana's Constitution, entitled \"The Laws of Ghana\", has in Article 11(1) the list of laws applicable in the state. This comprises (a) the Constitution; (b) enactments made by or under the authority of the Parliament established by the Constitution; (c) any Orders, Rules and Regulations made by any person or authority under a power conferred by the Constitution; (d) the existing law; and (e) the common law. Thus, the modern-day Constitution of Ghana, like those before it, embraced the English common law by entrenching it in its provisions. The doctrine of judicial precedence which is based on the principle of stare decisis as applied in England and other pure common law countries also applies in Ghana.",
"title": "Common law legal systems in the present day"
},
{
"paragraph_id": 142,
"text": "Edward Coke, a 17th-century Lord Chief Justice of the English Court of Common Pleas and a Member of Parliament (MP), wrote several legal texts that collected and integrated centuries of case law. Lawyers in both England and America learned the law from his Institutes and Reports until the end of the 18th century. His works are still cited by common law courts around the world.",
"title": "Scholarly works"
},
{
"paragraph_id": 143,
"text": "The next definitive historical treatise on the common law is Commentaries on the Laws of England, written by Sir William Blackstone and first published in 1765–1769. Since 1979, a facsimile edition of that first edition has been available in four paper-bound volumes. Today it has been superseded in the English part of the United Kingdom by Halsbury's Laws of England that covers both common and statutory English law.",
"title": "Scholarly works"
},
{
"paragraph_id": 144,
"text": "While he was still on the Massachusetts Supreme Judicial Court, and before being named to the U.S. Supreme Court, Justice Oliver Wendell Holmes Jr. published a short volume called The Common Law, which remains a classic in the field. Unlike Blackstone and the Restatements, Holmes' book only briefly discusses what the law is; rather, Holmes describes the common law process. Law professor John Chipman Gray's The Nature and Sources of the Law, an examination and survey of the common law, is also still commonly read in U.S. law schools.",
"title": "Scholarly works"
},
{
"paragraph_id": 145,
"text": "In the United States, Restatements of various subject matter areas (Contracts, Torts, Judgments, and so on.), edited by the American Law Institute, collect the common law for the area. The ALI Restatements are often cited by American courts and lawyers for propositions of uncodified common law, and are considered highly persuasive authority, just below binding precedential decisions. The Corpus Juris Secundum is an encyclopedia whose main content is a compendium of the common law and its variations throughout the various state jurisdictions.",
"title": "Scholarly works"
},
{
"paragraph_id": 146,
"text": "Scots common law covers matters including murder and theft, and has sources in custom, in legal writings and previous court decisions. The legal writings used are called Institutional Texts and come mostly from the 17th, 18th and 19th centuries. Examples include Craig, Jus Feudale (1655) and Stair, The Institutions of the Law of Scotland (1681).",
"title": "Scholarly works"
}
] | In law, common law is the body of law created by judges and similar quasi-judicial tribunals by virtue of being stated in written opinions. The defining characteristic of common law is that it arises as precedent. Common law courts look to the past decisions of courts to synthesize the legal principles of past cases. Stare decisis, the principle that cases should be decided according to consistent principled rules so that similar facts will yield similar results, lies at the heart of all common law systems. If a court finds that a similar dispute to the present one has been resolved in the past, the court is generally bound to follow the reasoning used in the prior decision. If, however, the court finds that the current dispute is fundamentally distinct from all previous cases, and legislative statutes are either silent or ambiguous on the question, judges have the authority and duty to resolve the issue. The opinion that a common law judge gives agglomerates with past decisions as precedent to bind future judges and litigants, unless overturned by further developments in the law or by subsequent statutory law. The common law, so named because it was "common" to all the king's courts across England, originated in the practices of the courts of the English kings in the centuries following the Norman Conquest in 1066. The British Empire later spread the English legal system to its colonies, many of which retain the common law system today. These common law systems are legal systems that give great weight to judicial precedent, and to the style of reasoning inherited from the English legal system. The term "common law", referring to the body of law made by the judiciary, is often distinguished from statutory law and regulations, which are laws adopted by the legislature and executive respectively. In legal systems that follow the common law, judicial precedent stands in contrast to and on equal footing with statutes. The other major legal system used by countries is the civil law, which codifies its legal principles into legal codes and does not treat judicial opinions as binding. Today, one-third of the world's population lives in common law jurisdictions or in mixed legal systems that combine the common law with the civil law, including Antigua and Barbuda, Australia, Bahamas, Bangladesh, Barbados, Belize, Botswana, Burma, Cameroon, Canada, Cyprus, Dominica, Fiji, Ghana, Grenada, Guyana, Hong Kong, India, Ireland, Israel, Jamaica, Kenya, Liberia, Malaysia, Malta, Marshall Islands, Micronesia, Namibia, Nauru, New Zealand, Nigeria, Pakistan, Palau, Papua New Guinea, Philippines, Sierra Leone, Singapore, South Africa, Sri Lanka, Trinidad and Tobago, the United Kingdom, the United States, and Zimbabwe. | 2001-11-11T14:56:17Z | 2023-12-18T21:55:40Z | [
"Template:Reflist",
"Template:Webarchive",
"Template:Cite AustLII",
"Template:Ssrn",
"Template:Ordered list",
"Template:See also",
"Template:Frcp",
"Template:Cite book",
"Template:Use dmy dates",
"Template:Mdash",
"Template:JSTOR",
"Template:Ussc",
"Template:Citation needed",
"Template:Cite web",
"Template:Harvnb",
"Template:Cite news",
"Template:ISSN",
"Template:Cite speech",
"Template:Blockquote",
"Template:Lang",
"Template:Cite journal",
"Template:Cite BAILII",
"Template:Original research?",
"Template:Law",
"Template:Anchor",
"Template:ECLI",
"Template:Wikiquote",
"Template:Gutenberg",
"Template:Fcn",
"Template:Sfnp",
"Template:Ndash",
"Template:ISBN",
"Template:Cite conference",
"Template:EB9 Poster",
"Template:Authority control",
"Template:Distinguish",
"Template:Verify-inline",
"Template:'\"",
"Template:Doi",
"Template:Citation",
"Template:Sic",
"Template:Short description"
] | https://en.wikipedia.org/wiki/Common_law |
5,255 | Civil law | Civil law may refer to: | [
{
"paragraph_id": 0,
"text": "Civil law may refer to:",
"title": ""
}
] | Civil law may refer to: Civil law, the part of law that concerns private citizens and legal persons
Civil law, or continental law, a legal system originating in continental Europe and based on Roman law
Private law, the branch of law in a civil law legal system that concerns relations among private individuals
Municipal law, the domestic law of a state, as opposed to international law | 2022-09-14T06:58:29Z | [
"Template:Wiktionary",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/Civil_law |
|
5,257 | Court of appeals (disambiguation) | A court of appeals is generally an appellate court.
Court of Appeals may refer to: | [
{
"paragraph_id": 0,
"text": "A court of appeals is generally an appellate court.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Court of Appeals may refer to:",
"title": ""
}
] | A court of appeals is generally an appellate court. Court of Appeals may refer to: Israeli Military Court of Appeals
Corte d'Assise d'Appello (Italy)
Court of Appeals of the Philippines
High Court of Appeals of Turkey
Court of Appeals | 2023-06-12T19:21:17Z | [
"Template:Lang",
"Template:Intitle",
"Template:Disambiguation",
"Template:Wikt"
] | https://en.wikipedia.org/wiki/Court_of_appeals_(disambiguation) |
|
5,259 | Common descent | Common descent is a concept in evolutionary biology applicable when one species is the ancestor of two or more species later in time. According to modern evolutionary biology, all living beings could be descendants of a unique ancestor commonly referred to as the last universal common ancestor (LUCA) of all life on Earth.
Common descent is an effect of speciation, in which multiple species derive from a single ancestral population. The more recent the ancestral population two species have in common, the more closely are they related. The most recent common ancestor of all currently living organisms is the last universal ancestor, which lived about 3.9 billion years ago. The two earliest pieces of evidence for life on Earth are graphite found to be biogenic in 3.7 billion-year-old metasedimentary rocks discovered in western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. All currently living organisms on Earth share a common genetic heritage, though the suggestion of substantial horizontal gene transfer during early evolution has led to questions about the monophyly (single ancestry) of life. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian.
Universal common descent through an evolutionary process was first proposed by the British naturalist Charles Darwin in the concluding sentence of his 1859 book On the Origin of Species:
There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.
The idea that all living things (including things considered non-living by science) are related is a recurring theme in many indigenous worldviews across the world. Later on, in the 1740s, the French mathematician Pierre Louis Maupertuis arrived at the idea that all organisms had a common ancestor, and had diverged through random variation and natural selection. In Essai de cosmologie (1750), Maupertuis noted:
May we not say that, in the fortuitous combination of the productions of Nature, since only those creatures could survive in whose organizations a certain degree of adaptation was present, there is nothing extraordinary in the fact that such adaptation is actually found in all these species which now exist? Chance, one might say, turned out a vast number of individuals; a small proportion of these were organized in such a manner that the animals' organs could satisfy their needs. A much greater number showed neither adaptation nor order; these last have all perished.... Thus the species which we see today are but a small part of all those that a blind destiny has produced.
In 1790, the philosopher Immanuel Kant wrote in Kritik der Urteilskraft (Critique of Judgment) that the similarity of animal forms implies a common original type, and thus a common parent.
In 1794, Charles Darwin's grandfather, Erasmus Darwin asked:
[W]ould it be too bold to imagine, that in the great length of time, since the earth began to exist, perhaps millions of ages before the commencement of the history of mankind, would it be too bold to imagine, that all warm-blooded animals have arisen from one living filament, which the great First Cause endued with animality, with the power of acquiring new parts attended with new propensities, directed by irritations, sensations, volitions, and associations; and thus possessing the faculty of continuing to improve by its own inherent activity, and of delivering down those improvements by generation to its posterity, world without end?
Charles Darwin's views about common descent, as expressed in On the Origin of Species, were that it was probable that there was only one progenitor for all life forms:
Therefore I should infer from analogy that probably all the organic beings which have ever lived on this earth have descended from some one primordial form, into which life was first breathed.
But he precedes that remark by, "Analogy would lead me one step further, namely, to the belief that all animals and plants have descended from some one prototype. But analogy may be a deceitful guide." And in the subsequent edition, he asserts rather,
"We do not know all the possible transitional gradations between the simplest and the most perfect organs; it cannot be pretended that we know all the varied means of Distribution during the long lapse of years, or that we know how imperfect the Geological Record is. Grave as these several difficulties are, in my judgment they do not overthrow the theory of descent from a few created forms with subsequent modification".
Common descent was widely accepted amongst the scientific community after Darwin's publication. In 1907, Vernon Kellogg commented that "practically no naturalists of position and recognized attainment doubt the theory of descent."
In 2008, biologist T. Ryan Gregory noted that:
No reliable observation has ever been found to contradict the general notion of common descent. It should come as no surprise, then, that the scientific community at large has accepted evolutionary descent as a historical reality since Darwin’s time and considers it among the most reliably established and fundamentally important facts in all of science.
All known forms of life are based on the same fundamental biochemical organization: genetic information encoded in DNA, transcribed into RNA, through the effect of protein- and RNA-enzymes, then translated into proteins by (highly similar) ribosomes, with ATP, NADPH and others as energy sources. Analysis of small sequence differences in widely shared substances such as cytochrome c further supports universal common descent. Some 23 proteins are found in all organisms, serving as enzymes carrying out core functions like DNA replication. The fact that only one such set of enzymes exists is convincing evidence of a single ancestry. 6,331 genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian.
The genetic code (the "translation table" according to which DNA information is translated into amino acids, and hence proteins) is nearly identical for all known lifeforms, from bacteria and archaea to animals and plants. The universality of this code is generally regarded by biologists as definitive evidence in favor of universal common descent.
The way that codons (DNA triplets) are mapped to amino acids seems to be strongly optimised. Richard Egel argues that in particular the hydrophobic (non-polar) side-chains are well organised, suggesting that these enabled the earliest organisms to create peptides with water-repelling regions able to support the essential electron exchange (redox) reactions for energy transfer.
Similarities which have no adaptive relevance cannot be explained by convergent evolution, and therefore they provide compelling support for universal common descent. Such evidence has come from two areas: amino acid sequences and DNA sequences. Proteins with the same three-dimensional structure need not have identical amino acid sequences; any irrelevant similarity between the sequences is evidence for common descent. In certain cases, there are several codons (DNA triplets) that code redundantly for the same amino acid. Since many species use the same codon at the same place to specify an amino acid that can be represented by more than one codon, that is evidence for their sharing a recent common ancestor. Had the amino acid sequences come from different ancestors, they would have been coded for by any of the redundant codons, and since the correct amino acids would already have been in place, natural selection would not have driven any change in the codons, however much time was available. Genetic drift could change the codons, but it would be extremely unlikely to make all the redundant codons in a whole sequence match exactly across multiple lineages. Similarly, shared nucleotide sequences, especially where these are apparently neutral such as the positioning of introns and pseudogenes, provide strong evidence of common ancestry.
Biologists often point to the universality of many aspects of cellular life as supportive evidence to the more compelling evidence listed above. These similarities include the energy carrier adenosine triphosphate (ATP), and the fact that all amino acids found in proteins are left-handed. It is, however, possible that these similarities resulted because of the laws of physics and chemistry - rather than through universal common descent - and therefore resulted in convergent evolution. In contrast, there is evidence for homology of the central subunits of transmembrane ATPases throughout all living organisms, especially how the rotating elements are bound to the membrane. This supports the assumption of a LUCA as a cellular organism, although primordial membranes may have been semipermeable and evolved later to the membranes of modern bacteria, and on a second path to those of modern archaea also.
Another important piece of evidence is from detailed phylogenetic trees (i.e., "genealogic trees" of species) mapping out the proposed divisions and common ancestors of all living species. In 2010, Douglas L. Theobald published a statistical analysis of available genetic data, mapping them to phylogenetic trees, that gave "strong quantitative support, by a formal test, for the unity of life."
Traditionally, these trees have been built using morphological methods, such as appearance, embryology, etc. Recently, it has been possible to construct these trees using molecular data, based on similarities and differences between genetic and protein sequences. All these methods produce essentially similar results, even though most genetic variation has no influence over external morphology. That phylogenetic trees based on different types of information agree with each other is strong evidence of a real underlying common descent.
Theobald noted that substantial horizontal gene transfer could have occurred during early evolution. Bacteria today remain capable of gene exchange between distantly-related lineages. This weakens the basic assumption of phylogenetic analysis, that similarity of genomes implies common ancestry, because sufficient gene exchange would allow lineages to share much of their genome whether or not they shared an ancestor (monophyly). This has led to questions about the single ancestry of life. However, biologists consider it very unlikely that completely unrelated proto-organisms could have exchanged genes, as their different coding mechanisms would have resulted only in garble rather than functioning systems. Later, however, many organisms all derived from a single ancestor could readily have shared genes that all worked in the same way, and it appears that they have.
If early organisms had been driven by the same environmental conditions to evolve similar biochemistry convergently, they might independently have acquired similar genetic sequences. Theobald's "formal test" was accordingly criticised by Takahiro Yonezawa and colleagues for not including consideration of convergence. They argued that Theobald's test was insufficient to distinguish between the competing hypotheses. Theobald has defended his method against this claim, arguing that his tests distinguish between phylogenetic structure and mere sequence similarity. Therefore, Theobald argued, his results show that "real universally conserved proteins are homologous."
The possibility is mentioned, above, that all living organisms may be descended from an original single-celled organism with a DNA genome, and that this implies a single origin for life. Although such a universal common ancestor may have existed, such a complex entity is unlikely to have arisen spontaneously from non-life and thus a cell with a DNA genome cannot reasonably be regarded as the “origin” of life. To understand the “origin” of life, it has been proposed that DNA based cellular life descended from relatively simple pre-cellular self-replicating RNA molecules able to undergo natural selection. During the course of evolution, this RNA world was replaced by the evolutionary emergence of the DNA world. A world of independently self-replicating RNA genomes apparently no longer exists (RNA viruses are dependent on host cells with DNA genomes). Because the RNA world is apparently gone, it is not clear how scientific evidence could be brought to bear on the question of whether there was a single “origin” of life event from which all life descended. | [
{
"paragraph_id": 0,
"text": "Common descent is a concept in evolutionary biology applicable when one species is the ancestor of two or more species later in time. According to modern evolutionary biology, all living beings could be descendants of a unique ancestor commonly referred to as the last universal common ancestor (LUCA) of all life on Earth.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Common descent is an effect of speciation, in which multiple species derive from a single ancestral population. The more recent the ancestral population two species have in common, the more closely are they related. The most recent common ancestor of all currently living organisms is the last universal ancestor, which lived about 3.9 billion years ago. The two earliest pieces of evidence for life on Earth are graphite found to be biogenic in 3.7 billion-year-old metasedimentary rocks discovered in western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. All currently living organisms on Earth share a common genetic heritage, though the suggestion of substantial horizontal gene transfer during early evolution has led to questions about the monophyly (single ancestry) of life. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Universal common descent through an evolutionary process was first proposed by the British naturalist Charles Darwin in the concluding sentence of his 1859 book On the Origin of Species:",
"title": ""
},
{
"paragraph_id": 3,
"text": "There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The idea that all living things (including things considered non-living by science) are related is a recurring theme in many indigenous worldviews across the world. Later on, in the 1740s, the French mathematician Pierre Louis Maupertuis arrived at the idea that all organisms had a common ancestor, and had diverged through random variation and natural selection. In Essai de cosmologie (1750), Maupertuis noted:",
"title": "History"
},
{
"paragraph_id": 5,
"text": "May we not say that, in the fortuitous combination of the productions of Nature, since only those creatures could survive in whose organizations a certain degree of adaptation was present, there is nothing extraordinary in the fact that such adaptation is actually found in all these species which now exist? Chance, one might say, turned out a vast number of individuals; a small proportion of these were organized in such a manner that the animals' organs could satisfy their needs. A much greater number showed neither adaptation nor order; these last have all perished.... Thus the species which we see today are but a small part of all those that a blind destiny has produced.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In 1790, the philosopher Immanuel Kant wrote in Kritik der Urteilskraft (Critique of Judgment) that the similarity of animal forms implies a common original type, and thus a common parent.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In 1794, Charles Darwin's grandfather, Erasmus Darwin asked:",
"title": "History"
},
{
"paragraph_id": 8,
"text": "[W]ould it be too bold to imagine, that in the great length of time, since the earth began to exist, perhaps millions of ages before the commencement of the history of mankind, would it be too bold to imagine, that all warm-blooded animals have arisen from one living filament, which the great First Cause endued with animality, with the power of acquiring new parts attended with new propensities, directed by irritations, sensations, volitions, and associations; and thus possessing the faculty of continuing to improve by its own inherent activity, and of delivering down those improvements by generation to its posterity, world without end?",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Charles Darwin's views about common descent, as expressed in On the Origin of Species, were that it was probable that there was only one progenitor for all life forms:",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Therefore I should infer from analogy that probably all the organic beings which have ever lived on this earth have descended from some one primordial form, into which life was first breathed.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "But he precedes that remark by, \"Analogy would lead me one step further, namely, to the belief that all animals and plants have descended from some one prototype. But analogy may be a deceitful guide.\" And in the subsequent edition, he asserts rather,",
"title": "History"
},
{
"paragraph_id": 12,
"text": "\"We do not know all the possible transitional gradations between the simplest and the most perfect organs; it cannot be pretended that we know all the varied means of Distribution during the long lapse of years, or that we know how imperfect the Geological Record is. Grave as these several difficulties are, in my judgment they do not overthrow the theory of descent from a few created forms with subsequent modification\".",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Common descent was widely accepted amongst the scientific community after Darwin's publication. In 1907, Vernon Kellogg commented that \"practically no naturalists of position and recognized attainment doubt the theory of descent.\"",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In 2008, biologist T. Ryan Gregory noted that:",
"title": "History"
},
{
"paragraph_id": 15,
"text": "No reliable observation has ever been found to contradict the general notion of common descent. It should come as no surprise, then, that the scientific community at large has accepted evolutionary descent as a historical reality since Darwin’s time and considers it among the most reliably established and fundamentally important facts in all of science.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "All known forms of life are based on the same fundamental biochemical organization: genetic information encoded in DNA, transcribed into RNA, through the effect of protein- and RNA-enzymes, then translated into proteins by (highly similar) ribosomes, with ATP, NADPH and others as energy sources. Analysis of small sequence differences in widely shared substances such as cytochrome c further supports universal common descent. Some 23 proteins are found in all organisms, serving as enzymes carrying out core functions like DNA replication. The fact that only one such set of enzymes exists is convincing evidence of a single ancestry. 6,331 genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian.",
"title": "Evidence"
},
{
"paragraph_id": 17,
"text": "The genetic code (the \"translation table\" according to which DNA information is translated into amino acids, and hence proteins) is nearly identical for all known lifeforms, from bacteria and archaea to animals and plants. The universality of this code is generally regarded by biologists as definitive evidence in favor of universal common descent.",
"title": "Evidence"
},
{
"paragraph_id": 18,
"text": "The way that codons (DNA triplets) are mapped to amino acids seems to be strongly optimised. Richard Egel argues that in particular the hydrophobic (non-polar) side-chains are well organised, suggesting that these enabled the earliest organisms to create peptides with water-repelling regions able to support the essential electron exchange (redox) reactions for energy transfer.",
"title": "Evidence"
},
{
"paragraph_id": 19,
"text": "Similarities which have no adaptive relevance cannot be explained by convergent evolution, and therefore they provide compelling support for universal common descent. Such evidence has come from two areas: amino acid sequences and DNA sequences. Proteins with the same three-dimensional structure need not have identical amino acid sequences; any irrelevant similarity between the sequences is evidence for common descent. In certain cases, there are several codons (DNA triplets) that code redundantly for the same amino acid. Since many species use the same codon at the same place to specify an amino acid that can be represented by more than one codon, that is evidence for their sharing a recent common ancestor. Had the amino acid sequences come from different ancestors, they would have been coded for by any of the redundant codons, and since the correct amino acids would already have been in place, natural selection would not have driven any change in the codons, however much time was available. Genetic drift could change the codons, but it would be extremely unlikely to make all the redundant codons in a whole sequence match exactly across multiple lineages. Similarly, shared nucleotide sequences, especially where these are apparently neutral such as the positioning of introns and pseudogenes, provide strong evidence of common ancestry.",
"title": "Evidence"
},
{
"paragraph_id": 20,
"text": "Biologists often point to the universality of many aspects of cellular life as supportive evidence to the more compelling evidence listed above. These similarities include the energy carrier adenosine triphosphate (ATP), and the fact that all amino acids found in proteins are left-handed. It is, however, possible that these similarities resulted because of the laws of physics and chemistry - rather than through universal common descent - and therefore resulted in convergent evolution. In contrast, there is evidence for homology of the central subunits of transmembrane ATPases throughout all living organisms, especially how the rotating elements are bound to the membrane. This supports the assumption of a LUCA as a cellular organism, although primordial membranes may have been semipermeable and evolved later to the membranes of modern bacteria, and on a second path to those of modern archaea also.",
"title": "Evidence"
},
{
"paragraph_id": 21,
"text": "Another important piece of evidence is from detailed phylogenetic trees (i.e., \"genealogic trees\" of species) mapping out the proposed divisions and common ancestors of all living species. In 2010, Douglas L. Theobald published a statistical analysis of available genetic data, mapping them to phylogenetic trees, that gave \"strong quantitative support, by a formal test, for the unity of life.\"",
"title": "Evidence"
},
{
"paragraph_id": 22,
"text": "Traditionally, these trees have been built using morphological methods, such as appearance, embryology, etc. Recently, it has been possible to construct these trees using molecular data, based on similarities and differences between genetic and protein sequences. All these methods produce essentially similar results, even though most genetic variation has no influence over external morphology. That phylogenetic trees based on different types of information agree with each other is strong evidence of a real underlying common descent.",
"title": "Evidence"
},
{
"paragraph_id": 23,
"text": "Theobald noted that substantial horizontal gene transfer could have occurred during early evolution. Bacteria today remain capable of gene exchange between distantly-related lineages. This weakens the basic assumption of phylogenetic analysis, that similarity of genomes implies common ancestry, because sufficient gene exchange would allow lineages to share much of their genome whether or not they shared an ancestor (monophyly). This has led to questions about the single ancestry of life. However, biologists consider it very unlikely that completely unrelated proto-organisms could have exchanged genes, as their different coding mechanisms would have resulted only in garble rather than functioning systems. Later, however, many organisms all derived from a single ancestor could readily have shared genes that all worked in the same way, and it appears that they have.",
"title": "Objections"
},
{
"paragraph_id": 24,
"text": "If early organisms had been driven by the same environmental conditions to evolve similar biochemistry convergently, they might independently have acquired similar genetic sequences. Theobald's \"formal test\" was accordingly criticised by Takahiro Yonezawa and colleagues for not including consideration of convergence. They argued that Theobald's test was insufficient to distinguish between the competing hypotheses. Theobald has defended his method against this claim, arguing that his tests distinguish between phylogenetic structure and mere sequence similarity. Therefore, Theobald argued, his results show that \"real universally conserved proteins are homologous.\"",
"title": "Objections"
},
{
"paragraph_id": 25,
"text": "The possibility is mentioned, above, that all living organisms may be descended from an original single-celled organism with a DNA genome, and that this implies a single origin for life. Although such a universal common ancestor may have existed, such a complex entity is unlikely to have arisen spontaneously from non-life and thus a cell with a DNA genome cannot reasonably be regarded as the “origin” of life. To understand the “origin” of life, it has been proposed that DNA based cellular life descended from relatively simple pre-cellular self-replicating RNA molecules able to undergo natural selection. During the course of evolution, this RNA world was replaced by the evolutionary emergence of the DNA world. A world of independently self-replicating RNA genomes apparently no longer exists (RNA viruses are dependent on host cells with DNA genomes). Because the RNA world is apparently gone, it is not clear how scientific evidence could be brought to bear on the question of whether there was a single “origin” of life event from which all life descended.",
"title": "Objections"
}
] | Common descent is a concept in evolutionary biology applicable when one species is the ancestor of two or more species later in time. According to modern evolutionary biology, all living beings could be descendants of a unique ancestor commonly referred to as the last universal common ancestor (LUCA) of all life on Earth. Common descent is an effect of speciation, in which multiple species derive from a single ancestral population. The more recent the ancestral population two species have in common, the more closely are they related. The most recent common ancestor of all currently living organisms is the last universal ancestor, which lived about 3.9 billion years ago. The two earliest pieces of evidence for life on Earth are graphite found to be biogenic in 3.7 billion-year-old metasedimentary rocks discovered in western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. All currently living organisms on Earth share a common genetic heritage, though the suggestion of substantial horizontal gene transfer during early evolution has led to questions about the monophyly of life. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian. Universal common descent through an evolutionary process was first proposed by the British naturalist Charles Darwin in the concluding sentence of his 1859 book On the Origin of Species: | 2001-03-14T04:32:06Z | 2023-12-18T21:36:27Z | [
"Template:See also",
"Template:Efn",
"Template:Further",
"Template:PhylomapB",
"Template:Cite journal",
"Template:Main",
"Template:Portal bar",
"Template:Evolution",
"Template:Refend",
"Template:Notelist",
"Template:Cite news",
"Template:Redirect",
"Template:Evolutionary biology",
"Template:Cite book",
"Template:Reflist",
"Template:Refbegin",
"Template:Cbignore",
"Template:Cite magazine",
"Template:Blockquote",
"Template:Smallcaps",
"Template:Quantify",
"Template:Harvnb",
"Template:ISBN",
"Template:Cite web",
"Template:Origin of life",
"Template:Short description",
"Template:For",
"Template:Internet Archive"
] | https://en.wikipedia.org/wiki/Common_descent |
5,261 | Celtic music | Celtic music is a broad grouping of music genres that evolved out of the folk music traditions of the Celtic people of Northwestern Europe (the modern Celtic nations). It refers to both orally-transmitted traditional music and recorded music and the styles vary considerably to include everything from traditional music to a wide range of hybrids.
Celtic music means two things mainly. First, it is the music of the people that identify themselves as Celts. Secondly, it refers to whatever qualities may be unique to the music of the Celtic nations. Many notable Celtic musicians such as Alan Stivell and Paddy Moloney claim that the different Celtic music genres have a lot in common.
The following melodic practices may be used widely across the different variants of Celtic music:
These two latter usage patterns may simply be remnants of formerly widespread melodic practices.
Often, the term Celtic music is applied to the music of Ireland and Scotland because both lands have produced well-known distinctive styles which actually have genuine commonality and clear mutual influences. The definition is further complicated by the fact that Irish independence has allowed Ireland to promote 'Celtic' music as a specifically Irish product. However, these are modern geographical references to a people who share a common Celtic ancestry and consequently, a common musical heritage.
These styles are known because of the importance of Irish and Scottish people in the English speaking world, especially in the United States, where they had a profound impact on American music, particularly bluegrass and country music. The music of Wales, Cornwall, the Isle of Man, Brittany, Galician traditional music (Spain) and music of Portugal are also considered Celtic music, the tradition being particularly strong in Brittany, where Celtic festivals large and small take place throughout the year, and in Wales, where the ancient eisteddfod tradition has been revived and flourishes. Additionally, the musics of ethnically Celtic peoples abroad are vibrant, especially in Canada and the United States. In Canada the provinces of Atlantic Canada are known for being a home of Celtic music, most notably on the islands of Newfoundland, Cape Breton and Prince Edward Island. The traditional music of Atlantic Canada is heavily influenced by the Irish, Scottish and Acadian ethnic makeup of much of the region's communities. In some parts of Atlantic Canada, such as Newfoundland, Celtic music is as or more popular than in the old country. Further, some older forms of Celtic music that are rare in Scotland and Ireland today, such as the practice of accompanying a fiddle with a piano, or the Gaelic spinning songs of Cape Breton remain common in the Maritimes. Much of the music of this region is Celtic in nature, but originates in the local area and celebrates the sea, seafaring, fishing and other primary industries.
Instruments associated with Celtic Music include the Celtic harp, uilleann pipes or Great Highland bagpipe, fiddle, tin whistle, flute, bodhrán, bones, concertina, accordion and a recent addition, the Irish bouzouki.
In Celtic Music: A Complete Guide, June Skinner Sawyers acknowledges six Celtic nationalities divided into two groups according to their linguistic heritage. The Q-Celtic nationalities are the Irish, Scottish and Manx peoples, while the P-Celtic groups are the Cornish, Bretons and Welsh peoples. Musician Alan Stivell uses a similar dichotomy, between the Gaelic (Irish/Scottish/Manx) and the Brythonic (Breton/Welsh/Cornish) branches, which differentiate "mostly by the extended range (sometimes more than two octaves) of Irish and Scottish melodies and the closed range of Breton and Welsh melodies (often reduced to a half-octave), and by the frequent use of the pure pentatonic scale in Gaelic music."
There is also tremendous variation between Celtic regions. Ireland, Scotland, Wales, Cornwall, and Brittany have living traditions of language and music, and there has been a recent major revival of interest in Celtic heritage in the Isle of Man. Galicia has a Celtic language revival movement to revive the Q-Celtic Gallaic language used into Roman times, which, unlike Celtiberian, is not an attested language. A Brythonic language may have been spoken in parts of Galicia and Asturias into early Medieval times, brought by Britons fleeing the Anglo-Saxon invasions via Brittany, but here again there are several hypotheses and very few traces of it: archaeological evidence, linguistic evidence and documents are all lacking. The Romance language currently spoken in Galicia, Galician (Galego), is closely related to the Portuguese language used mainly in Brazil and Portugal, and in many ways it is closer to Latin than other Romance languages. Galician music is claimed to be Celtic. The same is true of the music of Asturias, Cantabria, and that of Northern Portugal (some say even traditional music from Central Portugal can be labeled Celtic).
Breton artist Alan Stivell was one of the earliest musicians to use the words Celtic and Keltia in his marketing materials, starting in the early 1960s as part of the worldwide folk music revival of that era; the term quickly caught on with other artists worldwide. Today, the genre is well established and highly diverse.
There are musical genres and styles specific to each Celtic country, due in part to the influence of individual song traditions and the characteristics of specific languages:
The modern Celtic music scene involves a large number of music festivals, as it has traditionally. Some of the most prominent festivals focused solely on music include:
The oldest musical tradition which fits under the label of Celtic fusion originated in the rural American south in the early colonial period and incorporated English, Scottish, Irish, Welsh, German, and African influences. Variously referred to as roots music, American folk music, or old-time music, this tradition has exerted a strong influence on all forms of American music, including country, blues, and rock and roll. In addition to its lasting effects on other genres, it marked the first modern large-scale mixing of musical traditions from multiple ethnic and religious communities within the Celtic diaspora.
In the 1960s several bands put forward modern adaptations of Celtic music pulling influences from several of the Celtic nations at once to create a modern pan-celtic sound. A few of those include bagadoù (Breton pipe bands), Fairport Convention, Pentangle, Steeleye Span and Horslips.
In the 1970s Clannad made their mark initially in the folk and traditional scene, and then subsequently went on to bridge the gap between traditional Celtic and pop music in the 1980s and 1990s, incorporating elements from new-age, smooth jazz, and folk rock. Traces of Clannad's legacy can be heard in the music of many artists, including Altan, Anúna, Capercaillie, the Corrs, Dexys Midnight Runners, Enya, Loreena McKennitt, Riverdance, Donna Taggart, and U2. The solo music of Clannad's lead singer, Moya Brennan (often referred to as the First Lady of Celtic Music) has further enhanced this influence.
Later, beginning in 1982 with the Pogues' invention of Celtic folk-punk and Stockton's Wing's blend of Irish traditional music with pop, rock and reggae, there has been a movement to incorporate Celtic influences into other genres of music. Bands like Flogging Molly, Black 47, Dropkick Murphys, the Young Dubliners, and the Tossers introduced a hybrid of Celtic rock, punk, reggae, hardcore and other elements in the 1990s that has become popular with Irish-American youth.
Today there are Celtic-influenced subgenres of virtually every type of popular music including electronica, rock, metal, punk, hip hop, reggae, new-age, Latin, Andean and pop. Collectively these modern interpretations of Celtic music are sometimes referred to as Celtic fusion.
Outside of America, the first deliberate attempts to create a "Pan-Celtic music" were made by the Breton Taldir Jaffrennou, who translated songs from Ireland, Scotland, and Wales into Breton between the two world wars. One of his major works was to bring "Hen Wlad Fy Nhadau" (the Welsh national anthem) back to Brittany and create lyrics in Breton. Eventually this song became "Bro goz va zadoù" ("Old land of my fathers") and is the most widely accepted Breton anthem. In the 1970s, the Breton Alan Cochevelou (the future Alan Stivell) began playing a mixed repertoire from the main Celtic countries on the Celtic harp his father created. Probably the most successful all-inclusive Celtic music composition in recent years is Shaun Davey's The Pilgrim. This suite depicts the journey of St. Colum Cille through the Celtic nations of Ireland, Scotland, the Isle of Man, Wales, Cornwall, Brittany and Galicia. The suite, which includes a Scottish pipe band, Irish and Welsh harpists, Galician gaitas, Irish uilleann pipes, the bombardes of Brittany, two vocal soloists and a narrator, is set against a background of a classical orchestra and a large choir.
Modern music may also be termed "Celtic" because it is written and recorded in a Celtic language, regardless of musical style. Many of the Celtic languages have experienced resurgences in modern years, spurred on partly by the action of artists and musicians who have embraced them as hallmarks of identity and distinctness. In 1971, the Irish band Skara Brae recorded its only LP (simply called Skara Brae), all songs in Irish. In 1978 Runrig recorded an album in Scottish Gaelic. In 1992 Capercaillie recorded "A Prince Among Islands", the first Scottish Gaelic language record to reach the UK top 40. In 1996, a song in Breton represented France in the 41st Eurovision Song Contest, the first time in history that France had a song without a word in French. Since about 2005, Oi Polloi (from Scotland) have recorded in Scottish Gaelic. Mill a h-Uile Rud (a Scottish Gaelic punk band from Seattle) recorded in the language in 2004.
Several contemporary bands have Welsh-language songs, such as Ceredwen, which fuses traditional instruments with trip hop beats, the Super Furry Animals, Fernhill, and so on (see the Music of Wales article for more Welsh and Welsh-language bands). The same phenomenon occurs in Brittany, where many singers record songs in Breton, traditional or modern (hip hop, rap, and so on).
{
"paragraph_id": 0,
"text": "Celtic music is a broad grouping of music genres that evolved out of the folk music traditions of the Celtic people of Northwestern Europe (the modern Celtic nations). It refers to both orally-transmitted traditional music and recorded music and the styles vary considerably to include everything from traditional music to a wide range of hybrids.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Celtic music means two things mainly. First, it is the music of the people that identify themselves as Celts. Secondly, it refers to whatever qualities may be unique to the music of the Celtic nations. Many notable Celtic musicians such as Alan Stivell and Paddy Moloney claim that the different Celtic music genres have a lot in common.",
"title": "Description and definition"
},
{
"paragraph_id": 2,
"text": "These following melodic practices may be used widely across the different variants of Celtic Music:",
"title": "Description and definition"
},
{
"paragraph_id": 3,
"text": "These two latter usage patterns may simply be remnants of formerly widespread melodic practices.",
"title": "Description and definition"
},
{
"paragraph_id": 4,
"text": "Often, the term Celtic music is applied to the music of Ireland and Scotland because both lands have produced well-known distinctive styles which actually have genuine commonality and clear mutual influences. The definition is further complicated by the fact that Irish independence has allowed Ireland to promote 'Celtic' music as a specifically Irish product. However, these are modern geographical references to a people who share a common Celtic ancestry and consequently, a common musical heritage.",
"title": "Description and definition"
},
{
"paragraph_id": 5,
"text": "These styles are known because of the importance of Irish and Scottish people in the English speaking world, especially in the United States, where they had a profound impact on American music, particularly bluegrass and country music. The music of Wales, Cornwall, the Isle of Man, Brittany, Galician traditional music (Spain) and music of Portugal are also considered Celtic music, the tradition being particularly strong in Brittany, where Celtic festivals large and small take place throughout the year, and in Wales, where the ancient eisteddfod tradition has been revived and flourishes. Additionally, the musics of ethnically Celtic peoples abroad are vibrant, especially in Canada and the United States. In Canada the provinces of Atlantic Canada are known for being a home of Celtic music, most notably on the islands of Newfoundland, Cape Breton and Prince Edward Island. The traditional music of Atlantic Canada is heavily influenced by the Irish, Scottish and Acadian ethnic makeup of much of the region's communities. In some parts of Atlantic Canada, such as Newfoundland, Celtic music is as or more popular than in the old country. Further, some older forms of Celtic music that are rare in Scotland and Ireland today, such as the practice of accompanying a fiddle with a piano, or the Gaelic spinning songs of Cape Breton remain common in the Maritimes. Much of the music of this region is Celtic in nature, but originates in the local area and celebrates the sea, seafaring, fishing and other primary industries.",
"title": "Description and definition"
},
{
"paragraph_id": 6,
"text": "Instruments associated with Celtic Music include the Celtic harp, uilleann pipes or Great Highland bagpipe, fiddle, tin whistle, flute, bodhrán, bones, concertina, accordion and a recent addition, the Irish bouzouki.",
"title": "Description and definition"
},
{
"paragraph_id": 7,
"text": "In Celtic Music: A Complete Guide, June Skinner Sawyers acknowledges six Celtic nationalities divided into two groups according to their linguistic heritage. The Q-Celtic nationalities are the Irish, Scottish and Manx peoples, while the P-Celtic groups are the Cornish, Bretons and Welsh peoples. Musician Alan Stivell uses a similar dichotomy, between the Gaelic (Irish/Scottish/Manx) and the Brythonic (Breton/Welsh/Cornish) branches, which differentiate \"mostly by the extended range (sometimes more than two octaves) of Irish and Scottish melodies and the closed range of Breton and Welsh melodies (often reduced to a half-octave), and by the frequent use of the pure pentatonic scale in Gaelic music.\"",
"title": "Divisions"
},
{
"paragraph_id": 8,
"text": "There is also tremendous variation between Celtic regions. Ireland, Scotland, Wales, Cornwall, and Brittany have living traditions of language and music, and there has been a recent major revival of interest in Celtic heritage in the Isle of Man. Galicia has a Celtic language revival movement to revive the Q-Celtic Gallaic language used into Roman times., which is not an attested language unlike Celtiberian. A Brythonic language may have been spoken in parts of Galicia and Asturias into early Medieval times brought by Britons fleeing the Anglo-Saxon invasions via Brittany., but here again there are several hypotheses and very little traces of it : lack of archeological, linguistic evidence and documents. The Romance language currently spoken in Galicia, Galician (Galego) is closely related to the Portuguese language used mainly in Brazil and Portugal and in many ways closer to Latin than other Romance languages. Galician music is claimed to be Celtic. The same is true of the music of Asturias, Cantabria, and that of Northern Portugal (some say even traditional music from Central Portugal can be labeled Celtic).",
"title": "Divisions"
},
{
"paragraph_id": 9,
"text": "Breton artist Alan Stivell was one of the earliest musicians to use the word Celtic and Keltia in his marketing materials, starting in the early 1960s as part of the worldwide folk music revival of that era with the term quickly catching on with other artists worldwide. Today, the genre is well established and incredibly diverse.",
"title": "Divisions"
},
{
"paragraph_id": 10,
"text": "There are musical genres and styles specific to each Celtic country, due in part to the influence of individual song traditions and the characteristics of specific languages:",
"title": "Forms"
},
{
"paragraph_id": 11,
"text": "The modern Celtic music scene involves a large number of music festivals, as it has traditionally. Some of the most prominent festivals focused solely on music include:",
"title": "Festivals"
},
{
"paragraph_id": 12,
"text": "The oldest musical tradition which fits under the label of Celtic fusion originated in the rural American south in the early colonial period and incorporated English, Scottish, Irish, Welsh, German, and African influences. Variously referred to as roots music, American folk music, or old-time music, this tradition has exerted a strong influence on all forms of American music, including country, blues, and rock and roll. In addition to its lasting effects on other genres, it marked the first modern large-scale mixing of musical traditions from multiple ethnic and religious communities within the Celtic diaspora.",
"title": "Celtic fusion"
},
{
"paragraph_id": 13,
"text": "In the 1960s several bands put forward modern adaptations of Celtic music pulling influences from several of the Celtic nations at once to create a modern pan-celtic sound. A few of those include bagadoù (Breton pipe bands), Fairport Convention, Pentangle, Steeleye Span and Horslips.",
"title": "Celtic fusion"
},
{
"paragraph_id": 14,
"text": "In the 1970s Clannad made their mark initially in the folk and traditional scene, and then subsequently went on to bridge the gap between traditional Celtic and pop music in the 1980s and 1990s, incorporating elements from new-age, smooth jazz, and folk rock. Traces of Clannad's legacy can be heard in the music of many artists, including Altan, Anúna, Capercaillie, the Corrs, Dexys Midnight Runners, Enya, Loreena McKennitt, Riverdance, Donna Taggart, and U2. The solo music of Clannad's lead singer, Moya Brennan (often referred to as the First Lady of Celtic Music) has further enhanced this influence.",
"title": "Celtic fusion"
},
{
"paragraph_id": 15,
"text": "Later, beginning in 1982 with the Pogues' invention of Celtic folk-punk and Stockton's Wing blend of Irish traditional and Pop, Rock and Reggae, there has been a movement to incorporate Celtic influences into other genres of music. Bands like Flogging Molly, Black 47, Dropkick Murphys, the Young Dubliners, the Tossers introduced a hybrid of Celtic rock, punk, reggae, hardcore and other elements in the 1990s that has become popular with Irish-American youth.",
"title": "Celtic fusion"
},
{
"paragraph_id": 16,
"text": "Today there are Celtic-influenced subgenres of virtually every type of popular music including electronica, rock, metal, punk, hip hop, reggae, new-age, Latin, Andean and pop. Collectively these modern interpretations of Celtic music are sometimes referred to as Celtic fusion.",
"title": "Celtic fusion"
},
{
"paragraph_id": 17,
"text": "Outside of America, the first deliberate attempts to create a \"Pan-Celtic music\" were made by the Breton Taldir Jaffrennou, having translated songs from Ireland, Scotland, and Wales into Breton between the two world wars. One of his major works was to bring \"Hen Wlad Fy Nhadau\" (the Welsh national anthem) back in Brittany and create lyrics in Breton. Eventually this song became \"Bro goz va zadoù\" (\"Old land of my fathers\") and is the most widely accepted Breton anthem. In the 70s, the Breton Alan Cochevelou (future Alan Stivell) began playing a mixed repertoire from the main Celtic countries on the Celtic harp his father created. Probably the most successful all-inclusive Celtic music composition in recent years is Shaun Daveys composition The Pilgrim. This suite depicts the journey of St. Colum Cille through the Celtic nations of Ireland, Scotland, the Isle of Man, Wales, Cornwall, Brittany and Galicia. The suite which includes a Scottish pipe band, Irish and Welsh harpists, Galician gaitas, Irish uilleann pipes, the bombardes of Brittany, two vocal soloists and a narrator is set against a background of a classical orchestra and a large choir.",
"title": "Other modern adaptations"
},
{
"paragraph_id": 18,
"text": "Modern music may also be termed \"Celtic\" because it is written and recorded in a Celtic language, regardless of musical style. Many of the Celtic languages have experienced resurgences in modern years, spurred on partly by the action of artists and musicians who have embraced them as hallmarks of identity and distinctness. In 1971, the Irish band Skara Brae recorded its only LP (simply called Skara Brae), all songs in Irish. In 1978 Runrig recorded an album in Scottish Gaelic. In 1992 Capercaillie recorded \"A Prince Among Islands\", the first Scottish Gaelic language record to reach the UK top 40. In 1996, a song in Breton represented France in the 41st Eurovision Song Contest, the first time in history that France had a song without a word in French. Since about 2005, Oi Polloi (from Scotland) have recorded in Scottish Gaelic. Mill a h-Uile Rud (a Scottish Gaelic punk band from Seattle) recorded in the language in 2004.",
"title": "Other modern adaptations"
},
{
"paragraph_id": 19,
"text": "Several contemporary bands have Welsh language songs, such as Ceredwen, which fuses traditional instruments with trip hop beats, the Super Furry Animals, Fernhill, and so on (see the Music of Wales article for more Welsh and Welsh-language bands). The same phenomenon occurs in Brittany, where many singers record songs in Breton, traditional or modern (hip hop, rap, and so on.).",
"title": "Other modern adaptations"
}
] | Celtic music is a broad grouping of music genres that evolved out of the folk music traditions of the Celtic people of Northwestern Europe. It refers to both orally-transmitted traditional music and recorded music and the styles vary considerably to include everything from traditional music to a wide range of hybrids. | 2001-06-13T11:54:04Z | 2023-10-27T09:54:22Z | [
"Template:Celts",
"Template:Short description",
"Template:Use British English",
"Template:Celtic music",
"Template:Folk music",
"Template:Use dmy dates",
"Template:Multiple issues",
"Template:Main",
"Template:Cite web",
"Template:Authority control",
"Template:Reflist",
"Template:Cite journal",
"Template:Curlie",
"Template:Celtic nations",
"Template:About",
"Template:Listen",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Celtic_music |
5,267 | Constellation | Four views of the constellation Orion.
A constellation is an area on the celestial sphere in which a group of visible stars forms a perceived pattern or outline, typically representing an animal, mythological subject, or inanimate object.
The origins of the earliest constellations likely go back to prehistory. People used them to relate stories of their beliefs, experiences, creation, or mythology. Different cultures and countries invented their own constellations, some of which lasted into the early 20th century before today's constellations were internationally recognized. The recognition of constellations has changed significantly over time. Many changed in size or shape. Some became popular, only to drop into obscurity. Some were limited to a single culture or nation. Naming constellations also helped astronomers and navigators identify stars more easily.
Twelve (or thirteen) ancient constellations belong to the zodiac (straddling the ecliptic, which the Sun, Moon, and planets all traverse). The origins of the zodiac remain historically uncertain; its astrological divisions became prominent c. 400 BC in Babylonian or Chaldean astronomy. Constellations appear in Western culture via Greece and are mentioned in the works of Hesiod, Eudoxus and Aratus. The traditional 48 constellations, consisting of the Zodiac and 36 more (now 38, following the division of Argo Navis into three constellations) are listed by Ptolemy, a Greco-Roman astronomer from Alexandria, Egypt, in his Almagest. The formation of constellations was the subject of extensive mythology, most notably in the Metamorphoses of the Latin poet Ovid. Constellations in the far southern sky were added from the 15th century until the mid-18th century when European explorers began traveling to the Southern Hemisphere. Due to Roman and European transmission, each constellation has a Latin name.
In 1922, the International Astronomical Union (IAU) formally accepted the modern list of 88 constellations, and in 1928 adopted official constellation boundaries that together cover the entire celestial sphere. Any given point in a celestial coordinate system lies in one of the modern constellations. Some astronomical naming systems include the constellation where a given celestial object is found to convey its approximate location in the sky. The Flamsteed designation of a star, for example, consists of a number and the genitive form of the constellation's name.
Other star patterns or groups called asterisms are not constellations under the formal definition, but are also used by observers to navigate the night sky. Asterisms may be several stars within a constellation, or they may share stars with more than one constellation. Examples of asterisms include the teapot within the constellation Sagittarius, or the big dipper in the constellation of Ursa Major.
The word constellation comes from the Late Latin term cōnstellātiō, which can be translated as "set of stars"; it came into use in Middle English during the 14th century. The Ancient Greek word for constellation is ἄστρον (astron). These terms historically referred to any recognisable pattern of stars whose appearance was associated with mythological characters or creatures, earthbound animals, or objects. Over time, among European astronomers, the constellations became clearly defined and widely recognised. Today, there are 88 IAU designated constellations.
A constellation or star that never sets below the horizon when viewed from a particular latitude on Earth is termed circumpolar. From the North Pole or South Pole, all constellations south or north of the celestial equator are circumpolar. Depending on the definition, equatorial constellations may include those that lie between declinations 45° north and 45° south, or those that pass through the declination range of the ecliptic or zodiac ranging between 23½° north, the celestial equator, and 23½° south.
Stars in constellations can appear near each other in the sky, but they usually lie at a variety of distances away from the Earth. Since each star has its own independent motion, all constellations will change slowly over time. After tens to hundreds of thousands of years, familiar outlines will become unrecognizable. Astronomers can predict the past or future constellation outlines by measuring individual stars' common proper motions or cpm by accurate astrometry and their radial velocities by astronomical spectroscopy.
The 88 constellations recognized by the International Astronomical Union as well as those that cultures have recognized throughout history are imagined figures and shapes derived from the patterns of stars in the observable sky. Many officially recognized constellations are based on the imaginations of ancient, Near Eastern and Mediterranean mythologies. H.A. Rey, who wrote popular books on astronomy, pointed out the imaginative nature of the constellations and their mythological and artistic basis, and the practical use of identifying them through definite images, according to the classical names they were given.
It has been suggested that the 17,000-year-old cave paintings in Lascaux, southern France, depict star constellations such as Taurus, Orion's Belt, and the Pleiades. However, this view is not generally accepted among scientists.
Inscribed stones and clay writing tablets from Mesopotamia (in modern Iraq) dating to 3000 BC provide the earliest generally accepted evidence for humankind's identification of constellations. It seems that the bulk of the Mesopotamian constellations were created within a relatively short interval from around 1300 to 1000 BC. Mesopotamian constellations appeared later in many of the classical Greek constellations.
The oldest Babylonian catalogues of stars and constellations date back to the beginning of the Middle Bronze Age, most notably the Three Stars Each texts and the MUL.APIN, an expanded and revised version based on more accurate observation from around 1000 BC. However, the numerous Sumerian names in these catalogues suggest that they built on older, but otherwise unattested, Sumerian traditions of the Early Bronze Age.
The classical Zodiac is a revision of Neo-Babylonian constellations from the 6th century BC. The Greeks adopted the Babylonian constellations in the 4th century BC. Twenty Ptolemaic constellations are from the Ancient Near East. Another ten have the same stars but different names.
Biblical scholar E. W. Bullinger interpreted some of the creatures mentioned in the books of Ezekiel and Revelation as the middle signs of the four-quarters of the Zodiac, with the Lion as Leo, the Bull as Taurus, the Man representing Aquarius, and the Eagle standing in for Scorpio. The biblical Book of Job also makes reference to a number of constellations, including עיש ‘Ayish "bier", כסיל chesil "fool" and כימה chimah "heap" (Job 9:9, 38:31–32), rendered as "Arcturus, Orion and Pleiades" by the KJV, but ‘Ayish "the bier" actually corresponding to Ursa Major. The term Mazzaroth מַזָּרוֹת, translated as a garland of crowns, is a hapax legomenon in Job 38:32, and it might refer to the zodiacal constellations.
There is only limited information on ancient Greek constellations, with some fragmentary evidence being found in the Works and Days of the Greek poet Hesiod, who mentioned the "heavenly bodies". Greek astronomy essentially adopted the older Babylonian system in the Hellenistic era, first introduced to Greece by Eudoxus of Cnidus in the 4th century BC. The original work of Eudoxus is lost, but it survives as a versification by Aratus, dating to the 3rd century BC. The most complete existing works dealing with the mythical origins of the constellations are by the Hellenistic writer termed pseudo-Eratosthenes and an early Roman writer styled pseudo-Hyginus. The basis of Western astronomy as taught during Late Antiquity and until the Early Modern period is the Almagest by Ptolemy, written in the 2nd century.
In the Ptolemaic Kingdom, a native Egyptian tradition of anthropomorphic figures represented the planets, stars, and various constellations. Some of these were combined with Greek and Babylonian astronomical systems, culminating in the Zodiac of Dendera; it remains unclear when this occurred, but most were placed during the Roman period, between the 2nd and 4th centuries AD. It is the oldest known depiction of the zodiac showing all the now-familiar constellations, along with some original Egyptian constellations, decans, and planets. Ptolemy's Almagest remained the standard definition of constellations in the medieval period both in Europe and in Islamic astronomy.
Ancient China had a long tradition of observing celestial phenomena. Nonspecific Chinese star names, later categorized in the twenty-eight mansions, have been found on oracle bones from Anyang, dating back to the middle Shang dynasty. These constellations are some of the most important observations of the Chinese sky, attested from the 5th century BC. Parallels to the earliest Babylonian (Sumerian) star catalogues suggest that the ancient Chinese system did not arise independently.
Three schools of classical Chinese astronomy in the Han period are attributed to astronomers of the earlier Warring States period. The constellations of the three schools were conflated into a single system by Chen Zhuo, an astronomer of the 3rd century (Three Kingdoms period). Chen Zhuo's work has been lost, but information on his system of constellations survives in Tang period records, notably by Qutan Xida. The oldest extant Chinese star chart dates to that period and was preserved as part of the Dunhuang Manuscripts. Native Chinese astronomy flourished during the Song dynasty, and during the Yuan dynasty became increasingly influenced by medieval Islamic astronomy (see Treatise on Astrology of the Kaiyuan Era). As maps were prepared during this period on more scientific lines, they were considered more reliable.
A well-known map from the Song period is the Suzhou Astronomical Chart, which was prepared with carvings of stars on the planisphere of the Chinese sky on a stone plate; it is done accurately based on observations, and it shows the supernova of the year of 1054 in Taurus.
Influenced by European astronomy during the late Ming dynasty, charts depicted more stars but retained the traditional constellations. Newly observed stars were incorporated as supplements to the old constellations in the southern sky, which did not depict the traditional stars recorded by ancient Chinese astronomers. Further improvements were made during the later part of the Ming dynasty by Xu Guangqi and the German Jesuit Johann Adam Schall von Bell, and were recorded in the Chongzhen Lishu (Calendrical Treatise of the Chongzhen period, 1628). Traditional Chinese star maps incorporated 23 new constellations with 125 stars of the southern hemisphere of the sky, based on the knowledge of Western star charts; with this improvement, the Chinese sky was integrated with world astronomy.
Many well-known constellations also have histories that connect to ancient Greece.
Historically, the origins of the constellations of the northern and southern skies are distinctly different. Most northern constellations date to antiquity, with names based mostly on Classical Greek legends. Evidence of these constellations has survived in the form of star charts, whose oldest representation appears on the statue known as the Farnese Atlas, based perhaps on the star catalogue of the Greek astronomer Hipparchus. Southern constellations are more modern inventions, sometimes as substitutes for ancient constellations (e.g. Argo Navis). Some southern constellations had long names that were shortened to more usable forms; e.g. Musca Australis became simply Musca.
Some of the early constellations were never universally adopted. Stars were often grouped into constellations differently by different observers, and the arbitrary constellation boundaries often led to confusion about which constellation a celestial object belonged to. Before astronomers delineated precise boundaries (starting in the 19th century), constellations generally appeared as ill-defined regions of the sky. Today they follow officially designated lines of right ascension and declination based on those defined by Benjamin Gould in epoch 1875.0 in his star catalogue Uranometria Argentina.
The 1603 star atlas "Uranometria" of Johann Bayer assigned stars to individual constellations and formalized the division by assigning a series of Greek and Latin letters to the stars within each constellation. These are known today as Bayer designations. Subsequent star atlases led to the development of today's accepted modern constellations.
The southern sky, below about −65° declination, was only partially catalogued by ancient Babylonians, Egyptians, Greeks, Chinese, and Persian astronomers of the north. The knowledge that northern and southern star patterns differed goes back to Classical writers, who describe, for example, the African circumnavigation expedition commissioned by Egyptian Pharaoh Necho II in c. 600 BC and those of Hanno the Navigator in c. 500 BC.
The history of southern constellations is not straightforward. Different groupings and different names were proposed by various observers, some reflecting national traditions or designed to promote various sponsors. Southern constellations were important from the 14th to 16th centuries, when sailors used the stars for celestial navigation. Italian explorers who recorded new southern constellations include Andrea Corsali, Antonio Pigafetta, and Amerigo Vespucci.
Many of the 88 IAU-recognized constellations in this region first appeared on celestial globes developed in the late 16th century by Petrus Plancius, based mainly on observations of the Dutch navigators Pieter Dirkszoon Keyser and Frederick de Houtman. These became widely known through Johann Bayer's star atlas Uranometria of 1603. Fourteen more were created in 1763 by the French astronomer Nicolas Louis de Lacaille, who also split the ancient constellation Argo Navis into three; these new figures appeared in his star catalogue, published in 1756.
Several modern proposals have not survived. The French astronomers Pierre Lemonnier and Joseph Lalande, for example, proposed constellations that were once popular but have since been dropped. The northern constellation Quadrans Muralis survived into the 19th century (when its name was attached to the Quadrantid meteor shower), but is now divided between Boötes and Draco.
A list of 88 constellations was produced for the International Astronomical Union in 1922. It is roughly based on the traditional Greek constellations listed by Ptolemy in his Almagest in the 2nd century and Aratus' work Phenomena, with early modern modifications and additions (most importantly introducing constellations covering the parts of the southern sky unknown to Ptolemy) by Petrus Plancius (1592, 1597/98 and 1613), Johannes Hevelius (1690) and Nicolas Louis de Lacaille (1763), who introduced fourteen new constellations. Lacaille studied the stars of the southern hemisphere from 1751 until 1752 from the Cape of Good Hope, when he was said to have observed more than 10,000 stars using a refracting telescope with an aperture of 0.5 inches (13 mm).
In 1922, Henry Norris Russell produced a list of 88 constellations with three-letter abbreviations for them. However, these constellations did not have clear borders between them. In 1928, the International Astronomical Union (IAU) formally accepted 88 modern constellations, with contiguous boundaries along vertical and horizontal lines of right ascension and declination developed by Eugene Delporte that, together, cover the entire celestial sphere; this list was finally published in 1930. Where possible, these modern constellations usually share the names of their Graeco-Roman predecessors, such as Orion, Leo or Scorpius. The aim of this system is area-mapping, i.e. the division of the celestial sphere into contiguous fields. Out of the 88 modern constellations, 36 lie predominantly in the northern sky, and the other 52 predominantly in the southern.
The boundaries developed by Delporte used data that originated back to epoch B1875.0, which was when Benjamin A. Gould first made his proposal to designate boundaries for the celestial sphere, a suggestion on which Delporte based his work. The consequence of this early date is that because of the precession of the equinoxes, the borders on a modern star map, such as epoch J2000, are already somewhat skewed and no longer perfectly vertical or horizontal. This effect will increase over the years and centuries to come.
The constellations have no official symbols, though those of the ecliptic may take the signs of the zodiac. Symbols for the other modern constellations, as well as older ones that still occur in modern nomenclature, have occasionally been published.
The Great Rift, a series of dark patches in the Milky Way, is more visible and striking in the southern hemisphere than in the northern. It vividly stands out when conditions are otherwise so dark that the Milky Way's central region casts shadows on the ground. Some cultures have discerned shapes in these patches and have given names to these "dark cloud constellations". Members of the Inca civilization identified various dark areas or dark nebulae in the Milky Way as animals and associated their appearance with the seasonal rains. Australian Aboriginal astronomy also describes dark cloud constellations, the most famous being the "emu in the sky" whose head is formed by the Coalsack, a dark nebula, instead of the stars. | [
{
"paragraph_id": 0,
"text": "Four views of the constellation Orion:",
"title": ""
},
{
"paragraph_id": 1,
"text": "A constellation is an area on the celestial sphere in which a group of visible stars forms a perceived pattern or outline, typically representing an animal, mythological subject, or inanimate object.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The origins of the earliest constellations likely go back to prehistory. People used them to relate stories of their beliefs, experiences, creation, or mythology. Different cultures and countries invented their own constellations, some of which lasted into the early 20th century before today's constellations were internationally recognized. The recognition of constellations has changed significantly over time. Many changed in size or shape. Some became popular, only to drop into obscurity. Some were limited to a single culture or nation. Naming constellations also helped astronomers and navigators identify stars more easily.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Twelve (or thirteen) ancient constellations belong to the zodiac (straddling the ecliptic, which the Sun, Moon, and planets all traverse). The origins of the zodiac remain historically uncertain; its astrological divisions became prominent c. 400 BC in Babylonian or Chaldean astronomy. Constellations appear in Western culture via Greece and are mentioned in the works of Hesiod, Eudoxus and Aratus. The traditional 48 constellations, consisting of the Zodiac and 36 more (now 38, following the division of Argo Navis into three constellations) are listed by Ptolemy, a Greco-Roman astronomer from Alexandria, Egypt, in his Almagest. The formation of constellations was the subject of extensive mythology, most notably in the Metamorphoses of the Latin poet Ovid. Constellations in the far southern sky were added from the 15th century until the mid-18th century when European explorers began traveling to the Southern Hemisphere. Due to Roman and European transmission, each constellation has a Latin name.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In 1922, the International Astronomical Union (IAU) formally accepted the modern list of 88 constellations, and in 1928 adopted official constellation boundaries that together cover the entire celestial sphere. Any given point in a celestial coordinate system lies in one of the modern constellations. Some astronomical naming systems include the constellation where a given celestial object is found to convey its approximate location in the sky. The Flamsteed designation of a star, for example, consists of a number and the genitive form of the constellation's name.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Other star patterns or groups called asterisms are not constellations under the formal definition, but are also used by observers to navigate the night sky. Asterisms may be several stars within a constellation, or they may share stars with more than one constellation. Examples of asterisms include the teapot within the constellation Sagittarius, or the big dipper in the constellation of Ursa Major.",
"title": ""
},
{
"paragraph_id": 6,
"text": "The word constellation comes from the Late Latin term cōnstellātiō, which can be translated as \"set of stars\"; it came into use in Middle English during the 14th century. The Ancient Greek word for constellation is ἄστρον (astron). These terms historically referred to any recognisable pattern of stars whose appearance was associated with mythological characters or creatures, earthbound animals, or objects. Over time, among European astronomers, the constellations became clearly defined and widely recognised. Today, there are 88 IAU designated constellations.",
"title": "Terminology"
},
{
"paragraph_id": 7,
"text": "A constellation or star that never sets below the horizon when viewed from a particular latitude on Earth is termed circumpolar. From the North Pole or South Pole, all constellations south or north of the celestial equator are circumpolar. Depending on the definition, equatorial constellations may include those that lie between declinations 45° north and 45° south, or those that pass through the declination range of the ecliptic or zodiac ranging between 23½° north, the celestial equator, and 23½° south.",
"title": "Terminology"
},
{
"paragraph_id": 8,
"text": "Stars in constellations can appear near each other in the sky, but they usually lie at a variety of distances away from the Earth. Since each star has its own independent motion, all constellations will change slowly over time. After tens to hundreds of thousands of years, familiar outlines will become unrecognizable. Astronomers can predict the past or future constellation outlines by measuring individual stars' common proper motions or cpm by accurate astrometry and their radial velocities by astronomical spectroscopy.",
"title": "Terminology"
},
{
"paragraph_id": 9,
"text": "The 88 constellations recognized by the International Astronomical Union as well as those that cultures have recognized throughout history are imagined figures and shapes derived from the patterns of stars in the observable sky. Many officially recognized constellations are based on the imaginations of ancient, Near Eastern and Mediterranean mythologies. H.A. Rey, who wrote popular books on astronomy, pointed out the imaginative nature of the constellations and their mythological and artistic basis, and the practical use of identifying them through definite images, according to the classical names they were given.",
"title": "Identification"
},
{
"paragraph_id": 10,
"text": "It has been suggested that the 17,000-year-old cave paintings in Lascaux, southern France, depict star constellations such as Taurus, Orion's Belt, and the Pleiades. However, this view is not generally accepted among scientists.",
"title": "History of the early constellations"
},
{
"paragraph_id": 11,
"text": "Inscribed stones and clay writing tablets from Mesopotamia (in modern Iraq) dating to 3000 BC provide the earliest generally accepted evidence for humankind's identification of constellations. It seems that the bulk of the Mesopotamian constellations were created within a relatively short interval from around 1300 to 1000 BC. Mesopotamian constellations appeared later in many of the classical Greek constellations.",
"title": "History of the early constellations"
},
{
"paragraph_id": 12,
"text": "The oldest Babylonian catalogues of stars and constellations date back to the beginning of the Middle Bronze Age, most notably the Three Stars Each texts and the MUL.APIN, an expanded and revised version based on more accurate observation from around 1000 BC. However, the numerous Sumerian names in these catalogues suggest that they built on older, but otherwise unattested, Sumerian traditions of the Early Bronze Age.",
"title": "History of the early constellations"
},
{
"paragraph_id": 13,
"text": "The classical Zodiac is a revision of Neo-Babylonian constellations from the 6th century BC. The Greeks adopted the Babylonian constellations in the 4th century BC. Twenty Ptolemaic constellations are from the Ancient Near East. Another ten have the same stars but different names.",
"title": "History of the early constellations"
},
{
"paragraph_id": 14,
"text": "Biblical scholar E. W. Bullinger interpreted some of the creatures mentioned in the books of Ezekiel and Revelation as the middle signs of the four-quarters of the Zodiac, with the Lion as Leo, the Bull as Taurus, the Man representing Aquarius, and the Eagle standing in for Scorpio. The biblical Book of Job also makes reference to a number of constellations, including עיש ‘Ayish \"bier\", כסיל chesil \"fool\" and כימה chimah \"heap\" (Job 9:9, 38:31–32), rendered as \"Arcturus, Orion and Pleiades\" by the KJV, but ‘Ayish \"the bier\" actually corresponding to Ursa Major. The term Mazzaroth מַזָּרוֹת, translated as a garland of crowns, is a hapax legomenon in Job 38:32, and it might refer to the zodiacal constellations.",
"title": "History of the early constellations"
},
{
"paragraph_id": 15,
"text": "There is only limited information on ancient Greek constellations, with some fragmentary evidence being found in the Works and Days of the Greek poet Hesiod, who mentioned the \"heavenly bodies\". Greek astronomy essentially adopted the older Babylonian system in the Hellenistic era, first introduced to Greece by Eudoxus of Cnidus in the 4th century BC. The original work of Eudoxus is lost, but it survives as a versification by Aratus, dating to the 3rd century BC. The most complete existing works dealing with the mythical origins of the constellations are by the Hellenistic writer termed pseudo-Eratosthenes and an early Roman writer styled pseudo-Hyginus. The basis of Western astronomy as taught during Late Antiquity and until the Early Modern period is the Almagest by Ptolemy, written in the 2nd century.",
"title": "History of the early constellations"
},
{
"paragraph_id": 16,
"text": "In the Ptolemaic Kingdom, native Egyptian tradition of anthropomorphic figures represented the planets, stars, and various constellations. Some of these were combined with Greek and Babylonian astronomical systems culminating in the Zodiac of Dendera; it remains unclear when this occurred, but most were placed during the Roman period between 2nd to 4th centuries AD. The oldest known depiction of the zodiac showing all the now familiar constellations, along with some original Egyptian constellations, decans, and planets. Ptolemy's Almagest remained the standard definition of constellations in the medieval period both in Europe and in Islamic astronomy.",
"title": "History of the early constellations"
},
{
"paragraph_id": 17,
"text": "Ancient China had a long tradition of observing celestial phenomena. Nonspecific Chinese star names, later categorized in the twenty-eight mansions, have been found on oracle bones from Anyang, dating back to the middle Shang dynasty. These constellations are some of the most important observations of Chinese sky, attested from the 5th century BC. Parallels to the earliest Babylonian (Sumerian) star catalogues suggest that the ancient Chinese system did not arise independently.",
"title": "History of the early constellations"
},
{
"paragraph_id": 18,
"text": "Three schools of classical Chinese astronomy in the Han period are attributed to astronomers of the earlier Warring States period. The constellations of the three schools were conflated into a single system by Chen Zhuo, an astronomer of the 3rd century (Three Kingdoms period). Chen Zhuo's work has been lost, but information on his system of constellations survives in Tang period records, notably by Qutan Xida. The oldest extant Chinese star chart dates to that period and was preserved as part of the Dunhuang Manuscripts. Native Chinese astronomy flourished during the Song dynasty, and during the Yuan dynasty became increasingly influenced by medieval Islamic astronomy (see Treatise on Astrology of the Kaiyuan Era). As maps were prepared during this period on more scientific lines, they were considered as more reliable.",
"title": "History of the early constellations"
},
{
"paragraph_id": 19,
"text": "A well-known map from the Song period is the Suzhou Astronomical Chart, which was prepared with carvings of stars on the planisphere of the Chinese sky on a stone plate; it is done accurately based on observations, and it shows the supernova of the year of 1054 in Taurus.",
"title": "History of the early constellations"
},
{
"paragraph_id": 20,
"text": "Influenced by European astronomy during the late Ming dynasty, charts depicted more stars but retained the traditional constellations. Newly observed stars were incorporated as supplementary to old constellations in the southern sky, which did not depict the traditional stars recorded by ancient Chinese astronomers. Further improvements were made during the later part of the Ming dynasty by Xu Guangqi and Johann Adam Schall von Bell, the German Jesuit and was recorded in Chongzhen Lishu (Calendrical Treatise of Chongzhen period, 1628). Traditional Chinese star maps incorporated 23 new constellations with 125 stars of the southern hemisphere of the sky based on the knowledge of Western star charts; with this improvement, the Chinese Sky was integrated with the World astronomy.",
"title": "History of the early constellations"
},
{
"paragraph_id": 21,
"text": "Ancient Greece",
"title": "History of the early constellations"
},
{
"paragraph_id": 22,
"text": "A lot of well-known constellations also have histories that connect to ancient Greece.",
"title": "History of the early constellations"
},
{
"paragraph_id": 23,
"text": "Historically, the origins of the constellations of the northern and southern skies are distinctly different. Most northern constellations date to antiquity, with names based mostly on Classical Greek legends. Evidence of these constellations has survived in the form of star charts, whose oldest representation appears on the statue known as the Farnese Atlas, based perhaps on the star catalogue of the Greek astronomer Hipparchus. Southern constellations are more modern inventions, sometimes as substitutes for ancient constellations (e.g. Argo Navis). Some southern constellations had long names that were shortened to more usable forms; e.g. Musca Australis became simply Musca.",
"title": "Early modern astronomy"
},
{
"paragraph_id": 24,
"text": "Some of the early constellations were never universally adopted. Stars were often grouped into constellations differently by different observers, and the arbitrary constellation boundaries often led to confusion as to which constellation a celestial object belonged. Before astronomers delineated precise boundaries (starting in the 19th century), constellations generally appeared as ill-defined regions of the sky. Today they now follow officially accepted designated lines of right ascension and declination based on those defined by Benjamin Gould in epoch 1875.0 in his star catalogue Uranometria Argentina.",
"title": "Early modern astronomy"
},
{
"paragraph_id": 25,
"text": "The 1603 star atlas \"Uranometria\" of Johann Bayer assigned stars to individual constellations and formalized the division by assigning a series of Greek and Latin letters to the stars within each constellation. These are known today as Bayer designations. Subsequent star atlases led to the development of today's accepted modern constellations.",
"title": "Early modern astronomy"
},
{
"paragraph_id": 26,
"text": "The southern sky, below about −65° declination, was only partially catalogued by ancient Babylonians, Egyptians, Greeks, Chinese, and Persian astronomers of the north. The knowledge that northern and southern star patterns differed goes back to Classical writers, who describe, for example, the African circumnavigation expedition commissioned by Egyptian Pharaoh Necho II in c. 600 BC and those of Hanno the Navigator in c. 500 BC.",
"title": "Early modern astronomy"
},
{
"paragraph_id": 27,
"text": "The history of southern constellations is not straightforward. Different groupings and different names were proposed by various observers, some reflecting national traditions or designed to promote various sponsors. Southern constellations were important from the 14th to 16th centuries, when sailors used the stars for celestial navigation. Italian explorers who recorded new southern constellations include Andrea Corsali, Antonio Pigafetta, and Amerigo Vespucci.",
"title": "Early modern astronomy"
},
{
"paragraph_id": 28,
"text": "Many of the 88 IAU-recognized constellations in this region first appeared on celestial globes developed in the late 16th century by Petrus Plancius, based mainly on observations of the Dutch navigators Pieter Dirkszoon Keyser and Frederick de Houtman. These became widely known through Johann Bayer's star atlas Uranometria of 1603. Fourteen more were created in 1763 by the French astronomer Nicolas Louis de Lacaille, who also split the ancient constellation Argo Navis into three; these new figures appeared in his star catalogue, published in 1756.",
"title": "Early modern astronomy"
},
{
"paragraph_id": 29,
"text": "Several modern proposals have not survived. The French astronomers Pierre Lemonnier and Joseph Lalande, for example, proposed constellations that were once popular but have since been dropped. The northern constellation Quadrans Muralis survived into the 19th century (when its name was attached to the Quadrantid meteor shower), but is now divided between Boötes and Draco.",
"title": "Early modern astronomy"
},
{
"paragraph_id": 30,
"text": "A list of 88 constellations was produced for the International Astronomical Union in 1922. It is roughly based on the traditional Greek constellations listed by Ptolemy in his Almagest in the 2nd century and Aratus' work Phenomena, with early modern modifications and additions (most importantly introducing constellations covering the parts of the southern sky unknown to Ptolemy) by Petrus Plancius (1592, 1597/98 and 1613), Johannes Hevelius (1690) and Nicolas Louis de Lacaille (1763), who introduced fourteen new constellations. Lacaille studied the stars of the southern hemisphere from 1751 until 1752 from the Cape of Good Hope, when he was said to have observed more than 10,000 stars using a refracting telescope with an aperture of 0.5 inches (13 mm).",
"title": "Early modern astronomy"
},
{
"paragraph_id": 31,
"text": "In 1922, Henry Norris Russell produced a list of 88 constellations with three-letter abbreviations for them. However, these constellations did not have clear borders between them. In 1928, the International Astronomical Union (IAU) formally accepted 88 modern constellations, with contiguous boundaries along vertical and horizontal lines of right ascension and declination developed by Eugene Delporte that, together, cover the entire celestial sphere; this list was finally published in 1930. Where possible, these modern constellations usually share the names of their Graeco-Roman predecessors, such as Orion, Leo or Scorpius. The aim of this system is area-mapping, i.e. the division of the celestial sphere into contiguous fields. Out of the 88 modern constellations, 36 lie predominantly in the northern sky, and the other 52 predominantly in the southern.",
"title": "Early modern astronomy"
},
{
"paragraph_id": 32,
"text": "The boundaries developed by Delporte used data that originated back to epoch B1875.0, which was when Benjamin A. Gould first made his proposal to designate boundaries for the celestial sphere, a suggestion on which Delporte based his work. The consequence of this early date is that because of the precession of the equinoxes, the borders on a modern star map, such as epoch J2000, are already somewhat skewed and no longer perfectly vertical or horizontal. This effect will increase over the years and centuries to come.",
"title": "Early modern astronomy"
},
{
"paragraph_id": 33,
"text": "The constellations have no official symbols, though those of the ecliptic may take the signs of the zodiac. Symbols for the other modern constellations, as well as older ones that still occur in modern nomenclature, have occasionally been published.",
"title": "Early modern astronomy"
},
{
"paragraph_id": 34,
"text": "The Great Rift, a series of dark patches in the Milky Way, is more visible and striking in the southern hemisphere than in the northern. It vividly stands out when conditions are otherwise so dark that the Milky Way's central region casts shadows on the ground. Some cultures have discerned shapes in these patches and have given names to these \"dark cloud constellations\". Members of the Inca civilization identified various dark areas or dark nebulae in the Milky Way as animals and associated their appearance with the seasonal rains. Australian Aboriginal astronomy also describes dark cloud constellations, the most famous being the \"emu in the sky\" whose head is formed by the Coalsack, a dark nebula, instead of the stars.",
"title": "Dark cloud constellations"
}
] | A constellation is an area on the celestial sphere in which a group of visible stars forms a perceived pattern or outline, typically representing an animal, mythological subject, or inanimate object. The origins of the earliest constellations likely go back to prehistory. People used them to relate stories of their beliefs, experiences, creation, or mythology. Different cultures and countries invented their own constellations, some of which lasted into the early 20th century before today's constellations were internationally recognized. The recognition of constellations has changed significantly over time. Many changed in size or shape. Some became popular, only to drop into obscurity. Some were limited to a single culture or nation. Naming constellations also helped astronomers and navigators identify stars more easily. Twelve ancient constellations belong to the zodiac. The origins of the zodiac remain historically uncertain; its astrological divisions became prominent c. 400 BC in Babylonian or Chaldean astronomy. Constellations appear in Western culture via Greece and are mentioned in the works of Hesiod, Eudoxus and Aratus. The traditional 48 constellations, consisting of the Zodiac and 36 more are listed by Ptolemy, a Greco-Roman astronomer from Alexandria, Egypt, in his Almagest. The formation of constellations was the subject of extensive mythology, most notably in the Metamorphoses of the Latin poet Ovid. Constellations in the far southern sky were added from the 15th century until the mid-18th century when European explorers began traveling to the Southern Hemisphere. Due to Roman and European transmission, each constellation has a Latin name. In 1922, the International Astronomical Union (IAU) formally accepted the modern list of 88 constellations, and in 1928 adopted official constellation boundaries that together cover the entire celestial sphere. Any given point in a celestial coordinate system lies in one of the modern constellations. Some astronomical naming systems include the constellation where a given celestial object is found to convey its approximate location in the sky. The Flamsteed designation of a star, for example, consists of a number and the genitive form of the constellation's name. Other star patterns or groups called asterisms are not constellations under the formal definition, but are also used by observers to navigate the night sky. Asterisms may be several stars within a constellation, or they may share stars with more than one constellation. Examples of asterisms include the teapot within the constellation Sagittarius, or the big dipper in the constellation of Ursa Major. | 2001-09-17T20:29:45Z | 2023-12-13T01:01:35Z | [
"Template:Navconstel",
"Template:Zodiac",
"Template:Lang",
"Template:See also",
"Template:ISBN",
"Template:Sister project links",
"Template:Portal bar",
"Template:Circa",
"Template:Citation needed",
"Template:Further",
"Template:Clarify",
"Template:Cite journal",
"Template:Further reading cleanup",
"Template:LCCN",
"Template:Short description",
"Template:About",
"Template:Multiple image",
"Template:Main",
"Template:Cite book",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Convert",
"Template:Reflist",
"Template:Cite web"
] | https://en.wikipedia.org/wiki/Constellation |
5,269 | Character | Character or Characters may refer to: | [
{
"paragraph_id": 0,
"text": "Character or Characters may refer to:",
"title": ""
}
] | Character or Characters may refer to: | 2023-04-21T03:24:50Z | [
"Template:Look from",
"Template:In title",
"Template:Disambiguation",
"Template:Wiktionary",
"Template:TOC right"
] | https://en.wikipedia.org/wiki/Character |
|
5,270 | Car (disambiguation) | A car is a wheeled motor vehicle used for transporting passengers.
Car(s), CAR(s), or The Car(s) may also refer to: | [
{
"paragraph_id": 0,
"text": "A car is a wheeled motor vehicle used for transporting passengers.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Car(s), CAR(s), or The Car(s) may also refer to:",
"title": ""
}
] | A car is a wheeled motor vehicle used for transporting passengers. Car(s), CAR(s), or The Car(s) may also refer to: | 2001-03-16T01:14:20Z | 2023-12-04T00:04:47Z | [
"Template:TOC right",
"Template:Lang",
"Template:Srt",
"Template:In title",
"Template:Look from",
"Template:Disambiguation",
"Template:Wiktionary"
] | https://en.wikipedia.org/wiki/Car_(disambiguation) |
5,272 | Printer (computing) | In computing, a printer is a peripheral device that produces a persistent representation of text or graphics, usually on paper. While most printer output is human-readable, barcode printers are an example of printer output intended primarily to be read by machines rather than people. Several types of printers are in common use, including inkjet printers, thermal printers, laser printers, and 3D printers.
The first computer printer designed was a mechanically driven apparatus by Charles Babbage for his difference engine in the 19th century; however, his mechanical printer design was not built until 2000.
The first patented printing mechanism for applying a marking medium to a recording medium (more particularly, an electrostatic inking apparatus and a method for electrostatically depositing ink on controlled areas of a receiving medium) was granted in 1962 to C. R. Winston of Teletype Corporation and used continuous inkjet printing. The ink was a red stamp-pad ink manufactured by Phillips Process Company of Rochester, NY under the name Clear Print. This patent (US3060429) led to the Teletype Inktronic Printer product delivered to customers in late 1966.
The first compact, lightweight digital printer was the EP-101, invented by Japanese company Epson and released in 1968, according to Epson.
The first commercial printers generally used mechanisms from electric typewriters and Teletype machines. The demand for higher speed led to the development of new systems specifically for computer use. In the 1980s there were daisy wheel systems similar to typewriters, line printers that produced similar output but at much higher speed, and dot-matrix systems that could mix text and graphics but produced relatively low-quality output. The plotter was used for those requiring high-quality line art like blueprints.
The introduction of the low-cost laser printer in 1984, with the first HP LaserJet, and the addition of PostScript in next year's Apple LaserWriter set off a revolution in printing known as desktop publishing. Laser printers using PostScript mixed text and graphics, like dot-matrix printers, but at quality levels formerly available only from commercial typesetting systems. By 1990, most simple printing tasks like fliers and brochures were now created on personal computers and then laser printed; expensive offset printing systems were being dumped as scrap. The HP Deskjet of 1988 offered the same advantages as a laser printer in terms of flexibility, but produced somewhat lower-quality output (depending on the paper) from much less-expensive mechanisms. Inkjet systems rapidly displaced dot-matrix and daisy-wheel printers from the market. By the 2000s, high-quality printers of this sort had fallen under the $100 price point and became commonplace.
The rapid improvement of internet email through the 1990s and into the 2000s has largely displaced the need for printing as a means of moving documents, and a wide variety of reliable storage systems means that a "physical backup" is of little benefit today.
Starting around 2010, 3D printing became an area of intense interest, allowing the creation of physical objects with the same sort of effort as an early laser printer required to produce a brochure. As of the 2020s, 3D printing has become a widespread hobby due to the abundance of cheap 3D printer kits, with the most common process being Fused deposition modeling.
Personal printers are mainly designed to support individual users, and may be connected to only a single computer. These printers are designed for low-volume, short-turnaround print jobs, requiring minimal setup time to produce a hard copy of a given document. However, they are generally slow devices ranging from 6 to around 25 pages per minute (ppm), and the cost per page is relatively high; this is offset by the convenience of on-demand printing. Some printers can print documents stored on memory cards or from digital cameras and scanners.
Networked or shared printers are "designed for high-volume, high-speed printing". They are usually shared by many users on a network and can print at speeds of 45 to around 100 ppm. The Xerox 9700 could achieve 120 ppm. An ID card printer is used for printing plastic ID cards, which can now be customised with features such as holographic overlays, HoloKotes and watermarks; this is done with either a direct-to-card printer (the more feasible option) or a retransfer printer. A virtual printer is a piece of computer software whose user interface and API resemble those of a printer driver, but which is not connected to a physical printer. A virtual printer can be used to create a file which is an image of the data which would be printed, for archival purposes or as input to another program, for example to create a PDF or to transmit to another system or user.
A barcode printer is a computer peripheral for printing barcode labels or tags that can be attached to, or printed directly on, physical objects. Barcode printers are commonly used to label cartons before shipment, or to label retail items with UPCs or EANs.
A 3D printer is a device for making a three-dimensional object from a 3D model or other electronic data source through additive processes in which successive layers of material (including plastics, metals, food, cement, wood, and other materials) are laid down under computer control. It is called a printer by analogy with an inkjet printer which produces a two-dimensional document by a similar process of depositing a layer of ink on paper.
A card printer is an electronic desktop printer with single card feeders which print and personalize plastic cards. In this respect they differ from, for example, label printers which have a continuous supply feed. Card dimensions are usually 85.60 × 53.98 mm, standardized under ISO/IEC 7810 as ID-1. This format is also used in EC-cards, telephone cards, credit cards, driver's licenses and health insurance cards. This is commonly known as the bank card format. Card printers are controlled by corresponding printer drivers or by means of a specific programming language. Generally card printers are designed with laminating, striping, and punching functions, and use desktop or web-based software. The hardware features of a card printer differentiate a card printer from the more traditional printers, as ID cards are usually made of PVC plastic and require laminating and punching. Different card printers can accept different card thickness and dimensions.
The principle is the same for practically all card printers: the plastic card is passed through a thermal print head at the same time as a color ribbon. The color from the ribbon is transferred onto the card through the heat given out from the print head. The standard performance for card printing is 300 dpi (300 dots per inch, equivalent to 11.8 dots per mm). There are different printing processes, which vary in their detail:
There are basically two categories of card printer software: desktop-based and web-based (online). The biggest difference between the two is whether or not a customer has a printer on their network that is capable of printing identification cards. If a business already owns an ID card printer, then a desktop-based badge maker is probably suitable for its needs. Typically, large organizations with high employee turnover will have their own printer. A desktop-based badge maker is also required if a company needs its IDs made instantly; an example is a private construction site with restricted access. However, if a company does not already have a local (or network) printer with the features it needs, then the web-based option is perhaps a more affordable solution. The web-based solution suits small businesses that do not anticipate rapid growth, or organizations that either cannot afford a card printer or do not have the resources to learn how to set up and use one. Generally speaking, desktop-based solutions involve software and a database (or spreadsheet) and can be installed on a single computer or network.
Alongside the basic function of printing cards, card printers can also read and encode magnetic stripes as well as contact and contactless RFID chip cards (smart cards). Thus card printers enable the encoding of plastic cards both visually and logically. Plastic cards can also be laminated after printing, which considerably increases their durability and provides a greater degree of counterfeit prevention. Some card printers offer an option to print both sides in one pass, which cuts down printing time and reduces the margin of error: one side of the ID card is printed, the card is then flipped in the flip station, and the other side is printed.
Alongside the traditional uses in time attendance and access control (in particular with photo personalization), countless other applications have been found for plastic cards, e.g. for personalized customer and members' cards, for sports ticketing and in local public transport systems for the production of season tickets, for the production of school and college identity cards as well as for the production of national ID cards.
The choice of print technology has a great effect on the cost of the printer and cost of operation, speed, quality and permanence of documents, and noise. Some printer technologies do not work with certain types of physical media, such as carbon paper or transparencies.
A second aspect of printer technology that is often forgotten is resistance to alteration: liquid ink, such as from an inkjet head or fabric ribbon, becomes absorbed by the paper fibers, so documents printed with liquid ink are more difficult to alter than documents printed with toner or solid inks, which do not penetrate below the paper surface.
Cheques can be printed with liquid ink or on special cheque paper with toner anchorage so that alterations may be detected. The machine-readable lower portion of a cheque must be printed using MICR toner or ink. Banks and other clearing houses employ automation equipment that relies on the magnetic flux from these specially printed characters to function properly.
The following printing technologies are routinely found in modern printers:
A laser printer rapidly produces high quality text and graphics. As with digital photocopiers and multifunction printers (MFPs), laser printers employ a xerographic printing process but differ from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer's photoreceptor.
Another toner-based printer is the LED printer which uses an array of LEDs instead of a laser to cause toner adhesion to the print drum.
Inkjet printers operate by propelling variably sized droplets of liquid ink onto almost any sized page. They are the most common type of computer printer used by consumers.
Solid ink printers, also known as phase-change ink or hot-melt ink printers, are a type of thermal transfer printer, graphics sheet printer or 3D printer. They use solid sticks, crayons, pearls or granular ink materials. Common inks are CMYK-colored ink, similar in consistency to candle wax, which are melted and fed into a piezo-crystal-operated print head. A thermal transfer printhead jets the liquid ink onto a rotating, oil-coated drum. The paper then passes over the print drum, at which time the image is immediately transferred, or transfixed, to the page. Solid ink printers are most commonly used as color office printers and are excellent at printing on transparencies and other non-porous media. Solid ink, also called phase-change or hot-melt ink, was first used by Data Products and Howtek, Inc., in 1984. Solid ink printers can produce excellent results with text and images. Some solid ink printers have evolved to print 3D models; for example, Visual Impact Corporation of Windham, NH, was started by retired Howtek employee Richard Helinski, whose 3D patents US4721635 and US5136515 were licensed to Sanders Prototype, Inc., later named Solidscape, Inc. Acquisition and operating costs are similar to laser printers. Drawbacks of the technology include high energy consumption and long warm-up times from a cold state. Also, some users complain that the resulting prints are difficult to write on, as the wax tends to repel inks from pens, and are difficult to feed through automatic document feeders, but these traits have been significantly reduced in later models. This type of thermal transfer printer is only available from one manufacturer, Xerox, which manufactures it as part of its Xerox Phaser office printer line. Previously, solid ink printers were manufactured by Tektronix, but Tektronix sold the printing business to Xerox in 2001.
A dye-sublimation printer (or dye-sub printer) is a printer that employs a printing process that uses heat to transfer dye to a medium such as a plastic card, paper, or canvas. The process is usually to lay one color at a time using a ribbon that has color panels. Dye-sub printers are intended primarily for high-quality color applications, including color photography; and are less well-suited for text. While once the province of high-end print shops, dye-sublimation printers are now increasingly used as dedicated consumer photo printers.
Thermal printers work by selectively heating regions of special heat-sensitive paper. Monochrome thermal printers are used in cash registers, ATMs, gasoline dispensers and some older inexpensive fax machines. Colors can be achieved with special papers and different temperatures and heating rates for different colors; these colored sheets are not required in black-and-white output. One example is Zink (a portmanteau of "zero ink").
The following technologies are either obsolete or limited to special applications, though most were, at one time, in widespread use.
Impact printers rely on a forcible impact to transfer ink to the media. The impact printer uses a print head that either hits the surface of the ink ribbon, pressing the ink ribbon against the paper (similar to the action of a typewriter), or, less commonly, hits the back of the paper, pressing the paper against the ink ribbon (the IBM 1403 for example). All but the dot matrix printer rely on the use of fully formed characters, letterforms that represent each of the characters that the printer was capable of printing. In addition, most of these printers were limited to monochrome, or sometimes two-color, printing in a single typeface at one time, although bolding and underlining of text could be done by "overstriking", that is, printing two or more impressions either in the same character position or slightly offset. Impact printer varieties include typewriter-derived printers, teletypewriter-derived printers, daisywheel printers, dot matrix printers, and line printers. Dot-matrix printers remain in common use in businesses where multi-part forms are printed. An overview of impact printing contains a detailed description of many of the technologies used.
Several different computer printers were simply computer-controllable versions of existing electric typewriters. The Friden Flexowriter and IBM Selectric-based printers were the most-common examples. The Flexowriter printed with a conventional typebar mechanism while the Selectric used IBM's well-known "golf ball" printing mechanism. In either case, the letter form then struck a ribbon which was pressed against the paper, printing one character at a time. The maximum speed of the Selectric printer (the faster of the two) was 15.5 characters per second.
The common teleprinter could easily be interfaced with the computer and became very popular except for those computers manufactured by IBM. Some models used a "typebox" that was positioned, in the X- and Y-axes, by a mechanism, and the selected letter form was struck by a hammer. Others used a type cylinder in a similar way as the Selectric typewriters used their type ball. In either case, the letter form then struck a ribbon to print the letterform. Most teleprinters operated at ten characters per second although a few achieved 15 CPS.
Daisy wheel printers operate in much the same fashion as a typewriter. A hammer strikes a wheel with petals, the "daisy wheel", each petal containing a letter form at its tip. The letter form strikes a ribbon of ink, depositing the ink on the page and thus printing a character. By rotating the daisy wheel, different characters are selected for printing. These printers were also referred to as letter-quality printers because they could produce text which was as clear and crisp as a typewriter. The fastest letter-quality printers printed at 30 characters per second.
The term dot matrix printer is used for impact printers that use a matrix of small pins to transfer ink to the page. The advantage of dot matrix over other impact printers is that they can produce graphical images in addition to text; however, the text is generally of poorer quality than that of impact printers that use letterforms (type).
Dot-matrix printers can be broadly divided into two major classes:
Dot matrix printers can either be character-based or line-based (that is, a single horizontal series of pixels across the page), referring to the configuration of the print head.
In the 1970s and '80s, dot matrix printers were one of the more common types of printers used for general use, such as for home and small office use. Such printers normally had either 9 or 24 pins on the print head (early 7 pin printers also existed, which did not print descenders). There was a period during the early home computer era when a range of printers were manufactured under many brands such as the Commodore VIC-1525 using the Seikosha Uni-Hammer system. This used a single solenoid with an oblique striker that would be actuated 7 times for each column of 7 vertical pixels while the head was moving at a constant speed. The angle of the striker would align the dots vertically even though the head had moved one dot spacing in the time. The vertical dot position was controlled by a synchronized longitudinally ribbed platen behind the paper that rotated rapidly with a rib moving vertically seven dot spacings in the time it took to print one pixel column. 24-pin print heads were able to print at a higher quality and started to offer additional type styles and were marketed as Near Letter Quality by some vendors. Once the price of inkjet printers dropped to the point where they were competitive with dot matrix printers, dot matrix printers began to fall out of favour for general use.
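The column-by-column character formation described above can be made concrete with a small sketch. The Python below is only a toy model: the 5×7 glyph is hand-drawn for illustration and is not taken from any real printer's character ROM, and a real 9- or 24-pin head would use denser patterns.

```python
# Toy illustration of character formation on a 7-pin dot-matrix head: each
# character is a 5-column x 7-row grid of dots, and the head fires one vertical
# column of pins at a time as it sweeps across the paper.
GLYPH_A = [  # hand-drawn example glyph, not any printer's actual font
    ".###.",
    "#...#",
    "#...#",
    "#####",
    "#...#",
    "#...#",
    "#...#",
]

# The print head sees the glyph as 5 successive pin patterns (one per column).
columns = ["".join(row[c] for row in GLYPH_A) for c in range(5)]
for i, col in enumerate(columns):
    pins = [pin + 1 for pin, dot in enumerate(col) if dot == "#"]
    print(f"column {i}: fire pins {pins}")

# Reassembled on paper, the columns form the letter:
for row in GLYPH_A:
    print(row.replace(".", " ").replace("#", "*"))
```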
Some dot matrix printers, such as the NEC P6300, can be upgraded to print in color. This is achieved through the use of a four-color ribbon mounted on a mechanism (provided in an upgrade kit that replaces the standard black ribbon mechanism after installation) that raises and lowers the ribbons as needed. Color graphics are generally printed in four passes at standard resolution, thus slowing down printing considerably. As a result, color graphics can take up to four times longer to print than standard monochrome graphics, or up to 8-16 times as long in high-resolution mode.
Dot matrix printers are still commonly used in low-cost, low-quality applications such as cash registers, or in demanding, very high volume applications like invoice printing. Impact printing, unlike laser printing, allows the pressure of the print head to be applied to a stack of two or more forms to print multi-part documents such as sales invoices and credit card receipts using continuous stationery with carbonless copy paper. It also has security advantages as ink impressed into a paper matrix by force is harder to erase invisibly. Dot-matrix printers were being superseded even as receipt printers after the end of the twentieth century.
Line printers print an entire line of text at a time. Four principal designs exist.
In each case, to print a line, precisely timed hammers strike against the back of the paper at the exact moment that the correct character to be printed is passing in front of the paper. The paper presses forward against a ribbon which then presses against the character form and the impression of the character form is printed onto the paper. Each system could have slight timing issues, which could cause minor misalignment of the resulting printed characters. For drum or typebar printers, this appeared as vertical misalignment, with characters being printed slightly above or below the rest of the line. In chain or bar printers, the misalignment was horizontal, with printed characters being crowded closer together or farther apart. This was much less noticeable to human vision than vertical misalignment, where characters seemed to bounce up and down in the line, so they were considered as higher quality print.
Line printers are the fastest of all impact printers and are used for bulk printing in large computer centres. A line printer can print at 1100 lines per minute or faster, frequently printing pages more rapidly than many current laser printers. On the other hand, the mechanical components of line printers operate with tight tolerances and require regular preventive maintenance (PM) to produce a top quality print. They are virtually never used with personal computers and have now been replaced by high-speed laser printers. The legacy of line printers lives on in many operating systems, which use the abbreviations "lp", "lpr", or "LPT" to refer to printers.
Liquid ink electrostatic printers use a chemical coated paper, which is charged by the print head according to the image of the document. The paper is passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image. This process was developed from the process of electrostatic copying. Color reproduction is very accurate, and because there is no heating the scale distortion is less than ±0.1%. (All laser printers have an accuracy of ±1%.)
Worldwide, most survey offices used this printer before color inkjet plotters became popular. Liquid ink electrostatic printers were mostly available in widths of 36 to 54 inches (910 to 1,370 mm) and also offered 6-color printing. These were also used to print large billboards. The technology was first introduced by Versatec, which was later bought by Xerox. 3M also used to make these printers.
Pen-based plotters were an alternate printing technology once common in engineering and architectural firms. Pen-based plotters rely on contact with the paper (but not impact, per se) and special purpose pens that are mechanically run over the paper to create text and images. Since the pens output continuous lines, they were able to produce technical drawings of higher resolution than was achievable with dot-matrix technology. Some plotters used roll-fed paper, and therefore had a minimal restriction on the size of the output in one dimension. These plotters were capable of producing quite sizable drawings.
A number of other sorts of printers are important for historical reasons, or for special purpose uses.
Printers can be connected to computers in many ways: directly by a dedicated data cable such as USB, through a short-range radio link such as Bluetooth, over a local area network using cables (such as Ethernet) or radio (such as Wi-Fi), or on a standalone basis without a computer, using a memory card or other portable data storage device.
Most printers other than line printers accept control characters or unique character sequences to control various printer functions. These may range from shifting from lower to upper case or from black to red ribbon on typewriter printers to switching fonts and changing character sizes and colors on raster printers. Early printer controls were not standardized, with each manufacturer's equipment having its own set. The IBM Personal Printer Data Stream (PPDS) became a commonly used command set for dot-matrix printers.
Today, most printers accept one or more page description languages (PDLs). Laser printers with greater processing power frequently offer support for variants of Hewlett-Packard's Printer Command Language (PCL), PostScript or XML Paper Specification. Most inkjet devices support manufacturer proprietary PDLs such as ESC/P. The diversity in mobile platforms has led to various standardization efforts around device PDLs such as the Printer Working Group's (PWG) PWG Raster.
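As a concrete, non-authoritative illustration of such control codes, the Python sketch below pushes a few classic ESC/P escape sequences to a printer over the common raw ("JetDirect"-style) TCP port 9100. The host name is hypothetical, and the target is assumed to understand ESC/P; a printer expecting only PCL or PostScript would ignore or misprint these bytes.

```python
# Sketch: send raw ESC/P control codes to a network printer on port 9100.
# Assumes an ESC/P-capable printer; "printer.example.local" is a placeholder.
import socket

ESC = b"\x1b"
INIT = ESC + b"@"      # ESC @ : reset/initialize the printer
BOLD_ON = ESC + b"E"   # ESC E : select bold
BOLD_OFF = ESC + b"F"  # ESC F : cancel bold

job = (INIT
       + b"Plain text, then " + BOLD_ON + b"bold text" + BOLD_OFF + b".\r\n"
       + b"\x0c")  # form feed to eject the page

with socket.create_connection(("printer.example.local", 9100), timeout=10) as s:
    s.sendall(job)
```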
The speed of early printers was measured in units of characters per minute (cpm) for character printers, or lines per minute (lpm) for line printers. Modern printers are measured in pages per minute (ppm). These measures are used primarily as a marketing tool, and are not as well standardised as toner yields. Usually pages per minute refers to sparse monochrome office documents, rather than dense pictures which usually print much more slowly, especially color images. Speeds in ppm usually apply to A4 paper in most countries in the world, and letter paper size, about 6% shorter, in North America.
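The relationship between these units can be made explicit with a little arithmetic. The sketch below assumes a plain monochrome text page of 66 lines of 80 characters, which is only a rough convention; actual throughput depends heavily on page content.

```python
# Rough, illustrative conversions between cpm, lpm and ppm for plain text.
# The 80-column x 66-line page is an assumption, not a standard of any printer.
CHARS_PER_LINE = 80
LINES_PER_PAGE = 66

def cpm_to_ppm(chars_per_minute: float) -> float:
    return chars_per_minute / (CHARS_PER_LINE * LINES_PER_PAGE)

def lpm_to_ppm(lines_per_minute: float) -> float:
    return lines_per_minute / LINES_PER_PAGE

print(f"{lpm_to_ppm(1100):.1f} ppm")  # a 1100 lpm line printer ~ 16.7 ppm
print(f"{cpm_to_ppm(930):.2f} ppm")   # a 15.5 cps typewriter-based printer ~ 0.18 ppm
```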
The data received by a printer may be:
Some printers can process all four types of data, others not.
Today it is possible to print everything (even plain text) by sending ready bitmapped images to the printer. This allows better control over formatting, especially among machines from different vendors. Many printer drivers do not use the text mode at all, even if the printer is capable of it.
A monochrome printer can only produce images in shades of a single color. Most such printers can produce only two tones, black (ink) and white (no ink). With halftoning techniques, however, such a printer can produce acceptable grey-scale images too.
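As a minimal sketch of how halftoning lets a two-tone device approximate greys, the Python below implements one classic technique, ordered (Bayer) dithering, on a synthetic grey ramp. Real printer drivers use more elaborate screening, so this is only a toy model.

```python
# Ordered (Bayer) dithering: each grey level in [0, 255] is compared against a
# small threshold matrix tiled across the image, so every pixel becomes pure
# black or white while the local density of black dots approximates the grey.
BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def halftone(grey_rows):
    """grey_rows: rows of 0-255 grey values -> rows of 0/1 (1 = print a dot)."""
    out = []
    for y, row in enumerate(grey_rows):
        out_row = []
        for x, g in enumerate(row):
            threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16 * 255
            out_row.append(1 if g < threshold else 0)  # darker than threshold -> ink
        out.append(out_row)
    return out

# A horizontal ramp from black (0) to white (255) rendered as dot patterns.
ramp = [[int(x / 31 * 255) for x in range(32)] for _ in range(8)]
for row in halftone(ramp):
    print("".join("#" if dot else "." for dot in row))
```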
A color printer can produce images of multiple colors. A photo printer is a color printer that can produce images that mimic the color range (gamut) and resolution of prints made from photographic film.
The page yield is the number of pages that can be printed from a toner cartridge or ink cartridge before the cartridge needs to be refilled or replaced. The actual number of pages yielded by a specific cartridge depends on a number of factors.
For a fair comparison, many laser printer manufacturers use the ISO/IEC 19752 process to measure the toner cartridge yield.
In order to fairly compare operating expenses of printers with a relatively small ink cartridge to printers with a larger, more expensive toner cartridge that typically holds more toner and so prints more pages before the cartridge needs to be replaced, many people prefer to estimate operating expenses in terms of cost per page (CPP).
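The arithmetic behind a cost-per-page comparison is simple; the sketch below uses made-up prices and yields purely for illustration, and real comparisons should rely on yields measured under ISO/IEC 19752 and its inkjet counterparts.

```python
# Illustrative cost-per-page (CPP) arithmetic with hypothetical prices/yields.
def cost_per_page(cartridge_price: float, page_yield: int) -> float:
    return cartridge_price / page_yield

inkjet_cpp = cost_per_page(25.00, 300)   # hypothetical small ink cartridge
laser_cpp = cost_per_page(80.00, 2500)   # hypothetical larger toner cartridge

print(f"inkjet: ${inkjet_cpp:.3f}/page, laser: ${laser_cpp:.3f}/page")
# With these example numbers the toner cartridge costs more up front but
# prints far more cheaply per page (about $0.083 vs $0.032).
```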
Retailers often apply the "razor and blades" model: a company may sell a printer at cost and make profits on the ink cartridge, paper, or some other replacement part. This has caused legal disputes regarding the right of companies other than the printer manufacturer to sell compatible ink cartridges. To protect their business model, several manufacturers invest heavily in developing new cartridge technology and patenting it.
Other manufacturers, in reaction to the challenges from using this business model, choose to make more money on printers and less on ink, promoting the latter through their advertising campaigns. Finally, this generates two clearly different proposals: "cheap printer – expensive ink" or "expensive printer – cheap ink". Ultimately, the consumer decision depends on their reference interest rate or their time preference. From an economics viewpoint, there is a clear trade-off between cost per copy and cost of the printer.
Printer steganography is a type of steganography – "hiding data within data" – produced by color printers, including Brother, Canon, Dell, Epson, HP, IBM, Konica Minolta, Kyocera, Lanier, Lexmark, Ricoh, Toshiba and Xerox brand color laser printers, where tiny yellow dots are added to each page. The dots are barely visible and contain encoded printer serial numbers, as well as date and time stamps.
As of 2020-2021, the largest worldwide vendor of printers is Hewlett-Packard, followed by Canon, Brother, Seiko Epson and Kyocera. Other known vendors include NEC, Ricoh, Xerox, Lexmark, OKI, Sharp, Konica Minolta, Samsung, Kodak, Dell, Toshiba, Star Micronics, Citizen and Panasonic. | [
{
"paragraph_id": 0,
"text": "In the field of computing, a printer is considered a peripheral device that serves the purpose of creating a permanent representation of text or graphics, usually on paper. While the majority of outputs produced by printers are readable by humans, there are instances where barcode printers have found a utility beyond this traditional use. Different types of printers are available for use, including inkjet printers, thermal printers, laser printers, and 3D printers.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The first computer printer designed was a mechanically driven apparatus by Charles Babbage for his difference engine in the 19th century; however, his mechanical printer design was not built until 2000.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "The first patented printing mechanism for applying a marking medium to a recording medium or more particularly an electrostatic inking apparatus and a method for electrostatically depositing ink on controlled areas of a receiving medium, was in 1962 by C. R. Winston, Teletype Corporation, using continuous inkjet printing. The ink was a red stamp-pad ink manufactured by Phillips Process Company of Rochester, NY under the name Clear Print. This patent (US3060429) led to the Teletype Inktronic Printer product delivered to customers in late 1966.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "The first compact, lightweight digital printer was the EP-101, invented by Japanese company Epson and released in 1968, according to Epson.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The first commercial printers generally used mechanisms from electric typewriters and Teletype machines. The demand for higher speed led to the development of new systems specifically for computer use. In the 1980s there were daisy wheel systems similar to typewriters, line printers that produced similar output but at much higher speed, and dot-matrix systems that could mix text and graphics but produced relatively low-quality output. The plotter was used for those requiring high-quality line art like blueprints.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The introduction of the low-cost laser printer in 1984, with the first HP LaserJet, and the addition of PostScript in next year's Apple LaserWriter set off a revolution in printing known as desktop publishing. Laser printers using PostScript mixed text and graphics, like dot-matrix printers, but at quality levels formerly available only from commercial typesetting systems. By 1990, most simple printing tasks like fliers and brochures were now created on personal computers and then laser printed; expensive offset printing systems were being dumped as scrap. The HP Deskjet of 1988 offered the same advantages as a laser printer in terms of flexibility, but produced somewhat lower-quality output (depending on the paper) from much less-expensive mechanisms. Inkjet systems rapidly displaced dot-matrix and daisy-wheel printers from the market. By the 2000s, high-quality printers of this sort had fallen under the $100 price point and became commonplace.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The rapid improvement of internet email through the 1990s and into the 2000s has largely displaced the need for printing as a means of moving documents, and a wide variety of reliable storage systems means that a \"physical backup\" is of little benefit today.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Starting around 2010, 3D printing became an area of intense interest, allowing the creation of physical objects with the same sort of effort as an early laser printer required to produce a brochure. As of the 2020s, 3D printing has become a widespread hobby due to the abundance of cheap 3D printer kits, with the most common process being Fused deposition modeling.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Personal printers are mainly designed to support individual users, and may be connected to only a single computer. These printers are designed for low-volume, short-turnaround print jobs, requiring minimal setup time to produce a hard copy of a given document. However, they are generally slow devices ranging from 6 to around 25 pages per minute (ppm), and the cost per page is relatively high. However, this is offset by the on-demand convenience. Some printers can print documents stored on memory cards or from digital cameras and scanners.",
"title": "Types"
},
{
"paragraph_id": 9,
"text": "Networked or shared printers are \"designed for high-volume, high-speed printing\". They are usually shared by many users on a network and can print at speeds of 45 to around 100 ppm. The Xerox 9700 could achieve 120 ppm. An ID Card printer is used for printing plastic ID cards. These can now be customised with important features such as holographic overlays, HoloKotes and watermarks. This is either a direct to card printer (the more feasible option, or a retransfer printer. A virtual printer is a piece of computer software whose user interface and API resembles that of a printer driver, but which is not connected with a physical computer printer. A virtual printer can be used to create a file which is an image of the data which would be printed, for archival purposes or as input to another program, for example to create a PDF or to transmit to another system or user.",
"title": "Types"
},
{
"paragraph_id": 10,
"text": "A barcode printer is a computer peripheral for printing barcode labels or tags that can be attached to, or printed directly on, physical objects. Barcode printers are commonly used to label cartons before shipment, or to label retail items with UPCs or EANs.",
"title": "Types"
},
{
"paragraph_id": 11,
"text": "A 3D printer is a device for making a three-dimensional object from a 3D model or other electronic data source through additive processes in which successive layers of material (including plastics, metals, food, cement, wood, and other materials) are laid down under computer control. It is called a printer by analogy with an inkjet printer which produces a two-dimensional document by a similar process of depositing a layer of ink on paper.",
"title": "Types"
},
{
"paragraph_id": 12,
"text": "A card printer is an electronic desktop printer with single card feeders which print and personalize plastic cards. In this respect they differ from, for example, label printers which have a continuous supply feed. Card dimensions are usually 85.60 × 53.98 mm, standardized under ISO/IEC 7810 as ID-1. This format is also used in EC-cards, telephone cards, credit cards, driver's licenses and health insurance cards. This is commonly known as the bank card format. Card printers are controlled by corresponding printer drivers or by means of a specific programming language. Generally card printers are designed with laminating, striping, and punching functions, and use desktop or web-based software. The hardware features of a card printer differentiate a card printer from the more traditional printers, as ID cards are usually made of PVC plastic and require laminating and punching. Different card printers can accept different card thickness and dimensions.",
"title": "Types"
},
{
"paragraph_id": 13,
"text": "The principle is the same for practically all card printers: the plastic card is passed through a thermal print head at the same time as a color ribbon. The color from the ribbon is transferred onto the card through the heat given out from the print head. The standard performance for card printing is 300 dpi (300 dots per inch, equivalent to 11.8 dots per mm). There are different printing processes, which vary in their detail:",
"title": "Types"
},
{
"paragraph_id": 14,
"text": "There are basically two categories of card printer software: desktop-based, and web-based (online). The biggest difference between the two is whether or not a customer has a printer on their network that is capable of printing identification cards. If a business already owns an ID card printer, then a desktop-based badge maker is probably suitable for their needs. Typically, large organizations who have high employee turnover will have their own printer. A desktop-based badge maker is also required if a company needs their IDs make instantly. An example of this is the private construction site that has restricted access. However, if a company does not already have a local (or network) printer that has the features they need, then the web-based option is a perhaps a more affordable solution. The web-based solution is good for small businesses that do not anticipate a lot of rapid growth, or organizations who either can not afford a card printer, or do not have the resources to learn how to set up and use one. Generally speaking, desktop-based solutions involve software, a database (or spreadsheet) and can be installed on a single computer or network.",
"title": "Types"
},
{
"paragraph_id": 15,
"text": "Alongside the basic function of printing cards, card printers can also read and encode magnetic stripes as well as contact and contact free RFID chip cards (smart cards). Thus card printers enable the encoding of plastic cards both visually and logically. Plastic cards can also be laminated after printing. Plastic cards are laminated after printing to achieve a considerable increase in durability and a greater degree of counterfeit prevention. Some card printers come with an option to print both sides at the same time, which cuts down the time taken to print and less margin of error. In such printers one side of id card is printed and then the card is flipped in the flip station and other side is printed.",
"title": "Types"
},
{
"paragraph_id": 16,
"text": "Alongside the traditional uses in time attendance and access control (in particular with photo personalization), countless other applications have been found for plastic cards, e.g. for personalized customer and members' cards, for sports ticketing and in local public transport systems for the production of season tickets, for the production of school and college identity cards as well as for the production of national ID cards.",
"title": "Types"
},
{
"paragraph_id": 17,
"text": "The choice of print technology has a great effect on the cost of the printer and cost of operation, speed, quality and permanence of documents, and noise. Some printer technologies do not work with certain types of physical media, such as carbon paper or transparencies.",
"title": "Technology"
},
{
"paragraph_id": 18,
"text": "A second aspect of printer technology that is often forgotten is resistance to alteration: liquid ink, such as from an inkjet head or fabric ribbon, becomes absorbed by the paper fibers, so documents printed with liquid ink are more difficult to alter than documents printed with toner or solid inks, which do not penetrate below the paper surface.",
"title": "Technology"
},
{
"paragraph_id": 19,
"text": "Cheques can be printed with liquid ink or on special cheque paper with toner anchorage so that alterations may be detected. The machine-readable lower portion of a cheque must be printed using MICR toner or ink. Banks and other clearing houses employ automation equipment that relies on the magnetic flux from these specially printed characters to function properly.",
"title": "Technology"
},
{
"paragraph_id": 20,
"text": "The following printing technologies are routinely found in modern printers:",
"title": "Technology"
},
{
"paragraph_id": 21,
"text": "A laser printer rapidly produces high quality text and graphics. As with digital photocopiers and multifunction printers (MFPs), laser printers employ a xerographic printing process but differ from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer's photoreceptor.",
"title": "Technology"
},
{
"paragraph_id": 22,
"text": "Another toner-based printer is the LED printer which uses an array of LEDs instead of a laser to cause toner adhesion to the print drum.",
"title": "Technology"
},
{
"paragraph_id": 23,
"text": "Inkjet printers operate by propelling variably sized droplets of liquid ink onto almost any sized page. They are the most common type of computer printer used by consumers.",
"title": "Technology"
},
{
"paragraph_id": 24,
"text": "Solid ink printers, also known as phase-change ink or hot-melt ink printers, are a type of thermal transfer printer, graphics sheet printer or 3D printer . They use solid sticks, crayons, pearls or granular ink materials. Common inks are CMYK-colored ink, similar in consistency to candle wax, which are melted and fed into a piezo crystal operated print-head. A Thermal transfer printhead jets the liquid ink on a rotating, oil coated drum. The paper then passes over the print drum, at which time the image is immediately transferred, or transfixed, to the page. Solid ink printers are most commonly used as color office printers and are excellent at printing on transparencies and other non-porous media. Solid ink is also called phase-change or hot-melt ink was first used by Data Products and Howtek, Inc., in 1984. Solid ink printers can produce excellent results with text and images. Some solid ink printers have evolved to print 3D models, for example, Visual Impact Corporation of Windham, NH was started by retired Howtek employee, Richard Helinski whose 3D patents US4721635 and then US5136515 was licensed to Sanders Prototype, Inc., later named Solidscape, Inc. Acquisition and operating costs are similar to laser printers. Drawbacks of the technology include high energy consumption and long warm-up times from a cold state. Also, some users complain that the resulting prints are difficult to write on, as the wax tends to repel inks from pens, and are difficult to feed through automatic document feeders, but these traits have been significantly reduced in later models. This type of thermal transfer printer is only available from one manufacturer, Xerox, manufactured as part of their Xerox Phaser office printer line. Previously, solid ink printers were manufactured by Tektronix, but Tektronix sold the printing business to Xerox in 2001.",
"title": "Technology"
},
{
"paragraph_id": 25,
"text": "A dye-sublimation printer (or dye-sub printer) is a printer that employs a printing process that uses heat to transfer dye to a medium such as a plastic card, paper, or canvas. The process is usually to lay one color at a time using a ribbon that has color panels. Dye-sub printers are intended primarily for high-quality color applications, including color photography; and are less well-suited for text. While once the province of high-end print shops, dye-sublimation printers are now increasingly used as dedicated consumer photo printers.",
"title": "Technology"
},
{
"paragraph_id": 26,
"text": "Thermal printers work by selectively heating regions of special heat-sensitive paper. Monochrome thermal printers are used in cash registers, ATMs, gasoline dispensers and some older inexpensive fax machines. Colors can be achieved with special papers and different temperatures and heating rates for different colors; these colored sheets are not required in black-and-white output. One example is Zink (a portmanteau of \"zero ink\").",
"title": "Technology"
},
{
"paragraph_id": 27,
"text": "The following technologies are either obsolete, or limited to special applications though most were, at one time, in widespread use.",
"title": "Technology"
},
{
"paragraph_id": 28,
"text": "Impact printers rely on a forcible impact to transfer ink to the media. The impact printer uses a print head that either hits the surface of the ink ribbon, pressing the ink ribbon against the paper (similar to the action of a typewriter), or, less commonly, hits the back of the paper, pressing the paper against the ink ribbon (the IBM 1403 for example). All but the dot matrix printer rely on the use of fully formed characters, letterforms that represent each of the characters that the printer was capable of printing. In addition, most of these printers were limited to monochrome, or sometimes two-color, printing in a single typeface at one time, although bolding and underlining of text could be done by \"overstriking\", that is, printing two or more impressions either in the same character position or slightly offset. Impact printers varieties include typewriter-derived printers, teletypewriter-derived printers, daisywheel printers, dot matrix printers, and line printers. Dot-matrix printers remain in common use in businesses where multi-part forms are printed. An overview of impact printing contains a detailed description of many of the technologies used.",
"title": "Technology"
},
{
"paragraph_id": 29,
"text": "Several different computer printers were simply computer-controllable versions of existing electric typewriters. The Friden Flexowriter and IBM Selectric-based printers were the most-common examples. The Flexowriter printed with a conventional typebar mechanism while the Selectric used IBM's well-known \"golf ball\" printing mechanism. In either case, the letter form then struck a ribbon which was pressed against the paper, printing one character at a time. The maximum speed of the Selectric printer (the faster of the two) was 15.5 characters per second.",
"title": "Technology"
},
{
"paragraph_id": 30,
"text": "The common teleprinter could easily be interfaced with the computer and became very popular except for those computers manufactured by IBM. Some models used a \"typebox\" that was positioned, in the X- and Y-axes, by a mechanism, and the selected letter form was struck by a hammer. Others used a type cylinder in a similar way as the Selectric typewriters used their type ball. In either case, the letter form then struck a ribbon to print the letterform. Most teleprinters operated at ten characters per second although a few achieved 15 CPS.",
"title": "Technology"
},
{
"paragraph_id": 31,
"text": "Daisy wheel printers operate in much the same fashion as a typewriter. A hammer strikes a wheel with petals, the \"daisy wheel\", each petal containing a letter form at its tip. The letter form strikes a ribbon of ink, depositing the ink on the page and thus printing a character. By rotating the daisy wheel, different characters are selected for printing. These printers were also referred to as letter-quality printers because they could produce text which was as clear and crisp as a typewriter. The fastest letter-quality printers printed at 30 characters per second.",
"title": "Technology"
},
{
"paragraph_id": 32,
"text": "The term dot matrix printer is used for impact printers that use a matrix of small pins to transfer ink to the page. The advantage of dot matrix over other impact printers is that they can produce graphical images in addition to text; however the text is generally of poorer quality than impact printers that use letterforms (type).",
"title": "Technology"
},
{
"paragraph_id": 33,
"text": "Dot-matrix printers can be broadly divided into two major classes:",
"title": "Technology"
},
{
"paragraph_id": 34,
"text": "Dot matrix printers can either be character-based or line-based (that is, a single horizontal series of pixels across the page), referring to the configuration of the print head.",
"title": "Technology"
},
{
"paragraph_id": 35,
"text": "In the 1970s and '80s, dot matrix printers were one of the more common types of printers used for general use, such as for home and small office use. Such printers normally had either 9 or 24 pins on the print head (early 7 pin printers also existed, which did not print descenders). There was a period during the early home computer era when a range of printers were manufactured under many brands such as the Commodore VIC-1525 using the Seikosha Uni-Hammer system. This used a single solenoid with an oblique striker that would be actuated 7 times for each column of 7 vertical pixels while the head was moving at a constant speed. The angle of the striker would align the dots vertically even though the head had moved one dot spacing in the time. The vertical dot position was controlled by a synchronized longitudinally ribbed platen behind the paper that rotated rapidly with a rib moving vertically seven dot spacings in the time it took to print one pixel column. 24-pin print heads were able to print at a higher quality and started to offer additional type styles and were marketed as Near Letter Quality by some vendors. Once the price of inkjet printers dropped to the point where they were competitive with dot matrix printers, dot matrix printers began to fall out of favour for general use.",
"title": "Technology"
},
{
"paragraph_id": 36,
"text": "Some dot matrix printers, such as the NEC P6300, can be upgraded to print in color. This is achieved through the use of a four-color ribbon mounted on a mechanism (provided in an upgrade kit that replaces the standard black ribbon mechanism after installation) that raises and lowers the ribbons as needed. Color graphics are generally printed in four passes at standard resolution, thus slowing down printing considerably. As a result, color graphics can take up to four times longer to print than standard monochrome graphics, or up to 8-16 times as long at high resolution mode.",
"title": "Technology"
},
{
"paragraph_id": 37,
"text": "Dot matrix printers are still commonly used in low-cost, low-quality applications such as cash registers, or in demanding, very high volume applications like invoice printing. Impact printing, unlike laser printing, allows the pressure of the print head to be applied to a stack of two or more forms to print multi-part documents such as sales invoices and credit card receipts using continuous stationery with carbonless copy paper. It also has security advantages as ink impressed into a paper matrix by force is harder to erase invisibly. Dot-matrix printers were being superseded even as receipt printers after the end of the twentieth century.",
"title": "Technology"
},
{
"paragraph_id": 38,
"text": "Line printers print an entire line of text at a time. Four principal designs exist.",
"title": "Technology"
},
{
"paragraph_id": 39,
"text": "In each case, to print a line, precisely timed hammers strike against the back of the paper at the exact moment that the correct character to be printed is passing in front of the paper. The paper presses forward against a ribbon which then presses against the character form and the impression of the character form is printed onto the paper. Each system could have slight timing issues, which could cause minor misalignment of the resulting printed characters. For drum or typebar printers, this appeared as vertical misalignment, with characters being printed slightly above or below the rest of the line. In chain or bar printers, the misalignment was horizontal, with printed characters being crowded closer together or farther apart. This was much less noticeable to human vision than vertical misalignment, where characters seemed to bounce up and down in the line, so they were considered as higher quality print.",
"title": "Technology"
},
{
"paragraph_id": 40,
"text": "Line printers are the fastest of all impact printers and are used for bulk printing in large computer centres. A line printer can print at 1100 lines per minute or faster, frequently printing pages more rapidly than many current laser printers. On the other hand, the mechanical components of line printers operate with tight tolerances and require regular preventive maintenance (PM) to produce a top quality print. They are virtually never used with personal computers and have now been replaced by high-speed laser printers. The legacy of line printers lives on in many operating systems, which use the abbreviations \"lp\", \"lpr\", or \"LPT\" to refer to printers.",
"title": "Technology"
},
{
"paragraph_id": 41,
"text": "Liquid ink electrostatic printers use a chemical coated paper, which is charged by the print head according to the image of the document. The paper is passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image. This process was developed from the process of electrostatic copying. Color reproduction is very accurate, and because there is no heating the scale distortion is less than ±0.1%. (All laser printers have an accuracy of ±1%.)",
"title": "Technology"
},
{
"paragraph_id": 42,
"text": "Worldwide, most survey offices used this printer before color inkjet plotters become popular. Liquid ink electrostatic printers were mostly available in 36 to 54 inches (910 to 1,370 mm) width and also 6 color printing. These were also used to print large billboards. It was first introduced by Versatec, which was later bought by Xerox. 3M also used to make these printers.",
"title": "Technology"
},
{
"paragraph_id": 43,
"text": "Pen-based plotters were an alternate printing technology once common in engineering and architectural firms. Pen-based plotters rely on contact with the paper (but not impact, per se) and special purpose pens that are mechanically run over the paper to create text and images. Since the pens output continuous lines, they were able to produce technical drawings of higher resolution than was achievable with dot-matrix technology. Some plotters used roll-fed paper, and therefore had a minimal restriction on the size of the output in one dimension. These plotters were capable of producing quite sizable drawings.",
"title": "Technology"
},
{
"paragraph_id": 44,
"text": "A number of other sorts of printers are important for historical reasons, or for special purpose uses.",
"title": "Technology"
},
{
"paragraph_id": 45,
"text": "Printers can be connected to computers in many ways: directly by a dedicated data cable such as the USB, through a short-range radio like Bluetooth, a local area network using cables (such as the Ethernet) or radio (such as WiFi), or on a standalone basis without a computer, using a memory card or other portable data storage device.",
"title": "Attributes"
},
{
"paragraph_id": 46,
"text": "Most printers other than line printers accept control characters or unique character sequences to control various printer functions. These may range from shifting from lower to upper case or from black to red ribbon on typewriter printers to switching fonts and changing character sizes and colors on raster printers. Early printer controls were not standardized, with each manufacturer's equipment having its own set. The IBM Personal Printer Data Stream (PPDS) became a commonly used command set for dot-matrix printers.",
"title": "Attributes"
},
{
"paragraph_id": 47,
"text": "Today, most printers accept one or more page description languages (PDLs). Laser printers with greater processing power frequently offer support for variants of Hewlett-Packard's Printer Command Language (PCL), PostScript or XML Paper Specification. Most inkjet devices support manufacturer proprietary PDLs such as ESC/P. The diversity in mobile platforms have led to various standardization efforts around device PDLs such as the Printer Working Group (PWG's) PWG Raster.",
"title": "Attributes"
},
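As a concrete illustration of the page description languages described above, here is a minimal sketch of sending a tiny PostScript job to a network printer over the common raw-socket (AppSocket/JetDirect-style) port 9100. It is a sketch under stated assumptions, not a definitive implementation: the address 192.0.2.10 is a placeholder, and it assumes the target printer accepts PostScript; a device that only understands PCL or a proprietary PDL such as ESC/P would need a job written in that language instead.

```python
import socket

# A tiny PostScript program: select a font, move to a position on the page,
# draw a string, and emit the page.
POSTSCRIPT_JOB = b"""%!PS
/Helvetica findfont 24 scalefont setfont
72 720 moveto
(Hello from a page description language) show
showpage
"""

def send_raw_job(host: str, data: bytes, port: int = 9100) -> None:
    """Send a print job to a printer's raw/AppSocket port (commonly 9100)."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(data)

if __name__ == "__main__":
    # "192.0.2.10" is a placeholder; substitute the address of a printer
    # on your network that accepts PostScript on its raw port.
    send_raw_job("192.0.2.10", POSTSCRIPT_JOB)
```

The same transport works for any PDL the device understands; only the bytes of the job change, which is exactly the separation the paragraph above describes.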
{
"paragraph_id": 48,
"text": "The speed of early printers was measured in units of characters per minute (cpm) for character printers, or lines per minute (lpm) for line printers. Modern printers are measured in pages per minute (ppm). These measures are used primarily as a marketing tool, and are not as well standardised as toner yields. Usually pages per minute refers to sparse monochrome office documents, rather than dense pictures which usually print much more slowly, especially color images. Speeds in ppm usually apply to A4 paper in most countries in the world, and letter paper size, about 6% shorter, in North America.",
"title": "Attributes"
},
{
"paragraph_id": 49,
"text": "The data received by a printer may be:",
"title": "Attributes"
},
{
"paragraph_id": 50,
"text": "Some printers can process all four types of data, others not.",
"title": "Attributes"
},
{
"paragraph_id": 51,
"text": "Today it is possible to print everything (even plain text) by sending ready bitmapped images to the printer. This allows better control over formatting, especially among machines from different vendors. Many printer drivers do not use the text mode at all, even if the printer is capable of it.",
"title": "Attributes"
},
{
"paragraph_id": 52,
"text": "A monochrome printer can only produce monochrome images, with only shades of a single color. Most printers can produce only two colors, black (ink) and white (no ink). With half-tonning techniques, however, such a printer can produce acceptable grey-scale images too",
"title": "Attributes"
},
{
"paragraph_id": 53,
"text": "A color printer can produce images of multiple colors. A photo printer is a color printer that can produce images that mimic the color range (gamut) and resolution of prints made from photographic film.",
"title": "Attributes"
},
{
"paragraph_id": 54,
"text": "The page yield is the number of pages that can be printed from a [toner cartridge] or [ink cartridge]—before the cartridge needs to be refilled or replaced. The actual number of pages yielded by a specific cartridge depends on a number of factors.",
"title": "Attributes"
},
{
"paragraph_id": 55,
"text": "For a fair comparison, many laser printer manufacturers use the ISO/IEC 19752 process to measure the toner cartridge yield.",
"title": "Attributes"
},
{
"paragraph_id": 56,
"text": "In order to fairly compare operating expenses of printers with a relatively small ink cartridge to printers with a larger, more expensive toner cartridge that typically holds more toner and so prints more pages before the cartridge needs to be replaced, many people prefer to estimate operating expenses in terms of cost per page (CPP).",
"title": "Attributes"
},
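As a rough illustration of the cost-per-page comparison described above, the sketch below simply divides cartridge price by rated page yield. The prices and yields are hypothetical, not drawn from the article or any vendor, and the result covers the cartridge alone, ignoring paper, power and printer amortization.

```python
def cost_per_page(cartridge_price: float, page_yield: int) -> float:
    """Cost per page (CPP) attributable to the cartridge alone."""
    return cartridge_price / page_yield

# Hypothetical figures: a $20 ink cartridge rated for 200 pages versus
# an $80 toner cartridge rated for 2,000 pages.
ink_cpp = cost_per_page(20.00, 200)     # 0.10 dollars per page
toner_cpp = cost_per_page(80.00, 2000)  # 0.04 dollars per page
print(f"ink: ${ink_cpp:.2f}/page, toner: ${toner_cpp:.2f}/page")
```

This is why a printer with a cheap purchase price can still be the more expensive option over its lifetime, as the paragraphs that follow explain.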
{
"paragraph_id": 57,
"text": "Retailers often apply the \"razor and blades\" model: a company may sell a printer at cost and make profits on the ink cartridge, paper, or some other replacement part. This has caused legal disputes regarding the right of companies other than the printer manufacturer to sell compatible ink cartridges. To protect their business model, several manufacturers invest heavily in developing new cartridge technology and patenting it.",
"title": "Attributes"
},
{
"paragraph_id": 58,
"text": "Other manufacturers, in reaction to the challenges from using this business model, choose to make more money on printers and less on ink, promoting the latter through their advertising campaigns. Finally, this generates two clearly different proposals: \"cheap printer – expensive ink\" or \"expensive printer – cheap ink\". Ultimately, the consumer decision depends on their reference interest rate or their time preference. From an economics viewpoint, there is a clear trade-off between cost per copy and cost of the printer.",
"title": "Attributes"
},
{
"paragraph_id": 59,
"text": "Printer steganography is a type of steganography – \"hiding data within data\" – produced by color printers, including Brother, Canon, Dell, Epson, HP, IBM, Konica Minolta, Kyocera, Lanier, Lexmark, Ricoh, Toshiba and Xerox brand color laser printers, where tiny yellow dots are added to each page. The dots are barely visible and contain encoded printer serial numbers, as well as date and time stamps.",
"title": "Attributes"
},
{
"paragraph_id": 60,
"text": "As of 2020-2021, the largest worldwide vendor of printers is Hewlett-Packard, followed by Canon, Brother, Seiko Epson and Kyocera. Other known vendors include NEC, Ricoh, Xerox, Lexmark, OKI, Sharp, Konica Minolta, Samsung, Kodak, Dell, Toshiba, Star Micronics, Citizen and Panasonic.",
"title": "Manufacturers and market share"
}
] | In computing, a printer is a peripheral device that creates a permanent representation of text or graphics, usually on paper. While most printer output is readable by humans, barcode printers are an example of output used beyond this traditional role. Many types of printers are available, including inkjet printers, thermal printers, laser printers, and 3D printers. | 2001-03-17T06:42:56Z | 2023-12-21T23:53:35Z | [
"Template:Reflist",
"Template:Webarchive",
"Template:Fact",
"Template:Div col end",
"Template:Authority control",
"Template:Convert",
"Template:Div col",
"Template:Cite news",
"Template:Citation",
"Template:Cite book",
"Template:Subscription",
"Template:Use American English",
"Template:Use DMY dates",
"Template:Commons category-inline",
"Template:Anchor",
"Template:Cite web",
"Template:Cite journal",
"Template:Basic computer components",
"Template:Short description",
"Template:Main article"
] | https://en.wikipedia.org/wiki/Printer_(computing) |
5,278 | Copyright | A copyright is a type of intellectual property that gives the creator of an original work, or another owner of the right, the exclusive, legally secured right to copy, distribute, adapt, display, and perform a creative work, usually for a limited time. The creative work may be in a literary, artistic, educational, or musical form. Copyright is intended to protect the original expression of an idea in the form of a creative work, but not the idea itself. A copyright is subject to limitations based on public interest considerations, such as the fair use doctrine in the United States.
Some jurisdictions require "fixing" copyrighted works in a tangible form. A copyright is often shared among multiple authors, each of whom holds a set of rights to use or license the work; these owners are commonly referred to as rights holders. These rights frequently include reproduction, control over derivative works, distribution, public performance, and moral rights such as attribution.
Copyrights can be granted by public law and are in that case considered "territorial rights". This means that copyrights granted by the law of a certain state do not extend beyond the territory of that specific jurisdiction. Copyrights of this type vary by country; many countries, and sometimes a large group of countries, have made agreements with other countries on procedures applicable when works "cross" national borders or national rights are inconsistent.
Typically, the public law duration of a copyright expires 50 to 100 years after the creator dies, depending on the jurisdiction. Some countries require certain copyright formalities to establish copyright, while others recognize copyright in any completed work without formal registration. When the copyright of a work expires, it enters the public domain.
The concept of copyright developed after the printing press came into use in Europe in the 15th and 16th centuries. It was associated with a common law and rooted in the civil law system. The printing press made it much cheaper to produce works, but as there was initially no copyright law, anyone could buy or rent a press and print any text. Popular new works were immediately re-set and re-published by competitors, so printers needed a constant stream of new material. Fees paid to authors for new works were high, and significantly supplemented the incomes of many academics.
Printing brought profound social changes. The rise in literacy across Europe led to a dramatic increase in the demand for reading matter. Prices of reprints were low, so publications could be bought by poorer people, creating a mass audience. In German language markets before the advent of copyright, technical materials, like popular fiction, were inexpensive and widely available; it has been suggested this contributed to Germany's industrial and economic success. After copyright law became established (in 1710 in England and Scotland, and in the 1840s in German-speaking areas) the low-price mass market vanished, and fewer, more expensive editions were published; distribution of scientific and technical information was greatly reduced.
The concept of copyright first developed in England. In reaction to the printing of "scandalous books and pamphlets", the English Parliament passed the Licensing of the Press Act 1662, which required all intended publications to be registered with the government-approved Stationers' Company, giving the Stationers the right to regulate what material could be printed.
The Statute of Anne, enacted in 1710 in England and Scotland, provided the first legislation to protect copyrights (but not authors' rights). The Copyright Act of 1814 extended more rights for authors but did not protect British works from reprinting in the US. The Berne International Copyright Convention of 1886 finally provided protection for authors among the countries that signed the agreement, although the US did not join the Berne Convention until 1989.
In the US, the Constitution grants Congress the right to establish copyright and patent laws. Shortly after the Constitution was passed, Congress enacted the Copyright Act of 1790, modeling it after the Statute of Anne. While the national law protected authors’ published works, authority was granted to the states to protect authors’ unpublished works. The most recent major overhaul of copyright in the US, the 1976 Copyright Act, extended federal copyright to works as soon as they are created and "fixed", without requiring publication or registration. State law continues to apply to unpublished works that are not otherwise copyrighted by federal law. This act also changed the calculation of copyright term from a fixed term (then a maximum of fifty-six years) to "life of the author plus 50 years". These changes brought the US closer to conformity with the Berne Convention, and in 1989 the United States further revised its copyright law and joined the Berne Convention officially.
Copyright laws allow products of creative human activities, such as literary and artistic production, to be preferentially exploited and thus incentivized. Different cultural attitudes, social organizations, economic models and legal frameworks are seen to account for why copyright emerged in Europe and not, for example, in Asia. In the Middle Ages in Europe, there was generally a lack of any concept of literary property due to the general relations of production, the specific organization of literary production and the role of culture in society. The latter refers to the tendency of oral societies, such as that of Europe in the medieval period, to view knowledge as the product and expression of the collective, rather than to see it as individual property. However, with copyright laws, intellectual production comes to be seen as a product of an individual, with attendant rights. The most significant point is that patent and copyright laws support the expansion of the range of creative human activities that can be commodified. This parallels the ways in which capitalism led to the commodification of many aspects of social life that earlier had no monetary or economic value per se.
Copyright has developed into a concept that has a significant effect on nearly every modern industry, including not just literary work, but also forms of creative work such as sound recordings, films, photographs, software, and architecture.
Often seen as the first real copyright law, the 1709 British Statute of Anne gave authors, and the publishers to whom they chose to license their works, the right to publish the author's creations for a fixed period, after which the copyright expired. It was "An Act for the Encouragement of Learning, by Vesting the Copies of Printed Books in the Authors or the Purchasers of such Copies, during the Times therein mentioned." The act also alluded to individual rights of the artist. It began, "Whereas Printers, Booksellers, and other Persons, have of late frequently taken the Liberty of Printing ... Books, and other Writings, without the Consent of the Authors ... to their very great Detriment, and too often to the Ruin of them and their Families:".
A right to benefit financially from the work is articulated, and court rulings and legislation have recognized a right to control the work, such as ensuring that the integrity of it is preserved. An irrevocable right to be recognized as the work's creator appears in some countries' copyright laws.
The Copyright Clause of the United States Constitution (1787) authorized copyright legislation: "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries." That is, by guaranteeing them a period of time in which they alone could profit from their works, they would be enabled and encouraged to invest the time required to create them, and this would be good for society as a whole. A right to profit from the work has been the philosophical underpinning for much legislation extending the duration of copyright, to the life of the creator and beyond, to their heirs.
The original length of copyright in the United States was 14 years, and it had to be explicitly applied for. If the author wished, they could apply for a second 14‑year monopoly grant, but after that the work entered the public domain, so it could be used and built upon by others.
In many jurisdictions of the European continent, legal concepts comparable to copyright existed from the 16th century on, but changed under Napoleonic rule into another legal concept: authors' rights or creator's right laws, from the French droits d'auteur and German Urheberrecht. In many modern-day publications the terms copyright and authors' rights are mixed, or used as translations of each other, but in a juridical sense the legal concepts essentially differ. Authors' rights are, generally speaking, absolute property rights of an author of an original work from the start, which one does not have to apply for. The law automatically connects an original work, as intellectual property, to its creator. Although the concepts have been mingled globally over the years, due to international treaties and contracts, distinct differences between jurisdictions continue to exist.
Creator's law was enacted rather late in German-speaking states, and the economic historian Eckhard Höffner argues that the absence of enforceable copyright law across these states in the early 19th century encouraged the publishing of low-priced paperbacks for the masses. This was profitable for authors, led to a proliferation of books and enhanced knowledge, and was ultimately an important factor in the ascendancy of Germany as a power during that century. After the introduction of creator's rights, German publishers started to follow English customs, issuing only expensive book editions for wealthy customers.
Empirical evidence derived from the exogenous differential introduction of author's right (Italian: diritto d’autore) in Napoleonic Italy shows that "basic copyrights increased both the number and the quality of operas, measured by their popularity and durability".
The 1886 Berne Convention first established recognition of authors' rights among sovereign nations, rather than merely bilaterally. Under the Berne Convention, protective rights for creative works do not have to be asserted or declared, as they are automatically in force at creation: an author need not "register" or "apply for" these protective rights in countries adhering to the Berne Convention. As soon as a work is "fixed", that is, written or recorded on some physical medium, its author is automatically entitled to all intellectual property rights in the work, and to any derivative works unless and until the author explicitly disclaims them, or until the rights expire. The Berne Convention also resulted in foreign authors being treated equivalently to domestic authors, in any country signed onto the convention. The UK signed the Berne Convention in 1887 but did not implement large parts of it until 100 years later with the passage of the Copyright, Designs and Patents Act 1988. Specifically, for educational and scientific research purposes, the Berne Convention allows developing countries to issue compulsory licenses for the translation or reproduction of copyrighted works within the limits prescribed by the convention. This was a special provision that had been added at the time of the 1971 revision of the convention, because of the strong demands of the developing countries. The United States did not sign the Berne Convention until 1989.
The United States and most Latin American countries instead entered into the Buenos Aires Convention in 1910, which required a copyright notice on the work (such as all rights reserved), and permitted signatory nations to limit the duration of copyrights to shorter and renewable terms. The Universal Copyright Convention was drafted in 1952 as another less demanding alternative to the Berne Convention, and ratified by nations such as the Soviet Union and developing nations.
The regulations of the Berne Convention are incorporated into the World Trade Organization's TRIPS agreement (1995), thus giving the Berne Convention effectively near-global application.
In 1961, the United International Bureaux for the Protection of Intellectual Property signed the Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organizations. In 1996, this organization was succeeded by the founding of the World Intellectual Property Organization, which launched the 1996 WIPO Performances and Phonograms Treaty and the 2002 WIPO Copyright Treaty, which enacted greater restrictions on the use of technology to copy works in the nations that ratified it. The Trans-Pacific Partnership includes intellectual property provisions relating to copyright.
Copyright laws and authors' right laws are standardized somewhat through these international conventions such as the Berne Convention and Universal Copyright Convention. These multilateral treaties have been ratified by nearly all countries, and international organizations such as the European Union require their member states to comply with them. All member states of the World Trade Organization are obliged to establish minimum levels of copyright protection. Nevertheless, important differences between the national regimes continue to exist.
The original holder of the copyright may be the employer of the author rather than the author themself if the work is a "work for hire". For example, in English law the Copyright, Designs and Patents Act 1988 provides that if a copyrighted work is made by an employee in the course of that employment, the copyright is automatically owned by the employer, making it a "work for hire". Typically, the first owner of a copyright is the person who created the work, i.e. the author. But when more than one person creates the work, a case of joint authorship can be made provided some criteria are met.
Copyright may apply to a wide range of creative, intellectual, or artistic forms, or "works". Specifics vary by jurisdiction, but these can include poems, theses, fictional characters, plays and other literary works, motion pictures, choreography, musical compositions, sound recordings, paintings, drawings, sculptures, photographs, computer software, radio and television broadcasts, and industrial designs. Graphic designs and industrial designs may have separate or overlapping laws applied to them in some jurisdictions.
Copyright does not cover ideas and information themselves, only the form or manner in which they are expressed. For example, the copyright to a Mickey Mouse cartoon restricts others from making copies of the cartoon or creating derivative works based on Disney's particular anthropomorphic mouse, but does not prohibit the creation of other works about anthropomorphic mice in general, so long as they are different enough to not be judged copies of Disney's.
Typically, a work must meet minimal standards of originality in order to qualify for copyright, and the copyright expires after a set period of time (some jurisdictions may allow this to be extended). Different countries impose different tests, although generally the requirements are low; in the United Kingdom there has to be some "skill, labour, and judgment" that has gone into it. In Australia and the United Kingdom it has been held that a single word is insufficient to comprise a copyright work. However, single words or a short string of words can sometimes be registered as a trademark instead.
Copyright law recognizes the right of an author based on whether the work actually is an original creation, rather than based on whether it is unique; two authors may own copyright on two substantially identical works, if it is determined that the duplication was coincidental, and neither was copied from the other.
In all countries where the Berne Convention standards apply, copyright is automatic, and need not be obtained through official registration with any government office. Once an idea has been reduced to tangible form, for example by securing it in a fixed medium (such as a drawing, sheet music, photograph, a videotape, or a computer file), the copyright holder is entitled to enforce their exclusive rights. However, while registration is not needed to exercise copyright, in jurisdictions where the laws provide for registration, it serves as prima facie evidence of a valid copyright and enables the copyright holder to seek statutory damages and attorney's fees. (In the US, registering after an infringement only enables one to receive actual damages and lost profits.)
A widely circulated strategy to avoid the cost of copyright registration is referred to as the poor man's copyright. It proposes that the creator send the work to themself in a sealed envelope by registered mail, using the postmark to establish the date. This technique has not been recognized in any published opinions of the United States courts. The United States Copyright Office says the technique is not a substitute for actual registration. The United Kingdom Intellectual Property Office discusses the technique and notes that the technique (as well as commercial registries) does not constitute dispositive proof that the work is original or establish who created the work.
The Berne Convention allows member countries to decide whether creative works must be "fixed" to enjoy copyright. Article 2, Section 2 of the Berne Convention states: "It shall be a matter for legislation in the countries of the Union to prescribe that works in general or any specified categories of works shall not be protected unless they have been fixed in some material form." Some countries do not require that a work be produced in a particular form to obtain copyright protection. For instance, Spain, France, and Australia do not require fixation for copyright protection. The United States and Canada, on the other hand, require that most works must be "fixed in a tangible medium of expression" to obtain copyright protection. US law requires that the fixation be stable and permanent enough to be "perceived, reproduced or communicated for a period of more than transitory duration". Similarly, Canadian courts consider fixation to require that the work be "expressed to some extent at least in some material form, capable of identification and having a more or less permanent endurance".
Note this provision of US law: c) Effect of Berne Convention.—No right or interest in a work eligible for protection under this title may be claimed by virtue of, or in reliance upon, the provisions of the Berne Convention, or the adherence of the United States thereto. Any rights in a work eligible for protection under this title that derive from this title, other Federal or State statutes, or the common law, shall not be expanded or reduced by virtue of, or in reliance upon, the provisions of the Berne Convention, or the adherence of the United States thereto.
Before 1989, United States law required the use of a copyright notice, consisting of the copyright symbol (©, the letter C inside a circle), the abbreviation "Copr.", or the word "Copyright", followed by the year of the first publication of the work and the name of the copyright holder. Several years may be noted if the work has gone through substantial revisions. The proper copyright notice for sound recordings of musical or other audio works is a sound recording copyright symbol (℗, the letter P inside a circle), which indicates a sound recording copyright, with the letter P indicating a "phonorecord". In addition, the phrase All rights reserved, indicating that the copyright holder reserves, or holds for their own use, all the rights provided by copyright law, was once required to assert copyright, but that phrase is now legally obsolete. Almost everything on the Internet has some sort of copyright attached to it; whether these works are watermarked, signed, or carry any other indication of the copyright is a different matter, however.
In 1989 the United States enacted the Berne Convention Implementation Act, amending the 1976 Copyright Act to conform to most of the provisions of the Berne Convention. As a result, the use of copyright notices has become optional to claim copyright, because the Berne Convention makes copyright automatic. However, the lack of notice of copyright using these marks may have consequences in terms of reduced damages in an infringement lawsuit – using notices of this form may reduce the likelihood of a defense of "innocent infringement" being successful.
Copyrights are generally enforced by the holder in a civil law court, but there are also criminal infringement statutes in some jurisdictions. While central registries are kept in some countries which aid in proving claims of ownership, registering does not necessarily prove ownership, nor does the fact of copying (even without permission) necessarily prove that copyright was infringed. Criminal sanctions are generally aimed at serious counterfeiting activity, but are now becoming more commonplace as copyright collectives such as the RIAA are increasingly targeting the file sharing home Internet user. Thus far, however, most such cases against file sharers have been settled out of court. (See Legal aspects of file sharing)
In most jurisdictions the copyright holder must bear the cost of enforcing copyright. This will usually involve engaging legal representation, administrative or court costs. In light of this, many copyright disputes are settled by a direct approach to the infringing party in order to settle the dispute out of court.
"... by 1978, the scope was expanded to apply to any 'expression' that has been 'fixed' in any medium, this protection granted automatically whether the maker wants it or not, no registration required."
For a work to be considered to infringe upon copyright, its use must have occurred in a nation that has domestic copyright laws or adheres to a bilateral treaty or established international convention such as the Berne Convention or WIPO Copyright Treaty. Improper use of materials outside of legislation is deemed "unauthorized edition", not copyright infringement.
Statistics regarding the effects of copyright infringement are difficult to determine. Studies have attempted to determine whether there is a monetary loss for industries affected by copyright infringement by predicting what portion of pirated works would have been formally purchased if they had not been freely available. Other reports indicate that copyright infringement does not have an adverse effect on the entertainment industry, and can have a positive effect. In particular, a 2014 university study concluded that free music content accessed on YouTube does not necessarily hurt sales, and instead has the potential to increase them.
According to the IP Commission Report the annual cost of intellectual property theft to the US economy "continues to exceed $225 billion in counterfeit goods, pirated software, and theft of trade secrets and could be as high as $600 billion." A 2019 study sponsored by the US Chamber of Commerce Global Innovation Policy Center (GIPC), in partnership with NERA Economic Consulting "estimates that global online piracy costs the U.S. economy at least $29.2 billion in lost revenue each year." An August 2021 report by the Digital Citizens Alliance states that "online criminals who offer stolen movies, TV shows, games, and live events through websites and apps are reaping $1.34 billion in annual advertising revenues." This comes as a result of users visiting pirate websites who are then subjected to pirated content, malware, and fraud.
According to World Intellectual Property Organisation, copyright protects two types of rights. Economic rights allow right owners to derive financial reward from the use of their works by others. Moral rights allow authors and creators to take certain actions to preserve and protect their link with their work. The author or creator may be the owner of the economic rights or those rights may be transferred to one or more copyright owners. Many countries do not allow the transfer of moral rights.
With any kind of property, its owner may decide how it is to be used, and others can use it lawfully only if they have the owner's permission, often through a license. The owner's use of the property must, however, respect the legally recognised rights and interests of other members of society. So the owner of a copyright-protected work may decide how to use the work, and may prevent others from using it without permission. National laws usually grant copyright owners exclusive rights to allow third parties to use their works, subject to the legally recognised rights and interests of others. Most copyright laws state that authors or other right owners have the right to authorise or prevent certain acts in relation to a work. Right owners can authorise or prohibit:
Moral rights are concerned with the non-economic rights of a creator. They protect the creator's connection with a work as well as the integrity of the work. Moral rights are only accorded to individual authors and in many national laws they remain with the authors even after the authors have transferred their economic rights. In some EU countries, such as France, moral rights last indefinitely. In the UK, however, moral rights are finite. That is, the right of attribution and the right of integrity last only as long as the work is in copyright. When the copyright term comes to an end, so too do the moral rights in that work. This is just one reason why the moral rights regime within the UK is often regarded as weaker or inferior to the protection of moral rights in continental Europe and elsewhere in the world. The Berne Convention, in Article 6bis, requires its members to grant authors the following rights:
These and other similar rights granted in national laws are generally known as the moral rights of authors. The Berne Convention requires these rights to be independent of authors’ economic rights. Moral rights are only accorded to individual authors and in many national laws they remain with the authors even after the authors have transferred their economic rights. This means that even where, for example, a film producer or publisher owns the economic rights in a work, in many jurisdictions the individual author continues to have moral rights. Recently, as part of the debates held at the US Copyright Office on the question of including moral rights in the framework of copyright law in the United States, the Copyright Office concluded that many diverse aspects of the current moral rights patchwork – including copyright law's derivative work right, state moral rights statutes, and contract law – are generally working well and should not be changed. It further concluded that there is no need for the creation of a blanket moral rights statute at this time, although there are aspects of the US moral rights patchwork that could be improved to the benefit of individual authors and the copyright system as a whole.
Under the copyright law of the United States, several exclusive rights are granted to the holder of a copyright, as listed below:
The basic right when a work is protected by copyright is that the holder may determine and decide how and under what conditions the protected work may be used by others. This includes the right to decide to distribute the work for free. This part of copyright is often overlooked. The phrase "exclusive right" means that only the copyright holder is free to exercise those rights, and others are prohibited from using the work without the holder's permission. Copyright is sometimes called a "negative right", as it serves to prohibit certain people (e.g., readers, viewers, or listeners, and primarily publishers and would-be publishers) from doing something they would otherwise be able to do, rather than permitting people (e.g., authors) to do something they would otherwise be unable to do. In this way it is similar to the unregistered design right in English law and European law. The rights of the copyright holder also permit the holder not to use or exploit their copyright, for some or all of the term. There is, however, a critique which rejects this assertion as being based on a philosophical interpretation of copyright law that is not universally shared. There is also debate on whether copyright should be considered a property right or a moral right.
UK copyright law gives creators both economic rights and moral rights. While ‘copying’ someone else's work without permission may constitute an infringement of their economic rights, that is, the reproduction right or the right of communication to the public, ‘mutilating’ it might infringe the creator's moral rights. In the UK, moral rights include the right to be identified as the author of the work, which is generally identified as the right of attribution, and the right not to have your work subjected to ‘derogatory treatment’, that is the right of integrity.
Indian copyright law is at parity with the international standards as contained in TRIPS. The Indian Copyright Act, 1957, pursuant to the amendments in 1999, 2002 and 2012, fully reflects the Berne Convention and the Universal Copyrights Convention, to which India is a party. India is also a party to the Geneva Convention for the Protection of Rights of Producers of Phonograms and is an active member of the World Intellectual Property Organization (WIPO) and United Nations Educational, Scientific and Cultural Organization (UNESCO). The Indian system provides both the economic and moral rights under different provisions of its Indian Copyright Act of 1957.
Copyright subsists for a variety of lengths in different jurisdictions. The length of the term can depend on several factors, including the type of work (e.g. musical composition, novel), whether the work has been published, and whether the work was created by an individual or a corporation. In most of the world, the default length of copyright is the life of the author plus either 50 or 70 years. In the United States, the term for most existing works is a fixed number of years after the date of creation or publication. Under most countries' laws (for example, the United States and the United Kingdom), copyrights expire at the end of the calendar year in which they would otherwise expire.
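To make the term arithmetic above concrete, here is a minimal sketch assuming a plain "life of the author plus N years" rule with expiry at the end of the calendar year. It deliberately ignores jurisdiction-specific complications such as wartime extensions, anonymous or corporate works, and the fixed-term rules that apply to many older US works.

```python
def copyright_expiry_year(author_death_year: int, term_years: int = 70) -> int:
    """Last calendar year of protection under a simple 'life plus N years' rule."""
    return author_death_year + term_years

# Example: in a 'life plus 70' jurisdiction, the works of an author who died
# in 1950 remain protected through the end of 2020 and enter the public
# domain on 1 January 2021.
print(copyright_expiry_year(1950))      # 2020
print(copyright_expiry_year(1950, 50))  # 2000, under a 'life plus 50' rule
```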
The length and requirements for copyright duration are subject to change by legislation, and since the early 20th century there have been a number of adjustments made in various countries, which can make determining the duration of a given copyright somewhat difficult. For example, the United States used to require copyrights to be renewed after 28 years to stay in force, and formerly required a copyright notice upon first publication to gain coverage. In Italy and France, there were post-wartime extensions that could increase the term by approximately 6 years in Italy and up to about 14 in France. Many countries have extended the length of their copyright terms (sometimes retroactively). International treaties establish minimum terms for copyrights, but individual countries may enforce longer terms than those.
In the United States, all books and other works, except for sound recordings, published before 1928 have expired copyrights and are in the public domain. The applicable date for sound recordings in the United States is before 1923. In addition, works published before 1964 that did not have their copyrights renewed 28 years after first publication year also are in the public domain. Hirtle points out that the great majority of these works (including 93% of the books) were not renewed after 28 years and are in the public domain. Books originally published outside the US by non-Americans are exempt from this renewal requirement, if they are still under copyright in their home country.
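To make the interplay of these cut-off dates concrete, here is a minimal Python sketch of the published-work rules just described; the function name and parameters are illustrative, the foreign-work exemption and unpublished works are deliberately ignored, and nothing here is a legal determination.

def us_published_work_is_public_domain(pub_year: int,
                                       renewed: bool,
                                       sound_recording: bool = False) -> bool:
    # Simplified restatement of the rules above: works published before 1928
    # (before 1923 for sound recordings) have expired copyrights, and works
    # published before 1964 whose copyright was not renewed after 28 years
    # are also in the public domain. Everything else is assumed protected.
    cutoff = 1923 if sound_recording else 1928
    if pub_year < cutoff:
        return True
    if pub_year < 1964 and not renewed:
        return True
    return False

# Example: a 1950 book whose copyright was never renewed is in the public
# domain, while the same book with a renewed copyright is not.
print(us_published_work_is_public_domain(1950, renewed=False))  # True
print(us_published_work_is_public_domain(1950, renewed=True))   # False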
But if the intended exploitation of the work includes publication (or distribution of derivative work, such as a film based on a book protected by copyright) outside the US, the terms of copyright around the world must be considered. If the author has been dead more than 70 years, the work is in the public domain in most, but not all, countries.
In 1998, the length of a copyright in the United States was increased by 20 years under the Copyright Term Extension Act. This legislation was strongly promoted by corporations which had valuable copyrights which otherwise would have expired, and has been the subject of substantial criticism on this point.
In many jurisdictions, copyright law makes exceptions to these restrictions when the work is copied for the purpose of commentary or other related uses. United States copyright law does not cover names, titles, short phrases or listings (such as ingredients, recipes, labels, or formulas). However, there are protections available for those areas copyright does not cover, such as trademarks and patents.
The idea–expression divide differentiates between ideas and expression, and states that copyright protects only the original expression of ideas, and not the ideas themselves. This principle, first clarified in the 1879 case of Baker v. Selden, has since been codified by the Copyright Act of 1976 at 17 U.S.C. § 102(b).
Copyright law does not restrict the owner of a copy from reselling legitimately obtained copies of copyrighted works, provided that those copies were originally produced by or with the permission of the copyright holder. It is therefore legal, for example, to resell a copyrighted book or CD. In the United States this is known as the first-sale doctrine, and was established by the courts to clarify the legality of reselling books in second-hand bookstores.
Some countries may have parallel importation restrictions that allow the copyright holder to control the aftermarket. This may mean for example that a copy of a book that does not infringe copyright in the country where it was printed does infringe copyright in a country into which it is imported for retailing. The first-sale doctrine is known as exhaustion of rights in other countries and is a principle which also applies, though somewhat differently, to patent and trademark rights. While this doctrine permits the transfer of the particular legitimate copy involved, it does not permit making or distributing additional copies.
In Kirtsaeng v. John Wiley & Sons, Inc., in 2013, the United States Supreme Court held in a 6–3 decision that the first-sale doctrine applies to goods manufactured abroad with the copyright owner's permission and then imported into the US without such permission. The case involved a defendant who imported Asian editions of textbooks that had been manufactured abroad with the publisher-plaintiff's permission and, without the publisher's permission, resold them on eBay. The Supreme Court's holding severely limits the ability of copyright holders to prevent such importation.
In addition, copyright, in most cases, does not prohibit one from acts such as modifying, defacing, or destroying one's own legitimately obtained copy of a copyrighted work, so long as duplication is not involved. However, in countries that implement moral rights, a copyright holder can in some cases successfully prevent the mutilation or destruction of a work that is publicly visible.
Copyright does not prohibit all copying or replication. In the United States, the fair use doctrine, codified by the Copyright Act of 1976 as 17 U.S.C. Section 107, permits some copying and distribution without permission of the copyright holder or payment to same. The statute does not clearly define fair use, but instead gives four non-exclusive factors to consider in a fair use analysis. Those factors are:
In the United Kingdom and many other Commonwealth countries, a similar notion of fair dealing was established by the courts or through legislation. The concept is sometimes not well defined; however in Canada, private copying for personal use has been expressly permitted by statute since 1999. In Alberta (Education) v. Canadian Copyright Licensing Agency (Access Copyright), 2012 SCC 37, the Supreme Court of Canada concluded that limited copying for educational purposes could also be justified under the fair dealing exemption. In Australia, the fair dealing exceptions under the Copyright Act 1968 (Cth) are a limited set of circumstances under which copyrighted material can be legally copied or adapted without the copyright holder's consent. Fair dealing uses are research and study; review and critique; news reportage and the giving of professional advice (i.e. legal advice). Under current Australian law, although it is still a breach of copyright to copy, reproduce or adapt copyright material for personal or private use without permission from the copyright owner, owners of a legitimate copy are permitted to "format shift" that work from one medium to another for personal, private use, or to "time shift" a broadcast work for later, once and only once, viewing or listening. Other technical exemptions from infringement may also apply, such as the temporary reproduction of a work in machine readable form for a computer.
In the United States, the Audio Home Recording Act of 1992 (AHRA), codified in Chapter 10 of Title 17, prohibits action against consumers making noncommercial recordings of music, in return for royalties on both media and devices, plus mandatory copy-control mechanisms on recorders.
Section 1008. Prohibition on certain infringement actions No action may be brought under this title alleging infringement of copyright based on the manufacture, importation, or distribution of a digital audio recording device, a digital audio recording medium, an analog recording device, or an analog recording medium, or based on the noncommercial use by a consumer of such a device or medium for making digital musical recordings or analog musical recordings.
Later acts amended US Copyright law so that for certain purposes making 10 copies or more is construed to be commercial, but there is no general rule permitting such copying. Indeed, making one complete copy of a work, or in many cases using a portion of it, for commercial purposes will not be considered fair use. The Digital Millennium Copyright Act prohibits the manufacture, importation, or distribution of devices whose intended use, or only significant commercial use, is to bypass an access or copy control put in place by a copyright owner. An appellate court has held that fair use is not a defense to engaging in such distribution.
EU copyright laws recognise the right of EU member states to implement some national exceptions to copyright. Examples of those exceptions are:
It is legal in several countries including the United Kingdom and the United States to produce alternative versions (for example, in large print or braille) of a copyrighted work to provide improved access to a work for blind and visually impaired people without permission from the copyright holder.
In the US there is a Religious Service Exemption (1976 law, section 110[3]), namely "performance of a non-dramatic literary or musical work or of a dramatico-musical work of a religious nature or display of a work, in the course of services at a place of worship or other religious assembly" shall not constitute infringement of copyright.
In Canada, items deemed useful articles such as clothing designs are exempted from copyright protection under the Copyright Act if reproduced more than 50 times. Fast fashion brands may reproduce clothing designs from smaller companies without violating copyright protections.
A copyright, or aspects of it (e.g. reproduction alone, all but moral rights), may be assigned or transferred from one party to another. For example, a musician who records an album will often sign an agreement with a record company in which the musician agrees to transfer all copyright in the recordings in exchange for royalties and other considerations. The creator (and original copyright holder) benefits, or expects to, from production and marketing capabilities far beyond those of the author. In the digital age of music, music may be copied and distributed at minimal cost through the Internet; however, the record industry attempts to provide promotion and marketing for the artist and their work so it can reach a much larger audience. A copyright holder need not transfer all rights completely, though many publishers will insist. Some of the rights may be transferred, or else the copyright holder may grant another party a non-exclusive license to copy or distribute the work in a particular region or for a specified period of time.
A transfer or licence may have to meet particular formal requirements in order to be effective, for example under the Australian Copyright Act 1968 the copyright itself must be expressly transferred in writing. Under the US Copyright Act, a transfer of ownership in copyright must be memorialized in a writing signed by the transferor. For that purpose, ownership in copyright includes exclusive licenses of rights. Thus exclusive licenses, to be effective, must be granted in a written instrument signed by the grantor. No special form of transfer or grant is required. A simple document that identifies the work involved and the rights being granted is sufficient. Non-exclusive grants (often called non-exclusive licenses) need not be in writing under US law. They can be oral or even implied by the behavior of the parties. Transfers of copyright ownership, including exclusive licenses, may and should be recorded in the U.S. Copyright Office. (Information on recording transfers is available on the Office's web site.) While recording is not required to make the grant effective, it offers important benefits, much like those obtained by recording a deed in a real estate transaction.
Copyright may also be licensed. Some jurisdictions may provide that certain classes of copyrighted works be made available under a prescribed statutory license (e.g. musical works in the United States used for radio broadcast or performance). This is also called a compulsory license, because under this scheme, anyone who wishes to copy a covered work does not need the permission of the copyright holder, but instead merely files the proper notice and pays a set fee established by statute (or by an agency decision under statutory guidance) for every copy made. Failure to follow the proper procedures would place the copier at risk of an infringement suit. Because of the difficulty of following every individual work, copyright collectives or collecting societies and performing rights organizations (such as ASCAP, BMI, and SESAC) have been formed to collect royalties for hundreds (thousands and more) works at once. Though this market solution bypasses the statutory license, the availability of the statutory fee still helps dictate the price per work collective rights organizations charge, driving it down to what avoidance of procedural hassle would justify.
Copyright licenses known as open or free licenses seek to grant several rights to licensees, either for a fee or not. Free in this context is not as much of a reference to price as it is to freedom. What constitutes free licensing has been characterised in a number of similar definitions, including, in order of longevity, the Free Software Definition, the Debian Free Software Guidelines, the Open Source Definition and the Definition of Free Cultural Works. Further refinements to these definitions have resulted in categories such as copyleft and permissive. Common examples of free licences are the GNU General Public License, BSD licenses and some Creative Commons licenses.
Founded in 2001 by James Boyle, Lawrence Lessig, and Hal Abelson, the Creative Commons (CC) is a non-profit organization which aims to facilitate the legal sharing of creative works. To this end, the organization provides a number of generic copyright license options to the public, gratis. These licenses allow copyright holders to define conditions under which others may use a work and to specify what types of use are acceptable.
Terms of use have traditionally been negotiated on an individual basis between copyright holder and potential licensee. Therefore, a general CC license outlining which rights the copyright holder is willing to waive enables the general public to use such works more freely. Six general types of CC licenses are available (although some of them are not properly free per the above definitions and per Creative Commons' own advice). These are based upon copyright-holder stipulations such as whether they are willing to allow modifications to the work, whether they permit the creation of derivative works and whether they are willing to permit commercial use of the work. As of 2009 approximately 130 million individuals had received such licenses.
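As a small illustration of how the stipulations described above combine, the sketch below maps a copyright holder's choices onto the six general Creative Commons licence codes; the function and its string arguments are an illustrative assumption, not part of any Creative Commons tooling.

def cc_license_code(allow_commercial: bool, derivatives: str) -> str:
    # derivatives is one of "yes" (adaptations allowed), "share-alike"
    # (adaptations allowed only under the same terms) or "no".
    # The six possible results correspond to the six general CC licence
    # types mentioned above: BY, BY-SA, BY-ND, BY-NC, BY-NC-SA, BY-NC-ND.
    suffix = {"yes": "", "share-alike": "-SA", "no": "-ND"}[derivatives]
    nc = "" if allow_commercial else "-NC"
    return f"CC BY{nc}{suffix}"

print(cc_license_code(True, "share-alike"))  # CC BY-SA
print(cc_license_code(False, "no"))          # CC BY-NC-ND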
Some sources are critical of particular aspects of the copyright system. This is known as a debate over copynorms. Particularly against the background of uploading content to internet platforms and the digital exchange of original work, there is discussion about the copyright aspects of downloading and streaming, and about the copyright aspects of hyperlinking and framing.
Concerns are often couched in the language of digital rights, digital freedom, database rights, open data or censorship. Discussions include Free Culture, a 2004 book by Lawrence Lessig. Lessig coined the term permission culture to describe a worst-case system. Good Copy Bad Copy and RiP!: A Remix Manifesto are documentaries that discuss copyright. Some suggest an alternative compensation system. In Europe, consumers have pushed back against the rising costs of music, film and books, and as a result Pirate Parties have been created. Some groups reject copyright altogether, taking an anti-copyright stance. The perceived inability to enforce copyright online leads some to advocate ignoring legal statutes when on the web.
Copyright, like other intellectual property rights, is subject to a statutorily determined term. Once the term of a copyright has expired, the formerly copyrighted work enters the public domain and may be used or exploited by anyone without obtaining permission, and normally without payment. However, in paying public domain regimes the user may still have to pay royalties to the state or to an authors' association. Courts in common law countries, such as the United States and the United Kingdom, have rejected the doctrine of a common law copyright. Public domain works should not be confused with works that are publicly available. Works posted on the internet, for example, are publicly available, but are not generally in the public domain. Copying such works may therefore violate the author's copyright. | [
{
"paragraph_id": 0,
"text": "A copyright is a type of intellectual property that gives the creator of an original work, or another owner of the right, the exclusive, legally secured right to copy, distribute, adapt, display, and perform a creative work, usually for a limited time. The creative work may be in a literary, artistic, educational, or musical form. Copyright is intended to protect the original expression of an idea in the form of a creative work, but not the idea itself. A copyright is subject to limitations based on public interest considerations, such as the fair use doctrine in the United States.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Some jurisdictions require \"fixing\" copyrighted works in a tangible form. It is often shared among multiple authors, each of whom holds a set of rights to use or license the work, and who are commonly referred to as rights holders. These rights frequently include reproduction, control over derivative works, distribution, public performance, and moral rights such as attribution.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Copyrights can be granted by public law and are in that case considered \"territorial rights\". This means that copyrights granted by the law of a certain state do not extend beyond the territory of that specific jurisdiction. Copyrights of this type vary by country; many countries, and sometimes a large group of countries, have made agreements with other countries on procedures applicable when works \"cross\" national borders or national rights are inconsistent.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Typically, the public law duration of a copyright expires 50 to 100 years after the creator dies, depending on the jurisdiction. Some countries require certain copyright formalities to establishing copyright, others recognize copyright in any completed work, without a formal registration. When the copyright of a work expires, it enters the public domain.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The concept of copyright developed after the printing press came into use in Europe in the 15th and 16th centuries. It was associated with a common law and rooted in the civil law system. The printing press made it much cheaper to produce works, but as there was initially no copyright law, anyone could buy or rent a press and print any text. Popular new works were immediately re-set and re-published by competitors, so printers needed a constant stream of new material. Fees paid to authors for new works were high, and significantly supplemented the incomes of many academics.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Printing brought profound social changes. The rise in literacy across Europe led to a dramatic increase in the demand for reading matter. Prices of reprints were low, so publications could be bought by poorer people, creating a mass audience. In German language markets before the advent of copyright, technical materials, like popular fiction, were inexpensive and widely available; it has been suggested this contributed to Germany's industrial and economic success. After copyright law became established (in 1710 in England and Scotland, and in the 1840s in German-speaking areas) the low-price mass market vanished, and fewer, more expensive editions were published; distribution of scientific and technical information was greatly reduced.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The concept of copyright first developed in England. In reaction to the printing of \"scandalous books and pamphlets\", the English Parliament passed the Licensing of the Press Act 1662, which required all intended publications to be registered with the government-approved Stationers' Company, giving the Stationers the right to regulate what material could be printed.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The Statute of Anne, enacted in 1710 in England and Scotland, provided the first legislation to protect copyrights (but not authors' rights). The Copyright Act of 1814 extended more rights for authors but did not protect British from reprinting in the US. The Berne International Copyright Convention of 1886 finally provided protection for authors among the countries who signed the agreement, although the US did not join the Berne Convention until 1989.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In the US, the Constitution grants Congress the right to establish copyright and patent laws. Shortly after the Constitution was passed, Congress enacted the Copyright Act of 1790, modeling it after the Statute of Anne. While the national law protected authors’ published works, authority was granted to the states to protect authors’ unpublished works. The most recent major overhaul of copyright in the US, the 1976 Copyright Act, extended federal copyright to works as soon as they are created and \"fixed\", without requiring publication or registration. State law continues to apply to unpublished works that are not otherwise copyrighted by federal law. This act also changed the calculation of copyright term from a fixed term (then a maximum of fifty-six years) to \"life of the author plus 50 years\". These changes brought the US closer to conformity with the Berne Convention, and in 1989 the United States further revised its copyright law and joined the Berne Convention officially.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Copyright laws allow products of creative human activities, such as literary and artistic production, to be preferentially exploited and thus incentivized. Different cultural attitudes, social organizations, economic models and legal frameworks are seen to account for why copyright emerged in Europe and not, for example, in Asia. In the Middle Ages in Europe, there was generally a lack of any concept of literary property due to the general relations of production, the specific organization of literary production and the role of culture in society. The latter refers to the tendency of oral societies, such as that of Europe in the medieval period, to view knowledge as the product and expression of the collective, rather than to see it as individual property. However, with copyright laws, intellectual production comes to be seen as a product of an individual, with attendant rights. The most significant point is that patent and copyright laws support the expansion of the range of creative human activities that can be commodified. This parallels the ways in which capitalism led to the commodification of many aspects of social life that earlier had no monetary or economic value per se.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Copyright has developed into a concept that has a significant effect on nearly every modern industry, including not just literary work, but also forms of creative work such as sound recordings, films, photographs, software, and architecture.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Often seen as the first real copyright law, the 1709 British Statute of Anne gave authors and the publishers to whom they did chose to license their works, the right to publish the author's creations for a fixed period, after which the copyright expired. It was \"An Act for the Encouragement of Learning, by Vesting the Copies of Printed Books in the Authors or the Purchasers of such Copies, during the Times therein mentioned.\" The act also alluded to individual rights of the artist. It began, \"Whereas Printers, Booksellers, and other Persons, have of late frequently taken the Liberty of Printing ... Books, and other Writings, without the Consent of the Authors ... to their very great Detriment, and too often to the Ruin of them and their Families:\".",
"title": "History"
},
{
"paragraph_id": 12,
"text": "A right to benefit financially from the work is articulated, and court rulings and legislation have recognized a right to control the work, such as ensuring that the integrity of it is preserved. An irrevocable right to be recognized as the work's creator appears in some countries' copyright laws.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The Copyright Clause of the United States, Constitution (1787) authorized copyright legislation: \"To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.\" That is, by guaranteeing them a period of time in which they alone could profit from their works, they would be enabled and encouraged to invest the time required to create them, and this would be good for society as a whole. A right to profit from the work has been the philosophical underpinning for much legislation extending the duration of copyright, to the life of the creator and beyond, to their heirs.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The original length of copyright in the United States was 14 years, and it had to be explicitly applied for. If the author wished, they could apply for a second 14‑year monopoly grant, but after that the work entered the public domain, so it could be used and built upon by others.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In many jurisdictions of the European continent, comparable legal concepts to copyright did exist from the 16th century on but did change under Napoleonic rule into another legal concept: authors' rights or creator's right laws, from French: droits d'auteur and German Urheberrecht. In many modern day publications the terms copyright and authors' rights are being mixed, or used as translations, but in a juridical sense the legal concepts do essentially differ. Authors' rights are, generally speaking, from the start absolute property rights of an author of original work that one doesn't have to apply for. The law is automatically connecting an original work as intellectual property to its creator. Although the concepts throughout the years have been mingled globally, due to international treaties and contracts, distinct differences between jurisdictions continue to exist.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Creator's law was enacted rather late in German speaking states and the economic historian Eckhard Höffner argues that the absence of possibilities to maintain copyright laws in all these states in the early 19th century, encouraged the publishing of low-priced paperbacks for the masses. This was profitable for authors and led to a proliferation of books, enhanced knowledge, and was ultimately an important factor in the ascendency of Germany as a power during that century. After the introduction of creator's rights, German publishers started to follow English customs, in issuing only expensive book editions for wealthy customers.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Empirical evidence derived from the exogenous differential introduction of author's right (Italian: diritto d’autore) in Napoleonic Italy shows that \"basic copyrights increased both the number and the quality of operas, measured by their popularity and durability\".",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The 1886 Berne Convention first established recognition of authors' rights among sovereign nations, rather than merely bilaterally. Under the Berne Convention, protective rights for creative works do not have to be asserted or declared, as they are automatically in force at creation: an author need not \"register\" or \"apply for\" these protective rights in countries adhering to the Berne Convention. As soon as a work is \"fixed\", that is, written or recorded on some physical medium, its author is automatically entitled to all intellectual property rights in the work, and to any derivative works unless and until the author explicitly disclaims them, or until the rights expires. The Berne Convention also resulted in foreign authors being treated equivalently to domestic authors, in any country signed onto the convention. The UK signed the Berne Convention in 1887 but did not implement large parts of it until 100 years later with the passage of the Copyright, Designs and Patents Act 1988. Specially, for educational and scientific research purposes, the Berne Convention provides the developing countries issue compulsory licenses for the translation or reproduction of copyrighted works within the limits prescribed by the convention. This was a special provision that had been added at the time of 1971 revision of the convention, because of the strong demands of the developing countries. The United States did not sign the Berne Convention until 1989.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The United States and most Latin American countries instead entered into the Buenos Aires Convention in 1910, which required a copyright notice on the work (such as all rights reserved), and permitted signatory nations to limit the duration of copyrights to shorter and renewable terms. The Universal Copyright Convention was drafted in 1952 as another less demanding alternative to the Berne Convention, and ratified by nations such as the Soviet Union and developing nations.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "The regulations of the Berne Convention are incorporated into the World Trade Organization's TRIPS agreement (1995), thus giving the Berne Convention effectively near-global application.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "In 1961, the United International Bureaux for the Protection of Intellectual Property signed the Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organizations. In 1996, this organization was succeeded by the founding of the World Intellectual Property Organization, which launched the 1996 WIPO Performances and Phonograms Treaty and the 2002 WIPO Copyright Treaty, which enacted greater restrictions on the use of technology to copy works in the nations that ratified it. The Trans-Pacific Partnership includes intellectual Property Provisions relating to copyright.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Copyright laws and authors' right laws are standardized somewhat through these international conventions such as the Berne Convention and Universal Copyright Convention. These multilateral treaties have been ratified by nearly all countries, and international organizations such as the European Union require their member states to comply with them. All member states of the World Trade Organization are obliged to establish minimum levels of copyright protection. Nevertheless, important differences between the national regimes continue to exist.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "The original holder of the copyright may be the employer of the author rather than the author themself if the work is a \"work for hire\". For example, in English law the Copyright, Designs and Patents Act 1988 provides that if a copyrighted work is made by an employee in the course of that employment, the copyright is automatically owned by the employer which would be a \"Work for Hire\". Typically, the first owner of a copyright is the person who created the work i.e. the author. But when more than one person creates the work, then a case of joint authorship can be made provided some criteria are met.",
"title": "Obtaining protection"
},
{
"paragraph_id": 24,
"text": "Copyright may apply to a wide range of creative, intellectual, or artistic forms, or \"works\". Specifics vary by jurisdiction, but these can include poems, theses, fictional characters, plays and other literary works, motion pictures, choreography, musical compositions, sound recordings, paintings, drawings, sculptures, photographs, computer software, radio and television broadcasts, and industrial designs. Graphic designs and industrial designs may have separate or overlapping laws applied to them in some jurisdictions.",
"title": "Obtaining protection"
},
{
"paragraph_id": 25,
"text": "Copyright does not cover ideas and information themselves, only the form or manner in which they are expressed. For example, the copyright to a Mickey Mouse cartoon restricts others from making copies of the cartoon or creating derivative works based on Disney's particular anthropomorphic mouse, but does not prohibit the creation of other works about anthropomorphic mice in general, so long as they are different enough to not be judged copies of Disney's.",
"title": "Obtaining protection"
},
{
"paragraph_id": 26,
"text": "Typically, a work must meet minimal standards of originality in order to qualify for copyright, and the copyright expires after a set period of time (some jurisdictions may allow this to be extended). Different countries impose different tests, although generally the requirements are low; in the United Kingdom there has to be some \"skill, labour, and judgment\" that has gone into it. In Australia and the United Kingdom it has been held that a single word is insufficient to comprise a copyright work. However, single words or a short string of words can sometimes be registered as a trademark instead.",
"title": "Obtaining protection"
},
{
"paragraph_id": 27,
"text": "Copyright law recognizes the right of an author based on whether the work actually is an original creation, rather than based on whether it is unique; two authors may own copyright on two substantially identical works, if it is determined that the duplication was coincidental, and neither was copied from the other.",
"title": "Obtaining protection"
},
{
"paragraph_id": 28,
"text": "In all countries where the Berne Convention standards apply, copyright is automatic, and need not be obtained through official registration with any government office. Once an idea has been reduced to tangible form, for example by securing it in a fixed medium (such as a drawing, sheet music, photograph, a videotape, or a computer file), the copyright holder is entitled to enforce their exclusive rights. However, while registration is not needed to exercise copyright, in jurisdictions where the laws provide for registration, it serves as prima facie evidence of a valid copyright and enables the copyright holder to seek statutory damages and attorney's fees. (In the US, registering after an infringement only enables one to receive actual damages and lost profits.)",
"title": "Obtaining protection"
},
{
"paragraph_id": 29,
"text": "A widely circulated strategy to avoid the cost of copyright registration is referred to as the poor man's copyright. It proposes that the creator send the work to themself in a sealed envelope by registered mail, using the postmark to establish the date. This technique has not been recognized in any published opinions of the United States courts. The United States Copyright Office says the technique is not a substitute for actual registration. The United Kingdom Intellectual Property Office discusses the technique and notes that the technique (as well as commercial registries) does not constitute dispositive proof that the work is original or establish who created the work.",
"title": "Obtaining protection"
},
{
"paragraph_id": 30,
"text": "The Berne Convention allows member countries to decide whether creative works must be \"fixed\" to enjoy copyright. Article 2, Section 2 of the Berne Convention states: \"It shall be a matter for legislation in the countries of the Union to prescribe that works in general or any specified categories of works shall not be protected unless they have been fixed in some material form.\" Some countries do not require that a work be produced in a particular form to obtain copyright protection. For instance, Spain, France, and Australia do not require fixation for copyright protection. The United States and Canada, on the other hand, require that most works must be \"fixed in a tangible medium of expression\" to obtain copyright protection. US law requires that the fixation be stable and permanent enough to be \"perceived, reproduced or communicated for a period of more than transitory duration\". Similarly, Canadian courts consider fixation to require that the work be \"expressed to some extent at least in some material form, capable of identification and having a more or less permanent endurance\".",
"title": "Obtaining protection"
},
{
"paragraph_id": 31,
"text": "Note this provision of US law: c) Effect of Berne Convention.—No right or interest in a work eligible for protection under this title may be claimed by virtue of, or in reliance upon, the provisions of the Berne Convention, or the adherence of the United States thereto. Any rights in a work eligible for protection under this title that derive from this title, other Federal or State statutes, or the common law, shall not be expanded or reduced by virtue of, or in reliance upon, the provisions of the Berne Convention, or the adherence of the United States thereto.",
"title": "Obtaining protection"
},
{
"paragraph_id": 32,
"text": "Before 1989, United States law required the use of a copyright notice, consisting of the copyright symbol (©, the letter C inside a circle), the abbreviation \"Copr.\", or the word \"Copyright\", followed by the year of the first publication of the work and the name of the copyright holder. Several years may be noted if the work has gone through substantial revisions. The proper copyright notice for sound recordings of musical or other audio works is a sound recording copyright symbol (℗, the letter P inside a circle), which indicates a sound recording copyright, with the letter P indicating a \"phonorecord\". In addition, the phrase All rights reserved which indicates that the copyright holder reserves, or holds for their own use was once required to assert copyright, but that phrase is now legally obsolete. Almost everything on the Internet has some sort of copyright attached to it. Whether these things are watermarked, signed, or have any other sort of indication of the copyright is a different story however.",
"title": "Obtaining protection"
},
{
"paragraph_id": 33,
"text": "In 1989 the United States enacted the Berne Convention Implementation Act, amending the 1976 Copyright Act to conform to most of the provisions of the Berne Convention. As a result, the use of copyright notices has become optional to claim copyright, because the Berne Convention makes copyright automatic. However, the lack of notice of copyright using these marks may have consequences in terms of reduced damages in an infringement lawsuit – using notices of this form may reduce the likelihood of a defense of \"innocent infringement\" being successful.",
"title": "Obtaining protection"
},
{
"paragraph_id": 34,
"text": "Copyrights are generally enforced by the holder in a civil law court, but there are also criminal infringement statutes in some jurisdictions. While central registries are kept in some countries which aid in proving claims of ownership, registering does not necessarily prove ownership, nor does the fact of copying (even without permission) necessarily prove that copyright was infringed. Criminal sanctions are generally aimed at serious counterfeiting activity, but are now becoming more commonplace as copyright collectives such as the RIAA are increasingly targeting the file sharing home Internet user. Thus far, however, most such cases against file sharers have been settled out of court. (See Legal aspects of file sharing)",
"title": "Enforcement"
},
{
"paragraph_id": 35,
"text": "In most jurisdictions the copyright holder must bear the cost of enforcing copyright. This will usually involve engaging legal representation, administrative or court costs. In light of this, many copyright disputes are settled by a direct approach to the infringing party in order to settle the dispute out of court.",
"title": "Enforcement"
},
{
"paragraph_id": 36,
"text": "\"... by 1978, the scope was expanded to apply to any 'expression' that has been 'fixed' in any medium, this protection granted automatically whether the maker wants it or not, no registration required.\"",
"title": "Enforcement"
},
{
"paragraph_id": 37,
"text": "For a work to be considered to infringe upon copyright, its use must have occurred in a nation that has domestic copyright laws or adheres to a bilateral treaty or established international convention such as the Berne Convention or WIPO Copyright Treaty. Improper use of materials outside of legislation is deemed \"unauthorized edition\", not copyright infringement.",
"title": "Enforcement"
},
{
"paragraph_id": 38,
"text": "Statistics regarding the effects of copyright infringement are difficult to determine. Studies have attempted to determine whether there is a monetary loss for industries affected by copyright infringement by predicting what portion of pirated works would have been formally purchased if they had not been freely available. Other reports indicate that copyright infringement does not have an adverse effect on the entertainment industry, and can have a positive effect. In particular, a 2014 university study concluded that free music content, accessed on YouTube, does not necessarily hurt sales, instead has the potential to increase sales.",
"title": "Enforcement"
},
{
"paragraph_id": 39,
"text": "According to the IP Commission Report the annual cost of intellectual property theft to the US economy \"continues to exceed $225 billion in counterfeit goods, pirated software, and theft of trade secrets and could be as high as $600 billion.\" A 2019 study sponsored by the US Chamber of Commerce Global Innovation Policy Center (GIPC), in partnership with NERA Economic Consulting \"estimates that global online piracy costs the U.S. economy at least $29.2 billion in lost revenue each year.\" An August 2021 report by the Digital Citizens Alliance states that \"online criminals who offer stolen movies, TV shows, games, and live events through websites and apps are reaping $1.34 billion in annual advertising revenues.\" This comes as a result of users visiting pirate websites who are then subjected to pirated content, malware, and fraud.",
"title": "Enforcement"
},
{
"paragraph_id": 40,
"text": "According to World Intellectual Property Organisation, copyright protects two types of rights. Economic rights allow right owners to derive financial reward from the use of their works by others. Moral rights allow authors and creators to take certain actions to preserve and protect their link with their work. The author or creator may be the owner of the economic rights or those rights may be transferred to one or more copyright owners. Many countries do not allow the transfer of moral rights.",
"title": "Rights granted"
},
{
"paragraph_id": 41,
"text": "With any kind of property, its owner may decide how it is to be used, and others can use it lawfully only if they have the owner's permission, often through a license. The owner's use of the property must, however, respect the legally recognised rights and interests of other members of society. So the owner of a copyright-protected work may decide how to use the work, and may prevent others from using it without permission. National laws usually grant copyright owners exclusive rights to allow third parties to use their works, subject to the legally recognised rights and interests of others. Most copyright laws state that authors or other right owners have the right to authorise or prevent certain acts in relation to a work. Right owners can authorise or prohibit:",
"title": "Rights granted"
},
{
"paragraph_id": 42,
"text": "Moral rights are concerned with the non-economic rights of a creator. They protect the creator's connection with a work as well as the integrity of the work. Moral rights are only accorded to individual authors and in many national laws they remain with the authors even after the authors have transferred their economic rights. In some EU countries, such as France, moral rights last indefinitely. In the UK, however, moral rights are finite. That is, the right of attribution and the right of integrity last only as long as the work is in copyright. When the copyright term comes to an end, so too do the moral rights in that work. This is just one reason why the moral rights regime within the UK is often regarded as weaker or inferior to the protection of moral rights in continental Europe and elsewhere in the world. The Berne Convention, in Article 6bis, requires its members to grant authors the following rights:",
"title": "Rights granted"
},
{
"paragraph_id": 43,
"text": "These and other similar rights granted in national laws are generally known as the moral rights of authors. The Berne Convention requires these rights to be independent of authors’ economic rights. Moral rights are only accorded to individual authors and in many national laws they remain with the authors even after the authors have transferred their economic rights. This means that even where, for example, a film producer or publisher owns the economic rights in a work, in many jurisdictions the individual author continues to have moral rights. Recently, as a part of the debates being held at the US Copyright Office on the question of inclusion of Moral Rights as a part of the framework of the Copyright Law in United States, the Copyright Office concluded that many diverse aspects of the current moral rights patchwork – including copyright law's derivative work right, state moral rights statutes, and contract law – are generally working well and should not be changed. Further, the Office concludes that there is no need for the creation of a blanket moral rights statute at this time. However, there are aspects of the US moral rights patchwork that could be improved to the benefit of individual authors and the copyright system as a whole.",
"title": "Rights granted"
},
{
"paragraph_id": 44,
"text": "The Copyright Law in the United States, several exclusive rights are granted to the holder of a copyright, as are listed below:",
"title": "Rights granted"
},
{
"paragraph_id": 45,
"text": "The basic right when a work is protected by copyright is that the holder may determine and decide how and under what conditions the protected work may be used by others. This includes the right to decide to distribute the work for free. This part of copyright is often overseen. The phrase \"exclusive right\" means that only the copyright holder is free to exercise those rights, and others are prohibited from using the work without the holder's permission. Copyright is sometimes called a \"negative right\", as it serves to prohibit certain people (e.g., readers, viewers, or listeners, and primarily publishers and would be publishers) from doing something they would otherwise be able to do, rather than permitting people (e.g., authors) to do something they would otherwise be unable to do. In this way it is similar to the unregistered design right in English law and European law. The rights of the copyright holder also permit him/her to not use or exploit their copyright, for some or all of the term. There is, however, a critique which rejects this assertion as being based on a philosophical interpretation of copyright law that is not universally shared. There is also debate on whether copyright should be considered a property right or a moral right.",
"title": "Rights granted"
},
{
"paragraph_id": 46,
"text": "UK copyright law gives creators both economic rights and moral rights. While ‘copying’ someone else's work without permission may constitute an infringement of their economic rights, that is, the reproduction right or the right of communication to the public, whereas, ‘mutilating’ it might infringe the creator's moral rights. In the UK, moral rights include the right to be identified as the author of the work, which is generally identified as the right of attribution, and the right not to have your work subjected to ‘derogatory treatment’, that is the right of integrity.",
"title": "Rights granted"
},
{
"paragraph_id": 47,
"text": "Indian copyright law is at parity with the international standards as contained in TRIPS. The Indian Copyright Act, 1957, pursuant to the amendments in 1999, 2002 and 2012, fully reflects the Berne Convention and the Universal Copyrights Convention, to which India is a party. India is also a party to the Geneva Convention for the Protection of Rights of Producers of Phonograms and is an active member of the World Intellectual Property Organization (WIPO) and United Nations Educational, Scientific and Cultural Organization (UNESCO). The Indian system provides both the economic and moral rights under different provisions of its Indian Copyright Act of 1957.",
"title": "Rights granted"
},
{
"paragraph_id": 48,
"text": "Copyright subsists for a variety of lengths in different jurisdictions. The length of the term can depend on several factors, including the type of work (e.g. musical composition, novel), whether the work has been published, and whether the work was created by an individual or a corporation. In most of the world, the default length of copyright is the life of the author plus either 50 or 70 years. In the United States, the term for most existing works is a fixed number of years after the date of creation or publication. Under most countries' laws (for example, the United States and the United Kingdom), copyrights expire at the end of the calendar year in which they would otherwise expire.",
"title": "Rights granted"
},
{
"paragraph_id": 49,
"text": "The length and requirements for copyright duration are subject to change by legislation, and since the early 20th century there have been a number of adjustments made in various countries, which can make determining the duration of a given copyright somewhat difficult. For example, the United States used to require copyrights to be renewed after 28 years to stay in force, and formerly required a copyright notice upon first publication to gain coverage. In Italy and France, there were post-wartime extensions that could increase the term by approximately 6 years in Italy and up to about 14 in France. Many countries have extended the length of their copyright terms (sometimes retroactively). International treaties establish minimum terms for copyrights, but individual countries may enforce longer terms than those.",
"title": "Rights granted"
},
{
"paragraph_id": 50,
"text": "In the United States, all books and other works, except for sound recordings, published before 1928 have expired copyrights and are in the public domain. The applicable date for sound recordings in the United States is before 1923. In addition, works published before 1964 that did not have their copyrights renewed 28 years after first publication year also are in the public domain. Hirtle points out that the great majority of these works (including 93% of the books) were not renewed after 28 years and are in the public domain. Books originally published outside the US by non-Americans are exempt from this renewal requirement, if they are still under copyright in their home country.",
"title": "Rights granted"
},
{
"paragraph_id": 51,
"text": "But if the intended exploitation of the work includes publication (or distribution of derivative work, such as a film based on a book protected by copyright) outside the US, the terms of copyright around the world must be considered. If the author has been dead more than 70 years, the work is in the public domain in most, but not all, countries.",
"title": "Rights granted"
},
{
"paragraph_id": 52,
"text": "In 1998, the length of a copyright in the United States was increased by 20 years under the Copyright Term Extension Act. This legislation was strongly promoted by corporations which had valuable copyrights which otherwise would have expired, and has been the subject of substantial criticism on this point.",
"title": "Rights granted"
},
{
"paragraph_id": 53,
"text": "In many jurisdictions, copyright law makes exceptions to these restrictions when the work is copied for the purpose of commentary or other related uses. United States copyright law does not cover names, titles, short phrases or listings (such as ingredients, recipes, labels, or formulas). However, there are protections available for those areas copyright does not cover, such as trademarks and patents.",
"title": "Limitations and exceptions"
},
{
"paragraph_id": 54,
"text": "The idea–expression divide differentiates between ideas and expression, and states that copyright protects only the original expression of ideas, and not the ideas themselves. This principle, first clarified in the 1879 case of Baker v. Selden, has since been codified by the Copyright Act of 1976 at 17 U.S.C. § 102(b).",
"title": "Limitations and exceptions"
},
{
"paragraph_id": 55,
"text": "Copyright law does not restrict the owner of a copy from reselling legitimately obtained copies of copyrighted works, provided that those copies were originally produced by or with the permission of the copyright holder. It is therefore legal, for example, to resell a copyrighted book or CD. In the United States this is known as the first-sale doctrine, and was established by the courts to clarify the legality of reselling books in second-hand bookstores.",
"title": "Limitations and exceptions"
},
{
"paragraph_id": 56,
"text": "Some countries may have parallel importation restrictions that allow the copyright holder to control the aftermarket. This may mean for example that a copy of a book that does not infringe copyright in the country where it was printed does infringe copyright in a country into which it is imported for retailing. The first-sale doctrine is known as exhaustion of rights in other countries and is a principle which also applies, though somewhat differently, to patent and trademark rights. While this doctrine permits the transfer of the particular legitimate copy involved, it does not permit making or distributing additional copies.",
"title": "Limitations and exceptions"
},
{
"paragraph_id": 57,
"text": "In Kirtsaeng v. John Wiley & Sons, Inc., in 2013, the United States Supreme Court held in a 6–3 decision that the first-sale doctrine applies to goods manufactured abroad with the copyright owner's permission and then imported into the US without such permission. The case involved a plaintiff who imported Asian editions of textbooks that had been manufactured abroad with the publisher-plaintiff's permission. The defendant, without permission from the publisher, imported the textbooks and resold on eBay. The Supreme Court's holding severely limits the ability of copyright holders to prevent such importation.",
"title": "Limitations and exceptions"
},
{
"paragraph_id": 58,
"text": "In addition, copyright, in most cases, does not prohibit one from acts such as modifying, defacing, or destroying one's own legitimately obtained copy of a copyrighted work, so long as duplication is not involved. However, in countries that implement moral rights, a copyright holder can in some cases successfully prevent the mutilation or destruction of a work that is publicly visible.",
"title": "Limitations and exceptions"
},
{
"paragraph_id": 59,
"text": "Copyright does not prohibit all copying or replication. In the United States, the fair use doctrine, codified by the Copyright Act of 1976 as 17 U.S.C. Section 107, permits some copying and distribution without permission of the copyright holder or payment to same. The statute does not clearly define fair use, but instead gives four non-exclusive factors to consider in a fair use analysis. Those factors are:",
"title": "Limitations and exceptions"
},
{
"paragraph_id": 60,
"text": "In the United Kingdom and many other Commonwealth countries, a similar notion of fair dealing was established by the courts or through legislation. The concept is sometimes not well defined; however in Canada, private copying for personal use has been expressly permitted by statute since 1999. In Alberta (Education) v. Canadian Copyright Licensing Agency (Access Copyright), 2012 SCC 37, the Supreme Court of Canada concluded that limited copying for educational purposes could also be justified under the fair dealing exemption. In Australia, the fair dealing exceptions under the Copyright Act 1968 (Cth) are a limited set of circumstances under which copyrighted material can be legally copied or adapted without the copyright holder's consent. Fair dealing uses are research and study; review and critique; news reportage and the giving of professional advice (i.e. legal advice). Under current Australian law, although it is still a breach of copyright to copy, reproduce or adapt copyright material for personal or private use without permission from the copyright owner, owners of a legitimate copy are permitted to \"format shift\" that work from one medium to another for personal, private use, or to \"time shift\" a broadcast work for later, once and only once, viewing or listening. Other technical exemptions from infringement may also apply, such as the temporary reproduction of a work in machine readable form for a computer.",
"title": "Limitations and exceptions"
},
{
"paragraph_id": 61,
"text": "In the United States the AHRA (Audio Home Recording Act Codified in Section 10, 1992) prohibits action against consumers making noncommercial recordings of music, in return for royalties on both media and devices plus mandatory copy-control mechanisms on recorders.",
"title": "Limitations and exceptions"
},
{
"paragraph_id": 62,
"text": "Section 1008. Prohibition on certain infringement actions No action may be brought under this title alleging infringement of copyright based on the manufacture, importation, or distribution of a digital audio recording device, a digital audio recording medium, an analog recording device, or an analog recording medium, or based on the noncommercial use by a consumer of such a device or medium for making digital musical recordings or analog musical recordings.",
"title": "Limitations and exceptions"
},
{
"paragraph_id": 63,
"text": "Later acts amended US Copyright law so that for certain purposes making 10 copies or more is construed to be commercial, but there is no general rule permitting such copying. Indeed, making one complete copy of a work, or in many cases using a portion of it, for commercial purposes will not be considered fair use. The Digital Millennium Copyright Act prohibits the manufacture, importation, or distribution of devices whose intended use, or only significant commercial use, is to bypass an access or copy control put in place by a copyright owner. An appellate court has held that fair use is not a defense to engaging in such distribution.",
"title": "Limitations and exceptions"
},
{
"paragraph_id": 64,
"text": "EU copyright laws recognise the right of EU member states to implement some national exceptions to copyright. Examples of those exceptions are:",
"title": "Limitations and exceptions"
},
{
"paragraph_id": 65,
"text": "It is legal in several countries including the United Kingdom and the United States to produce alternative versions (for example, in large print or braille) of a copyrighted work to provide improved access to a work for blind and visually impaired people without permission from the copyright holder.",
"title": "Limitations and exceptions"
},
{
"paragraph_id": 66,
"text": "In the US there is a Religious Service Exemption (1976 law, section 110[3]), namely \"performance of a non-dramatic literary or musical work or of a dramatico-musical work of a religious nature or display of a work, in the course of services at a place of worship or other religious assembly\" shall not constitute infringement of copyright.",
"title": "Limitations and exceptions"
},
{
"paragraph_id": 67,
"text": "In Canada, items deemed useful articles such as clothing designs are exempted from copyright protection under the Copyright Act if reproduced more than 50 times. Fast fashion brands may reproduce clothing designs from smaller companies without violating copyright protections.",
"title": "Limitations and exceptions"
},
{
"paragraph_id": 68,
"text": "A copyright, or aspects of it (e.g. reproduction alone, all but moral rights), may be assigned or transferred from one party to another. For example, a musician who records an album will often sign an agreement with a record company in which the musician agrees to transfer all copyright in the recordings in exchange for royalties and other considerations. The creator (and original copyright holder) benefits, or expects to, from production and marketing capabilities far beyond those of the author. In the digital age of music, music may be copied and distributed at minimal cost through the Internet; however, the record industry attempts to provide promotion and marketing for the artist and their work so it can reach a much larger audience. A copyright holder need not transfer all rights completely, though many publishers will insist. Some of the rights may be transferred, or else the copyright holder may grant another party a non-exclusive license to copy or distribute the work in a particular region or for a specified period of time.",
"title": " Transfer, assignment and licensing"
},
{
"paragraph_id": 69,
"text": "A transfer or licence may have to meet particular formal requirements in order to be effective, for example under the Australian Copyright Act 1968 the copyright itself must be expressly transferred in writing. Under the US Copyright Act, a transfer of ownership in copyright must be memorialized in a writing signed by the transferor. For that purpose, ownership in copyright includes exclusive licenses of rights. Thus exclusive licenses, to be effective, must be granted in a written instrument signed by the grantor. No special form of transfer or grant is required. A simple document that identifies the work involved and the rights being granted is sufficient. Non-exclusive grants (often called non-exclusive licenses) need not be in writing under US law. They can be oral or even implied by the behavior of the parties. Transfers of copyright ownership, including exclusive licenses, may and should be recorded in the U.S. Copyright Office. (Information on recording transfers is available on the Office's web site.) While recording is not required to make the grant effective, it offers important benefits, much like those obtained by recording a deed in a real estate transaction.",
"title": " Transfer, assignment and licensing"
},
{
"paragraph_id": 70,
"text": "Copyright may also be licensed. Some jurisdictions may provide that certain classes of copyrighted works be made available under a prescribed statutory license (e.g. musical works in the United States used for radio broadcast or performance). This is also called a compulsory license, because under this scheme, anyone who wishes to copy a covered work does not need the permission of the copyright holder, but instead merely files the proper notice and pays a set fee established by statute (or by an agency decision under statutory guidance) for every copy made. Failure to follow the proper procedures would place the copier at risk of an infringement suit. Because of the difficulty of following every individual work, copyright collectives or collecting societies and performing rights organizations (such as ASCAP, BMI, and SESAC) have been formed to collect royalties for hundreds (thousands and more) works at once. Though this market solution bypasses the statutory license, the availability of the statutory fee still helps dictate the price per work collective rights organizations charge, driving it down to what avoidance of procedural hassle would justify.",
"title": " Transfer, assignment and licensing"
},
{
"paragraph_id": 71,
"text": "Copyright licenses known as open or free licenses seek to grant several rights to licensees, either for a fee or not. Free in this context is not as much of a reference to price as it is to freedom. What constitutes free licensing has been characterised in a number of similar definitions, including by order of longevity the Free Software Definition, the Debian Free Software Guidelines, the Open Source Definition and the Definition of Free Cultural Works. Further refinements to these definitions have resulted in categories such as copyleft and permissive. Common examples of free licences are the GNU General Public License, BSD licenses and some Creative Commons licenses.",
"title": " Transfer, assignment and licensing"
},
{
"paragraph_id": 72,
"text": "Founded in 2001 by James Boyle, Lawrence Lessig, and Hal Abelson, the Creative Commons (CC) is a non-profit organization which aims to facilitate the legal sharing of creative works. To this end, the organization provides a number of generic copyright license options to the public, gratis. These licenses allow copyright holders to define conditions under which others may use a work and to specify what types of use are acceptable.",
"title": " Transfer, assignment and licensing"
},
{
"paragraph_id": 73,
"text": "Terms of use have traditionally been negotiated on an individual basis between copyright holder and potential licensee. Therefore, a general CC license outlining which rights the copyright holder is willing to waive enables the general public to use such works more freely. Six general types of CC licenses are available (although some of them are not properly free per the above definitions and per Creative Commons' own advice). These are based upon copyright-holder stipulations such as whether they are willing to allow modifications to the work, whether they permit the creation of derivative works and whether they are willing to permit commercial use of the work. As of 2009 approximately 130 million individuals had received such licenses.",
"title": " Transfer, assignment and licensing"
},
{
"paragraph_id": 74,
"text": "Some sources are critical of particular aspects of the copyright system. This is known as a debate over copynorms. Particularly to the background of uploading content to internet platforms and the digital exchange of original work, there is discussion about the copyright aspects of downloading and streaming, the copyright aspects of hyperlinking and framing.",
"title": "Criticism"
},
{
"paragraph_id": 75,
"text": "Concerns are often couched in the language of digital rights, digital freedom, database rights, open data or censorship. Discussions include Free Culture, a 2004 book by Lawrence Lessig. Lessig coined the term permission culture to describe a worst-case system. Good Copy Bad Copy (documentary) and RiP!: A Remix Manifesto, discuss copyright. Some suggest an alternative compensation system. In Europe consumers are acting up against the raising costs of music, film and books, and as a result Pirate Parties have been created. Some groups reject copyright altogether, taking an anti-copyright stance. The perceived inability to enforce copyright online leads some to advocate ignoring legal statutes when on the web.",
"title": "Criticism"
},
{
"paragraph_id": 76,
"text": "Copyright, like other intellectual property rights, is subject to a statutorily determined term. Once the term of a copyright has expired, the formerly copyrighted work enters the public domain and may be used or exploited by anyone without obtaining permission, and normally without payment. However, in paying public domain regimes the user may still have to pay royalties to the state or to an authors' association. Courts in common law countries, such as the United States and the United Kingdom, have rejected the doctrine of a common law copyright. Public domain works should not be confused with works that are publicly available. Works posted in the internet, for example, are publicly available, but are not generally in the public domain. Copying such works may therefore violate the author's copyright.",
"title": "Public domain"
}
] | A copyright is a type of intellectual property that gives the creator of an original work, or another owner of the right, the exclusive, legally secured right to copy, distribute, adapt, display, and perform a creative work, usually for a limited time. The creative work may be in a literary, artistic, educational, or musical form. Copyright is intended to protect the original expression of an idea in the form of a creative work, but not the idea itself. A copyright is subject to limitations based on public interest considerations, such as the fair use doctrine in the United States. Some jurisdictions require "fixing" copyrighted works in a tangible form. It is often shared among multiple authors, each of whom holds a set of rights to use or license the work, and who are commonly referred to as rights holders. These rights frequently include reproduction, control over derivative works, distribution, public performance, and moral rights such as attribution. Copyrights can be granted by public law and are in that case considered "territorial rights". This means that copyrights granted by the law of a certain state do not extend beyond the territory of that specific jurisdiction. Copyrights of this type vary by country; many countries, and sometimes a large group of countries, have made agreements with other countries on procedures applicable when works "cross" national borders or national rights are inconsistent. Typically, the public law duration of a copyright expires 50 to 100 years after the creator dies, depending on the jurisdiction. Some countries require certain copyright formalities to establish copyright; others recognize copyright in any completed work, without a formal registration. When the copyright of a work expires, it enters the public domain. | 2001-09-30T06:43:41Z | 2023-12-29T08:25:07Z | [
"Template:Smallcaps",
"Template:Commons",
"Template:EB9 Poster",
"Template:Authority control",
"Template:Distinguish",
"Template:Use American English",
"Template:Blockquote",
"Template:As of",
"Template:UnitedStatesCode",
"Template:Better source needed",
"Template:Cite book",
"Template:Cite magazine",
"Template:Webarchive",
"Template:Refbegin",
"Template:Refend",
"Template:Wikiquote",
"Template:About",
"Template:Sfn",
"Template:Cite encyclopedia",
"Template:ISBN",
"Template:See also",
"Template:Citation needed",
"Template:Div col",
"Template:USPL",
"Template:Cite journal",
"Template:UnitedStatesCodeSec",
"Template:Intellectual property activism",
"Template:Anchor",
"Template:Reflist",
"Template:Cite web",
"Template:Citation",
"Template:For",
"Template:Use dmy dates",
"Template:Main",
"Template:Div col end",
"Template:Copyright law by country",
"Template:Pp-semi-indef",
"Template:Intellectual property",
"Template:Portal",
"Template:Curlie",
"Template:Wikisource",
"Template:Library resources box",
"Template:Short description",
"Template:Pp-move",
"Template:Cite news",
"Template:EB1911 poster"
] | https://en.wikipedia.org/wiki/Copyright |
5,282 | Catalan language | Catalan (/ˈkætələn, -æn, ˌkætəˈlæn/; autonym: català, Eastern Catalan: [kətəˈla]), known in the Valencian Community and Carche as Valencian (autonym: valencià), is a Western Romance language. It is the official language of Andorra, and an official language of two autonomous communities in eastern Spain: Catalonia and the Balearic Islands. It is also an official language in Valencia, where it is called Valencian. It has semi-official status in the Italian comune of Alghero, and it is spoken in the Pyrénées-Orientales department of France and in two further areas in eastern Spain: the eastern strip of Aragon and the Carche area in the Region of Murcia. The Catalan-speaking territories are often called the Països Catalans or "Catalan Countries".
The language evolved from Vulgar Latin in the Middle Ages around the eastern Pyrenees. Nineteenth-century Spain saw a Catalan literary revival, culminating in the early 1900s.
The word Catalan is derived from the territorial name of Catalonia, itself of disputed etymology. The main theory suggests that Catalunya (Latin Gathia Launia) derives from the name Gothia or Gauthia ("Land of the Goths"), since the origins of the Catalan counts, lords and people were found in the March of Gothia, whence Gothland > Gothlandia > Gothalania > Catalonia theoretically derived.
In English, the term referring to a person first appears in the mid 14th century as Catelaner, followed in the 15th century as Catellain (from French). It is attested as a language name since at least 1652. The word Catalan can be pronounced in English as /ˈkætələn/, /ˈkætəlæn/ or /ˌkætəˈlæn/.
The endonym is pronounced [kətəˈla] in the Eastern Catalan dialects, and [kataˈla] in the Western dialects. In the Valencian Community and Carche, the term valencià [valensiˈa, ba-] is frequently used instead. Thus, the name "Valencian", although often employed for referring to the varieties specific to the Valencian Community and Carche, is also used by Valencians as a name for the language as a whole, synonymous with "Catalan". Both uses of the term have their respective entries in the dictionaries by the Acadèmia Valenciana de la Llengua and the Institut d'Estudis Catalans. See also status of Valencian below.
By the 9th century, Catalan had evolved from Vulgar Latin on both sides of the eastern end of the Pyrenees, as well as the territories of the Roman province of Hispania Tarraconensis to the south. From the 8th century onwards the Catalan counts extended their territory southwards and westwards at the expense of the Muslims, bringing their language with them. This process was given definitive impetus with the separation of the County of Barcelona from the Carolingian Empire in 988.
In the 11th century, documents written in macaronic Latin begin to show Catalan elements, with texts written almost completely in Romance appearing by 1080. Old Catalan shared many features with Gallo-Romance, diverging from Old Occitan between the 11th and 14th centuries.
During the 11th and 12th centuries the Catalan rulers expanded southward to the Ebro river, and in the 13th century they conquered the Land of Valencia and the Balearic Islands. The city of Alghero in Sardinia was repopulated with Catalan speakers in the 14th century. The language also reached Murcia, which became Spanish-speaking in the 15th century.
In the Low Middle Ages, Catalan went through a golden age, reaching a peak of maturity and cultural richness. Examples include the work of Majorcan polymath Ramon Llull (1232–1315), the Four Great Chronicles (13th–14th centuries), and the Valencian school of poetry culminating in Ausiàs March (1397–1459). By the 15th century, the city of Valencia had become the sociocultural center of the Crown of Aragon, and Catalan was present all over the Mediterranean world. During this period, the Royal Chancery propagated a highly standardized language. Catalan was widely used as an official language in Sicily until the 15th century, and in Sardinia until the 17th. During this period, the language was what Costa Carreras terms "one of the 'great languages' of medieval Europe".
Martorell's outstanding novel of chivalry Tirant lo Blanc (1490) shows a transition from Medieval to Renaissance values, something that can also be seen in Metge's work. The first book produced with movable type in the Iberian Peninsula was printed in Catalan.
With the union of the crowns of Castille and Aragon in 1479, the Spanish kings ruled over different kingdoms, each with its own cultural, linguistic and political particularities, and they had to swear by the laws of each territory before the respective parliaments. But after the War of the Spanish Succession, Spain became an absolute monarchy under Philip V, which led to the assimilation of the Crown of Aragon by the Crown of Castile through the Nueva Planta decrees, as a first step in the creation of the Spanish nation-state; as in other contemporary European states, this meant the imposition of the political and cultural characteristics of the dominant groups. Since the political unification of 1714, Spanish assimilation policies towards national minorities have been a constant.
The process of assimilation began with secret instructions to the corregidores of the Catalan territory: they "will take the utmost care to introduce the Castilian language, for which purpose he will give the most temperate and disguised measures so that the effect is achieved, without the care being noticed." From there, actions in the service of assimilation, discreet or aggressive, continued down to the smallest details, such as the 1799 Royal Certificate forbidding anyone to "represent, sing and dance pieces that were not in Spanish." Over time, the use of Spanish became more prestigious and marked the start of the decline of Catalan. Starting in the 16th century, Catalan literature came under the influence of Spanish, and the nobility and parts of the urban and literary classes became bilingual.
With the Treaty of the Pyrenees (1659), Spain ceded the northern part of Catalonia to France, and soon thereafter the local Catalan varieties came under the influence of French, which in 1700 became the sole official language of the region.
Shortly after the French Revolution (1789), the French First Republic prohibited official use of, and enacted discriminating policies against, the regional languages of France, such as Catalan, Alsatian, Breton, Occitan, Flemish, and Basque.
Following the French establishment of the colony of Algeria from 1830 onward, it received several waves of Catalan-speaking settlers. People from the Spanish Alicante province settled around Oran, whereas Algiers received immigration from Northern Catalonia and Menorca.
Their speech was known as patuet. By 1911, the number of Catalan speakers was around 100,000. After the declaration of independence of Algeria in 1962, almost all the Catalan speakers fled to Northern Catalonia (as Pieds-Noirs) or Alacant.
The government of France formally recognizes only French as an official language. Nevertheless, on 10 December 2007, the General Council of the Pyrénées-Orientales officially recognized Catalan as one of the languages of the department and seeks to further promote it in public life and education.
In 1807, the Statistics Office of the French Ministry of the Interior asked the prefects for an official survey on the limits of the French language. The survey found that in Roussillon, almost only Catalan was spoken, and since Napoleon wanted to incorporate Catalonia into France, as happened in 1812, the consul in Barcelona was also asked. He declared that Catalan "is taught in schools, it is printed and spoken, not only among the lower class, but also among people of first quality, also in social gatherings, as in visits and congresses", indicating that it was spoken everywhere "with the exception of the royal courts". He also indicated that Catalan was spoken "in the Kingdom of Valencia, in the islands of Mallorca, Menorca, Ibiza, Sardinia, Corsica and much of Sicily, in the Vall d'Aran and Cerdaña".
The defeat of the pro-Habsburg coalition in the War of Spanish Succession (1714) initiated a series of laws which, among other centralizing measures, imposed the use of Spanish in legal documentation all over Spain. Because of this, use of the Catalan language declined during the 18th century.
However, the 19th century saw a Catalan literary revival (Renaixença), which has continued up to the present day. This period started with Aribau's Ode to the Homeland (1833) and was followed, in the second half of the 19th century and the early 20th, by the work of Verdaguer (poetry), Oller (realist novel), and Guimerà (drama). In the 19th century, the region of Carche, in the province of Murcia, was repopulated with Valencian speakers. Catalan spelling was standardized in 1913 and the language became official during the Second Spanish Republic (1931–1939). The Second Spanish Republic saw a brief period of tolerance, with most restrictions against Catalan lifted. The Generalitat (the autonomous government of Catalonia, established during the Republic in 1931) made normal use of Catalan in its administration and made efforts to promote it at the social level, including in schools and the University of Barcelona.
The Catalan language and culture were still vibrant during the Spanish Civil War (1936–1939), but were crushed at an unprecedented level throughout the subsequent decades due to the Francoist dictatorship (1939–1975), which abolished the official status of Catalan and imposed the use of Spanish in schools and in public administration in all of Spain, while banning the use of Catalan in them. Between 1939 and 1943 newspapers and book printing in Catalan almost disappeared. Francisco Franco's desire for a homogeneous Spanish population resonated with some Catalans in favor of his regime, primarily members of the upper class, who began to reject the use of Catalan. Despite all of these hardships, Catalan continued to be used privately within households, and it was able to survive Franco's dictatorship. At the end of World War II, however, some of the harsh measures began to be lifted and, while Spanish remained the only promoted language, a limited amount of Catalan literature began to be tolerated. Several prominent Catalan authors resisted the suppression through literature. Privately organized contests were created to reward works in Catalan, among them the Joan Martorell prize (1947), the Víctor Català prize (1953), the Carles Riba award (1950), and the Honor Award of Catalan Letters (1969). The first Catalan-language TV show was broadcast in 1964. At the same time, oppression of the Catalan language and identity was carried out in schools, through governmental bodies, and in religious centers.
In addition to the loss of prestige for Catalan and its prohibition in schools, migration during the 1950s into Catalonia from other parts of Spain also contributed to the diminished use of the language. These migrants were often unaware of the existence of Catalan, and thus felt no need to learn or use it. Catalonia was the economic powerhouse of Spain, so these migrations continued to occur from all corners of the country. Employment opportunities were reduced for those who were not bilingual. Daily newspapers remained exclusively in Spanish until after Franco's death, when the first one in Catalan since the end of the Civil War, Avui, began to be published in 1976.
Since the Spanish transition to democracy (1975–1982), Catalan has been institutionalized as an official language, a language of education, and a language of mass media, all of which have contributed to its increased prestige. Catalonia is home to a large bilingual non-state linguistic community that is unparalleled in Europe. The teaching of Catalan is mandatory in all schools, but it is possible to use Spanish for studying in the public education system of Catalonia in two situations – if the teacher assigned to a class chooses to use Spanish, or during the learning process of one or more recently arrived immigrant students. There is also some intergenerational shift towards Catalan.
More recently, several Spanish political forces have tried to increase the use of Spanish in the Catalan educational system. As a result, in May 2022 the Spanish Supreme Court urged the Catalan regional government to enforce a measure by which 25% of all lessons must be taught in Spanish.
According to the Statistical Institute of Catalonia, in 2013 the Catalan language was the second most commonly used in Catalonia, after Spanish, as a native or self-defining language: 7% of the population self-identified with both Catalan and Spanish equally, 36.4% with Catalan and 47.5% with Spanish only. In 2003 the same studies found no language preference for self-identification within the population above 15 years old: 5% self-identified with both languages, 44.3% with Catalan and 47.5% with Spanish. To promote use of Catalan, the Generalitat de Catalunya (Catalonia's official Autonomous government) spends part of its annual budget on the promotion of the use of Catalan in Catalonia and in other territories, with entities such as Consorci per a la Normalització Lingüística (Consortium for Linguistic Normalization).
In Andorra, Catalan has always been the sole official language. Since the promulgation of the 1993 constitution, several policies favoring Catalan have been enforced, like Catalan medium education.
On the other hand, there are several language shift processes currently taking place. In the Northern Catalonia area of France, Catalan has followed the same trend as the other minority languages of France, with most of its native speakers being 60 or older (as of 2004). Catalan is studied as a foreign language by 30% of the primary education students, and by 15% of the secondary. The cultural association La Bressola promotes a network of community-run schools engaged in Catalan language immersion programs.
In Alicante province, Catalan is being replaced by Spanish and in Alghero by Italian. There is also well ingrained diglossia in the Valencian Community, Ibiza, and to a lesser extent, in the rest of the Balearic islands.
During the 20th century many Catalans emigrated or went into exile to Venezuela, Mexico, Cuba, Argentina, and other South American countries. They formed a large number of Catalan colonies that today continue to maintain the Catalan language. They also founded many Catalan casals (associations).
One classification of Catalan is given by Pèire Bèc:
However, the ascription of Catalan to the Occitano-Romance branch of Gallo-Romance languages is not shared by all linguists and philologists, particularly among Spanish ones, such as Ramón Menéndez Pidal.
Catalan bears varying degrees of similarity to the linguistic varieties subsumed under the cover term Occitan language (see also differences between Occitan and Catalan and Gallo-Romance languages). Thus, as should be expected from closely related languages, Catalan today shares many traits with other Romance languages.
Some include Catalan in Occitan, as the linguistic distance between this language and some Occitan dialects (such as the Gascon language) is similar to the distance among different Occitan dialects. Catalan was considered a dialect of Occitan until the end of the 19th century and still today remains its closest relative.
Catalan shares many traits with the other neighboring Romance languages (Occitan, French, Italian, Sardinian as well as Spanish and Portuguese among others). However, despite being spoken mostly on the Iberian Peninsula, Catalan has marked differences with the Iberian Romance group (Spanish and Portuguese) in terms of pronunciation, grammar, and especially vocabulary; it shows instead its closest affinity with languages native to France and northern Italy, particularly Occitan and to a lesser extent Gallo-Romance (Franco-Provençal, French, Gallo-Italian).
According to Ethnologue, the lexical similarity between Catalan and other Romance languages is: 87% with Italian; 85% with Portuguese and Spanish; 76% with Ladin and Romansh; 75% with Sardinian; and 73% with Romanian.
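These percentages are, in essence, the share of compared vocabulary items judged to be similar between two languages. As a rough illustration only (not Ethnologue's actual methodology), the sketch below computes such a figure from a list of hypothetical pairwise cognate judgments.

```python
# Illustrative sketch: turn pairwise cognate/similarity judgments for a word
# list into a "lexical similarity" percentage. The sample below is a
# hypothetical toy list, not real comparative data.

def lexical_similarity(judgments):
    """judgments: list of booleans, True if the two languages have a
    similar (cognate) word for that meaning slot."""
    if not judgments:
        return 0.0
    return 100.0 * sum(judgments) / len(judgments)

# Hypothetical 8-item sample comparing two Romance languages.
sample = [True, True, True, True, True, True, True, False]
print(f"Lexical similarity: {lexical_similarity(sample):.1f}%")  # 87.5%
```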
During much of its history, and especially during the Francoist dictatorship (1939–1975), the Catalan language was ridiculed as a mere dialect of Spanish. This view, based on political and ideological considerations, has no linguistic validity. Spanish and Catalan have important differences in their sound systems, lexicon, and grammatical features, and in these respects the language is closer to Occitan (and French).
There is evidence that, at least from the 2nd century a.d., the vocabulary and phonology of Roman Tarraconensis was different from the rest of Roman Hispania. Differentiation arose generally because Spanish, Asturian, and Galician-Portuguese share certain peripheral archaisms (Spanish hervir, Asturian and Portuguese ferver vs. Catalan bullir, Occitan bolir "to boil") and innovatory regionalisms (Sp novillo, Ast nuviellu vs. Cat torell, Oc taurèl "bullock"), while Catalan has a shared history with the Western Romance innovative core, especially Occitan.
Like all Romance languages, Catalan has a handful of native words which are unique to it, or rare elsewhere. These include:
The Gothic superstrate produced different outcomes in Spanish and Catalan. For example, Catalan fang "mud" and rostir "to roast", of Germanic origin, contrast with Spanish lodo and asar, of Latin origin; whereas Catalan filosa "spinning wheel" and templa "temple", of Latin origin, contrast with Spanish rueca and sien, of Germanic origin.
The same happens with Arabic loanwords. Thus, Catalan alfàbia "large earthenware jar" and rajola "tile", of Arabic origin, contrast with Spanish tinaja and teja, of Latin origin; whereas Catalan oli "oil" and oliva "olive", of Latin origin, contrast with Spanish aceite and aceituna. However, the Arabic element is generally much more prevalent in Spanish.
Situated between two large linguistic blocks (Iberian Romance and Gallo-Romance), Catalan has many unique lexical choices, such as enyorar "to miss somebody", apaivagar "to calm somebody down", and rebutjar "reject".
Traditionally Catalan-speaking territories are sometimes called the Països Catalans (Catalan Countries), a denomination based on cultural affinity and common heritage, that has also had a subsequent political interpretation but no official status. Various interpretations of the term may include some or all of these regions.
The number of people known to be fluent in Catalan varies depending on the sources used. A 2004 study did not count the total number of speakers, but estimated a total of 9–9.5 million by matching the percentage of speakers to the population of each area where Catalan is spoken. The web site of the Generalitat de Catalunya estimated that as of 2004 there were 9,118,882 speakers of Catalan. These figures only reflect potential speakers; today it is the native language of only 35.6% of the Catalan population. According to Ethnologue, Catalan had 4.1 million native speakers and 5.1 million second-language speakers in 2021.
According to a 2011 study, the total number of Catalan speakers is over 9.8 million, with 5.9 million residing in Catalonia. More than half of them speak Catalan as a second language, with native speakers numbering about 4.4 million (more than 2.8 million in Catalonia). Very few Catalan monoglots exist; basically, virtually all of the Catalan speakers in Spain are bilingual speakers of Catalan and Spanish, with a sizable population of Spanish-only speakers of immigrant origin (typically born outside Catalonia or whose parents were both born outside Catalonia) existing in the major Catalan urban areas as well.
In Roussillon, only a minority of French Catalans speak Catalan nowadays, with French being the majority language for the inhabitants after a continued process of language shift. According to a 2019 survey by the Catalan government, 31.5% of the inhabitants of Catalonia have Catalan as first language at home whereas 52.7% have Spanish, 2.8% both Catalan and Spanish and 10.8% other languages.
Spanish is the most spoken language in Barcelona (according to the linguistic census held by the Government of Catalonia in 2013) and it is understood almost universally. According to this census of 2013 Catalan is also very commonly spoken in the city of 1,501,262 inhabitants: it is understood by 95% of the population, while 72.3% over the age of 2 can speak it (1,137,816), 79% can read it (1,246,555), and 53% can write it (835,080). The proportion in Barcelona who can speak it, 72.3%, is lower than that of the overall Catalan population, of whom 81.2% over the age of 15 speak the language. Knowledge of Catalan has increased significantly in recent decades thanks to a language immersion educational system. An important social characteristic of the Catalan language is that all the areas where it is spoken are bilingual in practice: together with the French language in Roussillon, with Italian in Alghero, with Spanish and French in Andorra and with Spanish in the rest of the territories.
Catalan phonology varies by dialect. Notable features include:
In contrast to other Romance languages, Catalan has many monosyllabic words, and these may end in a wide variety of consonants, including some consonant clusters. Additionally, Catalan has final obstruent devoicing, which gives rise to an abundance of such couplets as amic ("male friend") vs. amiga ("female friend").
Central Catalan pronunciation is considered to be standard for the language. The descriptions below are mostly representative of this variety. For the differences in pronunciation between the different dialects, see the section on pronunciation of dialects in this article.
Catalan has inherited the typical vowel system of Vulgar Latin, with seven stressed phonemes: /a ɛ e i ɔ o u/, a common feature in Western Romance, with the exception of Spanish. Balearic also has instances of stressed /ə/. Dialects differ in the different degrees of vowel reduction, and the incidence of the pair /ɛ e/.
In Central Catalan, unstressed vowels reduce to three: /a e ɛ/ > [ə]; /o ɔ u/ > [u]; /i/ remains distinct. The other dialects have different vowel reduction processes (see the section pronunciation of dialects in this article).
The consonant system of Catalan is rather conservative.
Catalan sociolinguistics studies the situation of Catalan in the world and the different varieties that the language presents. It is a subdiscipline of Catalan philology and related studies, and its objective is to analyze the relationship between the Catalan language, its speakers, and the surrounding social reality (including contact with other languages).
The dialects of the Catalan language are relatively uniform, especially when compared to other Romance languages, in terms of vocabulary, semantics, syntax, morphology, and phonology. Mutual intelligibility between dialects is very high, with estimates ranging from 90% to 95%. The only exception is the isolated, idiosyncratic Algherese dialect.
Catalan is split in two major dialectal blocks: Eastern and Western. The main difference lies in the treatment of unstressed a and e; which have merged to /ə/ in Eastern dialects, but which remain distinct as /a/ and /e/ in Western dialects. There are a few other differences in pronunciation, verbal morphology, and vocabulary.
Western Catalan comprises the two dialects of Northwestern Catalan and Valencian; the Eastern block comprises four dialects: Central Catalan, Balearic, Rossellonese, and Algherese. Each dialect can be further subdivided into several subdialects. The terms "Catalan" and "Valencian" (respectively used in Catalonia and the Valencian Community) refer to two varieties of the same language. There are two institutions regulating the two standard varieties, the Institute of Catalan Studies in Catalonia and the Valencian Academy of the Language in the Valencian Community.
Central Catalan is considered the standard pronunciation of the language and has the largest number of speakers. It is spoken in the densely populated regions of the Barcelona province, the eastern half of the province of Tarragona, and most of the province of Girona.
Catalan has an inflectional grammar. Nouns have two genders (masculine, feminine), and two numbers (singular, plural). Pronouns additionally can have a neuter gender, and some are also inflected for case and politeness, and can be combined in very complex ways. Verbs are split in several paradigms and are inflected for person, number, tense, aspect, mood, and gender. In terms of pronunciation, Catalan has many words ending in a wide variety of consonants and some consonant clusters, in contrast with many other Romance languages.
Catalan has inherited the typical vowel system of Vulgar Latin, with seven stressed phonemes: /a ɛ e i ɔ o u/, a common feature in Western Romance, except Spanish. Balearic also has instances of stressed /ə/. Dialects differ in the different degrees of vowel reduction, and the incidence of the pair /ɛ e/.
In Eastern Catalan (except Majorcan), unstressed vowels reduce to three: /a e ɛ/ > [ə]; /o ɔ u/ > [u]; /i/ remains distinct. There are a few instances of unreduced [e], [o] in some words. Algherese has lowered [ə] to [a].
In Majorcan, unstressed vowels reduce to four: /a e ɛ/ follow the Eastern Catalan reduction pattern; however /o ɔ/ reduce to [o], with /u/ remaining distinct, as in Western Catalan.
In Western Catalan, unstressed vowels reduce to five: /e ɛ/ > [e]; /o ɔ/ > [o]; /a u i/ remain distinct. This reduction pattern, inherited from Proto-Romance, is also found in Italian and Portuguese. Some Western dialects present further reduction or vowel harmony in some cases.
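The three reduction patterns just described can be summarized as simple mappings from stressed vowel phonemes to their unstressed outcomes. The following sketch is illustrative only; it encodes the rules as stated above and ignores the exceptions and further reductions mentioned in the surrounding paragraphs.

```python
# Unstressed-vowel reduction patterns described above, encoded as plain
# dictionaries (stressed phoneme -> unstressed outcome). Illustration only;
# exceptions (unreduced [e]/[o], Algherese lowering of [ə] to [a], vowel
# harmony in some Western dialects) are not modeled.

REDUCTION = {
    "eastern":  {"a": "ə", "e": "ə", "ɛ": "ə", "o": "u", "ɔ": "u", "u": "u", "i": "i"},  # except Majorcan
    "majorcan": {"a": "ə", "e": "ə", "ɛ": "ə", "o": "o", "ɔ": "o", "u": "u", "i": "i"},
    "western":  {"a": "a", "e": "e", "ɛ": "e", "o": "o", "ɔ": "o", "u": "u", "i": "i"},
}

def reduce_unstressed(vowel: str, dialect: str) -> str:
    """Return the unstressed realization of a stressed vowel in the given dialect."""
    return REDUCTION[dialect][vowel]

print(reduce_unstressed("ɛ", "eastern"))   # ə
print(reduce_unstressed("ɔ", "majorcan"))  # o
print(reduce_unstressed("ɛ", "western"))   # e
```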
Central, Western, and Balearic differ in the lexical incidence of stressed /e/ and /ɛ/. Usually, words with /ɛ/ in Central Catalan correspond to /ə/ in Balearic and /e/ in Western Catalan. Words with /e/ in Balearic almost always have /e/ in Central and Western Catalan as well. As a result, Central Catalan has a much higher incidence of /ɛ/.
Western Catalan: In verbs, the ending for 1st-person present indicative is -e in verbs of the 1st conjugation and -∅ in verbs of the 2nd and 3rd conjugations in most of the Valencian Community, or -o in all verb conjugations in the Northern Valencian Community and Western Catalonia. E.g. parle, tem, sent (Valencian); parlo, temo, sento (Northwestern Catalan).
Eastern Catalan: In verbs, the ending for 1st-person present indicative is -o, -i, or -∅ in all conjugations. E.g. parlo (Central), parl (Balearic), and parli (Northern), all meaning ('I speak').
Western Catalan: In verbs, the inchoative endings are -isc/-esc, -ix, -ixen, -isca/-esca.
Eastern Catalan: In verbs, the inchoative endings are -eixo, -eix, -eixen, -eixi.
Western Catalan: In nouns and adjectives, maintenance of /n/ of medieval plurals in proparoxytone words. E.g. hòmens 'men', jóvens 'youth'.
Eastern Catalan: In nouns and adjectives, loss of /n/ of medieval plurals in proparoxytone words. E.g. homes 'men', joves 'youth' (Ibicencan, however, follows the model of Western Catalan in this case).
Despite its relative lexical unity, the two dialectal blocks of Catalan (Eastern and Western) show some differences in word choices. Any lexical divergence within either of the two blocks can be explained as an archaism. Also, Central Catalan usually acts as an innovative element.
Standard Catalan, virtually accepted by all speakers, is mostly based on Eastern Catalan, which is the most widely used dialect. Nevertheless, the standards of the Valencian Community and the Balearics admit alternative forms, mostly traditional ones, which are not current in eastern Catalonia.
The most notable difference between both standards is some tonic ⟨e⟩ accentuation, for instance: francès, anglès (IEC) – francés, anglés (AVL). Nevertheless, AVL's standard keeps the grave accent ⟨è⟩, while pronouncing it as /e/ rather than /ɛ/, in some words like: què ('what'), or València. Other divergences include the use of ⟨tl⟩ (AVL) in some words instead of ⟨tll⟩ like in ametla/ametlla ('almond'), espatla/espatlla ('back'), the use of elided demonstratives (este 'this', eixe 'that') in the same level as reinforced ones (aquest, aqueix) or the use of many verbal forms common in Valencian, and some of these common in the rest of Western Catalan too, like subjunctive mood or inchoative conjugation in -ix- at the same level as -eix- or the priority use of -e morpheme in 1st person singular in present indicative (-ar verbs): jo compre instead of jo compro ('I buy').
In the Balearic Islands, IEC's standard is used but adapted for the Balearic dialect by the University of the Balearic Islands's philological section. In this way, for instance, IEC says it is correct writing cantam as much as cantem ('we sing'), but the university says that the priority form in the Balearic Islands must be cantam in all fields. Another feature of the Balearic standard is the non-ending in the 1st person singular present indicative: jo compr ('I buy'), jo tem ('I fear'), jo dorm ('I sleep').
In Alghero, the IEC has adapted its standard to the Algherese dialect. In this standard one can find, among other features: the definite article lo instead of el, special possessive pronouns and determinants la mia ('mine'), lo sou/la sua ('his/her'), lo tou/la tua ('yours'), and so on, the use of -v- /v/ in the imperfect tense in all conjugations: cantava, creixiva, llegiva; the use of many archaic words, usual words in Algherese: manco instead of menys ('less'), calqui u instead of algú ('someone'), qual/quala instead of quin/quina ('which'), and so on; and the adaptation of weak pronouns. In 1999, Catalan (Algherese dialect) was among the twelve minority languages officially recognized as Italy's "historical linguistic minorities" by the Italian State under Law No. 482/1999.
In 2011, the Aragonese government passed a decree approving the statutes of a new language regulator of Catalan in La Franja (the so-called Catalan-speaking areas of Aragon), as originally provided for by Law 10/2009. The new entity, designated as Institut Aragonès del Català, shall allow optional education in Catalan and a standardization of the Catalan language in La Franja.
Valencian is classified as a Western dialect, along with the northwestern varieties spoken in Western Catalonia (provinces of Lleida and the western half of Tarragona). Central Catalan has 90% to 95% inherent intelligibility for speakers of Valencian.
Linguists, including Valencian scholars, deal with Catalan and Valencian as the same language. The official regulating body of the language of the Valencian Community, the Valencian Academy of Language (Acadèmia Valenciana de la Llengua, AVL) declares the linguistic unity between Valencian and Catalan varieties.
[T]he historical patrimonial language of the Valencian people, from a philological standpoint, is the same shared by the autonomous communities of Catalonia and Balearic islands, and Principality of Andorra. Additionally, it is the patrimonial historical language of other territories of the ancient Crown of Aragon [...] The different varieties of these territories constitute a language, that is, a "linguistic system" [...] From this group of varieties, Valencian has the same hierarchy and dignity as any other dialectal modality of that linguistic system [...]
Ruling of the Valencian Language Academy of 9 February 2005, extract of point 1.
The AVL, created by the Valencian parliament, is in charge of dictating the official rules governing the use of Valencian, and its standard is based on the Norms of Castelló (Normes de Castelló). Currently, everyone who writes in Valencian uses this standard, except the Royal Academy of Valencian Culture (Acadèmia de Cultura Valenciana, RACV), which uses for Valencian an independent standard.
Despite the position of the official organizations, an opinion poll carried out between 2001 and 2004 showed that the majority of the Valencian people consider Valencian different from Catalan. This position is promoted by people who do not use Valencian regularly. Furthermore, the data indicates that younger generations educated in Valencian are much less likely to hold these views. A minority of Valencian scholars active in fields other than linguistics defends the position of the Royal Academy of Valencian Culture (Acadèmia de Cultura Valenciana, RACV), which uses for Valencian a standard independent from Catalan.
This clash of opinions has sparked much controversy. For example, during the drafting of the European Constitution in 2004, the Spanish government supplied the EU with translations of the text into Basque, Galician, Catalan, and Valencian, but the latter two were identical.
Literary Catalan allows the use of words from different dialects, except those of very restricted use. However, from the 19th century onwards, there has been a tendency towards favoring words of Northern dialects to the detriment of others, even though nowadays there is a greater freedom of choice.
Like other languages, Catalan has a large list of loanwords from Greek and Latin. This process started very early, and one can find such examples in Ramon Llull's work. In the 14th and 15th centuries Catalan had a far greater number of Greco-Latin loanwords than other Romance languages, as is attested for example in Roís de Corella's writings. The incorporation of learned, or "bookish" words from its own ancestor language, Latin, into Catalan is arguably another form of lexical borrowing through the influence of written language and the liturgical language of the Church. Throughout the Middle Ages and into the early modern period, most literate Catalan speakers were also literate in Latin; and thus they easily adopted Latin words into their writing—and eventually speech—in Catalan.
The process of morphological derivation in Catalan follows the same principles as the other Romance languages, where agglutination is common. Many times, several affixes are appended to a preexisting lexeme, and some sound alternations can occur, for example elèctric [əˈlɛktrik] ("electrical") vs. electricitat [ələktrisiˈtat]. Prefixes are usually appended to verbs, as in preveure ("foresee").
There is greater regularity in the process of word-compounding, where one can find compounded words formed much like those in English.
Catalan uses the Latin script, with some added symbols and digraphs. The Catalan orthography is systematic and largely phonologically based. Standardization of Catalan was among the topics discussed during the First International Congress of the Catalan Language, held in Barcelona October 1906. Subsequently, the Philological Section of the Institut d'Estudis Catalans (IEC, founded in 1911) published the Normes ortogràfiques in 1913 under the direction of Antoni Maria Alcover and Pompeu Fabra. In 1932, Valencian writers and intellectuals gathered in Castelló de la Plana to make a formal adoption of the so-called Normes de Castelló, a set of guidelines following Pompeu Fabra's Catalan language norms.
The grammar of Catalan is similar to other Romance languages. Features include:
In gender inflection, the most notable feature (compared to Portuguese, Spanish, or Italian) is the loss of the typical masculine suffix -o. Thus, the alternation of -o/-a has been replaced by ø/-a. There are only a few exceptions, like minso/minsa ("scarce"). Many not completely predictable morphological alternations may occur, such as:
Catalan has few suppletive couplets, like Italian and Spanish, and unlike French. Thus, Catalan has noi/noia ("boy"/"girl") and gall/gallina ("cock"/"hen"), whereas French has garçon/fille and coq/poule.
There is a tendency to abandon traditionally gender-invariable adjectives in favor of marked ones, something prevalent in Occitan and French. Thus, one can find bullent/bullenta ("boiling") in contrast with traditional bullent/bullent.
As in the other Western Romance languages, the main plural expression is the suffix -s, which may create morphological alternations similar to the ones found in gender inflection, albeit more rarely. The most important one is the addition of -o- before certain consonant groups, a phonetic phenomenon that does not affect feminine forms: el pols/els polsos ("the pulse"/"the pulses") vs. la pols/les pols ("the dust"/"the dusts").
The inflection of determiners is complex, especially because of the high number of elisions, but is similar to that of the neighboring languages. Catalan has more contractions of preposition + article than Spanish, like dels ("of + the [plural]"), but not as many as Italian (which has sul, col, nel, etc.).
Central Catalan has abandoned almost completely unstressed possessives (mon, etc.) in favor of constructions of article + stressed forms (el meu, etc.), a feature shared with Italian.
The morphology of Catalan personal pronouns is complex, especially in unstressed forms, which are numerous (13 distinct forms, compared to 11 in Spanish or 9 in Italian). Features include the gender-neutral ho and the great degree of freedom when combining different unstressed pronouns (65 combinations).
Catalan pronouns exhibit T–V distinction, like all other Romance languages (and most European languages, but not Modern English). This feature implies the use of a different set of second person pronouns for formality.
This flexibility allows Catalan to use extraposition extensively, much more than French or Spanish. Thus, Catalan can have m'hi recomanaren ("they recommended me to him"), whereas in French one must say ils m'ont recommandé à lui, and Spanish me recomendaron a él. This allows the placement of almost any nominal term as a sentence topic, without having to use the passive voice as often (as in French or English), or to identify the direct object with a preposition (as in Spanish).
As in all Romance languages, verbal inflection in Catalan is more complex than nominal inflection. Suffixation is omnipresent, whereas morphological alternations play a secondary role. Vowel alternations are active, as well as infixation and suppletion. However, these are not as productive as in Spanish, and are mostly restricted to irregular verbs.
The Catalan verbal system is basically common to all Western Romance, except that most dialects have replaced the synthetic indicative perfect with a periphrastic form of anar ("to go") + infinitive.
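As an illustration of that periphrastic past, a present-tense form of anar is combined with the infinitive of the lexical verb. The auxiliary forms used in the sketch below (vaig, vas, va, vam, vau, van) are the usual standard ones and are an added assumption, since they are not listed in this article.

```python
# Minimal sketch of the Catalan periphrastic past: present-tense auxiliary of
# anar ("to go") + infinitive. The auxiliary forms are standard Central Catalan
# and are assumed here, not taken from the article.

ANAR_AUX = {
    ("1", "sg"): "vaig", ("2", "sg"): "vas", ("3", "sg"): "va",
    ("1", "pl"): "vam",  ("2", "pl"): "vau", ("3", "pl"): "van",
}

def periphrastic_past(person: str, number: str, infinitive: str) -> str:
    """E.g. periphrastic_past('3', 'sg', 'arribar') -> 'va arribar' ('[it] arrived')."""
    return f"{ANAR_AUX[(person, number)]} {infinitive}"

print(periphrastic_past("3", "sg", "arribar"))  # va arribar
print(periphrastic_past("1", "pl", "cantar"))   # vam cantar
```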
Catalan verbs are traditionally divided into three conjugations, with vowel themes -a-, -e-, -i-, the last two being split into two subtypes. However, this division is mostly theoretical. Only the first conjugation is nowadays productive (with about 3500 common verbs), whereas the third (the subtype of servir, with about 700 common verbs) is semiproductive. The verbs of the second conjugation are fewer than 100, and it is not possible to create new ones, except by compounding.
The grammar of Catalan follows the general pattern of Western Romance languages. The primary word order is subject–verb–object. However, word order is very flexible. Commonly, verb-subject constructions are used to achieve a semantic effect. The sentence "The train has arrived" could be translated as Ha arribat el tren or El tren ha arribat. Both sentences mean "the train has arrived", but the former puts a focus on the train, while the latter puts a focus on the arrival. This subtle distinction is described as "what you might say while waiting in the station" versus "what you might say on the train."
In Spain, every person officially has two surnames, one of which is the father's first surname and the other is the mother's first surname. The law contemplates the possibility of joining both surnames with the Catalan conjunction i ("and").
Selected text from Manuel de Pedrolo's 1970 novel Un amor fora ciutat ("A love affair outside the city").
| [
{
"paragraph_id": 0,
"text": "Catalan (/ˈkætələn, -æn, ˌkætəˈlæn/; autonym: català, Eastern Catalan: [kətəˈla]), known in the Valencian Community and Carche as Valencian (autonym: valencià), is a Western Romance language. It is the official language of Andorra, and an official language of two autonomous communities in eastern Spain: Catalonia and the Balearic Islands. It is also an official language in Valencia, where it is called Valencian. It has semi-official status in the Italian comune of Alghero, and it is spoken in the Pyrénées-Orientales department of France and in two further areas in eastern Spain: the eastern strip of Aragon and the Carche area in the Region of Murcia. The Catalan-speaking territories are often called the Països Catalans or \"Catalan Countries\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "The language evolved from Vulgar Latin in the Middle Ages around the eastern Pyrenees. Nineteenth-century Spain saw a Catalan literary revival, culminating in the early 1900s.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The word Catalan is derived from the territorial name of Catalonia, itself of disputed etymology. The main theory suggests that Catalunya (Latin Gathia Launia) derives from the name Gothia or Gauthia (\"Land of the Goths\"), since the origins of the Catalan counts, lords and people were found in the March of Gothia, whence Gothland > Gothlandia > Gothalania > Catalonia theoretically derived.",
"title": "Etymology and pronunciation"
},
{
"paragraph_id": 3,
"text": "In English, the term referring to a person first appears in the mid 14th century as Catelaner, followed in the 15th century as Catellain (from French). It is attested a language name since at least 1652. The word Catalan can be pronounced in English as /ˈkætələn/, /ˈkætəlæn/ or /ˌkætəˈlæn/.",
"title": "Etymology and pronunciation"
},
{
"paragraph_id": 4,
"text": "The endonym is pronounced [kətəˈla] in the Eastern Catalan dialects, and [kataˈla] in the Western dialects. In the Valencian Community and Carche, the term valencià [valensiˈa, ba-] is frequently used instead. Thus, the name \"Valencian\", although often employed for referring to the varieties specific to the Valencian Community and Carche, is also used by Valencians as a name for the language as a whole, synonymous with \"Catalan\". Both uses of the term have their respective entries in the dictionaries by the Acadèmia Valenciana de la Llengua and the Institut d'Estudis Catalans. See also status of Valencian below.",
"title": "Etymology and pronunciation"
},
{
"paragraph_id": 5,
"text": "By the 9th century, Catalan had evolved from Vulgar Latin on both sides of the eastern end of the Pyrenees, as well as the territories of the Roman province of Hispania Tarraconensis to the south. From the 8th century onwards the Catalan counts extended their territory southwards and westwards at the expense of the Muslims, bringing their language with them. This process was given definitive impetus with the separation of the County of Barcelona from the Carolingian Empire in 988.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In the 11th century, documents written in macaronic Latin begin to show Catalan elements, with texts written almost completely in Romance appearing by 1080. Old Catalan shared many features with Gallo-Romance, diverging from Old Occitan between the 11th and 14th centuries.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "During the 11th and 12th centuries the Catalan rulers expanded southward to the Ebro river, and in the 13th century they conquered the Land of Valencia and the Balearic Islands. The city of Alghero in Sardinia was repopulated with Catalan speakers in the 14th century. The language also reached Murcia, which became Spanish-speaking in the 15th century.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In the Low Middle Ages, Catalan went through a golden age, reaching a peak of maturity and cultural richness. Examples include the work of Majorcan polymath Ramon Llull (1232–1315), the Four Great Chronicles (13th–14th centuries), and the Valencian school of poetry culminating in Ausiàs March (1397–1459). By the 15th century, the city of Valencia had become the sociocultural center of the Crown of Aragon, and Catalan was present all over the Mediterranean world. During this period, the Royal Chancery propagated a highly standardized language. Catalan was widely used as an official language in Sicily until the 15th century, and in Sardinia until the 17th. During this period, the language was what Costa Carreras terms \"one of the 'great languages' of medieval Europe\".",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Martorell's outstanding novel of chivalry Tirant lo Blanc (1490) shows a transition from Medieval to Renaissance values, something that can also be seen in Metge's work. The first book produced with movable type in the Iberian Peninsula was printed in Catalan.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "With the union of the crowns of Castille and Aragon in 1479, the Spanish kings ruled over different kingdoms, each with its own cultural, linguistic and political particularities, and they had to swear by the laws of each territory before the respective parliaments. But after the War of the Spanish Succession, Spain became an absolute monarchy under Philip V, which led to the assimilation of the Crown of Aragon by the Crown of Castile through the Nueva Planta decrees, as a first step in the creation of the Spanish nation-state; as in other contemporary European states, this meant the imposition of the political and cultural characteristics of the dominant groups. Since the political unification of 1714, Spanish assimilation policies towards national minorities have been a constant.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The process of assimilation began with secret instructions to the corregidores of the Catalan territory: they \"will take the utmost care to introduce the Castilian language, for which purpose he will give the most temperate and disguised measures so that the effect is achieved, without the care being noticed.\" From there, actions in the service of assimilation, discreet or aggressive, were continued, and reached to the last detail, such as, in 1799, the Royal Certificate forbidding anyone to \"represent, sing and dance pieces that were not in Spanish.\" Anyway, the use of Spanish gradually became more prestigious and marked the start of the decline of Catalan. Starting in the 16th century, Catalan literature came under the influence of Spanish, and the nobles, part of the urban and literary classes became bilingual.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "With the Treaty of the Pyrenees (1659), Spain ceded the northern part of Catalonia to France, and soon thereafter the local Catalan varieties came under the influence of French, which in 1700 became the sole official language of the region.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Shortly after the French Revolution (1789), the French First Republic prohibited official use of, and enacted discriminating policies against, the regional languages of France, such as Catalan, Alsatian, Breton, Occitan, Flemish, and Basque.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Following the French establishment of the colony of Algeria from 1830 onward, it received several waves of Catalan-speaking settlers. People from the Spanish Alicante province settled around Oran, whereas Algiers received immigration from Northern Catalonia and Menorca.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Their speech was known as patuet. By 1911, the number of Catalan speakers was around 100,000. After the declaration of independence of Algeria in 1962, almost all the Catalan speakers fled to Northern Catalonia (as Pieds-Noirs) or Alacant.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "The government of France formally recognizes only French as an official language. Nevertheless, on 10 December 2007, the General Council of the Pyrénées-Orientales officially recognized Catalan as one of the languages of the department and seeks to further promote it in public life and education.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In 1807, the Statistics Office of the French Ministry of the Interior asked the prefects for an official survey on the limits of the French language. The survey found that in Roussillon, almost only Catalan was spoken, and since Napoleon wanted to incorporate Catalonia into France, as happened in 1812, the consul in Barcelona was also asked. He declared that Catalan \"is taught in schools, it is printed and spoken, not only among the lower class, but also among people of first quality, also in social gatherings, as in visits and congresses\", indicating that it was spoken everywhere \"with the exception of the royal courts\". He also indicated that Catalan was spoken \"in the Kingdom of Valencia, in the islands of Mallorca, Menorca, Ibiza, Sardinia, Corsica and much of Sicily, in the Vall d \"Aran and Cerdaña\".",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The defeat of the pro-Habsburg coalition in the War of Spanish Succession (1714) initiated a series of laws which, among other centralizing measures, imposed the use of Spanish in legal documentation all over Spain. Because of this, use of the Catalan language declined into the 18th century.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "However, the 19th century saw a Catalan literary revival (Renaixença), which has continued up to the present day. This period starts with Aribau's Ode to the Homeland (1833); followed in the second half of the 19th century, and the early 20th by the work of Verdaguer (poetry), Oller (realist novel), and Guimerà (drama). In the 19th century, the region of Carche, in the province of Murcia was repopulated with Valencian speakers. Catalan spelling was standardized in 1913 and the language became official during the Second Spanish Republic (1931–1939). The Second Spanish Republic saw a brief period of tolerance, with most restrictions against Catalan lifted. The Generalitat (the autonomous government of Catalonia, established during the Republic in 1931) made a normal use of Catalan in its administration and put efforts to promote it at social level, including in schools and the University of Barcelona.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "The Catalan language and culture were still vibrant during the Spanish Civil War (1936–1939), but were crushed at an unprecedented level throughout the subsequent decades due to Francoist dictatorship (1939–1975), which abolished the official status of Catalan and imposed the use of Spanish in schools and in public administration in all of Spain, while banning the use of Catalan in them. Between 1939 and 1943 newspapers and book printing in Catalan almost disappeared. Francisco Franco's desire for a homogenous Spanish population resonated with some Catalans in favor of his regime, primarily members of the upper class, who began to reject the use of Catalan. Despite all of these hardships, Catalan continued to be used privately within households, and it was able to survive Franco's dictatorship. At the end of World War II, however, some of the harsh mesures began to be lifted and, while Spanish language remained the sole promoted one, limited number of Catalan literature began to be tolerated. Several prominent Catalan authors resisted the suppression through literature. Private initiative contests were created to reward works in Catalan, among them Joan Martorell prize (1947), Víctor Català prize (1953) Carles Riba award (1950), or the Honor Award of Catalan Letters (1969). The first Catalan-language TV show was broadcast in 1964. At the same time, oppression of the Catalan language and identity was carried out in schools, through governmental bodies, and in religious centers.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "In addition to the loss of prestige for Catalan and its prohibition in schools, migration during the 1950s into Catalonia from other parts of Spain also contributed to the diminished use of the language. These migrants were often unaware of the existence of Catalan, and thus felt no need to learn or use it. Catalonia was the economic powerhouse of Spain, so these migrations continued to occur from all corners of the country. Employment opportunities were reduced for those who were not bilingual. Daily newspapers remained exclusively in Spanish until after Franco's death, when the first one in Catalan since the end of the Civil War, Avui, began to be published in 1976.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Since the Spanish transition to democracy (1975–1982), Catalan has been institutionalized as an official language, language of education, and language of mass media; all of which have contributed to its increased prestige. In Catalonia, there is an unparalleled large bilingual European non-state linguistic community. The teaching of Catalan is mandatory in all schools, but it is possible to use Spanish for studying in the public education system of Catalonia in two situations – if the teacher assigned to a class chooses to use Spanish, or during the learning process of one or more recently arrived immigrant students. There is also some intergenerational shift towards Catalan.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "More recently, several Spanish political forces have tried to increase the use of Spanish in the Catalan educational system. As a result, in May 2022 the Spanish Supreme Court urged the Catalan regional government to enforce a measure by which 25% of all lessons must be taught in Spanish.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "According to the Statistical Institute of Catalonia, in 2013 the Catalan language is the second most commonly used in Catalonia, after Spanish, as a native or self-defining language: 7% of the population self-identifies with both Catalan and Spanish equally, 36.4% with Catalan and 47.5% only Spanish. In 2003 the same studies concluded no language preference for self-identification within the population above 15 years old: 5% self-identified with both languages, 44.3% with Catalan and 47.5% with Spanish. To promote use of Catalan, the Generalitat de Catalunya (Catalonia's official Autonomous government) spends part of its annual budget on the promotion of the use of Catalan in Catalonia and in other territories, with entities such as Consorci per a la Normalització Lingüística (Consortium for Linguistic Normalization)",
"title": "History"
},
{
"paragraph_id": 25,
"text": "In Andorra, Catalan has always been the sole official language. Since the promulgation of the 1993 constitution, several policies favoring Catalan have been enforced, like Catalan medium education.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "On the other hand, there are several language shift processes currently taking place. In the Northern Catalonia area of France, Catalan has followed the same trend as the other minority languages of France, with most of its native speakers being 60 or older (as of 2004). Catalan is studied as a foreign language by 30% of the primary education students, and by 15% of the secondary. The cultural association La Bressola promotes a network of community-run schools engaged in Catalan language immersion programs.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "In Alicante province, Catalan is being replaced by Spanish and in Alghero by Italian. There is also well ingrained diglossia in the Valencian Community, Ibiza, and to a lesser extent, in the rest of the Balearic islands.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "During the 20th century many Catalans emigrated or went into exile to Venezuela, Mexico, Cuba, Argentina, and other South American countries. They formed a large number of Catalan colonies that today continue to maintain the Catalan language. They also founded many Catalan casals (associations).",
"title": "History"
},
{
"paragraph_id": 29,
"text": "One classification of Catalan is given by Pèire Bèc:",
"title": "Classification and relationship with other Romance languages"
},
{
"paragraph_id": 30,
"text": "However, the ascription of Catalan to the Occitano-Romance branch of Gallo-Romance languages is not shared by all linguists and philologists, particularly among Spanish ones, such as Ramón Menéndez Pidal.",
"title": "Classification and relationship with other Romance languages"
},
{
"paragraph_id": 31,
"text": "Catalan bears varying degrees of similarity to the linguistic varieties subsumed under the cover term Occitan language (see also differences between Occitan and Catalan and Gallo-Romance languages). Thus, as it should be expected from closely related languages, Catalan today shares many traits with other Romance languages.",
"title": "Classification and relationship with other Romance languages"
},
{
"paragraph_id": 32,
"text": "Some include Catalan in Occitan, as the linguistic distance between this language and some Occitan dialects (such as the Gascon language) is similar to the distance among different Occitan dialects. Catalan was considered a dialect of Occitan until the end of the 19th century and still today remains its closest relative.",
"title": "Classification and relationship with other Romance languages"
},
{
"paragraph_id": 33,
"text": "Catalan shares many traits with the other neighboring Romance languages (Occitan, French, Italian, Sardinian as well as Spanish and Portuguese among others). However, despite being spoken mostly on the Iberian Peninsula, Catalan has marked differences with the Iberian Romance group (Spanish and Portuguese) in terms of pronunciation, grammar, and especially vocabulary; it shows instead its closest affinity with languages native to France and northern Italy, particularly Occitan and to a lesser extent Gallo-Romance (Franco-Provençal, French, Gallo-Italian).",
"title": "Classification and relationship with other Romance languages"
},
{
"paragraph_id": 34,
"text": "According to Ethnologue, the lexical similarity between Catalan and other Romance languages is: 87% with Italian; 85% with Portuguese and Spanish; 76% with Ladin and Romansh; 75% with Sardinian; and 73% with Romanian.",
"title": "Classification and relationship with other Romance languages"
},
{
"paragraph_id": 35,
"text": "During much of its history, and especially during the Francoist dictatorship (1939–1975), the Catalan language was ridiculed as a mere dialect of Spanish. This view, based on political and ideological considerations, has no linguistic validity. Spanish and Catalan have important differences in their sound systems, lexicon, and grammatical features, placing the language in features closer to Occitan (and French).",
"title": "Classification and relationship with other Romance languages"
},
{
"paragraph_id": 36,
"text": "There is evidence that, at least from the 2nd century a.d., the vocabulary and phonology of Roman Tarraconensis was different from the rest of Roman Hispania. Differentiation arose generally because Spanish, Asturian, and Galician-Portuguese share certain peripheral archaisms (Spanish hervir, Asturian and Portuguese ferver vs. Catalan bullir, Occitan bolir \"to boil\") and innovatory regionalisms (Sp novillo, Ast nuviellu vs. Cat torell, Oc taurèl \"bullock\"), while Catalan has a shared history with the Western Romance innovative core, especially Occitan.",
"title": "Classification and relationship with other Romance languages"
},
{
"paragraph_id": 37,
"text": "Like all Romance languages, Catalan has a handful of native words which are unique to it, or rare elsewhere. These include:",
"title": "Classification and relationship with other Romance languages"
},
{
"paragraph_id": 38,
"text": "The Gothic superstrate produced different outcomes in Spanish and Catalan. For example, Catalan fang \"mud\" and rostir \"to roast\", of Germanic origin, contrast with Spanish lodo and asar, of Latin origin; whereas Catalan filosa \"spinning wheel\" and templa \"temple\", of Latin origin, contrast with Spanish rueca and sien, of Germanic origin.",
"title": "Classification and relationship with other Romance languages"
},
{
"paragraph_id": 39,
"text": "The same happens with Arabic loanwords. Thus, Catalan alfàbia \"large earthenware jar\" and rajola \"tile\", of Arabic origin, contrast with Spanish tinaja and teja, of Latin origin; whereas Catalan oli \"oil\" and oliva \"olive\", of Latin origin, contrast with Spanish aceite and aceituna. However, the Arabic element is generally much more prevalent in Spanish.",
"title": "Classification and relationship with other Romance languages"
},
{
"paragraph_id": 40,
"text": "Situated between two large linguistic blocks (Iberian Romance and Gallo-Romance), Catalan has many unique lexical choices, such as enyorar \"to miss somebody\", apaivagar \"to calm somebody down\", and rebutjar \"reject\".",
"title": "Classification and relationship with other Romance languages"
},
{
"paragraph_id": 41,
"text": "Traditionally Catalan-speaking territories are sometimes called the Països Catalans (Catalan Countries), a denomination based on cultural affinity and common heritage, that has also had a subsequent political interpretation but no official status. Various interpretations of the term may include some or all of these regions.",
"title": "Geographic distribution"
},
{
"paragraph_id": 42,
"text": "The number of people known to be fluent in Catalan varies depending on the sources used. A 2004 study did not count the total number of speakers, but estimated a total of 9–9.5 million by matching the percentage of speakers to the population of each area where Catalan is spoken. The web site of the Generalitat de Catalunya estimated that as of 2004 there were 9,118,882 speakers of Catalan. These figures only reflect potential speakers; today it is the native language of only 35.6% of the Catalan population. According to Ethnologue, Catalan had 4.1 million native speakers and 5.1 million second-language speakers in 2021.",
"title": "Geographic distribution"
},
{
"paragraph_id": 43,
"text": "According to a 2011 study the total number of Catalan speakers is over 9.8 million, with 5.9 million residing in Catalonia. More than half of them speak Catalan as a second language, with native speakers being about 4.4 million of those (more than 2.8 in Catalonia). Very few Catalan monoglots exist; basically, virtually all of the Catalan speakers in Spain are bilingual speakers of Catalan and Spanish, with a sizable population of Spanish-only speakers of immigrant origin (typically born outside Catalonia or whose parents were both born outside Catalonia) existing in the major Catalan urban areas as well.",
"title": "Geographic distribution"
},
{
"paragraph_id": 44,
"text": "In Roussillon, only a minority of French Catalans speak Catalan nowadays, with French being the majority language for the inhabitants after a continued process of language shift. According to a 2019 survey by the Catalan government, 31.5% of the inhabitants of Catalonia have Catalan as first language at home whereas 52.7% have Spanish, 2.8% both Catalan and Spanish and 10.8% other languages.",
"title": "Geographic distribution"
},
{
"paragraph_id": 45,
"text": "Spanish is the most spoken language in Barcelona (according to the linguistic census held by the Government of Catalonia in 2013) and it is understood almost universally. According to this census of 2013 Catalan is also very commonly spoken in the city of 1,501,262: it is understood by 95% of the population, while 72.3% over the age of 2 can speak it (1,137,816), 79% can read it (1,246.555), and 53% can write it (835,080). The proportion in Barcelona who can speak it, 72.3%, is lower than that of the overall Catalan population, of whom 81.2% over the age of 15 speak the language. Knowledge of Catalan has increased significantly in recent decades thanks to a language immersion educational system. An important social characteristic of the Catalan language is that all the areas where it is spoken are bilingual in practice: together with the French language in Roussillon, with Italian in Alghero, with Spanish and French in Andorra and with Spanish in the rest of the territories.",
"title": "Geographic distribution"
},
{
"paragraph_id": 46,
"text": "(% of the population 15 years old and older).",
"title": "Geographic distribution"
},
{
"paragraph_id": 47,
"text": "(% of the population 15 years old and older).",
"title": "Geographic distribution"
},
{
"paragraph_id": 48,
"text": "",
"title": "Geographic distribution"
},
{
"paragraph_id": 49,
"text": "Catalan phonology varies by dialect. Notable features include:",
"title": "Phonology"
},
{
"paragraph_id": 50,
"text": "In contrast to other Romance languages, Catalan has many monosyllabic words, and these may end in a wide variety of consonants, including some consonant clusters. Additionally, Catalan has final obstruent devoicing, which gives rise to an abundance of such couplets as amic (\"male friend\") vs. amiga (\"female friend\").",
"title": "Phonology"
},
{
"paragraph_id": 51,
"text": "Central Catalan pronunciation is considered to be standard for the language. The descriptions below are mostly representative of this variety. For the differences in pronunciation between the different dialects, see the section on pronunciation of dialects in this article.",
"title": "Phonology"
},
{
"paragraph_id": 52,
"text": "Catalan has inherited the typical vowel system of Vulgar Latin, with seven stressed phonemes: /a ɛ e i ɔ o u/, a common feature in Western Romance, with the exception of Spanish. Balearic also has instances of stressed /ə/. Dialects differ in the different degrees of vowel reduction, and the incidence of the pair /ɛ e/.",
"title": "Phonology"
},
{
"paragraph_id": 53,
"text": "In Central Catalan, unstressed vowels reduce to three: /a e ɛ/ > [ə]; /o ɔ u/ > [u]; /i/ remains distinct. The other dialects have different vowel reduction processes (see the section pronunciation of dialects in this article).",
"title": "Phonology"
},
{
"paragraph_id": 54,
"text": "The consonant system of Catalan is rather conservative.",
"title": "Phonology"
},
{
"paragraph_id": 55,
"text": "Catalan sociolinguistics studies the situation of Catalan in the world and the different varieties that this language presents. It is a subdiscipline of Catalan philology and other affine studies and has as an objective to analyze the relation between the Catalan language, the speakers and the close reality (including the one of other languages in contact).",
"title": "Sociolinguistics"
},
{
"paragraph_id": 56,
"text": "The dialects of the Catalan language feature a relative uniformity, especially when compared to other Romance languages; both in terms of vocabulary, semantics, syntax, morphology, and phonology. Mutual intelligibility between dialects is very high, estimates ranging from 90% to 95%. The only exception is the isolated idiosyncratic Algherese dialect.",
"title": "Sociolinguistics"
},
{
"paragraph_id": 57,
"text": "Catalan is split in two major dialectal blocks: Eastern and Western. The main difference lies in the treatment of unstressed a and e; which have merged to /ə/ in Eastern dialects, but which remain distinct as /a/ and /e/ in Western dialects. There are a few other differences in pronunciation, verbal morphology, and vocabulary.",
"title": "Sociolinguistics"
},
{
"paragraph_id": 58,
"text": "Western Catalan comprises the two dialects of Northwestern Catalan and Valencian; the Eastern block comprises four dialects: Central Catalan, Balearic, Rossellonese, and Algherese. Each dialect can be further subdivided in several subdialects. The terms \"Catalan\" and \"Valencian\" (respectively used in Catalonia and the Valencian Community) refer to two varieties of the same language. There are two institutions regulating the two standard varieties, the Institute of Catalan Studies in Catalonia and the Valencian Academy of the Language in the Valencian Community.",
"title": "Sociolinguistics"
},
{
"paragraph_id": 59,
"text": "Central Catalan is considered the standard pronunciation of the language and has the largest number of speakers. It is spoken in the densely populated regions of the Barcelona province, the eastern half of the province of Tarragona, and most of the province of Girona.",
"title": "Sociolinguistics"
},
{
"paragraph_id": 60,
"text": "Catalan has an inflectional grammar. Nouns have two genders (masculine, feminine), and two numbers (singular, plural). Pronouns additionally can have a neuter gender, and some are also inflected for case and politeness, and can be combined in very complex ways. Verbs are split in several paradigms and are inflected for person, number, tense, aspect, mood, and gender. In terms of pronunciation, Catalan has many words ending in a wide variety of consonants and some consonant clusters, in contrast with many other Romance languages.",
"title": "Sociolinguistics"
},
{
"paragraph_id": 61,
"text": "Catalan has inherited the typical vowel system of Vulgar Latin, with seven stressed phonemes: /a ɛ e i ɔ o u/, a common feature in Western Romance, except Spanish. Balearic has also instances of stressed /ə/. Dialects differ in the different degrees of vowel reduction, and the incidence of the pair /ɛ e/.",
"title": "Sociolinguistics"
},
{
"paragraph_id": 62,
"text": "In Eastern Catalan (except Majorcan), unstressed vowels reduce to three: /a e ɛ/ > [ə]; /o ɔ u/ > [u]; /i/ remains distinct. There are a few instances of unreduced [e], [o] in some words. Algherese has lowered [ə] to [a].",
"title": "Sociolinguistics"
},
{
"paragraph_id": 63,
"text": "In Majorcan, unstressed vowels reduce to four: /a e ɛ/ follow the Eastern Catalan reduction pattern; however /o ɔ/ reduce to [o], with /u/ remaining distinct, as in Western Catalan.",
"title": "Sociolinguistics"
},
{
"paragraph_id": 64,
"text": "In Western Catalan, unstressed vowels reduce to five: /e ɛ/ > [e]; /o ɔ/ > [o]; /a u i/ remain distinct. This reduction pattern, inherited from Proto-Romance, is also found in Italian and Portuguese. Some Western dialects present further reduction or vowel harmony in some cases.",
"title": "Sociolinguistics"
},
{
"paragraph_id": 65,
"text": "Central, Western, and Balearic differ in the lexical incidence of stressed /e/ and /ɛ/. Usually, words with /ɛ/ in Central Catalan correspond to /ə/ in Balearic and /e/ in Western Catalan. Words with /e/ in Balearic almost always have /e/ in Central and Western Catalan as well. As a result, Central Catalan has a much higher incidence of /ɛ/.",
"title": "Sociolinguistics"
},
{
"paragraph_id": 66,
"text": "Western Catalan: In verbs, the ending for 1st-person present indicative is -e in verbs of the 1st conjugation and -∅ in verbs of the 2nd and 3rd conjugations in most of the Valencian Community, or -o in all verb conjugations in the Northern Valencian Community and Western Catalonia.E.g. parle, tem, sent (Valencian); parlo, temo, sento (Northwestern Catalan).",
"title": "Sociolinguistics"
},
{
"paragraph_id": 67,
"text": "Eastern Catalan: In verbs, the ending for 1st-person present indicative is -o, -i, or -∅ in all conjugations. E.g. parlo (Central), parl (Balearic), and parli (Northern), all meaning ('I speak').",
"title": "Sociolinguistics"
},
{
"paragraph_id": 68,
"text": "Western Catalan: In verbs, the inchoative endings are -isc/-esc, -ix, -ixen, -isca/-esca.",
"title": "Sociolinguistics"
},
{
"paragraph_id": 69,
"text": "Eastern Catalan: In verbs, the inchoative endings are -eixo, -eix, -eixen, -eixi.",
"title": "Sociolinguistics"
},
{
"paragraph_id": 70,
"text": "Western Catalan: In nouns and adjectives, maintenance of /n/ of medieval plurals in proparoxytone words.E.g. hòmens 'men', jóvens 'youth'.",
"title": "Sociolinguistics"
},
{
"paragraph_id": 71,
"text": "Eastern Catalan: In nouns and adjectives, loss of /n/ of medieval plurals in proparoxytone words.E.g. homes 'men', joves 'youth' (Ibicencan, however, follows the model of Western Catalan in this case).",
"title": "Sociolinguistics"
},
{
"paragraph_id": 72,
"text": "Despite its relative lexical unity, the two dialectal blocks of Catalan (Eastern and Western) show some differences in word choices. Any lexical divergence within any of the two groups can be explained as an archaism. Also, usually Central Catalan acts as an innovative element.",
"title": "Sociolinguistics"
},
{
"paragraph_id": 73,
"text": "Standard Catalan, virtually accepted by all speakers, is mostly based on Eastern Catalan, which is the most widely used dialect. Nevertheless, the standards of the Valencian Community and the Balearics admit alternative forms, mostly traditional ones, which are not current in eastern Catalonia.",
"title": "Standards"
},
{
"paragraph_id": 74,
"text": "The most notable difference between both standards is some tonic ⟨e⟩ accentuation, for instance: francès, anglès (IEC) – francés, anglés (AVL). Nevertheless, AVL's standard keeps the grave accent ⟨è⟩, while pronouncing it as /e/ rather than /ɛ/, in some words like: què ('what'), or València. Other divergences include the use of ⟨tl⟩ (AVL) in some words instead of ⟨tll⟩ like in ametla/ametlla ('almond'), espatla/espatlla ('back'), the use of elided demonstratives (este 'this', eixe 'that') in the same level as reinforced ones (aquest, aqueix) or the use of many verbal forms common in Valencian, and some of these common in the rest of Western Catalan too, like subjunctive mood or inchoative conjugation in -ix- at the same level as -eix- or the priority use of -e morpheme in 1st person singular in present indicative (-ar verbs): jo compre instead of jo compro ('I buy').",
"title": "Standards"
},
{
"paragraph_id": 75,
"text": "In the Balearic Islands, IEC's standard is used but adapted for the Balearic dialect by the University of the Balearic Islands's philological section. In this way, for instance, IEC says it is correct writing cantam as much as cantem ('we sing'), but the university says that the priority form in the Balearic Islands must be cantam in all fields. Another feature of the Balearic standard is the non-ending in the 1st person singular present indicative: jo compr ('I buy'), jo tem ('I fear'), jo dorm ('I sleep').",
"title": "Standards"
},
{
"paragraph_id": 76,
"text": "In Alghero, the IEC has adapted its standard to the Algherese dialect. In this standard one can find, among other features: the definite article lo instead of el, special possessive pronouns and determinants la mia ('mine'), lo sou/la sua ('his/her'), lo tou/la tua ('yours'), and so on, the use of -v- /v/ in the imperfect tense in all conjugations: cantava, creixiva, llegiva; the use of many archaic words, usual words in Algherese: manco instead of menys ('less'), calqui u instead of algú ('someone'), qual/quala instead of quin/quina ('which'), and so on; and the adaptation of weak pronouns. In 1999, Catalan (Algherese dialect) was among the twelve minority languages officially recognized as Italy's \"historical linguistic minorities\" by the Italian State under Law No. 482/1999.",
"title": "Standards"
},
{
"paragraph_id": 77,
"text": "In 2011, the Aragonese government passed a decree approving the statutes of a new language regulator of Catalan in La Franja (the so-called Catalan-speaking areas of Aragon) as originally provided for by Law 10/2009. The new entity, designated as Institut Aragonès del Català, shall allow a facultative education in Catalan and a standardization of the Catalan language in La Franja.",
"title": "Standards"
},
{
"paragraph_id": 78,
"text": "Valencian is classified as a Western dialect, along with the northwestern varieties spoken in Western Catalonia (provinces of Lleida and the western half of Tarragona). Central Catalan has 90% to 95% inherent intelligibility for speakers of Valencian.",
"title": " Status of Valencian"
},
{
"paragraph_id": 79,
"text": "Linguists, including Valencian scholars, deal with Catalan and Valencian as the same language. The official regulating body of the language of the Valencian Community, the Valencian Academy of Language (Acadèmia Valenciana de la Llengua, AVL) declares the linguistic unity between Valencian and Catalan varieties.",
"title": " Status of Valencian"
},
{
"paragraph_id": 80,
"text": "[T]he historical patrimonial language of the Valencian people, from a philological standpoint, is the same shared by the autonomous communities of Catalonia and Balearic islands, and Principality of Andorra. Additionally, it is the patrimonial historical language of other territories of the ancient Crown of Aragon [...] The different varieties of these territories constitute a language, that is, a \"linguistic system\" [...] From this group of varieties, Valencian has the same hierarchy and dignity as any other dialectal modality of that linguistic system [...]",
"title": " Status of Valencian"
},
{
"paragraph_id": 81,
"text": "Ruling of the Valencian Language Academy of 9 February 2005, extract of point 1.",
"title": " Status of Valencian"
},
{
"paragraph_id": 82,
"text": "The AVL, created by the Valencian parliament, is in charge of dictating the official rules governing the use of Valencian, and its standard is based on the Norms of Castelló (Normes de Castelló). Currently, everyone who writes in Valencian uses this standard, except the Royal Academy of Valencian Culture (Acadèmia de Cultura Valenciana, RACV), which uses for Valencian an independent standard.",
"title": " Status of Valencian"
},
{
"paragraph_id": 83,
"text": "Despite the position of the official organizations, an opinion poll carried out between 2001 and 2004 showed that the majority of the Valencian people consider Valencian different from Catalan. This position is promoted by people who do not use Valencian regularly. Furthermore, the data indicates that younger generations educated in Valencian are much less likely to hold these views. A minority of Valencian scholars active in fields other than linguistics defends the position of the Royal Academy of Valencian Culture (Acadèmia de Cultura Valenciana, RACV), which uses for Valencian a standard independent from Catalan.",
"title": " Status of Valencian"
},
{
"paragraph_id": 84,
"text": "This clash of opinions has sparked much controversy. For example, during the drafting of the European Constitution in 2004, the Spanish government supplied the EU with translations of the text into Basque, Galician, Catalan, and Valencian, but the latter two were identical.",
"title": " Status of Valencian"
},
{
"paragraph_id": 85,
"text": "Despite its relative lexical unity, the two dialectal blocks of Catalan (Eastern and Western) show some differences in word choices. Any lexical divergence within any of the two groups can be explained as an archaism. Also, usually Central Catalan acts as an innovative element.",
"title": "Vocabulary"
},
{
"paragraph_id": 86,
"text": "Literary Catalan allows the use of words from different dialects, except those of very restricted use. However, from the 19th century onwards, there has been a tendency towards favoring words of Northern dialects to the detriment of others, even though nowadays there is a greater freedom of choice.",
"title": "Vocabulary"
},
{
"paragraph_id": 87,
"text": "Like other languages, Catalan has a large list of loanwords from Greek and Latin. This process started very early, and one can find such examples in Ramon Llull's work. In the 14th and 15th centuries Catalan had a far greater number of Greco-Latin loanwords than other Romance languages, as is attested for example in Roís de Corella's writings. The incorporation of learned, or \"bookish\" words from its own ancestor language, Latin, into Catalan is arguably another form of lexical borrowing through the influence of written language and the liturgical language of the Church. Throughout the Middle Ages and into the early modern period, most literate Catalan speakers were also literate in Latin; and thus they easily adopted Latin words into their writing—and eventually speech—in Catalan.",
"title": "Vocabulary"
},
{
"paragraph_id": 88,
"text": "The process of morphological derivation in Catalan follows the same principles as the other Romance languages, where agglutination is common. Many times, several affixes are appended to a preexisting lexeme, and some sound alternations can occur, for example elèctric [əˈlɛktrik] (\"electrical\") vs. electricitat [ələktrisiˈtat]. Prefixes are usually appended to verbs, as in preveure (\"foresee\").",
"title": "Vocabulary"
},
{
"paragraph_id": 89,
"text": "There is greater regularity in the process of word-compounding, where one can find compounded words formed much like those in English.",
"title": "Vocabulary"
},
{
"paragraph_id": 90,
"text": "Catalan uses the Latin script, with some added symbols and digraphs. The Catalan orthography is systematic and largely phonologically based. Standardization of Catalan was among the topics discussed during the First International Congress of the Catalan Language, held in Barcelona October 1906. Subsequently, the Philological Section of the Institut d'Estudis Catalans (IEC, founded in 1911) published the Normes ortogràfiques in 1913 under the direction of Antoni Maria Alcover and Pompeu Fabra. In 1932, Valencian writers and intellectuals gathered in Castelló de la Plana to make a formal adoption of the so-called Normes de Castelló, a set of guidelines following Pompeu Fabra's Catalan language norms.",
"title": "Writing system"
},
{
"paragraph_id": 91,
"text": "The grammar of Catalan is similar to other Romance languages. Features include:",
"title": "Grammar"
},
{
"paragraph_id": 92,
"text": "In gender inflection, the most notable feature is (compared to Portuguese, Spanish or Italian), the loss of the typical masculine suffix -o. Thus, the alternance of -o/-a, has been replaced by ø/-a. There are only a few exceptions, like minso/minsa (\"scarce\"). Many not completely predictable morphological alternations may occur, such as:",
"title": "Grammar"
},
{
"paragraph_id": 93,
"text": "Catalan has few suppletive couplets, like Italian and Spanish, and unlike French. Thus, Catalan has noi/noia (\"boy\"/\"girl\") and gall/gallina (\"cock\"/\"hen\"), whereas French has garçon/fille and coq/poule.",
"title": "Grammar"
},
{
"paragraph_id": 94,
"text": "There is a tendency to abandon traditionally gender-invariable adjectives in favor of marked ones, something prevalent in Occitan and French. Thus, one can find bullent/bullenta (\"boiling\") in contrast with traditional bullent/bullent.",
"title": "Grammar"
},
{
"paragraph_id": 95,
"text": "As in the other Western Romance languages, the main plural expression is the suffix -s, which may create morphological alternations similar to the ones found in gender inflection, albeit more rarely. The most important one is the addition of -o- before certain consonant groups, a phonetic phenomenon that does not affect feminine forms: el pols/els polsos (\"the pulse\"/\"the pulses\") vs. la pols/les pols (\"the dust\"/\"the dusts\").",
"title": "Grammar"
},
{
"paragraph_id": 96,
"text": "The inflection of determinatives is complex, specially because of the high number of elisions, but is similar to the neighboring languages. Catalan has more contractions of preposition + article than Spanish, like dels (\"of + the [plural]\"), but not as many as Italian (which has sul, col, nel, etc.).",
"title": "Grammar"
},
{
"paragraph_id": 97,
"text": "Central Catalan has abandoned almost completely unstressed possessives (mon, etc.) in favor of constructions of article + stressed forms (el meu, etc.), a feature shared with Italian.",
"title": "Grammar"
},
{
"paragraph_id": 98,
"text": "The morphology of Catalan personal pronouns is complex, especially in unstressed forms, which are numerous (13 distinct forms, compared to 11 in Spanish or 9 in Italian). Features include the gender-neutral ho and the great degree of freedom when combining different unstressed pronouns (65 combinations).",
"title": "Grammar"
},
{
"paragraph_id": 99,
"text": "Catalan pronouns exhibit T–V distinction, like all other Romance languages (and most European languages, but not Modern English). This feature implies the use of a different set of second person pronouns for formality.",
"title": "Grammar"
},
{
"paragraph_id": 100,
"text": "This flexibility allows Catalan to use extraposition extensively, much more than French or Spanish. Thus, Catalan can have m'hi recomanaren (\"they recommended me to him\"), whereas in French one must say ils m'ont recommandé à lui, and Spanish me recomendaron a él. This allows the placement of almost any nominal term as a sentence topic, without having to use so often the passive voice (as in French or English), or identifying the direct object with a preposition (as in Spanish).",
"title": "Grammar"
},
{
"paragraph_id": 101,
"text": "Like all the Romance languages, Catalan verbal inflection is more complex than the nominal. Suffixation is omnipresent, whereas morphological alternations play a secondary role. Vowel alternances are active, as well as infixation and suppletion. However, these are not as productive as in Spanish, and are mostly restricted to irregular verbs.",
"title": "Grammar"
},
{
"paragraph_id": 102,
"text": "The Catalan verbal system is basically common to all Western Romance, except that most dialects have replaced the synthetic indicative perfect with a periphrastic form of anar (\"to go\") + infinitive.",
"title": "Grammar"
},
{
"paragraph_id": 103,
"text": "Catalan verbs are traditionally divided into three conjugations, with vowel themes -a-, -e-, -i-, the last two being split into two subtypes. However, this division is mostly theoretical. Only the first conjugation is nowadays productive (with about 3500 common verbs), whereas the third (the subtype of servir, with about 700 common verbs) is semiproductive. The verbs of the second conjugation are fewer than 100, and it is not possible to create new ones, except by compounding.",
"title": "Grammar"
},
{
"paragraph_id": 104,
"text": "The grammar of Catalan follows the general pattern of Western Romance languages. The primary word order is subject–verb–object. However, word order is very flexible. Commonly, verb-subject constructions are used to achieve a semantic effect. The sentence \"The train has arrived\" could be translated as Ha arribat el tren or El tren ha arribat. Both sentences mean \"the train has arrived\", but the former puts a focus on the train, while the latter puts a focus on the arrival. This subtle distinction is described as \"what you might say while waiting in the station\" versus \"what you might say on the train.\"",
"title": "Grammar"
},
{
"paragraph_id": 105,
"text": "In Spain, every person officially has two surnames, one of which is the father's first surname and the other is the mother's first surname. The law contemplates the possibility of joining both surnames with the Catalan conjunction i (\"and\").",
"title": "Catalan names"
},
{
"paragraph_id": 106,
"text": "Selected text from Manuel de Pedrolo's 1970 novel Un amor fora ciutat (\"A love affair outside the city\").",
"title": "Sample text"
},
{
"paragraph_id": 107,
"text": "Institutions",
"title": "External links"
},
{
"paragraph_id": 108,
"text": "About the Catalan language",
"title": "External links"
},
{
"paragraph_id": 109,
"text": "Monolingual dictionaries",
"title": "External links"
},
{
"paragraph_id": 110,
"text": "Bilingual and multilingual dictionaries",
"title": "External links"
},
{
"paragraph_id": 111,
"text": "Automated translation systems",
"title": "External links"
},
{
"paragraph_id": 112,
"text": "Phrasebooks",
"title": "External links"
},
{
"paragraph_id": 113,
"text": "Learning resources",
"title": "External links"
},
{
"paragraph_id": 114,
"text": "Catalan-language online encyclopedia",
"title": "External links"
}
] | Catalan, known in the Valencian Community and Carche as Valencian, is a Western Romance language. It is the official language of Andorra, and an official language of two autonomous communities in eastern Spain: Catalonia and the Balearic Islands. It is also an official language in Valencia, where it is called Valencian. It has semi-official status in the Italian comune of Alghero, and it is spoken in the Pyrénées-Orientales department of France and in two further areas in eastern Spain: the eastern strip of Aragon and the Carche area in the Region of Murcia. The Catalan-speaking territories are often called the Països Catalans or "Catalan Countries". The language evolved from Vulgar Latin in the Middle Ages around the eastern Pyrenees. Nineteenth-century Spain saw a Catalan literary revival, culminating in the early 1900s. | 2001-10-17T15:37:51Z | 2023-12-30T00:31:09Z | [
"Template:Sm",
"Template:Cbignore",
"Template:Circa",
"Template:Wiktla",
"Template:Reflist",
"Template:Cite journal",
"Template:Smallcaps",
"Template:IPAlink",
"Template:Curlie",
"Template:Authority control",
"Template:Angbr IPA",
"Template:Anchor",
"Template:Div col end",
"Template:Short description",
"Template:See also",
"Template:Wiktca",
"Template:Citation needed",
"Template:Clear",
"Template:Clarify",
"Template:Div col",
"Template:Cite news",
"Template:Sister project links",
"Template:Image label small",
"Template:Note",
"Template:Small",
"Template:Navboxes",
"Template:POV inline",
"Template:Vague",
"Template:Notelist",
"Template:Citation",
"Template:Use dmy dates",
"Template:IPA",
"Template:IPA-ca",
"Template:Further",
"Template:Wiktspa",
"Template:Redirect",
"Template:IPAc-en",
"Template:IPAblink",
"Template:Expand section",
"Template:Wikisourcelang",
"Template:Infobox language",
"Template:Image label end",
"Template:Angbr",
"Template:Refbegin",
"Template:Clarify span",
"Template:Pp-pc",
"Template:Dead link",
"Template:Webarchive",
"Template:Harvnb",
"Template:Main",
"Template:Wikt-lang",
"Template:Image label begin",
"Template:Efn",
"Template:Cite book",
"Template:Lang",
"Template:Ref",
"Template:Quote box",
"Template:Portal",
"Template:Refend",
"Template:Sfn",
"Template:Ill",
"Template:Cite web",
"Template:External links"
] | https://en.wikipedia.org/wiki/Catalan_language |
5,285 | STS-51-F | STS-51-F (also known as Spacelab 2) was the 19th flight of NASA's Space Shuttle program and the eighth flight of Space Shuttle Challenger. It launched from Kennedy Space Center, Florida, on July 29, 1985, and landed eight days later on August 6, 1985.
While STS-51-F's primary payload was the Spacelab 2 laboratory module, the payload that received the most publicity was the Carbonated Beverage Dispenser Evaluation, which was an experiment in which both Coca-Cola and Pepsi tried to make their carbonated drinks available to astronauts. A helium-cooled infrared telescope (IRT) was also flown on this mission, and while it did have some problems, it observed 60% of the galactic plane in infrared light.
During launch, Challenger experienced multiple sensor failures in its center SSME (Engine 1), which led to the engine shutting down and forced the shuttle to perform an "Abort to Orbit" (ATO) emergency procedure. It is the only Shuttle mission to have carried out an abort after launching. As a result of the ATO, the mission was carried out at a slightly lower orbital altitude.
As with previous Spacelab missions, the crew was divided between two 12-hour shifts. Acton, Bridges and Henize made up the "Red Team" while Bartoe, England and Musgrave comprised the "Blue Team"; commander Fullerton could take either shift when needed. Challenger carried two Extravehicular Mobility Units (EMU) in the event of an emergency spacewalk, which would have been performed by England and Musgrave.
STS-51-F's first launch attempt on July 12, 1985, was halted with the countdown at T−3 seconds after main engine ignition, when a malfunction of the number two RS-25 coolant valve caused an automatic launch abort. Challenger launched successfully on its second attempt on July 29, 1985, at 5:00 p.m. EDT, after a delay of 1 hour 37 minutes due to a problem with the table maintenance block update uplink.
At 3 minutes 31 seconds into the ascent, one of the center engine's two high-pressure fuel turbopump turbine discharge temperature sensors failed. Two minutes and twelve seconds later, the second sensor failed, causing the shutdown of the center engine. This was the only in-flight RS-25 failure of the Space Shuttle program. Approximately 8 minutes into the flight, one of the same temperature sensors in the right engine failed, and the remaining right-engine temperature sensor displayed readings near the redline for engine shutdown. Booster Systems Engineer Jenny M. Howard acted quickly to recommend that the crew inhibit any further automatic RS-25 shutdowns based on readings from the remaining sensors, preventing the potential shutdown of a second engine and a possible abort mode that may have resulted in the loss of crew and vehicle (LOCV).
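The sequence above amounts to a redundancy-management problem: each engine's redline protection relies on sensor readings, so a failed sensor can mimic an over-temperature condition, and inhibiting the automatic limits trades that protection for immunity to further sensor failures. The following toy sketch (Python) illustrates the idea only; it is not the actual Shuttle flight software, and the redline value, readings, function and variable names are all hypothetical:

```python
REDLINE_TEMP = 1960.0  # hypothetical turbine discharge temperature redline (units arbitrary)

def auto_shutdown_commanded(readings, limits_inhibited=False):
    """Toy redline monitor: command an engine shutdown when all of its temperature
    readings exceed the redline, unless automatic limit shutdowns have been
    inhibited by the crew."""
    if limits_inhibited:
        return False
    return all(temp > REDLINE_TEMP for temp in readings)

# Center engine: both discharge-temperature sensors produce spuriously high readings,
# so the monitor commands a shutdown even though the engine itself is healthy.
print(auto_shutdown_commanded([2500.0, 2500.0]))                         # True

# Right engine: one failed sensor plus one reading near redline. With limits
# inhibited, as the flight controller recommended, no further shutdown occurs.
print(auto_shutdown_commanded([2500.0, 1975.0], limits_inhibited=True))  # False
```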
The failed RS-25 resulted in an Abort to Orbit (ATO) trajectory, whereby the shuttle achieved a lower-than-planned orbital altitude. The plan had been for a 385 km (239 mi) by 382 km (237 mi) orbit, but the mission was carried out at 265 km (165 mi) by 262 km (163 mi).
STS-51-F's primary payload was the laboratory module Spacelab 2. A special part of the modular Spacelab system, the "igloo", which was located at the head of a three-pallet train, provided on-site support to instruments mounted on pallets. The main mission objective was to verify performance of Spacelab systems, determine the interface capability of the orbiter, and measure the environment created by the spacecraft. Experiments covered life sciences, plasma physics, astronomy, high-energy astrophysics, solar physics, atmospheric physics and technology research. Despite mission replanning necessitated by Challenger's abort to orbit trajectory, the Spacelab mission was declared a success.
The flight marked the first time the European Space Agency (ESA) Instrument Pointing System (IPS) was tested in orbit. This unique pointing instrument was designed with an accuracy of one arcsecond. Initially, some problems were experienced when it was commanded to track the Sun, but a series of software fixes were made and the problem was corrected. In addition, Anthony W. England became the second amateur radio operator to transmit from space during the mission.
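For a sense of scale (an illustrative calculation, not a figure from the mission record), the short Python snippet below converts one arcsecond to radians and to the transverse offset it corresponds to at a distance of 1 km:

```python
import math

def arcsec_to_rad(arcsec: float) -> float:
    """Convert an angle from arcseconds to radians (1 degree = 3600 arcseconds)."""
    return math.radians(arcsec / 3600.0)

one_arcsec = arcsec_to_rad(1.0)                  # ~4.85e-6 rad
offset_at_1km_mm = one_arcsec * 1_000 * 1_000    # small-angle approximation, in millimetres
print(f"1 arcsecond = {one_arcsec:.3e} rad")
print(f"transverse offset at 1 km: {offset_at_1km_mm:.2f} mm")
```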
The Spacelab Infrared Telescope (IRT) was also flown on the mission. The IRT was a 15.2 cm (6.0 in) aperture helium-cooled infrared telescope, observing light at wavelengths between 1.7 and 118 μm. Heat emissions from the Shuttle were thought to have corrupted the long-wavelength data, but the instrument still returned useful astronomical data. Another problem was that a piece of mylar insulation broke loose and floated into the line-of-sight of the telescope. The IRT collected infrared data on 60% of the galactic plane (see also List of largest infrared telescopes). A later space mission that experienced a stray-light problem from debris was the Gaia astrometry spacecraft, launched in 2013 by ESA; the source of the stray light was later identified as fibers of the sunshield protruding beyond the edges of the shield.
The Plasma Diagnostics Package (PDP), which had been previously flown on STS-3, made its return on the mission, and was part of a set of plasma physics experiments designed to study the Earth's ionosphere. During the third day of the mission, it was grappled out of the payload bay by the Remote Manipulator System (Canadarm) and released for six hours. During this time, Challenger maneuvered around the PDP as part of a targeted proximity operations exercise. The PDP was successfully grappled by the Canadarm and returned to the payload bay at the beginning of the fourth day of the mission.
In a heavily publicized marketing experiment, astronauts aboard STS-51-F drank carbonated beverages from specially designed cans from Cola Wars competitors Coca-Cola and Pepsi. According to Acton, after Coke developed its experimental dispenser for an earlier shuttle flight, Pepsi insisted to American president Ronald Reagan that Coke should not be the first cola in space. The experiment was delayed until Pepsi could develop its own system, and the two companies' products were assigned to STS-51-F.
Blue Team tested Coke, and Red Team tested Pepsi. As part of the experiment, each team was photographed with the cola logo. Acton said that while the sophisticated Coke system "dispensed soda kind of like what we're used to drinking on Earth", the Pepsi can was a shaving cream can with the Pepsi logo on a paper wrapper, which "dispensed soda filled with bubbles" that was "not very drinkable". Acton said that when he gives speeches in schools, audiences are much more interested in hearing about the cola experiment than in solar physics. Post-flight, the astronauts revealed that they preferred Tang, in part because it could be mixed on-orbit with existing chilled-water supplies, whereas there was no dedicated refrigeration equipment on board to chill the cans, which also fizzed excessively in microgravity.
In an experiment during the mission, thruster rockets were fired at a point over Tasmania and also above Boston to create two "holes" – plasma depletion regions – in the ionosphere. A worldwide group of geophysicists collaborated with the observations made from Spacelab 2.
Challenger landed at Edwards Air Force Base, California, on August 6, 1985, at 12:45:26 p.m. PDT. Its rollout distance was 2,612 m (8,570 ft). The mission had been extended by 17 orbits for additional payload activities due to the Abort to Orbit. The orbiter arrived back at Kennedy Space Center on August 11, 1985.
The mission insignia was designed by Houston, Texas artist Skip Bradley. Space Shuttle Challenger is depicted ascending toward the heavens in search of new knowledge in the field of solar and stellar astronomy, with its Spacelab 2 payload. The constellations Leo and Orion are shown in the positions they were in relative to the Sun during the flight. The nineteen stars indicate that the mission is the 19th shuttle flight.
One of the purposes of the mission was to test how suitable the Shuttle was for conducting infrared observations, and the IRT was operated on this mission. However, the orbiter was found to have some drawbacks for infrared astronomy, and this led to later infrared telescopes being flown as free-flying spacecraft rather than attached to the Shuttle orbiter. | [
Classical period (music)

The Classical period was an era of classical music between roughly 1750 and 1820.
The Classical period falls between the Baroque and the Romantic periods. Classical music has a lighter, clearer texture than Baroque music, but a more varied use of musical form, which is, in simpler terms, the overall structure and organization of a piece. It is mainly homophonic, using a clear melody line over a subordinate chordal accompaniment, but counterpoint was by no means forgotten, especially in liturgical vocal music and, later in the period, secular instrumental music. It also makes use of the style galant, which emphasized light elegance in place of the Baroque's dignified seriousness and impressive grandeur. Variety and contrast within a piece became more pronounced than before, and the orchestra increased in size, range, and power.
The harpsichord was replaced as the main keyboard instrument by the piano (or fortepiano). Unlike the harpsichord, which plucks strings with quills, pianos strike the strings with leather-covered hammers when the keys are pressed, which enables the performer to play louder or softer (hence the original name "fortepiano," literally "loud soft") and play with more expression; in contrast, the force with which a performer plays the harpsichord keys does not change the sound. Instrumental music was considered important by Classical period composers. The main kinds of instrumental music were the sonata, trio, string quartet, quintet, symphony (performed by an orchestra) and the solo concerto, which featured a virtuoso solo performer playing a solo work for violin, piano, flute, or another instrument, accompanied by an orchestra. Vocal music, such as songs for a singer and piano (notably the work of Schubert), choral works, and opera (a staged dramatic work for singers and orchestra) were also important during this period.
The best-known composers from this period are Joseph Haydn, Wolfgang Amadeus Mozart, Ludwig van Beethoven, and Franz Schubert; other names in this period include: Carl Philipp Emanuel Bach, Johann Christian Bach, Luigi Boccherini, Domenico Cimarosa, Joseph Martin Kraus, Muzio Clementi, Christoph Willibald Gluck, Carl Ditters von Dittersdorf, André Grétry, Pierre-Alexandre Monsigny, Leopold Mozart, Michael Haydn, Giovanni Paisiello, Johann Baptist Wanhal, François-André Danican Philidor, Niccolò Piccinni, Antonio Salieri, Etienne Nicolas Mehul, Georg Christoph Wagenseil, Georg Matthias Monn, Johann Gottlieb Graun, Carl Heinrich Graun, Franz Benda, Georg Anton Benda, Johann Georg Albrechtsberger, Mauro Giuliani, Christian Cannabich and the Chevalier de Saint-Georges. Beethoven is regarded either as a Romantic composer or a Classical period composer who was part of the transition to the Romantic era. Schubert is also a transitional figure, as were Johann Nepomuk Hummel, Luigi Cherubini, Gaspare Spontini, Gioachino Rossini, Carl Maria von Weber, John Field, Jan Ladislav Dussek and Niccolò Paganini. The period is sometimes referred to as the era of Viennese Classicism (German: Wiener Klassik), since Gluck, Haydn, Salieri, Mozart, Beethoven, and Schubert all worked in Vienna.
In the middle of the 18th century, Europe began to move toward a new style in architecture, literature, and the arts, generally known as Neoclassicism. This style sought to emulate the ideals of Classical antiquity, especially those of Classical Greece. Classical music used formality and emphasis on order and hierarchy, and a "clearer", "cleaner" style that used clearer divisions between parts (notably a clear, single melody accompanied by chords), brighter contrasts and "tone colors" (achieved by the use of dynamic changes and modulations to more keys). In contrast with the richly layered music of the Baroque era, Classical music moved towards simplicity rather than complexity. In addition, the typical size of orchestras began to increase, giving orchestras a more powerful sound.
The remarkable development of ideas in "natural philosophy" had already established itself in the public consciousness. In particular, Newton's physics was taken as a paradigm: structures should be well-founded in axioms and be both well-articulated and orderly. This taste for structural clarity began to affect music, which moved away from the layered polyphony of the Baroque period toward a style known as homophony, in which the melody is played over a subordinate harmony. This move meant that chords became a much more prevalent feature of music, even if they interrupted the melodic smoothness of a single part. As a result, the tonal structure of a piece of music became more audible.
The new style was also encouraged by changes in the economic order and social structure. As the 18th century progressed, the nobility became the primary patrons of instrumental music, while public taste increasingly preferred lighter comic operas. This led to changes in the way music was performed, the most crucial of which was the move to standard instrumental groups and the reduction in the importance of the continuo – the rhythmic and harmonic groundwork of a piece of music, typically played by a keyboard (harpsichord or organ) and usually accompanied by a varied group of bass instruments, including cello, double bass, bass viol, and theorbo. One way to trace the decline of the continuo and its figured chords is to examine the disappearance of the term obbligato, meaning a mandatory instrumental part in a work of chamber music. In Baroque compositions, additional instruments could be added to the continuo group according to the preference of the group or its leader; in Classical compositions, every part was specified by the composer, even if not always written out in full, so the term "obbligato" became redundant. By 1800, basso continuo was practically extinct, except for the occasional use of a pipe organ continuo part in a religious Mass in the early 1800s.
Economic changes also had the effect of altering the balance of availability and quality of musicians. While in the late Baroque a major composer would have the entire musical resources of a town to draw on, the musical forces available at an aristocratic hunting lodge or small court were smaller and more fixed in their level of ability. This was a spur to having simpler parts for ensemble musicians to play, and in the case of a resident virtuoso group, a spur to writing spectacular, idiomatic parts for certain instruments, as in the case of the Mannheim orchestra, or virtuoso solo parts for particularly skilled violinists or flutists. In addition, audiences' appetite for a continual supply of new music carried over from the Baroque. This meant that works had to be performable with, at best, one or two rehearsals. Even after 1790, Mozart wrote about "the rehearsal", with the implication that his concerts would have only one rehearsal.
Since there was a greater emphasis on a single melodic line, there was greater emphasis on notating that line for dynamics and phrasing. This contrasts with the Baroque era, when melodies were typically written with no dynamics, phrasing marks or ornaments, as it was assumed that the performer would improvise these elements on the spot. In the Classical era, it became more common for composers to indicate where they wanted performers to play ornaments such as trills or turns. The simplification of texture made such instrumental detail more important, and also made the use of characteristic rhythms, such as attention-getting opening fanfares, the funeral march rhythm, or the minuet genre, more important in establishing and unifying the tone of a single movement.
The Classical period also saw the gradual development of sonata form, a set of structural principles for music that reconciled the Classical preference for melodic material with harmonic development, which could be applied across musical genres. The sonata itself continued to be the principal form for solo and chamber music, while later in the Classical period the string quartet became a prominent genre. The symphony form for orchestra was created in this period (this is popularly attributed to Joseph Haydn). The concerto grosso (a concerto for more than one musician), a very popular form in the Baroque era, began to be replaced by the solo concerto, featuring only one soloist. Composers began to place more importance on the particular soloist's ability to show off virtuoso skills, with challenging, fast scale and arpeggio runs. Nonetheless, some concerti grossi remained, the most famous being Mozart's Sinfonia Concertante for Violin and Viola in E-flat major.
In the classical period, the theme consists of phrases with contrasting melodic figures and rhythms. These phrases are relatively brief, typically four bars in length, and can occasionally seem sparse or terse. The texture is mainly homophonic, with a clear melody above a subordinate chordal accompaniment, for instance an Alberti bass. This contrasts with the practice in Baroque music, where a piece or movement would typically have only one musical subject, which would then be worked out in a number of voices according to the principles of counterpoint, while maintaining a consistent rhythm or metre throughout. As a result, Classical music tends to have a lighter, clearer texture than the Baroque. The classical style draws on the style galant, a musical style which emphasised light elegance in place of the Baroque's dignified seriousness and impressive grandeur.
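As an aside, the Alberti-bass figuration mentioned above is easy to make concrete. The short Python sketch below is purely illustrative (the note names and the conventional lowest–highest–middle–highest ordering are assumptions drawn from standard textbook descriptions, not from this article): it breaks one chord into the repeating pattern that supplies a subordinate accompaniment beneath a melody.

```python
# Minimal sketch of an Alberti bass: a three-note chord is broken into the
# repeating order lowest -> highest -> middle -> highest, giving a steady
# accompaniment figure beneath a single melodic line (homophony).

def alberti_bass(chord, beats=8):
    """Arpeggiate a (low, middle, high) chord in Alberti-bass order."""
    low, middle, high = chord
    pattern = [low, high, middle, high]
    return [pattern[i % len(pattern)] for i in range(beats)]

# A C-major accompaniment figure of the kind that opens Mozart's
# Piano Sonata in C, K. 545:
print(alberti_bass(("C4", "E4", "G4")))
# ['C4', 'G4', 'E4', 'G4', 'C4', 'G4', 'E4', 'G4']
```

The point of the figuration is textural: the harmony sounds on every beat, yet the ear hears it as accompaniment rather than as an independent contrapuntal voice.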
Structurally, Classical music generally has a clear musical form, with a well-defined contrast between tonic and dominant, introduced by clear cadences. Dynamics are used to highlight the structural characteristics of the piece. In particular, sonata form and its variants were developed during the early classical period and were frequently used. The Classical approach to structure again contrasts with the Baroque, where a composition would normally move between tonic and dominant and back again, but through a continual progression of chord changes and without a sense of "arrival" at the new key. While counterpoint was less emphasised in the classical period, it was by no means forgotten, especially later in the period, and composers still used counterpoint in "serious" works such as symphonies and string quartets, as well as religious pieces, such as Masses.
The classical musical style was supported by technical developments in instruments. The widespread adoption of equal temperament made classical musical structure possible, by ensuring that cadences in all keys sounded similar. The fortepiano and then the pianoforte replaced the harpsichord, enabling more dynamic contrast and more sustained melodies. Over the Classical period, keyboard instruments became richer, more sonorous and more powerful.
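As a rough illustration of the claim about equal temperament, the sketch below (illustrative Python; the A4 = 440 Hz reference pitch is a modern convention assumed for the example and is not stated in the article) computes 12-tone equal-temperament frequencies, in which every semitone uses the identical ratio 2^(1/12), so an interval such as a perfect fifth has the same ratio no matter which key it starts from – which is why cadences in all keys sound alike.

```python
# Illustrative sketch of 12-tone equal temperament: each semitone multiplies
# the frequency by 2**(1/12), so every key is built from identical interval
# ratios. A4 = 440 Hz is an assumed modern reference pitch.

A4_HZ = 440.0

def equal_tempered_frequency(semitones_above_a4):
    """Frequency (in Hz) of the pitch a given number of semitones above A4."""
    return A4_HZ * 2 ** (semitones_above_a4 / 12)

# A perfect fifth spans 7 semitones and has the same ratio from any note:
fifth_from_a = equal_tempered_frequency(7) / equal_tempered_frequency(0)
fifth_from_c = equal_tempered_frequency(10) / equal_tempered_frequency(3)
print(round(fifth_from_a, 6), round(fifth_from_c, 6))  # both 1.498307
```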
The orchestra increased in size and range, and became more standardised. The basso continuo role of the harpsichord or pipe organ fell out of orchestral use between 1750 and 1775, leaving the string section as the orchestra's core. Woodwinds became a self-contained section, consisting of clarinets, oboes, flutes and bassoons.
While vocal music such as comic opera was popular, great importance was given to instrumental music. The main kinds of instrumental music were the sonata, trio, string quartet, quintet, symphony, concerto (usually for a virtuoso solo instrument accompanied by orchestra), and light pieces such as serenades and divertimentos. Sonata form developed and became the most important form. It was used to build up the first movement of most large-scale works in symphonies and string quartets. Sonata form was also used in other movements and in single, standalone pieces such as overtures.
In his book The Classical Style, author and pianist Charles Rosen claims that from 1755 to 1775, composers groped for a new style that was more effectively dramatic. In the High Baroque period, dramatic expression was limited to the representation of individual affects (the "doctrine of affections", or what Rosen terms "dramatic sentiment"). For example, in Handel's oratorio Jephtha, the composer renders four emotions separately, one for each character, in the quartet "O, spare your daughter". Eventually this depiction of individual emotions came to be seen as simplistic and unrealistic; composers sought to portray multiple emotions, simultaneously or progressively, within a single character or movement ("dramatic action"). Thus in the finale of act 2 of Mozart's Die Entführung aus dem Serail, the lovers move "from joy through suspicion and outrage to final reconciliation."
Musically speaking, this "dramatic action" required more musical variety. Whereas Baroque music was characterized by seamless flow within individual movements and largely uniform textures, composers after the High Baroque sought to interrupt this flow with abrupt changes in texture, dynamics, harmony, or tempo. Among the stylistic developments which followed the High Baroque, the most dramatic came to be called Empfindsamkeit (roughly "sensitive style"), and its best-known practitioner was Carl Philipp Emanuel Bach. Composers of this style employed the above-discussed interruptions in the most abrupt manner, and the music can sound illogical at times. The Italian composer Domenico Scarlatti took these developments further. His more than five hundred single-movement keyboard sonatas also contain abrupt changes of texture, but these changes are organized into periods, balanced phrases that became a hallmark of the classical style. However, Scarlatti's changes in texture still sound sudden and unprepared. The outstanding achievement of the great classical composers (Haydn, Mozart and Beethoven) was their ability to make these dramatic surprises sound logically motivated, so that "the expressive and the elegant could join hands."
Between the death of J. S. Bach and the maturity of Haydn and Mozart (roughly 1750–1770), composers experimented with these new ideas, which can be seen in the music of Bach's sons. Johann Christian developed a style which we now call Rococo, comprising simpler textures and harmonies, and which was "charming, undramatic, and a little empty." As mentioned previously, Carl Philipp Emanuel sought to increase drama, and his music was "violent, expressive, brilliant, continuously surprising, and often incoherent." And finally Wilhelm Friedemann, J.S. Bach's eldest son, extended Baroque traditions in an idiomatic, unconventional way.
At first the new style took over Baroque forms—the ternary da capo aria, the sinfonia and the concerto—but composed with simpler parts, more notated ornamentation, rather than the improvised ornaments that were common in the Baroque era, and more emphatic division of pieces into sections. However, over time, the new aesthetic caused radical changes in how pieces were put together, and the basic formal layouts changed. Composers from this period sought dramatic effects, striking melodies, and clearer textures. One of the big textural changes was a shift away from the complex, dense polyphonic style of the Baroque, in which multiple interweaving melodic lines were played simultaneously, and towards homophony, a lighter texture which uses a clear single melody line accompanied by chords.
Baroque music generally used many harmonic fantasies and polyphonic sections that focused less on the structure of the musical piece, and placed less emphasis on clear musical phrases. In the Classical period, the harmonies became simpler, but the structure of the piece – its phrases and its small melodic or rhythmic motives – became much more important than in the Baroque period.
Another important break with the past was the radical overhaul of opera by Christoph Willibald Gluck, who cut away a great deal of the layering and improvisational ornaments and focused on the points of modulation and transition. By making these moments where the harmony changes more of a focus, he enabled powerful dramatic shifts in the emotional color of the music. To highlight these transitions, he used changes in instrumentation (orchestration), melody, and mode. Among the most successful composers of his time, Gluck spawned many emulators, including Antonio Salieri. Their emphasis on accessibility brought huge successes in opera, and in other vocal music such as songs, oratorios, and choruses. These were considered the most important kinds of music for performance and hence enjoyed greatest public success.
The phase between the Baroque and the rise of the Classical (around 1730) was home to various competing musical styles. The diversity of artistic paths is represented in the sons of Johann Sebastian Bach: Wilhelm Friedemann Bach, who continued the Baroque tradition in a personal way; Johann Christian Bach, who simplified textures of the Baroque and most clearly influenced Mozart; and Carl Philipp Emanuel Bach, who composed passionate and sometimes violently eccentric music of the Empfindsamkeit movement. Musical culture was caught at a crossroads: the masters of the older style had the technique, but the public hungered for the new. This is one of the reasons C. P. E. Bach was held in such high regard: he understood the older forms quite well and knew how to present them in new garb, with an enhanced variety of form.
By the late 1750s there were flourishing centers of the new style in Italy, Vienna, Mannheim, and Paris; dozens of symphonies were composed and there were bands of players associated with musical theatres. Opera or other vocal music accompanied by orchestra was the feature of most musical events, with concertos and symphonies (arising from the overture) serving as instrumental interludes and introductions for operas and church services. Over the course of the Classical period, symphonies and concertos developed and were presented independently of vocal music.
The "normal" orchestra ensemble—a body of strings supplemented by winds—and movements of particular rhythmic character were established by the late 1750s in Vienna. However, the length and weight of pieces was still set with some Baroque characteristics: individual movements still focused on one "affect" (musical mood) or had only one sharply contrasting middle section, and their length was not significantly greater than Baroque movements. There was not yet a clearly enunciated theory of how to compose in the new style. It was a moment ripe for a breakthrough.
The first great master of the style was the composer Joseph Haydn. In the late 1750s he began composing symphonies, and by 1761 he had composed a triptych (Morning, Noon, and Evening) solidly in the contemporary mode. As vice-Kapellmeister and later Kapellmeister, he expanded his output: he composed over forty symphonies in the 1760s alone. And while his fame grew, as his orchestra was expanded and his compositions were copied and disseminated, his voice was only one among many.
While some scholars suggest that Haydn was later overshadowed by Mozart and Beethoven, it would be difficult to overstate Haydn's centrality to the new style, and therefore to the future of Western art music as a whole. At the time, before the pre-eminence of Mozart or Beethoven, and with Johann Sebastian Bach known primarily to connoisseurs of keyboard music, Haydn reached a place in music that set him above all other composers except perhaps the Baroque era's George Frideric Handel. Haydn took existing ideas, and radically altered how they functioned—earning him the titles "father of the symphony" and "father of the string quartet".
One of the forces that worked as an impetus for his pressing forward was the first stirring of what would later be called Romanticism – the Sturm und Drang, or "storm and stress" phase in the arts, a short period where obvious and dramatic emotionalism was a stylistic preference. Haydn accordingly wanted more dramatic contrast and more emotionally appealing melodies, with sharpened character and individuality in his pieces. This period faded away in music and literature; however, it influenced what came afterward and would eventually be a component of aesthetic taste in later decades.
The Farewell Symphony, No. 45 in F♯ minor, exemplifies Haydn's integration of the differing demands of the new style, with surprising sharp turns and a long slow adagio to end the work. In 1772, Haydn completed his Opus 20 set of six string quartets, in which he deployed the polyphonic techniques he had gathered from the previous Baroque era to provide structural coherence capable of holding together his melodic ideas. For some, this marks the beginning of the "mature" Classical style, a transitional period in which reaction against late Baroque complexity yielded to integration of Baroque and Classical elements.
Haydn, having worked for over a decade as the music director for a prince, had far more resources and scope for composing than most other composers. His position also gave him the ability to shape the forces that would play his music, as he could select skilled musicians. This opportunity was not wasted, as Haydn, beginning quite early in his career, sought to press forward the technique of building and developing ideas in his music. His next important breakthrough was in the Opus 33 string quartets (1781), in which the melodic and the harmonic roles segue among the instruments: it is often momentarily unclear what is melody and what is harmony. This changes the way the ensemble works its way between dramatic moments of transition and climactic sections: the music flows smoothly and without obvious interruption. He then took this integrated style and began applying it to orchestral and vocal music.
Haydn's gift to music was a way of composing, a way of structuring works, which was at the same time in accord with the governing aesthetic of the new style. However, a younger contemporary, Wolfgang Amadeus Mozart, brought his genius to Haydn's ideas and applied them to two of the major genres of the day: opera, and the virtuoso concerto. Whereas Haydn spent much of his working life as a court composer, Mozart wanted public success in the concert life of cities, playing for the general public. This meant he needed to write operas and write and perform virtuoso pieces. Haydn was not a virtuoso at the international touring level; nor was he seeking to create operatic works that could play for many nights in front of a large audience. Mozart wanted to achieve both. Moreover, Mozart also had a taste for more chromatic chords (and greater contrasts in harmonic language generally), a greater love for creating a welter of melodies in a single work, and a more Italianate sensibility in music as a whole. He found, in Haydn's music and later in his study of the polyphony of J.S. Bach, the means to discipline and enrich his artistic gifts.
Mozart rapidly came to the attention of Haydn, who hailed the new composer, studied his works, and considered the younger man his only true peer in music. In Mozart, Haydn found a greater range of instrumentation, dramatic effect and melodic resource. The learning relationship moved in both directions. Mozart also had a great respect for the older, more experienced composer, and sought to learn from him.
Mozart's arrival in Vienna in 1781 brought an acceleration in the development of the Classical style. There, Mozart absorbed the fusion of Italianate brilliance and Germanic cohesiveness that had been brewing for the previous 20 years. His own taste for flashy brilliance, rhythmically complex melodies and figures, long cantilena melodies, and virtuoso flourishes was merged with an appreciation for formal coherence and internal connectedness. It is at this point that war and economic inflation halted a trend to larger orchestras and forced the disbanding or reduction of many theater orchestras. This pressed the Classical style inwards: toward seeking greater ensemble and technical challenges – for example, scattering the melody across woodwinds, or using a melody harmonized in thirds. This process placed a premium on small ensemble music, called chamber music. It also led to a trend for more public performance, giving a further boost to the string quartet and other small ensemble groupings.
It was during this decade that public taste began, increasingly, to recognize that Haydn and Mozart had reached a high standard of composition. By the time Mozart arrived at age 25, in 1781, the dominant styles of Vienna were recognizably connected to the emergence in the 1750s of the early Classical style. By the end of the 1780s, changes in performance practice, the relative standing of instrumental and vocal music, technical demands on musicians, and stylistic unity had become established in the composers who imitated Mozart and Haydn. During this decade Mozart composed his most famous operas, his six late symphonies that helped to redefine the genre, and a string of piano concerti that still stand at the pinnacle of these forms.
One composer who was influential in spreading the more serious style that Mozart and Haydn had formed is Muzio Clementi, a gifted virtuoso pianist who tied with Mozart in a musical "duel" before the emperor in which they each improvised on the piano and performed their compositions. Clementi's sonatas for the piano circulated widely, and he became the most successful composer in London during the 1780s. Also in London at this time was Jan Ladislav Dussek, who, like Clementi, encouraged piano makers to extend the range and other features of their instruments, and then fully exploited the newly opened-up possibilities. The importance of London in the Classical period is often overlooked, but it served as home to Broadwood's piano factory and as the base for composers who, while less notable than the "Vienna School", had a decisive influence on what came later. They were composers of many fine works, notable in their own right. London's taste for virtuosity may well have encouraged the complex passage work and extended statements on tonic and dominant.
When Haydn and Mozart began composing, symphonies were played as single movements—before, between, or as interludes within other works—and many of them lasted only ten or twelve minutes; instrumental groups had varying standards of playing, and the continuo was a central part of music-making.
In the intervening years, the social world of music had seen dramatic changes. International publication and touring had grown explosively, and concert societies formed. Notation became more specific, more descriptive—and schematics for works had been simplified (yet became more varied in their exact working out). In 1790, just before Mozart's death, with his reputation spreading rapidly, Haydn was poised for a series of successes, notably his late oratorios and London symphonies. Composers in Paris, Rome, and all over Germany turned to Haydn and Mozart for their ideas on form.
In the 1790s, a new generation of composers, born around 1770, emerged. While they had grown up with the earlier styles, they heard in the recent works of Haydn and Mozart a vehicle for greater expression. In 1788 Luigi Cherubini settled in Paris and in 1791 composed Lodoiska, an opera that raised him to fame. Its style is clearly reflective of the mature Haydn and Mozart, and its instrumentation gave it a weight that had not yet been felt in the grand opera. His contemporary Étienne Méhul extended instrumental effects with his 1790 opera Euphrosine et Coradin, from which followed a series of successes. The final push towards change came from Gaspare Spontini, who was deeply admired by future romantic composers such as Weber, Berlioz and Wagner. The innovative harmonic language of his operas, their refined instrumentation and their "enchained" closed numbers (a structural pattern which was later adopted by Weber in Euryanthe and from him handed down, through Marschner, to Wagner), formed the basis from which French and German romantic opera had its beginnings.
The most fateful of the new generation was Ludwig van Beethoven, who launched his numbered works in 1794 with a set of three piano trios, which remain in the repertoire. Somewhat younger than the others, though equally accomplished because of his youthful study under Mozart and his native virtuosity, was Johann Nepomuk Hummel. Hummel studied under Haydn as well; he was a friend to Beethoven and Franz Schubert. He concentrated more on the piano than any other instrument, and his time in London in 1791 and 1792 generated the composition and publication in 1793 of three piano sonatas, opus 2, which idiomatically used Mozart's techniques of avoiding the expected cadence, and Clementi's sometimes modally uncertain virtuoso figuration. Taken together, these composers can be seen as the vanguard of a broad change in style and the center of music. They studied one another's works, copied one another's gestures in music, and on occasion behaved like quarrelsome rivals.
The crucial differences with the previous wave can be seen in the downward shift in melodies, increasing durations of movements, the acceptance of Mozart and Haydn as paradigmatic, the greater use of keyboard resources, the shift from "vocal" writing to "pianistic" writing, the growing pull of the minor and of modal ambiguity, and the increasing importance of varying accompanying figures to bring "texture" forward as an element in music. In short, the late Classical was seeking music that was internally more complex. The growth of concert societies and amateur orchestras, marking the importance of music as part of middle-class life, contributed to a booming market for pianos, piano music, and virtuosi to serve as exemplars. Hummel, Beethoven, and Clementi were all renowned for their improvising.
The direct influence of the Baroque continued to fade: the figured bass grew less prominent as a means of holding performance together, the performance practices of the mid-18th century continued to die out. However, at the same time, complete editions of Baroque masters began to become available, and the influence of Baroque style continued to grow, particularly in the ever more expansive use of brass. Another feature of the period is the growing number of performances where the composer was not present. This led to increased detail and specificity in notation; for example, there were fewer "optional" parts that stood separately from the main score.
The force of these shifts became apparent with Beethoven's 3rd Symphony, given the name Eroica, which is Italian for "heroic", by the composer. As with Stravinsky's The Rite of Spring, it may not have been the first in all of its innovations, but its aggressive use of every part of the Classical style set it apart from its contemporary works in length, ambition, and harmonic resources, as well as making it the first symphony of the Romantic era.
The First Viennese School is a name mostly used to refer to three composers of the Classical period in late-18th-century Vienna: Haydn, Mozart, and Beethoven. Franz Schubert is occasionally added to the list.
In German-speaking countries, the term Wiener Klassik (lit. Viennese classical era/art) is used. That term is often more broadly applied to the Classical era in music as a whole, as a means to distinguish it from other periods that are colloquially referred to as classical, namely Baroque and Romantic music.
The term "Viennese School" was first used by Austrian musicologist Raphael Georg Kiesewetter in 1834, although he only counted Haydn and Mozart as members of the school. Other writers followed suit, and eventually Beethoven was added to the list. The designation "first" is added today to avoid confusion with the Second Viennese School.
Whilst, Schubert apart, these composers certainly knew each other (with Haydn and Mozart even being occasional chamber-music partners), there is no sense in which they were engaged in a collaborative effort in the sense that one would associate with 20th-century schools such as the Second Viennese School, or Les Six. Nor is there any significant sense in which one composer was "schooled" by another (in the way that Berg and Webern were taught by Schoenberg), though it is true that Beethoven for a time received lessons from Haydn.
Attempts to extend the First Viennese School to include such later figures as Anton Bruckner, Johannes Brahms, and Gustav Mahler are merely journalistic, and never encountered in academic musicology.
Musical eras and their prevalent styles, forms and instruments seldom disappear at once; instead, features are replaced over time, until the old approach is simply felt as "old-fashioned". The Classical style did not "die" suddenly; rather, it gradually got phased out under the weight of changes. To give just one example, while it is generally stated that the Classical era stopped using the harpsichord in orchestras, this did not happen all of a sudden at the start of the Classical era in 1750. Rather, orchestras slowly stopped using the harpsichord to play basso continuo until the practice was discontinued by the end of the 1700s.
One crucial change was the shift towards harmonies centering on "flatward" keys: shifts in the subdominant direction. In the Classical style, the major key was far more common than the minor, chromaticism being moderated through the use of "sharpward" modulation (e.g., a piece in C major modulating to G major, D major, or A major, all of which are keys with more sharps). As well, sections in the minor mode were often used for contrast. Beginning with Mozart and Clementi, there began a creeping colonization of the subdominant region (the ii or IV chord, which in the key of C major would be the keys of D minor or F major). With Schubert, subdominant modulations flourished after being introduced in contexts in which earlier composers would have confined themselves to dominant shifts (modulations to the dominant key, e.g., in the key of C major, modulating to G major). This introduced darker colors to music, strengthened the minor mode, and made structure harder to maintain. Beethoven contributed to this by his increasing use of the fourth as a consonance, and modal ambiguity – for example, in the opening of the Symphony No. 9 in D minor.
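To make the "sharpward" and "flatward" vocabulary concrete, here is a small illustrative Python sketch (the key spellings simply follow the ordinary circle of fifths; the function name and layout are assumptions of the example, not drawn from a source): each step toward the dominant side adds a sharp to the key signature, and each step toward the subdominant side adds a flat.

```python
# Illustrative sketch of "sharpward" vs. "flatward" motion from C major.
# Positive steps move by fifths toward the dominant side (adding sharps);
# negative steps move toward the subdominant side (adding flats).

SHARPWARD = ["C", "G", "D", "A", "E", "B", "F#"]      # 0..6 sharps
FLATWARD = ["C", "F", "Bb", "Eb", "Ab", "Db", "Gb"]   # 0..6 flats

def key_from_c(steps):
    """Major key reached from C major after `steps` moves by fifths."""
    if steps >= 0:
        return f"{SHARPWARD[steps]} major, {steps} sharp(s)"
    return f"{FLATWARD[-steps]} major, {-steps} flat(s)"

print(key_from_c(1))   # "G major, 1 sharp(s)" -- the typical Classical, dominant-side move
print(key_from_c(-1))  # "F major, 1 flat(s)"  -- the "flatward", subdominant shift
```

In these terms, the paragraph above describes a drift in favored modulations from the positive (sharpward) direction toward the negative (flatward, subdominant) one.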
Ludwig van Beethoven, Franz Schubert, Carl Maria von Weber, Johann Nepomuk Hummel, and John Field are among the most prominent in this generation of "Proto-Romantics", along with the young Felix Mendelssohn. Their sense of form was strongly influenced by the Classical style. While they were not yet "learned" composers (imitating rules which were codified by others), they directly responded to works by Haydn, Mozart, Clementi, and others, as they encountered them. The instrumental forces at their disposal in orchestras were also quite "Classical" in number and variety, permitting similarity with Classical works.
However, the forces destined to end the hold of the Classical style gathered strength in the works of many of the above composers, particularly Beethoven. The most commonly cited one is harmonic innovation. Also important is the increasing focus on having a continuous and rhythmically uniform accompanying figuration: Beethoven's Moonlight Sonata was the model for hundreds of later pieces—where the shifting movement of a rhythmic figure provides much of the drama and interest of the work, while a melody drifts above it. Greater knowledge of works, greater instrumental expertise, increasing variety of instruments, the growth of concert societies, and the unstoppable domination of the increasingly more powerful piano (which was given a bolder, louder tone by technological developments such as the use of steel strings, heavy cast-iron frames and sympathetically vibrating strings) all created a huge audience for sophisticated music. All of these trends contributed to the shift to the "Romantic" style.
Drawing the line between these two styles is very difficult: some sections of Mozart's later works, taken alone, are indistinguishable in harmony and orchestration from music written 80 years later—and some composers continued to write in normative Classical styles into the early 20th century. Even before Beethoven's death, composers such as Louis Spohr were self-described Romantics, incorporating, for example, more extravagant chromaticism in their works (e.g., using chromatic harmonies in a piece's chord progression). Conversely, works such as Schubert's Symphony No. 5, written during the chronological end of the Classical era and dawn of the Romantic era, exhibit a deliberately anachronistic artistic paradigm, harking back to the compositional style of several decades before.
However, Vienna's fall as the most important musical center for orchestral composition during the late 1820s, precipitated by the deaths of Beethoven and Schubert, marked the Classical style's final eclipse—and the end of its continuous organic development of one composer learning in close proximity to others. Franz Liszt and Frédéric Chopin visited Vienna when they were young, but they then moved on to other cities. Composers such as Carl Czerny, while deeply influenced by Beethoven, also searched for new ideas and new forms to contain the larger world of musical expression and performance in which they lived.
Renewed interest in the formal balance and restraint of 18th-century classical music led in the early 20th century to the development of the so-called Neoclassical style, which numbered Stravinsky and Prokofiev among its proponents, at least at certain times in their careers.
The Baroque guitar, with four or five sets of double strings or "courses" and elaborately decorated soundhole, was a very different instrument from the early classical guitar which more closely resembles the modern instrument with the standard six strings. Judging by the number of instructional manuals published for the instrument – over three hundred texts were published by over two hundred authors between 1760 and 1860 – the classical period marked a golden age for guitar.
In the Baroque era, there was more variety in the bowed stringed instruments used in ensembles, with instruments such as the viola d'amore and a range of fretted viols being used, ranging from small viols to large bass viols. In the Classical period, the string section of the orchestra was standardized as just four instruments: the violin (divided into first and second parts), the viola, the cello, and the double bass.
In the Baroque era, double bass players were not usually given a separate part; instead, they typically played the same basso continuo bassline as the cellos and other low-pitched instruments (e.g., theorbo, serpent, viols), albeit an octave below the cellos, because the double bass is a transposing instrument that sounds one octave lower than it is written. In the Classical era, some composers continued to write only one bass part for their symphony, labeled "bassi"; this bass part was played by cellists and double bassists. During the Classical era, some composers began to give the double basses their own part.
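The octave transposition described above can be shown in a few lines. The sketch is illustrative only; the MIDI-style numbering (middle C = 60) is an assumed convention, not something drawn from the article.

```python
# The double bass sounds one octave (12 semitones) below its written pitch.
# MIDI-style note numbers with middle C (C4) = 60 are assumed for illustration.

def sounding_pitch(written_midi_note):
    """Sounding note for a written double-bass pitch: one octave lower."""
    return written_midi_note - 12

written_c3 = 48                      # C3 as written in a shared "bassi" part
print(sounding_pitch(written_c3))    # 36, i.e. C2, an octave below the cellos
```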
It was commonplace for orchestras to have at least two wind instruments, usually oboes, flutes, clarinets, or sometimes English horns (see Haydn's Symphony No. 22). Patrons also usually employed an ensemble consisting entirely of winds, called the harmonie, which would be engaged for certain events. The harmonie would sometimes join the larger string orchestra to serve as the wind section.
{
"paragraph_id": 0,
"text": "The Classical period was an era of classical music between roughly 1750 and 1820.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Classical period falls between the Baroque and the Romantic periods. Classical music has a lighter, clearer texture than Baroque music, but a more varying use of musical form, which is, in simpler terms, the rhythm and organization of any given piece of music. It is mainly homophonic, using a clear melody line over a subordinate chordal accompaniment, but counterpoint was by no means forgotten, especially in liturgical vocal music and, later in the period, secular instrumental music. It also makes use of style galant which emphasized light elegance in place of the Baroque's dignified seriousness and impressive grandeur. Variety and contrast within a piece became more pronounced than before and the orchestra increased in size, range, and power.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The harpsichord was replaced as the main keyboard instrument by the piano (or fortepiano). Unlike the harpsichord, which plucks strings with quills, pianos strike the strings with leather-covered hammers when the keys are pressed, which enables the performer to play louder or softer (hence the original name \"fortepiano,\" literally \"loud soft\") and play with more expression; in contrast, the force with which a performer plays the harpsichord keys does not change the sound. Instrumental music was considered important by Classical period composers. The main kinds of instrumental music were the sonata, trio, string quartet, quintet, symphony (performed by an orchestra) and the solo concerto, which featured a virtuoso solo performer playing a solo work for violin, piano, flute, or another instrument, accompanied by an orchestra. Vocal music, such as songs for a singer and piano (notably the work of Schubert), choral works, and opera (a staged dramatic work for singers and orchestra) were also important during this period.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The best-known composers from this period are Joseph Haydn, Wolfgang Amadeus Mozart, Ludwig van Beethoven, and Franz Schubert; other names in this period include: Carl Philipp Emanuel Bach, Johann Christian Bach, Luigi Boccherini, Domenico Cimarosa, Joseph Martin Kraus, Muzio Clementi, Christoph Willibald Gluck, Carl Ditters von Dittersdorf, André Grétry, Pierre-Alexandre Monsigny, Leopold Mozart, Michael Haydn, Giovanni Paisiello, Johann Baptist Wanhal, François-André Danican Philidor, Niccolò Piccinni, Antonio Salieri, Etienne Nicolas Mehul, Georg Christoph Wagenseil, Georg Matthias Monn, Johann Gottlieb Graun, Carl Heinrich Graun, Franz Benda, Georg Anton Benda, Johann Georg Albrechtsberger, Mauro Giuliani, Christian Cannabich and the Chevalier de Saint-Georges. Beethoven is regarded either as a Romantic composer or a Classical period composer who was part of the transition to the Romantic era. Schubert is also a transitional figure, as were Johann Nepomuk Hummel, Luigi Cherubini, Gaspare Spontini, Gioachino Rossini, Carl Maria von Weber, John Field, Jan Ladislav Dussek and Niccolò Paganini. The period is sometimes referred to as the era of Viennese Classicism (German: Wiener Klassik), since Gluck, Haydn, Salieri, Mozart, Beethoven, and Schubert all worked in Vienna.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In the middle of the 18th century, Europe began to move toward a new style in architecture, literature, and the arts, generally known as Neoclassicism. This style sought to emulate the ideals of Classical antiquity, especially those of Classical Greece. Classical music used formality and emphasis on order and hierarchy, and a \"clearer\", \"cleaner\" style that used clearer divisions between parts (notably a clear, single melody accompanied by chords), brighter contrasts and \"tone colors\" (achieved by the use of dynamic changes and modulations to more keys). In contrast with the richly layered music of the Baroque era, Classical music moved towards simplicity rather than complexity. In addition, the typical size of orchestras began to increase, giving orchestras a more powerful sound.",
"title": "Classicism"
},
{
"paragraph_id": 5,
"text": "The remarkable development of ideas in \"natural philosophy\" had already established itself in the public consciousness. In particular, Newton's physics was taken as a paradigm: structures should be well-founded in axioms and be both well-articulated and orderly. This taste for structural clarity began to affect music, which moved away from the layered polyphony of the Baroque period toward a style known as homophony, in which the melody is played over a subordinate harmony. This move meant that chords became a much more prevalent feature of music, even if they interrupted the melodic smoothness of a single part. As a result, the tonal structure of a piece of music became more audible.",
"title": "Classicism"
},
{
"paragraph_id": 6,
"text": "The new style was also encouraged by changes in the economic order and social structure. As the 18th century progressed well, the nobility became the primary patrons of instrumental music, while public taste increasingly preferred lighter, funny comic operas. This led to changes in the way music was performed, the most crucial of which was the move to standard instrumental groups and the reduction in the importance of the continuo—the rhythmic and harmonic groundwork of a piece of music, typically played by a keyboard (harpsichord or organ) and usually accompanied by a varied group of bass instruments, including cello, double bass, bass viol, and theorbo. One way to trace the decline of the continuo and its figured chords is to examine the disappearance of the term obbligato, meaning a mandatory instrumental part in a work of chamber music. In Baroque compositions, additional instruments could be added to the continuo group according to the group or leader's preference; in Classical compositions, all parts were specifically noted, though not always notated, so the term \"obbligato\" became redundant. By 1800, basso continuo was practically extinct, except for the occasional use of a pipe organ continuo part in a religious Mass in the early 1800s.",
"title": "Classicism"
},
{
"paragraph_id": 7,
"text": "Economic changes also had the effect of altering the balance of availability and quality of musicians. While in the late Baroque, a major composer would have the entire musical resources of a town to draw on, the musical forces available at an aristocratic hunting lodge or small court were smaller and more fixed in their level of ability. This was a spur to having simpler parts for ensemble musicians to play, and in the case of a resident virtuoso group, a spur to writing spectacular, idiomatic parts for certain instruments, as in the case of the Mannheim orchestra, or virtuoso solo parts for particularly skilled violinists or flutists. In addition, the appetite by audiences for a continual supply of new music carried over from the Baroque. This meant that works had to be performable with, at best, one or two rehearsals. Even after 1790 Mozart writes about \"the rehearsal\", with the implication that his concerts would have only one rehearsal.",
"title": "Classicism"
},
{
"paragraph_id": 8,
"text": "Since there was a greater emphasis on a single melodic line, there was greater emphasis on notating that line for dynamics and phrasing. This contrasts with the Baroque era, when melodies were typically written with no dynamics, phrasing marks or ornaments, as it was assumed that the performer would improvise these elements on the spot. In the Classical era, it became more common for composers to indicate where they wanted performers to play ornaments such as trills or turns. The simplification of texture made such instrumental detail more important, and also made the use of characteristic rhythms, such as attention-getting opening fanfares, the funeral march rhythm, or the minuet genre, more important in establishing and unifying the tone of a single movement.",
"title": "Classicism"
},
{
"paragraph_id": 9,
"text": "The Classical period also saw the gradual development of sonata form, a set of structural principles for music that reconciled the Classical preference for melodic material with harmonic development, which could be applied across musical genres. The sonata itself continued to be the principal form for solo and chamber music, while later in the Classical period the string quartet became a prominent genre. The symphony form for orchestra was created in this period (this is popularly attributed to Joseph Haydn). The concerto grosso (a concerto for more than one musician), a very popular form in the Baroque era, began to be replaced by the solo concerto, featuring only one soloist. Composers began to place more importance on the particular soloist's ability to show off virtuoso skills, with challenging, fast scale and arpeggio runs. Nonetheless, some concerti grossi remained, the most famous of which being Mozart's Sinfonia Concertante for Violin and Viola in E-flat major.",
"title": "Classicism"
},
{
"paragraph_id": 10,
"text": "In the classical period, the theme consists of phrases with contrasting melodic figures and rhythms. These phrases are relatively brief, typically four bars in length, and can occasionally seem sparse or terse. The texture is mainly homophonic, with a clear melody above a subordinate chordal accompaniment, for instance an Alberti bass. This contrasts with the practice in Baroque music, where a piece or movement would typically have only one musical subject, which would then be worked out in a number of voices according to the principles of counterpoint, while maintaining a consistent rhythm or metre throughout. As a result, Classical music tends to have a lighter, clearer texture than the Baroque. The classical style draws on the style galant, a musical style which emphasised light elegance in place of the Baroque's dignified seriousness and impressive grandeur.",
"title": "Main characteristics"
},
{
"paragraph_id": 11,
"text": "Structurally, Classical music generally has a clear musical form, with a well-defined contrast between tonic and dominant, introduced by clear cadences. Dynamics are used to highlight the structural characteristics of the piece. In particular, sonata form and its variants were developed during the early classical period and was frequently used. The Classical approach to structure again contrasts with the Baroque, where a composition would normally move between tonic and dominant and back again, but through a continual progress of chord changes and without a sense of \"arrival\" at the new key. While counterpoint was less emphasised in the classical period, it was by no means forgotten, especially later in the period, and composers still used counterpoint in \"serious\" works such as symphonies and string quartets, as well as religious pieces, such as Masses.",
"title": "Main characteristics"
},
{
"paragraph_id": 12,
"text": "The classical musical style was supported by technical developments in instruments. The widespread adoption of equal temperament made classical musical structure possible, by ensuring that cadences in all keys sounded similar. The fortepiano and then the pianoforte replaced the harpsichord, enabling more dynamic contrast and more sustained melodies. Over the Classical period, keyboard instruments became richer, more sonorous and more powerful.",
"title": "Main characteristics"
},
{
"paragraph_id": 13,
"text": "The orchestra increased in size and range, and became more standardised. The harpsichord or pipe organ basso continuo role in orchestra fell out of use between 1750 and 1775, leaving the string section. Woodwinds became a self-contained section, consisting of clarinets, oboes, flutes and bassoons.",
"title": "Main characteristics"
},
{
"paragraph_id": 14,
"text": "While vocal music such as comic opera was popular, great importance was given to instrumental music. The main kinds of instrumental music were the sonata, trio, string quartet, quintet, symphony, concerto (usually for a virtuoso solo instrument accompanied by orchestra), and light pieces such as serenades and divertimentos. Sonata form developed and became the most important form. It was used to build up the first movement of most large-scale works in symphonies and string quartets. Sonata form was also used in other movements and in single, standalone pieces such as overtures.",
"title": "Main characteristics"
},
{
"paragraph_id": 15,
"text": "In his book The Classical Style, author and pianist Charles Rosen claims that from 1755 to 1775, composers groped for a new style that was more effectively dramatic. In the High Baroque period, dramatic expression was limited to the representation of individual affects (the \"doctrine of affections\", or what Rosen terms \"dramatic sentiment\"). For example, in Handel's oratorio Jephtha, the composer renders four emotions separately, one for each character, in the quartet \"O, spare your daughter\". Eventually this depiction of individual emotions came to be seen as simplistic and unrealistic; composers sought to portray multiple emotions, simultaneously or progressively, within a single character or movement (\"dramatic action\"). Thus in the finale of act 2 of Mozart's Die Entführung aus dem Serail, the lovers move \"from joy through suspicion and outrage to final reconciliation.\"",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Musically speaking, this \"dramatic action\" required more musical variety. Whereas Baroque music was characterized by seamless flow within individual movements and largely uniform textures, composers after the High Baroque sought to interrupt this flow with abrupt changes in texture, dynamic, harmony, or tempo. Among the stylistic developments which followed the High Baroque, the most dramatic came to be called Empfindsamkeit, (roughly \"sensitive style\"), and its best-known practitioner was Carl Philipp Emanuel Bach. Composers of this style employed the above-discussed interruptions in the most abrupt manner, and the music can sound illogical at times. The Italian composer Domenico Scarlatti took these developments further. His more than five hundred single-movement keyboard sonatas also contain abrupt changes of texture, but these changes are organized into periods, balanced phrases that became a hallmark of the classical style. However, Scarlatti's changes in texture still sound sudden and unprepared. The outstanding achievement of the great classical composers (Haydn, Mozart and Beethoven) was their ability to make these dramatic surprises sound logically motivated, so that \"the expressive and the elegant could join hands.\"",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Between the death of J. S. Bach and the maturity of Haydn and Mozart (roughly 1750–1770), composers experimented with these new ideas, which can be seen in the music of Bach's sons. Johann Christian developed a style which we now call Roccoco, comprising simpler textures and harmonies, and which was \"charming, undramatic, and a little empty.\" As mentioned previously, Carl Philipp Emmanuel sought to increase drama, and his music was \"violent, expressive, brilliant, continuously surprising, and often incoherent.\" And finally Wilhelm Friedemann, J.S. Bach's eldest son, extended Baroque traditions in an idiomatic, unconventional way.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "At first the new style took over Baroque forms—the ternary da capo aria, the sinfonia and the concerto—but composed with simpler parts, more notated ornamentation, rather than the improvised ornaments that were common in the Baroque era, and more emphatic division of pieces into sections. However, over time, the new aesthetic caused radical changes in how pieces were put together, and the basic formal layouts changed. Composers from this period sought dramatic effects, striking melodies, and clearer textures. One of the big textural changes was a shift away from the complex, dense polyphonic style of the Baroque, in which multiple interweaving melodic lines were played simultaneously, and towards homophony, a lighter texture which uses a clear single melody line accompanied by chords.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Baroque music generally uses many harmonic fantasies and polyphonic sections that focus less on the structure of the musical piece, and there was less emphasis on clear musical phrases. In the classical period, the harmonies became simpler. However, the structure of the piece, the phrases and small melodic or rhythmic motives, became much more important than in the Baroque period.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Another important break with the past was the radical overhaul of opera by Christoph Willibald Gluck, who cut away a great deal of the layering and improvisational ornaments and focused on the points of modulation and transition. By making these moments where the harmony changes more of a focus, he enabled powerful dramatic shifts in the emotional color of the music. To highlight these transitions, he used changes in instrumentation (orchestration), melody, and mode. Among the most successful composers of his time, Gluck spawned many emulators, including Antonio Salieri. Their emphasis on accessibility brought huge successes in opera, and in other vocal music such as songs, oratorios, and choruses. These were considered the most important kinds of music for performance and hence enjoyed greatest public success.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "The phase between the Baroque and the rise of the Classical (around 1730), was home to various competing musical styles. The diversity of artistic paths are represented in the sons of Johann Sebastian Bach: Wilhelm Friedemann Bach, who continued the Baroque tradition in a personal way; Johann Christian Bach, who simplified textures of the Baroque and most clearly influenced Mozart; and Carl Philipp Emanuel Bach, who composed passionate and sometimes violently eccentric music of the Empfindsamkeit movement. Musical culture was caught at a crossroads: the masters of the older style had the technique, but the public hungered for the new. This is one of the reasons C. P. E. Bach was held in such high regard: he understood the older forms quite well and knew how to present them in new garb, with an enhanced variety of form.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "By the late 1750s there were flourishing centers of the new style in Italy, Vienna, Mannheim, and Paris; dozens of symphonies were composed and there were bands of players associated with musical theatres. Opera or other vocal music accompanied by orchestra was the feature of most musical events, with concertos and symphonies (arising from the overture) serving as instrumental interludes and introductions for operas and church services. Over the course of the Classical period, symphonies and concertos developed and were presented independently of vocal music.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "The \"normal\" orchestra ensemble—a body of strings supplemented by winds—and movements of particular rhythmic character were established by the late 1750s in Vienna. However, the length and weight of pieces was still set with some Baroque characteristics: individual movements still focused on one \"affect\" (musical mood) or had only one sharply contrasting middle section, and their length was not significantly greater than Baroque movements. There was not yet a clearly enunciated theory of how to compose in the new style. It was a moment ripe for a breakthrough.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "The first great master of the style was the composer Joseph Haydn. In the late 1750s he began composing symphonies, and by 1761 he had composed a triptych (Morning, Noon, and Evening) solidly in the contemporary mode. As a vice-Kapellmeister and later Kapellmeister, his output expanded: he composed over forty symphonies in the 1760s alone. And while his fame grew, as his orchestra was expanded and his compositions were copied and disseminated, his voice was only one among many.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "While some scholars suggest that Haydn was later overshadowed by Mozart and Beethoven, it would be difficult to overstate Haydn's centrality to the new style, and therefore to the future of Western art music as a whole. At the time, before the pre-eminence of Mozart or Beethoven, and with Johann Sebastian Bach known primarily to connoisseurs of keyboard music, Haydn reached a place in music that set him above all other composers except perhaps the Baroque era's George Frideric Handel. Haydn took existing ideas, and radically altered how they functioned—earning him the titles \"father of the symphony\" and \"father of the string quartet\".",
"title": "History"
},
{
"paragraph_id": 26,
"text": "One of the forces that worked as an impetus for his pressing forward was the first stirring of what would later be called Romanticism—the Sturm und Drang, or \"storm and stress\" phase in the arts, a short period where obvious and dramatic emotionalism was a stylistic preference. Haydn accordingly wanted more dramatic contrast and more emotionally appealing melodies, with sharpened character and individuality in his pieces. This period faded away in music and literature: however, it influenced what came afterward and would eventually be a component of aesthetic taste in later decades.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "The Farewell Symphony, No. 45 in F♯ minor, exemplifies Haydn's integration of the differing demands of the new style, with surprising sharp turns and a long slow adagio to end the work. In 1772, Haydn completed his Opus 20 set of six string quartets, in which he deployed the polyphonic techniques he had gathered from the previous Baroque era to provide structural coherence capable of holding together his melodic ideas. For some, this marks the beginning of the \"mature\" Classical style, a transitional period in which reaction against late Baroque complexity yielded to integration of Baroque and Classical elements.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Haydn, having worked for over a decade as the music director for a prince, had far more resources and scope for composing than most other composers. His position also gave him the ability to shape the forces that would play his music, as he could select skilled musicians. This opportunity was not wasted, as Haydn, beginning quite early on his career, sought to press forward the technique of building and developing ideas in his music. His next important breakthrough was in the Opus 33 string quartets (1781), in which the melodic and the harmonic roles segue among the instruments: it is often momentarily unclear what is melody and what is harmony. This changes the way the ensemble works its way between dramatic moments of transition and climactic sections: the music flows smoothly and without obvious interruption. He then took this integrated style and began applying it to orchestral and vocal music.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "Haydn's gift to music was a way of composing, a way of structuring works, which was at the same time in accord with the governing aesthetic of the new style. However, a younger contemporary, Wolfgang Amadeus Mozart, brought his genius to Haydn's ideas and applied them to two of the major genres of the day: opera, and the virtuoso concerto. Whereas Haydn spent much of his working life as a court composer, Mozart wanted public success in the concert life of cities, playing for the general public. This meant he needed to write operas and write and perform virtuoso pieces. Haydn was not a virtuoso at the international touring level; nor was he seeking to create operatic works that could play for many nights in front of a large audience. Mozart wanted to achieve both. Moreover, Mozart also had a taste for more chromatic chords (and greater contrasts in harmonic language generally), a greater love for creating a welter of melodies in a single work, and a more Italianate sensibility in music as a whole. He found, in Haydn's music and later in his study of the polyphony of J.S. Bach, the means to discipline and enrich his artistic gifts.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "Mozart rapidly came to the attention of Haydn, who hailed the new composer, studied his works, and considered the younger man his only true peer in music. In Mozart, Haydn found a greater range of instrumentation, dramatic effect and melodic resource. The learning relationship moved in both directions. Mozart also had a great respect for the older, more experienced composer, and sought to learn from him.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "Mozart's arrival in Vienna in 1780 brought an acceleration in the development of the Classical style. There, Mozart absorbed the fusion of Italianate brilliance and Germanic cohesiveness that had been brewing for the previous 20 years. His own taste for flashy brilliances, rhythmically complex melodies and figures, long cantilena melodies, and virtuoso flourishes was merged with an appreciation for formal coherence and internal connectedness. It is at this point that war and economic inflation halted a trend to larger orchestras and forced the disbanding or reduction of many theater orchestras. This pressed the Classical style inwards: toward seeking greater ensemble and technical challenges—for example, scattering the melody across woodwinds, or using a melody harmonized in thirds. This process placed a premium on small ensemble music, called chamber music. It also led to a trend for more public performance, giving a further boost to the string quartet and other small ensemble groupings.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "It was during this decade that public taste began, increasingly, to recognize that Haydn and Mozart had reached a high standard of composition. By the time Mozart arrived at age 25, in 1781, the dominant styles of Vienna were recognizably connected to the emergence in the 1750s of the early Classical style. By the end of the 1780s, changes in performance practice, the relative standing of instrumental and vocal music, technical demands on musicians, and stylistic unity had become established in the composers who imitated Mozart and Haydn. During this decade Mozart composed his most famous operas, his six late symphonies that helped to redefine the genre, and a string of piano concerti that still stand at the pinnacle of these forms.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "One composer who was influential in spreading the more serious style that Mozart and Haydn had formed is Muzio Clementi, a gifted virtuoso pianist who tied with Mozart in a musical \"duel\" before the emperor in which they each improvised on the piano and performed their compositions. Clementi's sonatas for the piano circulated widely, and he became the most successful composer in London during the 1780s. Also in London at this time was Jan Ladislav Dussek, who, like Clementi, encouraged piano makers to extend the range and other features of their instruments, and then fully exploited the newly opened up possibilities. The importance of London in the Classical period is often overlooked, but it served as the home to the Broadwood's factory for piano manufacturing and as the base for composers who, while less notable than the \"Vienna School\", had a decisive influence on what came later. They were composers of many fine works, notable in their own right. London's taste for virtuosity may well have encouraged the complex passage work and extended statements on tonic and dominant.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "When Haydn and Mozart began composing, symphonies were played as single movements—before, between, or as interludes within other works—and many of them lasted only ten or twelve minutes; instrumental groups had varying standards of playing, and the continuo was a central part of music-making.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "In the intervening years, the social world of music had seen dramatic changes. International publication and touring had grown explosively, and concert societies formed. Notation became more specific, more descriptive—and schematics for works had been simplified (yet became more varied in their exact working out). In 1790, just before Mozart's death, with his reputation spreading rapidly, Haydn was poised for a series of successes, notably his late oratorios and London symphonies. Composers in Paris, Rome, and all over Germany turned to Haydn and Mozart for their ideas on form.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "In the 1790s, a new generation of composers, born around 1770, emerged. While they had grown up with the earlier styles, they heard in the recent works of Haydn and Mozart a vehicle for greater expression. In 1788 Luigi Cherubini settled in Paris and in 1791 composed Lodoiska, an opera that raised him to fame. Its style is clearly reflective of the mature Haydn and Mozart, and its instrumentation gave it a weight that had not yet been felt in the grand opera. His contemporary Étienne Méhul extended instrumental effects with his 1790 opera Euphrosine et Coradin, from which followed a series of successes. The final push towards change came from Gaspare Spontini, who was deeply admired by future romantic composers such as Weber, Berlioz and Wagner. The innovative harmonic language of his operas, their refined instrumentation and their \"enchained\" closed numbers (a structural pattern which was later adopted by Weber in Euryanthe and from him handed down, through Marschner, to Wagner), formed the basis from which French and German romantic opera had its beginnings.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "The most fateful of the new generation was Ludwig van Beethoven, who launched his numbered works in 1794 with a set of three piano trios, which remain in the repertoire. Somewhat younger than the others, though equally accomplished because of his youthful study under Mozart and his native virtuosity, was Johann Nepomuk Hummel. Hummel studied under Haydn as well; he was a friend to Beethoven and Franz Schubert. He concentrated more on the piano than any other instrument, and his time in London in 1791 and 1792 generated the composition and publication in 1793 of three piano sonatas, opus 2, which idiomatically used Mozart's techniques of avoiding the expected cadence, and Clementi's sometimes modally uncertain virtuoso figuration. Taken together, these composers can be seen as the vanguard of a broad change in style and the center of music. They studied one another's works, copied one another's gestures in music, and on occasion behaved like quarrelsome rivals.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "The crucial differences with the previous wave can be seen in the downward shift in melodies, increasing durations of movements, the acceptance of Mozart and Haydn as paradigmatic, the greater use of keyboard resources, the shift from \"vocal\" writing to \"pianistic\" writing, the growing pull of the minor and of modal ambiguity, and the increasing importance of varying accompanying figures to bring \"texture\" forward as an element in music. In short, the late Classical was seeking music that was internally more complex. The growth of concert societies and amateur orchestras, marking the importance of music as part of middle-class life, contributed to a booming market for pianos, piano music, and virtuosi to serve as exemplars. Hummel, Beethoven, and Clementi were all renowned for their improvising.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "The direct influence of the Baroque continued to fade: the figured bass grew less prominent as a means of holding performance together, the performance practices of the mid-18th century continued to die out. However, at the same time, complete editions of Baroque masters began to become available, and the influence of Baroque style continued to grow, particularly in the ever more expansive use of brass. Another feature of the period is the growing number of performances where the composer was not present. This led to increased detail and specificity in notation; for example, there were fewer \"optional\" parts that stood separately from the main score.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "The force of these shifts became apparent with Beethoven's 3rd Symphony, given the name Eroica, which is Italian for \"heroic\", by the composer. As with Stravinsky's The Rite of Spring, it may not have been the first in all of its innovations, but its aggressive use of every part of the Classical style set it apart from its contemporary works: in length, ambition, and harmonic resources as well making it the first symphony of the Romantic era.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "The First Viennese School is a name mostly used to refer to three composers of the Classical period in late-18th-century Vienna: Haydn, Mozart, and Beethoven. Franz Schubert is occasionally added to the list.",
"title": "First Viennese School"
},
{
"paragraph_id": 42,
"text": "In German-speaking countries, the term Wiener Klassik (lit. Viennese classical era/art) is used. That term is often more broadly applied to the Classical era in music as a whole, as a means to distinguish it from other periods that are colloquially referred to as classical, namely Baroque and Romantic music.",
"title": "First Viennese School"
},
{
"paragraph_id": 43,
"text": "The term \"Viennese School\" was first used by Austrian musicologist Raphael Georg Kiesewetter in 1834, although he only counted Haydn and Mozart as members of the school. Other writers followed suit, and eventually Beethoven was added to the list. The designation \"first\" is added today to avoid confusion with the Second Viennese School.",
"title": "First Viennese School"
},
{
"paragraph_id": 44,
"text": "Whilst, Schubert apart, these composers certainly knew each other (with Haydn and Mozart even being occasional chamber-music partners), there is no sense in which they were engaged in a collaborative effort in the sense that one would associate with 20th-century schools such as the Second Viennese School, or Les Six. Nor is there any significant sense in which one composer was \"schooled\" by another (in the way that Berg and Webern were taught by Schoenberg), though it is true that Beethoven for a time received lessons from Haydn.",
"title": "First Viennese School"
},
{
"paragraph_id": 45,
"text": "Attempts to extend the First Viennese School to include such later figures as Anton Bruckner, Johannes Brahms, and Gustav Mahler are merely journalistic, and never encountered in academic musicology.",
"title": "First Viennese School"
},
{
"paragraph_id": 46,
"text": "Musical eras and their prevalent styles, forms and instruments seldom disappear at once; instead, features are replaced over time, until the old approach is simply felt as \"old-fashioned\". The Classical style did not \"die\" suddenly; rather, it gradually got phased out under the weight of changes. To give just one example, while it is generally stated that the Classical era stopped using the harpsichord in orchestras, this did not happen all of a sudden at the start of the Classical era in 1750. Rather, orchestras slowly stopped using the harpsichord to play basso continuo until the practice was discontinued by the end of the 1700s.",
"title": "Classical influence on later composers"
},
{
"paragraph_id": 47,
"text": "One crucial change was the shift towards harmonies centering on \"flatward\" keys: shifts in the subdominant direction . In the Classical style, major key was far more common than minor, chromaticism being moderated through the use of \"sharpward\" modulation (e.g., a piece in C major modulating to G major, D major, or A major, all of which are keys with more sharps). As well, sections in the minor mode were often used for contrast. Beginning with Mozart and Clementi, there began a creeping colonization of the subdominant region (the ii or IV chord, which in the key of C major would be the keys of d minor or F major). With Schubert, subdominant modulations flourished after being introduced in contexts in which earlier composers would have confined themselves to dominant shifts (modulations to the dominant chord, e.g., in the key of C major, modulating to G major). This introduced darker colors to music, strengthened the minor mode, and made structure harder to maintain. Beethoven contributed to this by his increasing use of the fourth as a consonance, and modal ambiguity—for example, the opening of the Symphony No. 9 in D minor.",
"title": "Classical influence on later composers"
},
{
"paragraph_id": 48,
"text": "Ludwig van Beethoven, Franz Schubert, Carl Maria von Weber, Johann Nepomuk Hummel, and John Field are among the most prominent in this generation of \"Proto-Romantics\", along with the young Felix Mendelssohn. Their sense of form was strongly influenced by the Classical style. While they were not yet \"learned\" composers (imitating rules which were codified by others), they directly responded to works by Haydn, Mozart, Clementi, and others, as they encountered them. The instrumental forces at their disposal in orchestras were also quite \"Classical\" in number and variety, permitting similarity with Classical works.",
"title": "Classical influence on later composers"
},
{
"paragraph_id": 49,
"text": "However, the forces destined to end the hold of the Classical style gathered strength in the works of many of the above composers, particularly Beethoven. The most commonly cited one is harmonic innovation. Also important is the increasing focus on having a continuous and rhythmically uniform accompanying figuration: Beethoven's Moonlight Sonata was the model for hundreds of later pieces—where the shifting movement of a rhythmic figure provides much of the drama and interest of the work, while a melody drifts above it. Greater knowledge of works, greater instrumental expertise, increasing variety of instruments, the growth of concert societies, and the unstoppable domination of the increasingly more powerful piano (which was given a bolder, louder tone by technological developments such as the use of steel strings, heavy cast-iron frames and sympathetically vibrating strings) all created a huge audience for sophisticated music. All of these trends contributed to the shift to the \"Romantic\" style.",
"title": "Classical influence on later composers"
},
{
"paragraph_id": 50,
"text": "Drawing the line between these two styles is very difficult: some sections of Mozart's later works, taken alone, are indistinguishable in harmony and orchestration from music written 80 years later—and some composers continued to write in normative Classical styles into the early 20th century. Even before Beethoven's death, composers such as Louis Spohr were self-described Romantics, incorporating, for example, more extravagant chromaticism in their works (e.g., using chromatic harmonies in a piece's chord progression). Conversely, works such as Schubert's Symphony No. 5, written during the chronological end of the Classical era and dawn of the Romantic era, exhibit a deliberately anachronistic artistic paradigm, harking back to the compositional style of several decades before.",
"title": "Classical influence on later composers"
},
{
"paragraph_id": 51,
"text": "However, Vienna's fall as the most important musical center for orchestral composition during the late 1820s, precipitated by the deaths of Beethoven and Schubert, marked the Classical style's final eclipse—and the end of its continuous organic development of one composer learning in close proximity to others. Franz Liszt and Frédéric Chopin visited Vienna when they were young, but they then moved on to other cities. Composers such as Carl Czerny, while deeply influenced by Beethoven, also searched for new ideas and new forms to contain the larger world of musical expression and performance in which they lived.",
"title": "Classical influence on later composers"
},
{
"paragraph_id": 52,
"text": "Renewed interest in the formal balance and restraint of 18th century classical music led in the early 20th century to the development of so-called Neoclassical style, which numbered Stravinsky and Prokofiev among its proponents, at least at certain times in their careers.",
"title": "Classical influence on later composers"
},
{
"paragraph_id": 53,
"text": "The Baroque guitar, with four or five sets of double strings or \"courses\" and elaborately decorated soundhole, was a very different instrument from the early classical guitar which more closely resembles the modern instrument with the standard six strings. Judging by the number of instructional manuals published for the instrument – over three hundred texts were published by over two hundred authors between 1760 and 1860 – the classical period marked a golden age for guitar.",
"title": "Classical period instruments"
},
{
"paragraph_id": 54,
"text": "In the Baroque era, there was more variety in the bowed stringed instruments used in ensembles, with instruments such as the viola d'amore and a range of fretted viols being used, ranging from small viols to large bass viols. In the Classical period, the string section of the orchestra was standardized as just four instruments:",
"title": "Classical period instruments"
},
{
"paragraph_id": 55,
"text": "In the Baroque era, the double bass players were not usually given a separate part; instead, they typically played the same basso continuo bassline that the cellos and other low-pitched instruments (e.g., theorbo, serpent wind instrument, viols), albeit an octave below the cellos, because the double bass is a transposing instrument that sounds one octave lower than it is written. In the Classical era, some composers continued to write only one bass part for their symphony, labeled \"bassi\"; this bass part was played by cellists and double bassists. During the Classical era, some composers began to give the double basses their own part.",
"title": "Classical period instruments"
},
{
"paragraph_id": 56,
"text": "It was commonplace for all orchestras to have at least 2 winds, usually oboes, flutes, clarinets, or sometimes english horns (see Symphony No. 22 (Haydn). Patrons also usually employed an ensemble of entirely winds, called the harmonie, which would be employed for certain events. The harmonie would join the larger string orchestra sometimes to serve as the wind section.",
"title": "Classical period instruments"
}
] | The Classical period was an era of classical music between roughly 1750 and 1820. The Classical period falls between the Baroque and the Romantic periods. Classical music has a lighter, clearer texture than Baroque music, but a more varying use of musical form, which is, in simpler terms, the rhythm and organization of any given piece of music. It is mainly homophonic, using a clear melody line over a subordinate chordal accompaniment, but counterpoint was by no means forgotten, especially in liturgical vocal music and, later in the period, secular instrumental music. It also makes use of style galant which emphasized light elegance in place of the Baroque's dignified seriousness and impressive grandeur. Variety and contrast within a piece became more pronounced than before and the orchestra increased in size, range, and power. The harpsichord was replaced as the main keyboard instrument by the piano. Unlike the harpsichord, which plucks strings with quills, pianos strike the strings with leather-covered hammers when the keys are pressed, which enables the performer to play louder or softer and play with more expression; in contrast, the force with which a performer plays the harpsichord keys does not change the sound. Instrumental music was considered important by Classical period composers. The main kinds of instrumental music were the sonata, trio, string quartet, quintet, symphony and the solo concerto, which featured a virtuoso solo performer playing a solo work for violin, piano, flute, or another instrument, accompanied by an orchestra. Vocal music, such as songs for a singer and piano, choral works, and opera were also important during this period. The best-known composers from this period are Joseph Haydn, Wolfgang Amadeus Mozart, Ludwig van Beethoven, and Franz Schubert; other names in this period include: Carl Philipp Emanuel Bach, Johann Christian Bach, Luigi Boccherini, Domenico Cimarosa, Joseph Martin Kraus, Muzio Clementi, Christoph Willibald Gluck, Carl Ditters von Dittersdorf, André Grétry, Pierre-Alexandre Monsigny, Leopold Mozart, Michael Haydn, Giovanni Paisiello, Johann Baptist Wanhal, François-André Danican Philidor, Niccolò Piccinni, Antonio Salieri, Etienne Nicolas Mehul, Georg Christoph Wagenseil, Georg Matthias Monn, Johann Gottlieb Graun, Carl Heinrich Graun, Franz Benda, Georg Anton Benda, Johann Georg Albrechtsberger, Mauro Giuliani, Christian Cannabich and the Chevalier de Saint-Georges. Beethoven is regarded either as a Romantic composer or a Classical period composer who was part of the transition to the Romantic era. Schubert is also a transitional figure, as were Johann Nepomuk Hummel, Luigi Cherubini, Gaspare Spontini, Gioachino Rossini, Carl Maria von Weber, John Field, Jan Ladislav Dussek and Niccolò Paganini. The period is sometimes referred to as the era of Viennese Classicism, since Gluck, Haydn, Salieri, Mozart, Beethoven, and Schubert all worked in Vienna. | 2001-11-01T13:48:49Z | 2023-12-18T22:42:06Z | [
"Template:Portal bar",
"Template:Short description",
"Template:ISBN",
"Template:Classical period (music)",
"Template:Concert music",
"Template:Classicism",
"Template:Fact",
"Template:More citations needed section",
"Template:Reflist",
"Template:More citations needed",
"Template:Clarify",
"Template:More footnotes",
"Template:Use dmy dates",
"Template:History of Western art music",
"Template:Lang-de",
"Template:Cite Grove",
"Template:Prone to spam",
"Template:IMSLP",
"Template:Music topics",
"Template:See also",
"Template:Unreferenced section",
"Template:Music",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Classical_period_(music) |
5,295 | Character encoding | Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using digital computers. The numerical values that make up a character encoding are known as "code points" and collectively comprise a "code space", a "code page", or a "character map".
Early character codes associated with the optical or electrical telegraph could only represent a subset of the characters used in written languages, sometimes restricted to upper case letters, numerals and some punctuation only. The low cost of digital representation of data in modern computer systems allows more elaborate character codes (such as Unicode) which represent most of the characters used in many written languages. Character encoding using internationally accepted standards permits worldwide interchange of text in electronic form.
The history of character codes illustrates the evolving need for machine-mediated character-based symbolic information over a distance, using once-novel electrical means. The earliest codes were based upon manual and hand-written encoding and cyphering systems, such as Bacon's cipher, Braille, international maritime signal flags, and the 4-digit encoding of Chinese characters for a Chinese telegraph code (Hans Schjellerup, 1869). With the adoption of electrical and electro-mechanical techniques these earliest codes were adapted to the new capabilities and limitations of the early machines. The earliest well-known electrically transmitted character code, Morse code, introduced in the 1840s, used a system of four "symbols" (short signal, long signal, short space, long space) to generate codes of variable length. Though some commercial use of Morse code was via machinery, it was often used as a manual code, generated by hand on a telegraph key and decipherable by ear, and persists in amateur radio and aeronautical use. Most codes are of fixed per-character length or variable-length sequences of fixed-length codes (e.g. Unicode).
Common examples of character encoding systems include Morse code, the Baudot code, the American Standard Code for Information Interchange (ASCII) and Unicode. Unicode, a well-defined and extensible encoding system, has supplanted most earlier character encodings, but the path of code development to the present is fairly well known.
The Baudot code, a five-bit encoding, was created by Émile Baudot in 1870, patented in 1874, modified by Donald Murray in 1901, and standardized by CCITT as International Telegraph Alphabet No. 2 (ITA2) in 1930. The name baudot has been erroneously applied to ITA2 and its many variants. ITA2 suffered from many shortcomings and was often improved by many equipment manufacturers, sometimes creating compatibility issues. In 1959 the U.S. military defined its Fieldata code, a six-or seven-bit code, introduced by the U.S. Army Signal Corps. While Fieldata addressed many of the then-modern issues (e.g. letter and digit codes arranged for machine collation), it fell short of its goals and was short-lived. In 1963 the first ASCII code was released (X3.4-1963) by the ASCII committee (which contained at least one member of the Fieldata committee, W. F. Leubbert), which addressed most of the shortcomings of Fieldata, using a simpler code. Many of the changes were subtle, such as collatable character sets within certain numeric ranges. ASCII63 was a success, widely adopted by industry, and with the follow-up issue of the 1967 ASCII code (which added lower-case letters and fixed some "control code" issues) ASCII67 was adopted fairly widely. ASCII67's American-centric nature was somewhat addressed in the European ECMA-6 standard.
Herman Hollerith invented punch card data encoding in the late 19th century to analyze census data. Initially, each hole position represented a different data element, but later, numeric information was encoded by numbering the lower rows 0 to 9, with a punch in a column representing its row number. Later, alphabetic data was encoded by allowing more than one punch per column. Electromechanical tabulating machines represented data internally by the timing of pulses relative to the motion of the cards through the machine. When IBM went to electronic processing, starting with the IBM 603 Electronic Multiplier, it used a variety of binary encoding schemes that were tied to the punch card code.
IBM's Binary Coded Decimal (BCD) was a six-bit encoding scheme used by IBM as early as 1953 in its 702 and 704 computers, and in its later 7000 Series and 1400 series, as well as in associated peripherals. Since the punched card code then in use only allowed digits, upper-case English letters and a few special characters, six bits were sufficient. BCD extended existing simple four-bit numeric encoding to include alphabetic and special characters, mapping it easily to punch-card encoding which was already in widespread use. IBM's codes were used primarily with IBM equipment; other computer vendors of the era had their own character codes, often six-bit, but usually had the ability to read tapes produced on IBM equipment. BCD was the precursor of IBM's Extended Binary-Coded Decimal Interchange Code (usually abbreviated as EBCDIC), an eight-bit encoding scheme developed in 1963 for the IBM System/360 that featured a larger character set, including lower case letters.
In trying to develop universally interchangeable character encodings, researchers in the 1980s faced the dilemma that, on the one hand, it seemed necessary to add more bits to accommodate additional characters, but on the other hand, for the users of the relatively small character set of the Latin alphabet (who still constituted the majority of computer users), those additional bits were a colossal waste of then-scarce and expensive computing resources (as they would always be zeroed out for such users). In 1985, the average personal computer user's hard disk drive could store only about 10 megabytes, and it cost approximately US$250 on the wholesale market (and much higher if purchased separately at retail), so it was very important at the time to make every bit count.
The compromise solution that was eventually found and developed into Unicode was to break the assumption (dating back to telegraph codes) that each character should always directly correspond to a particular sequence of bits. Instead, characters would first be mapped to a universal intermediate representation in the form of abstract numbers called code points. Code points would then be represented in a variety of ways and with various default numbers of bits per character (code units) depending on context. To encode code points higher than the length of the code unit, such as above 256 for eight-bit units, the solution was to implement variable-length encodings where an escape sequence would signal that subsequent bits should be parsed as a higher code point.
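As a minimal sketch of this idea (an illustration added here, not taken from any specification text), the following Python snippet encodes a single code point into UTF-8, where the bit pattern of the leading byte signals how many continuation bytes follow, and checks the result against Python's built-in codec. Surrogate values and out-of-range inputs are ignored for brevity.

    def utf8_encode_codepoint(cp: int) -> bytes:
        """Encode one code point as UTF-8 (simplified sketch; no error checking)."""
        if cp < 0x80:        # 1 byte:  0xxxxxxx (identical to ASCII)
            return bytes([cp])
        elif cp < 0x800:     # 2 bytes: 110xxxxx 10xxxxxx
            return bytes([0xC0 | (cp >> 6), 0x80 | (cp & 0x3F)])
        elif cp < 0x10000:   # 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
            return bytes([0xE0 | (cp >> 12), 0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)])
        else:                # 4 bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
            return bytes([0xF0 | (cp >> 18), 0x80 | ((cp >> 12) & 0x3F),
                          0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)])

    for ch in "A", "é", "€", "𐐀":
        assert utf8_encode_codepoint(ord(ch)) == ch.encode("utf-8")
        print(f"U+{ord(ch):04X} -> {ch.encode('utf-8').hex(' ')}")

Here the high bits of the first byte play the role of the escape sequence: a byte starting with 0, 110, 1110 or 11110 tells the decoder how many code units belong to the current code point.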
Informally, the terms "character encoding", "character map", "character set" and "code page" are often used interchangeably. Historically, the same standard would specify a repertoire of characters and how they were to be encoded into a stream of code units — usually with a single character per code unit. However, due to the emergence of more sophisticated character encodings, the distinction between these terms has become important.
"Code page" is a historical name for a coded character set.
Originally, a code page referred to a specific page number in the IBM standard character set manual, which would define a particular character encoding. Other vendors, including Microsoft, SAP, and Oracle Corporation, also published their own sets of code pages; the most well-known code page suites are "Windows" (based on Windows-1252) and "IBM"/"DOS" (based on code page 437).
Despite no longer referring to specific page numbers in a standard, many character encodings are still referred to by their code page number; likewise, the term "code page" is often still used to refer to character encodings in general.
The term "code page" is not used in Unix or Linux, where "charmap" is preferred, usually in the larger context of locales. IBM's Character Data Representation Architecture (CDRA) designates entities with coded character set identifiers (CCSIDs), each of which is variously called a "charset", "character set", "code page", or "CHARMAP".
The code unit size is equivalent to the bit measurement for the particular encoding:
A code point is represented by a sequence of code units. The mapping is defined by the encoding. Thus, the number of code units required to represent a code point depends on the encoding:
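As an illustrative check (the specific character U+10400 is chosen arbitrarily), Python's codecs show how the number of code units needed for one code point varies with the encoding:

    ch = "\U00010400"  # a supplementary character, outside the 16-bit range

    print(len(ch.encode("utf-8")))           # 4 eight-bit code units
    print(len(ch.encode("utf-16-le")) // 2)  # 2 sixteen-bit code units (a surrogate pair)
    print(len(ch.encode("utf-32-le")) // 4)  # 1 thirty-two-bit code unit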
Exactly what constitutes a character varies between character encodings.
For example, for letters with diacritics, there are two distinct approaches that can be taken to encode them: they can be encoded either as a single unified character (known as a precomposed character), or as separate characters that combine into a single glyph. The former simplifies the text handling system, but the latter allows any letter/diacritic combination to be used in text. Ligatures pose similar problems.
Exactly how to handle glyph variants is a choice that must be made when constructing a particular character encoding. Some writing systems, such as Arabic and Hebrew, need to accommodate things like graphemes that are joined in different ways in different contexts, but represent the same semantic character.
Unicode and its parallel standard, the ISO/IEC 10646 Universal Character Set, together constitute a unified standard for character encoding. Rather than mapping characters directly to bytes, Unicode separately defines a coded character set that maps characters to unique natural numbers (code points), how those code points are mapped to a series of fixed-size natural numbers (code units), and finally how those units are encoded as a stream of octets (bytes). The purpose of this decomposition is to establish a universal set of characters that can be encoded in a variety of ways. To describe this model precisely, Unicode uses its own set of terminology to describe its process:
An abstract character repertoire (ACR) is the full set of abstract characters that a system supports. Unicode has an open repertoire, meaning that new characters will be added to the repertoire over time.
A coded character set (CCS) is a function that maps characters to code points (each code point represents one character). For example, in a given repertoire, the capital letter "A" in the Latin alphabet might be represented by the code point 65, the character "B" by 66, and so on. Multiple coded character sets may share the same character repertoire; for example ISO/IEC 8859-1 and IBM code pages 037 and 500 all cover the same repertoire but map them to different code points.
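In Python, for example, the built-in ord and chr functions expose this mapping for the Unicode coded character set (a small added illustration, not tied to any particular legacy code page):

    print(ord("A"))  # 65, the code point assigned to LATIN CAPITAL LETTER A
    print(ord("B"))  # 66
    print(chr(66))   # the inverse mapping, from code point 66 back to the character B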
A character encoding form (CEF) is the mapping of code points to code units to facilitate storage in a system that represents numbers as bit sequences of fixed length (i.e. practically any computer system). For example, a system that stores numeric information in 16-bit units can only directly represent code points 0 to 65,535 in each unit, but larger code points (say, 65,536 to 1.4 million) could be represented by using multiple 16-bit units. This correspondence is defined by a CEF.
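A sketch of one such form is UTF-16, which represents code points above U+FFFF as a pair of 16-bit units (a surrogate pair). The arithmetic below is the standard UTF-16 mapping, written out for illustration and checked against Python's codec; input validation is omitted.

    def utf16_code_units(cp):
        """Map one code point to its UTF-16 code units (illustrative sketch)."""
        if cp < 0x10000:
            return [cp]                  # BMP code points fit in a single 16-bit unit
        v = cp - 0x10000                 # 20 bits remain
        return [0xD800 | (v >> 10),      # high surrogate carries the top 10 bits
                0xDC00 | (v & 0x3FF)]    # low surrogate carries the bottom 10 bits

    units = utf16_code_units(0x10400)
    print([hex(u) for u in units])       # ['0xd801', '0xdc00']
    assert "\U00010400".encode("utf-16-be") == b"".join(u.to_bytes(2, "big") for u in units)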
A character encoding scheme (CES) is the mapping of code units to a sequence of octets to facilitate storage on an octet-based file system or transmission over an octet-based network. Simple character encoding schemes include UTF-8, UTF-16BE, UTF-32BE, UTF-16LE, and UTF-32LE; compound character encoding schemes, such as UTF-16, UTF-32 and ISO/IEC 2022, switch between several simple schemes by using a byte order mark or escape sequences; compressing schemes try to minimize the number of bytes used per code unit (such as SCSU and BOCU).
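For instance (an added illustration using Python's codecs), the same code units can be serialized to octets in either byte order, with or without a byte order mark:

    text = "A€"  # U+0041 and U+20AC, both in the Basic Multilingual Plane

    print(text.encode("utf-16-be").hex(" "))  # 00 41 20 ac  (big-endian octets)
    print(text.encode("utf-16-le").hex(" "))  # 41 00 ac 20  (little-endian octets)
    print(text.encode("utf-16").hex(" "))     # a BOM, then the platform's native order,
                                              # e.g. ff fe 41 00 ac 20 on little-endian machines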
Although UTF-32BE and UTF-32LE are simpler CESes, most systems working with Unicode use either UTF-8, which is backward compatible with fixed-length ASCII and maps Unicode code points to variable-length sequences of octets, or UTF-16BE, which is backward compatible with fixed-length UCS-2BE and maps Unicode code points to variable-length sequences of 16-bit words. See comparison of Unicode encodings for a detailed discussion.
Finally, there may be a higher-level protocol which supplies additional information to select the particular variant of a Unicode character, particularly where there are regional variants that have been 'unified' in Unicode as the same character. An example is the XML attribute xml:lang.
The Unicode model uses the term "character map" for other systems which directly assign a sequence of characters to a sequence of bytes, covering all of the CCS, CEF and CES layers.
In Unicode, a character can be referred to as 'U+' followed by its codepoint value in hexadecimal. The range of valid code points (the codespace) for the Unicode standard is U+0000 to U+10FFFF, inclusive, divided in 17 planes, identified by the numbers 0 to 16. Characters in the range U+0000 to U+FFFF are in plane 0, called the Basic Multilingual Plane (BMP). This plane contains most commonly-used characters. Characters in the range U+10000 to U+10FFFF in the other planes are called supplementary characters.
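Since each plane holds 65,536 code points, the plane number of a code point is its value divided by 65,536, i.e. a right shift by 16 bits; a short illustrative calculation:

    for cp in (0x0041, 0xFFFD, 0x10400, 0x10FFFF):
        print(f"U+{cp:04X} is in plane {cp >> 16}")
    # U+0041 and U+FFFD lie in plane 0 (the BMP); the other two are supplementary characters.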
The following table shows examples of code point values:
Consider a string of the letters "ab̲c𐐀"—that is, a string containing a Unicode combining character (U+0332 ̲ ) as well as a supplementary character (U+10400 𐐀 ). This string has several Unicode representations which are logically equivalent, yet each is suited to a different set of circumstances or range of requirements:
Note in particular that 𐐀 is represented with either one 32-bit value (UTF-32), two 16-bit values (UTF-16), or four 8-bit values (UTF-8). Although each of those forms uses the same total number of bits (32) to represent the glyph, it is not obvious how the actual numeric byte values are related.
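This can be made concrete with a short Python check (an added illustration using the same sample string):

    s = "ab\u0332c\U00010400"   # 'b' followed by COMBINING LOW LINE, then U+10400

    for name in ("utf-32-be", "utf-16-be", "utf-8"):
        data = s.encode(name)
        print(f"{name}: {len(data)} bytes -> {data.hex(' ')}")
    # The final character always occupies 32 bits, but as one 32-bit unit,
    # two 16-bit units (a surrogate pair), or four 8-bit units respectively.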
As a result of having many character encoding methods in use (and the need for backward compatibility with archived data), many computer programs have been developed to translate data between character encoding schemes, a process known as transcoding. Some of these are cited below.
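In most modern languages, transcoding amounts to decoding from the source encoding and re-encoding to the target; a minimal Python sketch follows (the legacy encoding cp1252 is just an example stand-in):

    legacy = "Schjellerup naïve café".encode("cp1252")  # stand-in for bytes from an old system

    text = legacy.decode("cp1252")   # octets -> abstract characters (code points)
    utf8 = text.encode("utf-8")      # code points -> octets in the target encoding
    print(utf8.decode("utf-8"))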
Cross-platform:
Windows: | [
{
"paragraph_id": 0,
"text": "Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using digital computers. The numerical values that make up a character encoding are known as \"code points\" and collectively comprise a \"code space\", a \"code page\", or a \"character map\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "Early character codes associated with the optical or electrical telegraph could only represent a subset of the characters used in written languages, sometimes restricted to upper case letters, numerals and some punctuation only. The low cost of digital representation of data in modern computer systems allows more elaborate character codes (such as Unicode) which represent most of the characters used in many written languages. Character encoding using internationally accepted standards permits worldwide interchange of text in electronic form.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The history of character codes illustrates the evolving need for machine-mediated character-based symbolic information over a distance, using once-novel electrical means. The earliest codes were based upon manual and hand-written encoding and cyphering systems, such as Bacon's cipher, Braille, international maritime signal flags, and the 4-digit encoding of Chinese characters for a Chinese telegraph code (Hans Schjellerup, 1869). With the adoption of electrical and electro-mechanical techniques these earliest codes were adapted to the new capabilities and limitations of the early machines. The earliest well-known electrically transmitted character code, Morse code, introduced in the 1840s, used a system of four \"symbols\" (short signal, long signal, short space, long space) to generate codes of variable length. Though some commercial use of Morse code was via machinery, it was often used as a manual code, generated by hand on a telegraph key and decipherable by ear, and persists in amateur radio and aeronautical use. Most codes are of fixed per-character length or variable-length sequences of fixed-length codes (e.g. Unicode).",
"title": "History"
},
{
"paragraph_id": 3,
"text": "Common examples of character encoding systems include Morse code, the Baudot code, the American Standard Code for Information Interchange (ASCII) and Unicode. Unicode, a well-defined and extensible encoding system, has supplanted most earlier character encodings, but the path of code development to the present is fairly well known.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The Baudot code, a five-bit encoding, was created by Émile Baudot in 1870, patented in 1874, modified by Donald Murray in 1901, and standardized by CCITT as International Telegraph Alphabet No. 2 (ITA2) in 1930. The name baudot has been erroneously applied to ITA2 and its many variants. ITA2 suffered from many shortcomings and was often improved by many equipment manufacturers, sometimes creating compatibility issues. In 1959 the U.S. military defined its Fieldata code, a six-or seven-bit code, introduced by the U.S. Army Signal Corps. While Fieldata addressed many of the then-modern issues (e.g. letter and digit codes arranged for machine collation), it fell short of its goals and was short-lived. In 1963 the first ASCII code was released (X3.4-1963) by the ASCII committee (which contained at least one member of the Fieldata committee, W. F. Leubbert), which addressed most of the shortcomings of Fieldata, using a simpler code. Many of the changes were subtle, such as collatable character sets within certain numeric ranges. ASCII63 was a success, widely adopted by industry, and with the follow-up issue of the 1967 ASCII code (which added lower-case letters and fixed some \"control code\" issues) ASCII67 was adopted fairly widely. ASCII67's American-centric nature was somewhat addressed in the European ECMA-6 standard.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Herman Hollerith invented punch card data encoding in the late 19th century to analyze census data. Initially, each hole position represented a different data element, but later, numeric information was encoded by numbering the lower rows 0 to 9, with a punch in a column representing its row number. Later alphabetic data was encoded by allowing more than one punch per column. Electromechanical tabulating machines represented date internally by the timing of pulses relative to the motion of the cards through the machine. When IBM went to electronic processing, starting with the IBM 603 Electronic Multiplier, it used a variety of binary encoding schemes that were tied to the punch card code.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "IBM's Binary Coded Decimal (BCD) was a six-bit encoding scheme used by IBM as early as 1953 in its 702 and 704 computers, and in its later 7000 Series and 1400 series, as well as in associated peripherals. Since the punched card code then in use only allowed digits, upper-case English letters and a few special characters, six bits were sufficient. BCD extended existing simple four-bit numeric encoding to include alphabetic and special characters, mapping it easily to punch-card encoding which was already in widespread use. IBMs codes were used primarily with IBM equipment; other computer vendors of the era had their own character codes, often six-bit, but usually had the ability to read tapes produced on IBM equipment. BCD was the precursor of IBM's Extended Binary-Coded Decimal Interchange Code (usually abbreviated as EBCDIC), an eight-bit encoding scheme developed in 1963 for the IBM System/360 that featured a larger character set, including lower case letters.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In trying to develop universally interchangeable character encodings, researchers in the 1980s faced the dilemma that, on the one hand, it seemed necessary to add more bits to accommodate additional characters, but on the other hand, for the users of the relatively small character set of the Latin alphabet (who still constituted the majority of computer users), those additional bits were a colossal waste of then-scarce and expensive computing resources (as they would always be zeroed out for such users). In 1985, the average personal computer user's hard disk drive could store only about 10 megabytes, and it cost approximately US$250 on the wholesale market (and much higher if purchased separately at retail), so it was very important at the time to make every bit count.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The compromise solution that was eventually found and developed into Unicode was to break the assumption (dating back to telegraph codes) that each character should always directly correspond to a particular sequence of bits. Instead, characters would first be mapped to a universal intermediate representation in the form of abstract numbers called code points. Code points would then be represented in a variety of ways and with various default numbers of bits per character (code units) depending on context. To encode code points higher than the length of the code unit, such as above 256 for eight-bit units, the solution was to implement variable-length encodings where an escape sequence would signal that subsequent bits should be parsed as a higher code point.",
"title": "History"
},
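A minimal Python sketch of this variable-length idea, with UTF-8 as the concrete example (the helper below is illustrative only and skips the validation a real decoder performs):

```python
# In UTF-8 the value of the first code unit signals how many further
# units belong to the same code point, playing the "escape" role above.
def utf8_sequence_length(first_byte: int) -> int:
    if first_byte < 0x80:
        return 1                      # single-unit (ASCII-range) code point
    if first_byte >> 5 == 0b110:
        return 2                      # lead byte 110xxxxx
    if first_byte >> 4 == 0b1110:
        return 3                      # lead byte 1110xxxx
    if first_byte >> 3 == 0b11110:
        return 4                      # lead byte 11110xxx
    raise ValueError("continuation or invalid lead byte")

data = "€".encode("utf-8")            # U+20AC lies above the 8-bit unit range
print(data.hex(" "), "->", utf8_sequence_length(data[0]), "code units")  # e2 82 ac -> 3 code units
```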
{
"paragraph_id": 9,
"text": "Informally, the terms \"character encoding\", \"character map\", \"character set\" and \"code page\" are often used interchangeably. Historically, the same standard would specify a repertoire of characters and how they were to be encoded into a stream of code units — usually with a single character per code unit. However, due to the emergence of more sophisticated character encodings, the distinction between these terms has become important.",
"title": "Terminology"
},
{
"paragraph_id": 10,
"text": "\"Code page\" is a historical name for a coded character set.",
"title": "Terminology"
},
{
"paragraph_id": 11,
"text": "Originally, a code page referred to a specific page number in the IBM standard character set manual, which would define a particular character encoding. Other vendors, including Microsoft, SAP, and Oracle Corporation, also published their own sets of code pages; the most well-known code page suites are \"Windows\" (based on Windows-1252) and \"IBM\"/\"DOS\" (based on code page 437).",
"title": "Terminology"
},
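For illustration, a short Python snippet showing how a single byte value names different characters under the two code page suites mentioned above (0x82 is an arbitrary example byte):

```python
raw = bytes([0x82])
print(raw.decode("cp437"))         # 'é' under IBM/DOS code page 437
print(raw.decode("windows-1252"))  # '‚' (U+201A) under Windows-1252
```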
{
"paragraph_id": 12,
"text": "Despite no longer referring to specific page numbers in a standard, many character encodings are still referred to by their code page number; likewise, the term \"code page\" is often still used to refer to character encodings in general.",
"title": "Terminology"
},
{
"paragraph_id": 13,
"text": "The term \"code page\" is not used in Unix or Linux, where \"charmap\" is preferred, usually in the larger context of locales. IBM's Character Data Representation Architecture (CDRA) designates entities with coded character set identifiers (CCSIDs), each of which is variously called a \"charset\", \"character set\", \"code page\", or \"CHARMAP\".",
"title": "Terminology"
},
{
"paragraph_id": 14,
"text": "The code unit size is equivalent to the bit measurement for the particular encoding:",
"title": "Terminology"
},
{
"paragraph_id": 15,
"text": "A code point is represented by a sequence of code units. The mapping is defined by the encoding. Thus, the number of code units required to represent a code point depends on the encoding:",
"title": "Terminology"
},
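A brief Python illustration of that dependence (the sample characters are arbitrary):

```python
# Code units needed for one code point under three Unicode encoding forms.
for ch in "A", "é", "€", "𐐀":                  # U+0041, U+00E9, U+20AC, U+10400
    print(f"U+{ord(ch):04X}:",
          len(ch.encode("utf-8")), "x 8-bit,",
          len(ch.encode("utf-16-le")) // 2, "x 16-bit,",
          len(ch.encode("utf-32-le")) // 4, "x 32-bit")
```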
{
"paragraph_id": 16,
"text": "Exactly what constitutes a character varies between character encodings.",
"title": "Terminology"
},
{
"paragraph_id": 17,
"text": "For example, for letters with diacritics, there are two distinct approaches that can be taken to encode them: they can be encoded either as a single unified character (known as a precomposed character), or as separate characters that combine into a single glyph. The former simplifies the text handling system, but the latter allows any letter/diacritic combination to be used in text. Ligatures pose similar problems.",
"title": "Terminology"
},
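A short Python illustration of the two approaches, using 'é' as a convenient example and the standard unicodedata module:

```python
import unicodedata

precomposed = "\u00E9"      # 'é' as a single precomposed character
combining   = "e\u0301"     # 'e' followed by COMBINING ACUTE ACCENT
print(precomposed == combining)                                # False: different code points
print(unicodedata.normalize("NFC", combining) == precomposed)  # True: same text once normalized
```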
{
"paragraph_id": 18,
"text": "Exactly how to handle glyph variants is a choice that must be made when constructing a particular character encoding. Some writing systems, such as Arabic and Hebrew, need to accommodate things like graphemes that are joined in different ways in different contexts, but represent the same semantic character.",
"title": "Terminology"
},
{
"paragraph_id": 19,
"text": "Unicode and its parallel standard, the ISO/IEC 10646 Universal Character Set, together constitute a unified standard for character encoding. Rather than mapping characters directly to bytes, Unicode separately defines a coded character set that maps characters to unique natural numbers (code points), how those code points are mapped to a series of fixed-size natural numbers (code units), and finally how those units are encoded as a stream of octets (bytes). The purpose of this decomposition is to establish a universal set of characters that can be encoded in a variety of ways. To describe this model precisely, Unicode uses its own set of terminology to describe its process:",
"title": "Unicode encoding model"
},
{
"paragraph_id": 20,
"text": "An abstract character repertoire (ACR) is the full set of abstract characters that a system supports. Unicode has an open repertoire, meaning that new characters will be added to the repertoire over time.",
"title": "Unicode encoding model"
},
{
"paragraph_id": 21,
"text": "A coded character set (CCS) is a function that maps characters to code points (each code point represents one character). For example, in a given repertoire, the capital letter \"A\" in the Latin alphabet might be represented by the code point 65, the character \"B\" by 66, and so on. Multiple coded character sets may share the same character repertoire; for example ISO/IEC 8859-1 and IBM code pages 037 and 500 all cover the same repertoire but map them to different code points.",
"title": "Unicode encoding model"
},
{
"paragraph_id": 22,
"text": "A character encoding form (CEF) is the mapping of code points to code units to facilitate storage in a system that represents numbers as bit sequences of fixed length (i.e. practically any computer system). For example, a system that stores numeric information in 16-bit units can only directly represent code points 0 to 65,535 in each unit, but larger code points (say, 65,536 to 1.4 million) could be represented by using multiple 16-bit units. This correspondence is defined by a CEF.",
"title": "Unicode encoding model"
},
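The correspondence for UTF-16 can be sketched in a few lines of Python (the helper name is invented for illustration):

```python
# A 16-bit encoding form reaches code points above 65,535 by mapping them
# to a pair of 16-bit units (a surrogate pair).
def utf16_code_units(cp):
    if cp <= 0xFFFF:
        return [cp]                        # one code unit suffices
    cp -= 0x10000                          # 20 bits remain
    return [0xD800 + (cp >> 10),           # high surrogate
            0xDC00 + (cp & 0x3FF)]         # low surrogate

print([hex(u) for u in utf16_code_units(0x10400)])   # ['0xd801', '0xdc00']
```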
{
"paragraph_id": 23,
"text": "A character encoding scheme (CES) is the mapping of code units to a sequence of octets to facilitate storage on an octet-based file system or transmission over an octet-based network. Simple character encoding schemes include UTF-8, UTF-16BE, UTF-32BE, UTF-16LE, and UTF-32LE; compound character encoding schemes, such as UTF-16, UTF-32 and ISO/IEC 2022, switch between several simple schemes by using a byte order mark or escape sequences; compressing schemes try to minimize the number of bytes used per code unit (such as SCSU and BOCU).",
"title": "Unicode encoding model"
},
{
"paragraph_id": 24,
"text": "Although UTF-32BE and UTF-32LE are simpler CESes, most systems working with Unicode use either UTF-8, which is backward compatible with fixed-length ASCII and maps Unicode code points to variable-length sequences of octets, or UTF-16BE, which is backward compatible with fixed-length UCS-2BE and maps Unicode code points to variable-length sequences of 16-bit words. See comparison of Unicode encodings for a detailed discussion.",
"title": "Unicode encoding model"
},
{
"paragraph_id": 25,
"text": "Finally, there may be a higher-level protocol which supplies additional information to select the particular variant of a Unicode character, particularly where there are regional variants that have been 'unified' in Unicode as the same character. An example is the XML attribute xml:lang.",
"title": "Unicode encoding model"
},
{
"paragraph_id": 26,
"text": "The Unicode model uses the term \"character map\" for other systems which directly assign a sequence of characters to a sequence of bytes, covering all of the CCS, CEF and CES layers.",
"title": "Unicode encoding model"
},
{
"paragraph_id": 27,
"text": "In Unicode, a character can be referred to as 'U+' followed by its codepoint value in hexadecimal. The range of valid code points (the codespace) for the Unicode standard is U+0000 to U+10FFFF, inclusive, divided in 17 planes, identified by the numbers 0 to 16. Characters in the range U+0000 to U+FFFF are in plane 0, called the Basic Multilingual Plane (BMP). This plane contains most commonly-used characters. Characters in the range U+10000 to U+10FFFF in the other planes are called supplementary characters.",
"title": "Unicode encoding model"
},
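A small Python check of the plane arithmetic (sample characters arbitrary):

```python
# The plane number is simply the code point value divided by 0x10000.
for ch in "A", "€", "𐐀":
    cp = ord(ch)
    plane = cp >> 16
    print(f"U+{cp:04X} is in plane {plane}", "(BMP)" if plane == 0 else "(supplementary)")
```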
{
"paragraph_id": 28,
"text": "The following table shows examples of code point values:",
"title": "Unicode encoding model"
},
{
"paragraph_id": 29,
"text": "Consider a string of the letters \"ab̲c𐐀\"—that is, a string containing a Unicode combining character (U+0332 ̲ ) as well a supplementary character (U+10400 𐐀 ). This string has several Unicode representations which are logically equivalent, yet while each is suited to a diverse set of circumstances or range of requirements:",
"title": "Unicode encoding model"
},
{
"paragraph_id": 30,
"text": "Note in particular that 𐐀 is represented with either one 32-bit value (UTF-32), two 16-bit values (UTF-16), or four 8-bit values (UTF-8). Although each of those forms uses the same total number of bits (32) to represent the glyph, it is not obvious how the actual numeric byte values are related.",
"title": "Unicode encoding model"
},
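The byte values for U+10400 can be inspected directly; the following Python lines use the big-endian schemes so that no byte order mark appears in the output:

```python
ch = "\U00010400"                         # 𐐀
print(ch.encode("utf-32-be").hex(" "))    # 00 01 04 00  -> one 32-bit value
print(ch.encode("utf-16-be").hex(" "))    # d8 01 dc 00  -> two 16-bit values
print(ch.encode("utf-8").hex(" "))        # f0 90 90 80  -> four 8-bit values
```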
{
"paragraph_id": 31,
"text": "As a result of having many character encoding methods in use (and the need for backward compatibility with archived data), many computer programs have been developed to translate data between character encoding schemes, a process known as transcoding. Some of these are cited below.",
"title": "Transcoding"
},
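In the simplest case, transcoding amounts to decoding with the source encoding and re-encoding with the target one, as in this Python sketch (the sample text and encoding pair are arbitrary):

```python
legacy_bytes = "café".encode("windows-1252")               # stand-in for data from an old file
transcoded   = legacy_bytes.decode("windows-1252").encode("utf-8")
print(legacy_bytes.hex(" "), "->", transcoded.hex(" "))    # 63 61 66 e9 -> 63 61 66 c3 a9
```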
{
"paragraph_id": 32,
"text": "Cross-platform:",
"title": "Transcoding"
},
{
"paragraph_id": 33,
"text": "Windows:",
"title": "Transcoding"
}
] | Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using digital computers. The numerical values that make up a character encoding are known as "code points" and collectively comprise a "code space", a "code page", or a "character map". Early character codes associated with the optical or electrical telegraph could only represent a subset of the characters used in written languages, sometimes restricted to upper case letters, numerals and some punctuation only. The low cost of digital representation of data in modern computer systems allows more elaborate character codes which represent most of the characters used in many written languages. Character encoding using internationally accepted standards permits worldwide interchange of text in electronic form. | 2001-09-20T00:01:17Z | 2023-11-26T21:15:41Z | [
"Template:Code",
"Template:Div col end",
"Template:Reflist",
"Template:Use dmy dates",
"Template:Vague",
"Template:Anchor",
"Template:Cn",
"Template:Cite news",
"Template:Wikiversity",
"Template:Character encoding",
"Template:Short description",
"Template:Main",
"Template:Div col",
"Template:Cite book",
"Template:Unichar",
"Template:Cite web",
"Template:Commons category"
] | https://en.wikipedia.org/wiki/Character_encoding |
5,298 | Control character | In computing and telecommunication, a control character or non-printing character (NPC) is a code point in a character set that does not represent a written character or symbol. They are used as in-band signaling to cause effects other than the addition of a symbol to the text. All other characters are mainly graphic characters, also known as printing characters (or printable characters), except perhaps for "space" characters. In the ASCII standard there are 33 control characters, such as code 7, BEL, which rings a terminal bell.
Procedural signs in Morse code are a form of control character.
A form of control characters was introduced in the 1870 Baudot code: NUL and DEL. The 1901 Murray code added the carriage return (CR) and line feed (LF), and other versions of the Baudot code included other control characters.
The bell character (BEL), which rang a bell to alert operators, was also an early teletype control character.
Some control characters have also been called "format effectors".
There were quite a few control characters defined (33 in ASCII, and the ECMA-48 standard adds 32 more). This was because early terminals had very primitive mechanical or electrical controls that made any kind of state-remembering API quite expensive to implement, thus a different code for each and every function looked like a requirement. It quickly became possible and inexpensive to interpret sequences of codes to perform a function, and device makers found a way to send hundreds of device instructions. Specifically, they used ASCII code 27 (escape), followed by a series of characters called a "control sequence" or "escape sequence". The mechanism was invented by Bob Bemer, the father of ASCII. For example, the sequence of code 27, followed by the printable characters "[2;10H", would cause a Digital Equipment Corporation VT100 terminal to move its cursor to the 10th cell of the 2nd line of the screen. Several standards exist for these sequences, notably ANSI X3.64. But the number of non-standard variations in use is large, especially among printers, where technology has advanced far faster than any standards body can possibly keep up with.
All entries in the ASCII table below code 32 (technically the C0 control code set) are of this kind, including CR and LF used to separate lines of text. The code 127 (DEL) is also a control character. Extended ASCII sets defined by ISO 8859 added the codes 128 through 159 as control characters. This was primarily done so that if the high bit was stripped, it would not change a printing character to a C0 control code. This second set is called the C1 set.
These 65 control codes were carried over to Unicode. Unicode added more characters that could be considered controls, but it makes a distinction between these "Formatting characters" (such as the zero-width non-joiner) and the 65 control characters.
The Extended Binary Coded Decimal Interchange Code (EBCDIC) character set contains 65 control codes, including all of the ASCII control codes plus additional codes which are mostly used to control IBM peripherals.
The control characters in ASCII still in common use include:
Control characters may be described as doing something when the user inputs them, such as code 3 (End-of-Text character, ETX, ^C) to interrupt the running process, or code 4 (End-of-Transmission character, EOT, ^D), used to end text input on Unix or to exit a Unix shell. These uses usually have little to do with their use when they are in text being output.
In Unicode, "Control-characters" are U+0000—U+001F (C0 controls), U+007F (delete), and U+0080—U+009F (C1 controls). Their General Category is "Cc". Formatting codes are distinct, in General Category "Cf". The Cc control characters have no Name in Unicode, but are given labels such as "<control-001A>" instead.
There are a number of techniques to display non-printing characters, which may be illustrated with the bell character in ASCII encoding:
ASCII-based keyboards have a key labelled "Control", "Ctrl", or (rarely) "Cntl" which is used much like a shift key, being pressed in combination with another letter or symbol key. In one implementation, the control key generates the code 64 places below the code for the (generally) uppercase letter it is pressed in combination with (i.e., subtract 0x40 from the ASCII code value of the (generally) uppercase letter). The other implementation is to take the ASCII code produced by the key and bitwise AND it with 0x1F, forcing bits 5 to 7 to zero. For example, pressing "control" and the letter "g" (which is 0110 0111 in binary), produces the code 7 (BELL, 7 in base ten, or 0000 0111 in binary). The NULL character (code 0) is represented by Ctrl-@, "@" being the code immediately before "A" in the ASCII character set. For convenience, some terminals accept Ctrl-Space as an alias for Ctrl-@. In either case, this produces one of the 32 ASCII control codes between 0 and 31. Neither approach works to produce the DEL character because of its special location in the table and its value (code 127); Ctrl-? is sometimes used for this character.
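Both implementations reduce to simple arithmetic on ASCII values, as this Python sketch of the Ctrl-G example shows:

```python
print(ord("G") - 0x40)   # 7: subtract 64 from the upper-case letter's code
print(ord("g") & 0x1F)   # 7: AND with 0x1F clears bits 5 to 7, so letter case does not matter
```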
When the control key is held down, letter keys produce the same control characters regardless of the state of the shift or caps lock keys. In other words, it does not matter whether the key would have produced an upper-case or a lower-case letter. The interpretation of the control key with the space, graphics character, and digit keys (ASCII codes 32 to 63) varies between systems. Some will produce the same character code as if the control key were not held down. Other systems translate these keys into control characters when the control key is held down. The interpretation of the control key with non-ASCII ("foreign") keys also varies between systems.
Control characters are often rendered into a printable form known as caret notation by printing a caret (^) and then the ASCII character that has a value of the control character plus 64. Control characters generated using letter keys are thus displayed with the upper-case form of the letter. For example, ^G represents code 7, which is generated by pressing the G key when the control key is held down.
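A minimal Python sketch of that rendering rule (the function name is invented for illustration):

```python
def caret_notation(code: int) -> str:
    # Codes 0 to 31 are shown as '^' plus the character 64 places above them.
    return "^" + chr(code + 64) if 0 <= code < 32 else chr(code)

print(caret_notation(7))    # ^G  (BEL)
print(caret_notation(27))   # ^[  (ESC)
```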
Keyboards also typically have a few single keys which produce control character codes. For example, the key labelled "Backspace" typically produces code 8, "Tab" code 9, "Enter" or "Return" code 13 (though some keyboards might produce code 10 for "Enter").
Many keyboards include keys that do not correspond to any ASCII printable or control character, for example cursor control arrows and word processing functions. The associated keypresses are communicated to computer programs by one of four methods: appropriating otherwise unused control characters; using some encoding other than ASCII; using multi-character control sequences; or using an additional mechanism outside of generating characters. "Dumb" computer terminals typically use control sequences. Keyboards attached to stand-alone personal computers made in the 1980s typically use one (or both) of the first two methods. Modern computer keyboards generate scancodes that identify the specific physical keys that are pressed; computer software then determines how to handle the keys that are pressed, including any of the four methods described above.
The control characters were designed to fall into a few groups: printing and display control, data structuring, transmission control, and miscellaneous.
Printing control characters were first used to control the physical mechanism of printers, the earliest output device. An early example of this idea was the use of Figures (FIGS) and Letters (LTRS) in Baudot code to shift between two code pages. A later, but still early, example was the out-of-band ASA carriage control characters. Later, control characters were integrated into the stream of data to be printed. The carriage return character (CR), when sent to such a device, causes it to put the character at the edge of the paper at which writing begins (it may, or may not, also move the printing position to the next line). The line feed character (LF/NL) causes the device to put the printing position on the next line. It may (or may not), depending on the device and its configuration, also move the printing position to the start of the next line (which would be the leftmost position for left-to-right scripts, such as the alphabets used for Western languages, and the rightmost position for right-to-left scripts such as the Hebrew and Arabic alphabets). The vertical and horizontal tab characters (VT and HT/TAB) cause the output device to move the printing position to the next tab stop in the direction of reading. The form feed character (FF/NP) starts a new sheet of paper, and may or may not move to the start of the first line. The backspace character (BS) moves the printing position one character space backwards. On printers, including hard-copy terminals, this is most often used so the printer can overprint characters to make other, not normally available, characters. On video terminals and other electronic output devices, there are often software (or hardware) configuration choices that allow a destructive backspace (e.g., a BS, SP, BS sequence), which erases, or a non-destructive one, which does not. The shift in and shift out characters (SI and SO) selected alternate character sets, fonts, underlining, or other printing modes. Escape sequences were often used to do the same thing.
With the advent of computer terminals that did not physically print on paper and so offered more flexibility regarding screen placement, erasure, and so forth, printing control codes were adapted. Form feeds, for example, usually cleared the screen, there being no new paper page to move to. More complex escape sequences were developed to take advantage of the flexibility of the new terminals, and indeed of newer printers. The concept of a control character had always been somewhat limiting, and was extremely so when used with new, much more flexible, hardware. Control sequences (sometimes implemented as escape sequences) could match the new flexibility and power and became the standard method. However, there were, and remain, a large variety of standard sequences to choose from.
The separators (File, Group, Record, and Unit: FS, GS, RS and US) were made to structure data, usually on a tape, in order to simulate punched cards. End of medium (EM) warns that the tape (or other recording medium) is ending. While many systems use CR/LF and TAB for structuring data, it is possible to encounter the separator control characters in data that needs to be structured. The separator control characters are not overloaded; there is no general use of them except to separate data into structured groupings. Their numeric values are contiguous with the space character, which can be considered a member of the group, as a word separator.
For example, the RS separator is used by RFC 7464 (JSON Text Sequences) to encode a sequence of JSON elements. Each sequence item starts with an RS character and ends with a line feed. This allows open-ended JSON sequences to be serialized. It is one of the JSON streaming protocols.
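A minimal writer for that framing, sketched in Python (RFC 7464 also specifies how parsers recover from truncated elements, which is omitted here):

```python
import json

RS = b"\x1e"                                  # the record separator control character

def json_text_sequence(items):
    # Each element is preceded by RS and followed by a line feed, per RFC 7464.
    return b"".join(RS + json.dumps(item).encode("utf-8") + b"\n" for item in items)

print(json_text_sequence([{"a": 1}, {"b": 2}]))
```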
The transmission control characters were intended to structure a data stream, and to manage re-transmission or graceful failure, as needed, in the face of transmission errors.
The start of heading (SOH) character was to mark a non-data section of a data stream—the part of a stream containing addresses and other housekeeping data. The start of text character (STX) marked the end of the header, and the start of the textual part of a stream. The end of text character (ETX) marked the end of the data of a message. A widely used convention is to make the two characters preceding ETX a checksum or CRC for error-detection purposes. The end of transmission block character (ETB) was used to indicate the end of a block of data, where data was divided into such blocks for transmission purposes.
The escape character (ESC) was intended to "quote" the next character; if it was another control character, it would be printed instead of performing the control function. It is almost never used for this purpose today. Various printable characters are used as visible "escape characters", depending on context.
The substitute character (SUB) was intended to request a translation of the next character from a printable character to another value, usually by setting bit 5 to zero. This is handy because some media (such as sheets of paper produced by typewriters) can transmit only printable characters. However, on MS-DOS systems with files opened in text mode, "end of text" or "end of file" is marked by this Ctrl-Z character, instead of the Ctrl-C or Ctrl-D, which are common on other operating systems.
The cancel character (CAN) signaled that the previous element should be discarded. The negative acknowledge character (NAK) is a definite flag for, usually, noting that reception was a problem, and, often, that the current element should be sent again. The acknowledge character (ACK) is normally used as a flag to indicate no problem detected with current element.
When a transmission medium is half duplex (that is, it can transmit in only one direction at a time), there is usually a master station that can transmit at any time, and one or more slave stations that transmit when they have permission. The enquire character (ENQ) is generally used by a master station to ask a slave station to send its next message. A slave station indicates that it has completed its transmission by sending the end of transmission character (EOT).
The device control codes (DC1 to DC4) were originally generic, to be implemented as necessary by each device. However, a universal need in data transmission is to request the sender to stop transmitting when a receiver is temporarily unable to accept any more data. Digital Equipment Corporation invented a convention which used 19 (the device control 3 character (DC3), also known as control-S, or XOFF) to "S"top transmission, and 17 (the device control 1 character (DC1), a.k.a. control-Q, or XON) to start transmission. It has become so widely used that most don't realize it is not part of official ASCII. This technique, however implemented, avoids additional wires in the data cable devoted only to transmission management, which saves money. A sensible protocol for the use of such transmission flow control signals must be used, to avoid potential deadlock conditions, however.
The data link escape character (DLE) was intended to be a signal to the other end of a data link that the following character is a control character such as STX or ETX. For example a packet may be structured in the following way (DLE) <STX> <PAYLOAD> (DLE) <ETX>.
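A Python sketch of that packet layout; the byte-stuffing step, which doubles any DLE inside the payload so it is not mistaken for the closing sequence, is a common convention rather than something stated above:

```python
DLE, STX, ETX = b"\x10", b"\x02", b"\x03"

def frame(payload: bytes) -> bytes:
    # (DLE)(STX) payload (DLE)(ETX), with payload DLE bytes doubled.
    return DLE + STX + payload.replace(DLE, DLE + DLE) + DLE + ETX

print(frame(b"hello").hex(" "))   # 10 02 68 65 6c 6c 6f 10 03
```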
Code 7 (BEL) is intended to cause an audible signal in the receiving terminal.
Many of the ASCII control characters were designed for devices of the time that are not often seen today. For example, code 22, "synchronous idle" (SYN), was originally sent by synchronous modems (which have to send data constantly) when there was no actual data to send. (Modern systems typically use a start bit to announce the beginning of a transmitted word— this is a feature of asynchronous communication. Synchronous communication links were more often seen with mainframes, where they were typically run over corporate leased lines to connect a mainframe to another mainframe or perhaps a minicomputer.)
Code 0 (ASCII code name NUL) is a special case. In paper tape, it is the case when there are no holes. It is convenient to treat this as a fill character with no meaning otherwise. Since the position of a NUL character has no holes punched, it can be replaced with any other character at a later time, so it was typically used to reserve space, either for correcting errors or for inserting information that would be available at a later time or in another place. In computing it is often used for padding in fixed length records and more commonly, to mark the end of a string.
Code 127 (DEL, a.k.a. "rubout") is likewise a special case. Its 7-bit code is all-bits-on in binary, which essentially erased a character cell on a paper tape when overpunched. Paper tape was a common storage medium when ASCII was developed, with a computing history dating back to WWII code breaking equipment at Biuro Szyfrów. Paper tape became obsolete in the 1970s, so this clever aspect of ASCII rarely saw any use after that. Some systems (such as the original Apples) converted it to a backspace. But because its code is in the range occupied by other printable characters, and because it had no official assigned glyph, many computer equipment vendors used it as an additional printable character (often an all-black "box" character useful for erasing text by overprinting with ink).
Non-erasable programmable ROMs are typically implemented as arrays of fusible elements, each representing a bit, which can only be switched one way, usually from one to zero. In such PROMs, the DEL and NUL characters can be used in the same way that they were used on punched tape: one to reserve meaningless fill bytes that can be written later, and the other to convert written bytes to meaningless fill bytes. For PROMs that switch one to zero, the roles of NUL and DEL are reversed; also, DEL will only work with 7-bit characters, which are rarely used today; for 8-bit content, the character code 255, commonly defined as a nonbreaking space character, can be used instead of DEL.
Many file systems do not allow control characters in filenames, as they may have reserved functions. | [
{
"paragraph_id": 0,
"text": "In computing and telecommunication, a control character or non-printing character (NPC) is a code point in a character set that does not represent a written character or symbol. They are used as in-band signaling to cause effects other than the addition of a symbol to the text. All other characters are mainly graphic characters, also known as printing characters (or printable characters), except perhaps for \"space\" characters. In the ASCII standard there are 33 control characters, such as code 7, BEL, which rings a terminal bell.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Procedural signs in Morse code are a form of control character.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "A form of control characters were introduced in the 1870 Baudot code: NUL and DEL. The 1901 Murray code added the carriage return (CR) and line feed (LF), and other versions of the Baudot code included other control characters.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "The bell character (BEL), which rang a bell to alert operators, was also an early teletype control character.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Some control characters have also been called \"format effectors\".",
"title": "History"
},
{
"paragraph_id": 5,
"text": "There were quite a few control characters defined (33 in ASCII, and the ECMA-48 standard adds 32 more). This was because early terminals had very primitive mechanical or electrical controls that made any kind of state-remembering API quite expensive to implement, thus a different code for each and every function looked like a requirement. It quickly became possible and inexpensive to interpret sequences of codes to perform a function, and device makers found a way to send hundreds of device instructions. Specifically, they used ASCII code 2710 (escape), followed by a series of characters called a \"control sequence\" or \"escape sequence\". The mechanism was invented by Bob Bemer, the father of ASCII. For example, the sequence of code 2710, followed by the printable characters \"[2;10H\", would cause a Digital Equipment Corporation VT100 terminal to move its cursor to the 10th cell of the 2nd line of the screen. Several standards exist for these sequences, notably ANSI X3.64. But the number of non-standard variations in use is large, especially among printers, where technology has advanced far faster than any standards body can possibly keep up with.",
"title": "In ASCII"
},
{
"paragraph_id": 6,
"text": "All entries in the ASCII table below code 3210 (technically the C0 control code set) are of this kind, including CR and LF used to separate lines of text. The code 12710 (DEL) is also a control character. Extended ASCII sets defined by ISO 8859 added the codes 12810 through 15910 as control characters. This was primarily done so that if the high bit was stripped, it would not change a printing character to a C0 control code. This second set is called the C1 set.",
"title": "In ASCII"
},
{
"paragraph_id": 7,
"text": "These 65 control codes were carried over to Unicode. Unicode added more characters that could be considered controls, but it makes a distinction between these \"Formatting characters\" (such as the zero-width non-joiner) and the 65 control characters.",
"title": "In ASCII"
},
{
"paragraph_id": 8,
"text": "The Extended Binary Coded Decimal Interchange Code (EBCDIC) character set contains 65 control codes, including all of the ASCII control codes plus additional codes which are mostly used to control IBM peripherals.",
"title": "In ASCII"
},
{
"paragraph_id": 9,
"text": "The control characters in ASCII still in common use include:",
"title": "In ASCII"
},
{
"paragraph_id": 10,
"text": "Control characters may be described as doing something when the user inputs them, such as code 3 (End-of-Text character, ETX, ^C) to interrupt the running process, or code 4 (End-of-Transmission character, EOT, ^D), used to end text input on Unix or to exit a Unix shell. These uses usually have little to do with their use when they are in text being output.",
"title": "In ASCII"
},
{
"paragraph_id": 11,
"text": "In Unicode, \"Control-characters\" are U+0000—U+001F (C0 controls), U+007F (delete), and U+0080—U+009F (C1 controls). Their General Category is \"Cc\". Formatting codes are distinct, in General Category \"Cf\". The Cc control characters have no Name in Unicode, but are given labels such as \"<control-001A>\" instead.",
"title": "In Unicode"
},
{
"paragraph_id": 12,
"text": "There are a number of techniques to display non-printing characters, which may be illustrated with the bell character in ASCII encoding:",
"title": "Display"
},
{
"paragraph_id": 13,
"text": "ASCII-based keyboards have a key labelled \"Control\", \"Ctrl\", or (rarely) \"Cntl\" which is used much like a shift key, being pressed in combination with another letter or symbol key. In one implementation, the control key generates the code 64 places below the code for the (generally) uppercase letter it is pressed in combination with (i.e., subtract 0x40 from ASCII code value of the (generally) uppercase letter). The other implementation is to take the ASCII code produced by the key and bitwise AND it with 0x1F, forcing bits 5 to 7 to zero. For example, pressing \"control\" and the letter \"g\" (which is 0110 0111 in binary), produces the code 7 (BELL, 7 in base ten, or 0000 0111 in binary). The NULL character (code 0) is represented by Ctrl-@, \"@\" being the code immediately before \"A\" in the ASCII character set. For convenience, some terminals accept Ctrl-Space as an alias for Ctrl-@. In either case, this produces one of the 32 ASCII control codes between 0 and 31. Neither approach works to produce the DEL character because of its special location in the table and its value (code 12710), Ctrl-? is sometimes used for this character.",
"title": "How control characters map to keyboards"
},
{
"paragraph_id": 14,
"text": "When the control key is held down, letter keys produce the same control characters regardless of the state of the shift or caps lock keys. In other words, it does not matter whether the key would have produced an upper-case or a lower-case letter. The interpretation of the control key with the space, graphics character, and digit keys (ASCII codes 32 to 63) vary between systems. Some will produce the same character code as if the control key were not held down. Other systems translate these keys into control characters when the control key is held down. The interpretation of the control key with non-ASCII (\"foreign\") keys also varies between systems.",
"title": "How control characters map to keyboards"
},
{
"paragraph_id": 15,
"text": "Control characters are often rendered into a printable form known as caret notation by printing a caret (^) and then the ASCII character that has a value of the control character plus 64. Control characters generated using letter keys are thus displayed with the upper-case form of the letter. For example, ^G represents code 7, which is generated by pressing the G key when the control key is held down.",
"title": "How control characters map to keyboards"
},
{
"paragraph_id": 16,
"text": "Keyboards also typically have a few single keys which produce control character codes. For example, the key labelled \"Backspace\" typically produces code 8, \"Tab\" code 9, \"Enter\" or \"Return\" code 13 (though some keyboards might produce code 10 for \"Enter\").",
"title": "How control characters map to keyboards"
},
{
"paragraph_id": 17,
"text": "Many keyboards include keys that do not correspond to any ASCII printable or control character, for example cursor control arrows and word processing functions. The associated keypresses are communicated to computer programs by one of four methods: appropriating otherwise unused control characters; using some encoding other than ASCII; using multi-character control sequences; or using an additional mechanism outside of generating characters. \"Dumb\" computer terminals typically use control sequences. Keyboards attached to stand-alone personal computers made in the 1980s typically use one (or both) of the first two methods. Modern computer keyboards generate scancodes that identify the specific physical keys that are pressed; computer software then determines how to handle the keys that are pressed, including any of the four methods described above.",
"title": "How control characters map to keyboards"
},
{
"paragraph_id": 18,
"text": "The control characters were designed to fall into a few groups: printing and display control, data structuring, transmission control, and miscellaneous.",
"title": "The design purpose"
},
{
"paragraph_id": 19,
"text": "Printing control characters were first used to control the physical mechanism of printers, the earliest output device. An early example of this idea was the use of Figures (FIGS) and Letters (LTRS) in Baudot code to shift between two code pages. A later, but still early, example was the out-of-band ASA carriage control characters. Later, control characters were integrated into the stream of data to be printed. The carriage return character (CR), when sent to such a device, causes it to put the character at the edge of the paper at which writing begins (it may, or may not, also move the printing position to the next line). The line feed character (LF/NL) causes the device to put the printing position on the next line. It may (or may not), depending on the device and its configuration, also move the printing position to the start of the next line (which would be the leftmost position for left-to-right scripts, such as the alphabets used for Western languages, and the rightmost position for right-to-left scripts such as the Hebrew and Arabic alphabets). The vertical and horizontal tab characters (VT and HT/TAB) cause the output device to move the printing position to the next tab stop in the direction of reading. The form feed character (FF/NP) starts a new sheet of paper, and may or may not move to the start of the first line. The backspace character (BS) moves the printing position one character space backwards. On printers, including hard-copy terminals, this is most often used so the printer can overprint characters to make other, not normally available, characters. On video terminals and other electronic output devices, there are often software (or hardware) configuration choices that allow a destructive backspace (e.g., a BS, SP, BS sequence), which erases, or a non-destructive one, which does not. The shift in and shift out characters (SI and SO) selected alternate character sets, fonts, underlining, or other printing modes. Escape sequences were often used to do the same thing.",
"title": "The design purpose"
},
{
"paragraph_id": 20,
"text": "With the advent of computer terminals that did not physically print on paper and so offered more flexibility regarding screen placement, erasure, and so forth, printing control codes were adapted. Form feeds, for example, usually cleared the screen, there being no new paper page to move to. More complex escape sequences were developed to take advantage of the flexibility of the new terminals, and indeed of newer printers. The concept of a control character had always been somewhat limiting, and was extremely so when used with new, much more flexible, hardware. Control sequences (sometimes implemented as escape sequences) could match the new flexibility and power and became the standard method. However, there were, and remain, a large variety of standard sequences to choose from.",
"title": "The design purpose"
},
{
"paragraph_id": 21,
"text": "The separators (File, Group, Record, and Unit: FS, GS, RS and US) were made to structure data, usually on a tape, in order to simulate punched cards. End of medium (EM) warns that the tape (or other recording medium) is ending. While many systems use CR/LF and TAB for structuring data, it is possible to encounter the separator control characters in data that needs to be structured. The separator control characters are not overloaded; there is no general use of them except to separate data into structured groupings. Their numeric values are contiguous with the space character, which can be considered a member of the group, as a word separator.",
"title": "The design purpose"
},
{
"paragraph_id": 22,
"text": "For example, the RS separator is used by RFC 7464 (JSON Text Sequences) to encode a sequence of JSON elements. Each sequence item starts with a RS character and ends with a line feed. This allows to serialize open-ended JSON sequences. It is one of the JSON streaming protocols.",
"title": "The design purpose"
},
{
"paragraph_id": 23,
"text": "The transmission control characters were intended to structure a data stream, and to manage re-transmission or graceful failure, as needed, in the face of transmission errors.",
"title": "The design purpose"
},
{
"paragraph_id": 24,
"text": "The start of heading (SOH) character was to mark a non-data section of a data stream—the part of a stream containing addresses and other housekeeping data. The start of text character (STX) marked the end of the header, and the start of the textual part of a stream. The end of text character (ETX) marked the end of the data of a message. A widely used convention is to make the two characters preceding ETX a checksum or CRC for error-detection purposes. The end of transmission block character (ETB) was used to indicate the end of a block of data, where data was divided into such blocks for transmission purposes.",
"title": "The design purpose"
},
{
"paragraph_id": 25,
"text": "The escape character (ESC) was intended to \"quote\" the next character, if it was another control character it would print it instead of performing the control function. It is almost never used for this purpose today. Various printable characters are used as visible \"escape characters\", depending on context.",
"title": "The design purpose"
},
{
"paragraph_id": 26,
"text": "The substitute character (SUB) was intended to request a translation of the next character from a printable character to another value, usually by setting bit 5 to zero. This is handy because some media (such as sheets of paper produced by typewriters) can transmit only printable characters. However, on MS-DOS systems with files opened in text mode, \"end of text\" or \"end of file\" is marked by this Ctrl-Z character, instead of the Ctrl-C or Ctrl-D, which are common on other operating systems.",
"title": "The design purpose"
},
{
"paragraph_id": 27,
"text": "The cancel character (CAN) signaled that the previous element should be discarded. The negative acknowledge character (NAK) is a definite flag for, usually, noting that reception was a problem, and, often, that the current element should be sent again. The acknowledge character (ACK) is normally used as a flag to indicate no problem detected with current element.",
"title": "The design purpose"
},
{
"paragraph_id": 28,
"text": "When a transmission medium is half duplex (that is, it can transmit in only one direction at a time), there is usually a master station that can transmit at any time, and one or more slave stations that transmit when they have permission. The enquire character (ENQ) is generally used by a master station to ask a slave station to send its next message. A slave station indicates that it has completed its transmission by sending the end of transmission character (EOT).",
"title": "The design purpose"
},
{
"paragraph_id": 29,
"text": "The device control codes (DC1 to DC4) were originally generic, to be implemented as necessary by each device. However, a universal need in data transmission is to request the sender to stop transmitting when a receiver is temporarily unable to accept any more data. Digital Equipment Corporation invented a convention which used 19 (the device control 3 character (DC3), also known as control-S, or XOFF) to \"S\"top transmission, and 17 (the device control 1 character (DC1), a.k.a. control-Q, or XON) to start transmission. It has become so widely used that most don't realize it is not part of official ASCII. This technique, however implemented, avoids additional wires in the data cable devoted only to transmission management, which saves money. A sensible protocol for the use of such transmission flow control signals must be used, to avoid potential deadlock conditions, however.",
"title": "The design purpose"
},
{
"paragraph_id": 30,
"text": "The data link escape character (DLE) was intended to be a signal to the other end of a data link that the following character is a control character such as STX or ETX. For example a packet may be structured in the following way (DLE) <STX> <PAYLOAD> (DLE) <ETX>.",
"title": "The design purpose"
},
{
"paragraph_id": 31,
"text": "Code 7 (BEL) is intended to cause an audible signal in the receiving terminal.",
"title": "The design purpose"
},
{
"paragraph_id": 32,
"text": "Many of the ASCII control characters were designed for devices of the time that are not often seen today. For example, code 22, \"synchronous idle\" (SYN), was originally sent by synchronous modems (which have to send data constantly) when there was no actual data to send. (Modern systems typically use a start bit to announce the beginning of a transmitted word— this is a feature of asynchronous communication. Synchronous communication links were more often seen with mainframes, where they were typically run over corporate leased lines to connect a mainframe to another mainframe or perhaps a minicomputer.)",
"title": "The design purpose"
},
{
"paragraph_id": 33,
"text": "Code 0 (ASCII code name NUL) is a special case. In paper tape, it is the case when there are no holes. It is convenient to treat this as a fill character with no meaning otherwise. Since the position of a NUL character has no holes punched, it can be replaced with any other character at a later time, so it was typically used to reserve space, either for correcting errors or for inserting information that would be available at a later time or in another place. In computing it is often used for padding in fixed length records and more commonly, to mark the end of a string.",
"title": "The design purpose"
},
{
"paragraph_id": 34,
"text": "Code 127 (DEL, a.k.a. \"rubout\") is likewise a special case. Its 7-bit code is all-bits-on in binary, which essentially erased a character cell on a paper tape when overpunched. Paper tape was a common storage medium when ASCII was developed, with a computing history dating back to WWII code breaking equipment at Biuro Szyfrów. Paper tape became obsolete in the 1970s, so this clever aspect of ASCII rarely saw any use after that. Some systems (such as the original Apples) converted it to a backspace. But because its code is in the range occupied by other printable characters, and because it had no official assigned glyph, many computer equipment vendors used it as an additional printable character (often an all-black \"box\" character useful for erasing text by overprinting with ink).",
"title": "The design purpose"
},
{
"paragraph_id": 35,
"text": "Non-erasable programmable ROMs are typically implemented as arrays of fusible elements, each representing a bit, which can only be switched one way, usually from one to zero. In such PROMs, the DEL and NUL characters can be used in the same way that they were used on punched tape: one to reserve meaningless fill bytes that can be written later, and the other to convert written bytes to meaningless fill bytes. For PROMs that switch one to zero, the roles of NUL and DEL are reversed; also, DEL will only work with 7-bit characters, which are rarely used today; for 8-bit content, the character code 255, commonly defined as a nonbreaking space character, can be used instead of DEL.",
"title": "The design purpose"
},
{
"paragraph_id": 36,
"text": "Many file systems do not allow control characters in filenames, as they may have reserved functions.",
"title": "The design purpose"
}
] | In computing and telecommunication, a control character or non-printing character (NPC) is a code point in a character set that does not represent a written character or symbol. They are used as in-band signaling to cause effects other than the addition of a symbol to the text. All other characters are mainly graphic characters, also known as printing characters, except perhaps for "space" characters. In the ASCII standard there are 33 control characters, such as code 7, BEL, which rings a terminal bell. | 2001-03-21T05:31:18Z | 2023-12-30T18:19:53Z | [
"Template:Redirect",
"Template:Main",
"Template:Unreferenced section",
"Template:Citation",
"Template:More citations needed",
"Template:Reflist",
"Template:Character encodings",
"Template:Authority control",
"Template:Short description",
"Template:Expand section",
"Template:Mono",
"Template:Slink",
"Template:Cite IETF",
"Template:Cite book",
"Template:Distinguish",
"Template:Tt",
"Template:IETF RFC",
"Template:Cite web",
"Template:Wiktionary"
] | https://en.wikipedia.org/wiki/Control_character |
5,299 | Carbon | Carbon (from Latin carbo 'coal') is a chemical element; it has symbol C and atomic number 6. It is nonmetallic and tetravalent—meaning that its atoms are able to form up to four covalent bonds due to its valence shell exhibiting 4 electrons. It belongs to group 14 of the periodic table. Carbon makes up about 0.025 percent of Earth's crust. Three isotopes occur naturally, ¹²C and ¹³C being stable, while ¹⁴C is a radionuclide, decaying with a half-life of about 5,730 years. Carbon is one of the few elements known since antiquity.
Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass after hydrogen, helium, and oxygen. Carbon's abundance, its unique diversity of organic compounds, and its unusual ability to form polymers at the temperatures commonly encountered on Earth, enables this element to serve as a common element of all known life. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen.
The atoms of carbon can bond together in diverse ways, resulting in various allotropes of carbon. Well-known allotropes include graphite, diamond, amorphous carbon, and fullerenes. The physical properties of carbon vary widely with the allotropic form. For example, graphite is opaque and black, while diamond is highly transparent. Graphite is soft enough to form a streak on paper (hence its name, from the Greek verb "γράφειν" which means "to write"), while diamond is the hardest naturally occurring material known. Graphite is a good electrical conductor while diamond has a low electrical conductivity. Under normal conditions, diamond, carbon nanotubes, and graphene have the highest thermal conductivities of all known materials. All carbon allotropes are solids under normal conditions, with graphite being the most thermodynamically stable form at standard temperature and pressure. They are chemically resistant and require high temperature to react even with oxygen.
The most common oxidation state of carbon in inorganic compounds is +4, while +2 is found in carbon monoxide and transition metal carbonyl complexes. The largest sources of inorganic carbon are limestones, dolomites and carbon dioxide, but significant quantities occur in organic deposits of coal, peat, oil, and methane clathrates. Carbon forms a vast number of compounds, with about two hundred million having been described and indexed; and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions.
The allotropes of carbon include graphite, one of the softest known substances, and diamond, the hardest naturally occurring substance. It bonds readily with other small atoms, including other carbon atoms, and is capable of forming multiple stable covalent bonds with suitable multivalent atoms. Carbon is a component element in the large majority of all chemical compounds, with about two hundred million examples having been described in the published chemical literature. Carbon also has the highest sublimation point of all elements. At atmospheric pressure it has no melting point, as its triple point is at 10.8 ± 0.2 megapascals (106.6 ± 2.0 atm; 1,566 ± 29 psi) and 4,600 ± 300 K (4,330 ± 300 °C; 7,820 ± 540 °F), so it sublimes at about 3,900 K (3,630 °C; 6,560 °F). Graphite is much more reactive than diamond at standard conditions, despite being more thermodynamically stable, as its delocalised pi system is much more vulnerable to attack. For example, graphite can be oxidised by hot concentrated nitric acid at standard conditions to mellitic acid, C6(CO2H)6, which preserves the hexagonal units of graphite while breaking up the larger structure.
Carbon sublimes in a carbon arc, which has a temperature of about 5800 K (5,530 °C or 9,980 °F). Thus, irrespective of its allotropic form, carbon remains solid at higher temperatures than the highest-melting-point metals such as tungsten or rhenium. Although thermodynamically prone to oxidation, carbon resists oxidation more effectively than elements such as iron and copper, which are weaker reducing agents at room temperature.
Carbon is the sixth element, with a ground-state electron configuration of 1s²2s²2p², of which the four outer electrons are valence electrons. Its first four ionisation energies, 1086.5, 2352.6, 4620.5 and 6222.7 kJ/mol, are much higher than those of the heavier group-14 elements. The electronegativity of carbon is 2.5, significantly higher than the heavier group-14 elements (1.8–1.9), but close to most of the nearby nonmetals, as well as some of the second- and third-row transition metals. Carbon's covalent radii are normally taken as 77.2 pm (C−C), 66.7 pm (C=C) and 60.3 pm (C≡C), although these may vary depending on coordination number and what the carbon is bonded to. In general, covalent radius decreases with lower coordination number and higher bond order.
Carbon-based compounds form the basis of all known life on Earth, and the carbon-nitrogen-oxygen cycle provides a small portion of the energy produced by the Sun, and most of the energy in larger stars (e.g. Sirius). Although it forms an extraordinary variety of compounds, most forms of carbon are comparatively unreactive under normal conditions. At standard temperature and pressure, it resists all but the strongest oxidizers. It does not react with sulfuric acid, hydrochloric acid, chlorine or any alkalis. At elevated temperatures, carbon reacts with oxygen to form carbon oxides and will rob oxygen from metal oxides to leave the elemental metal. This exothermic reaction is used in the iron and steel industry to smelt iron and to control the carbon content of steel:
Carbon reacts with sulfur to form carbon disulfide, and it reacts with steam in the coal-gas reaction used in coal gasification:
Carbon combines with some metals at high temperatures to form metallic carbides, such as the iron carbide cementite in steel and tungsten carbide, widely used as an abrasive and for making hard tips for cutting tools.
The system of carbon allotropes spans a range of extremes, from graphite, one of the softest known substances, to diamond, the hardest naturally occurring material.
Atomic carbon is a very short-lived species and, therefore, carbon is stabilized in various multi-atomic structures with diverse molecular configurations called allotropes. The three relatively well-known allotropes of carbon are amorphous carbon, graphite, and diamond. Once considered exotic, fullerenes are nowadays commonly synthesized and used in research; they include buckyballs, carbon nanotubes, carbon nanobuds and nanofibers. Several other exotic allotropes have also been discovered, such as lonsdaleite, glassy carbon, carbon nanofoam and linear acetylenic carbon (carbyne).
Graphene is a two-dimensional sheet of carbon with the atoms arranged in a hexagonal lattice. As of 2009, graphene appears to be the strongest material ever tested. The process of separating it from graphite will require some further technological development before it is economical for industrial processes. If successful, graphene could be used in the construction of a space elevator. It could also be used to store hydrogen safely for use in hydrogen-based car engines.
The amorphous form is an assortment of carbon atoms in a non-crystalline, irregular, glassy state, not held in a crystalline macrostructure. It is present as a powder, and is the main constituent of substances such as charcoal, lampblack (soot), and activated carbon. At normal pressures, carbon takes the form of graphite, in which each atom is bonded trigonally to three others in a plane composed of fused hexagonal rings, just like those in aromatic hydrocarbons. The resulting network is 2-dimensional, and the resulting flat sheets are stacked and loosely bonded through weak van der Waals forces. This gives graphite its softness and its cleaving properties (the sheets slip easily past one another). Because of the delocalization of one of the outer electrons of each atom to form a π-cloud, graphite conducts electricity, but only in the plane of each covalently bonded sheet. This results in a lower bulk electrical conductivity for carbon than for most metals. The delocalization also accounts for the energetic stability of graphite over diamond at room temperature.
At very high pressures, carbon forms the more compact allotrope, diamond, having nearly twice the density of graphite. Here, each atom is bonded tetrahedrally to four others, forming a 3-dimensional network of puckered six-membered rings of atoms. Diamond has the same cubic structure as silicon and germanium, and because of the strength of the carbon-carbon bonds, it is the hardest naturally occurring substance measured by resistance to scratching. Contrary to the popular belief that "diamonds are forever", they are thermodynamically unstable (ΔfG°(diamond, 298 K) = 2.9 kJ/mol) under normal conditions (298 K, 10⁵ Pa) and should theoretically transform into graphite. But due to a high activation energy barrier, the transition into graphite is so slow at normal temperature that it is unnoticeable. However, at very high temperatures diamond will turn into graphite, and diamonds can burn up in a house fire. The bottom left corner of the phase diagram for carbon has not been scrutinized experimentally. Although a computational study employing density functional theory methods reached the conclusion that as T → 0 K and p → 0 Pa, diamond becomes more stable than graphite by approximately 1.1 kJ/mol, more recent and definitive experimental and computational studies show that graphite is more stable than diamond for T < 400 K, without applied pressure, by 2.7 kJ/mol at T = 0 K and 3.2 kJ/mol at T = 298.15 K. Under some conditions, carbon crystallizes as lonsdaleite, a hexagonal crystal lattice with all atoms covalently bonded and properties similar to those of diamond.
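To put the quoted free-energy difference in perspective, the sketch below computes the equilibrium constant exp(−ΔG°/RT) for the graphite → diamond conversion at 298 K from the 2.9 kJ/mol figure above. It is a back-of-the-envelope illustration of why the instability is thermodynamically mild, not a phase-diagram calculation.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def equilibrium_constant(delta_g_j_per_mol: float, temperature_k: float) -> float:
    """K = exp(-dG/(R*T)) for a process with standard free-energy change dG."""
    return math.exp(-delta_g_j_per_mol / (R * temperature_k))

dG_graphite_to_diamond = 2.9e3  # J/mol at 298 K, as quoted above
K = equilibrium_constant(dG_graphite_to_diamond, 298.15)
print(f"K(graphite -> diamond, 298 K) ~ {K:.2f}")
# K ~ 0.31: diamond is only mildly disfavoured; it persists because the
# large activation barrier makes conversion to graphite immeasurably slow.
```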
Fullerenes are synthetic crystalline formations with a graphite-like structure, but in place of purely flat hexagonal cells, some of the cells from which fullerenes are formed may be pentagons, nonplanar hexagons, or even heptagons of carbon atoms. The sheets are thus warped into spheres, ellipses, or cylinders. The properties of fullerenes (split into buckyballs, buckytubes, and nanobuds) have not yet been fully analyzed and represent an intense area of research in nanomaterials. The names fullerene and buckyball are given after Richard Buckminster Fuller, popularizer of geodesic domes, which resemble the structure of fullerenes. The buckyballs are fairly large molecules formed completely of carbon bonded trigonally, forming spheroids (the best-known and simplest is the soccerball-shaped C60 buckminsterfullerene). Carbon nanotubes (buckytubes) are structurally similar to buckyballs, except that each atom is bonded trigonally in a curved sheet that forms a hollow cylinder. Nanobuds were first reported in 2007 and are hybrid buckytube/buckyball materials (buckyballs are covalently bonded to the outer wall of a nanotube) that combine the properties of both in a single structure.
Of the other discovered allotropes, carbon nanofoam is a ferromagnetic allotrope discovered in 1997. It consists of a low-density cluster-assembly of carbon atoms strung together in a loose three-dimensional web, in which the atoms are bonded trigonally in six- and seven-membered rings. It is among the lightest known solids, with a density of about 2 kg/m³. Similarly, glassy carbon contains a high proportion of closed porosity, but contrary to normal graphite, the graphitic layers are not stacked like pages in a book, but have a more random arrangement. Linear acetylenic carbon has the chemical structure −(C≡C)n−. Carbon in this modification is linear with sp orbital hybridization, and is a polymer with alternating single and triple bonds. This carbyne is of considerable interest to nanotechnology as its Young's modulus is 40 times that of the hardest known material – diamond.
In 2015, a team at North Carolina State University announced the development of another allotrope, which they dubbed Q-carbon, created by a high-energy, low-duration laser pulse on amorphous carbon dust. Q-carbon is reported to exhibit ferromagnetism, fluorescence, and a hardness superior to that of diamond.
In the vapor phase, some of the carbon is in the form of highly reactive diatomic carbon (dicarbon, C2). When excited, this gas glows green.
Carbon is the fourth most abundant chemical element in the observable universe by mass after hydrogen, helium, and oxygen. Carbon is abundant in the Sun, stars, comets, and in the atmospheres of most planets. Some meteorites contain microscopic diamonds that were formed when the Solar System was still a protoplanetary disk. Microscopic diamonds may also be formed by the intense pressure and high temperature at the sites of meteorite impacts.
In 2014 NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. More than 20% of the carbon in the universe may be associated with PAHs, complex compounds of carbon and hydrogen without oxygen. These compounds figure in the PAH world hypothesis where they are hypothesized to have a role in abiogenesis and formation of life. PAHs seem to have been formed "a couple of billion years" after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
It has been estimated that the solid earth as a whole contains 730 ppm of carbon, with 2000 ppm in the core and 120 ppm in the combined mantle and crust. Since the mass of the earth is 5.972×10²⁴ kg, this would imply 4360 million gigatonnes of carbon. This is much more than the amount of carbon in the oceans or atmosphere (below).
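The 4360-million-gigatonne figure follows directly from the quoted abundance and Earth's mass; the sketch below simply repeats that arithmetic.

```python
EARTH_MASS_KG = 5.972e24  # mass of the Earth
CARBON_PPM = 730          # bulk carbon abundance by mass, as quoted above

carbon_kg = EARTH_MASS_KG * CARBON_PPM * 1e-6
carbon_gt = carbon_kg / 1e12  # 1 gigatonne = 1e12 kg
print(f"Bulk carbon in the solid Earth: ~{carbon_gt:.2e} Gt")
# ~4.36e9 Gt, i.e. roughly 4360 million gigatonnes, as stated in the text.
```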
In combination with oxygen in carbon dioxide, carbon is found in the Earth's atmosphere (approximately 900 gigatonnes of carbon — each ppm corresponds to 2.13 Gt) and dissolved in all water bodies (approximately 36,000 gigatonnes of carbon). Carbon in the biosphere has been estimated at 550 gigatonnes but with a large uncertainty, due mostly to a huge uncertainty in the amount of terrestrial deep subsurface bacteria. Hydrocarbons (such as coal, petroleum, and natural gas) contain carbon as well. Coal "reserves" (not "resources") amount to around 900 gigatonnes with perhaps 18,000 Gt of resources. Oil reserves are around 150 gigatonnes. Proven sources of natural gas are about 175×10¹² cubic metres (containing about 105 gigatonnes of carbon), but studies estimate another 900×10¹² cubic metres of "unconventional" deposits such as shale gas, representing about 540 gigatonnes of carbon.
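The conversion of 2.13 Gt of carbon per ppm of CO2 can be reproduced from the total mass and mean molar mass of the atmosphere; those two inputs are standard textbook values and are not taken from this article.

```python
ATMOSPHERE_MASS_KG = 5.148e18  # total mass of Earth's atmosphere (textbook value)
M_AIR_G_PER_MOL = 28.966       # mean molar mass of dry air (textbook value)
M_C_G_PER_MOL = 12.011         # molar mass of carbon

moles_of_air = ATMOSPHERE_MASS_KG * 1e3 / M_AIR_G_PER_MOL
# 1 ppm of CO2 (by mole fraction) carries one carbon atom per molecule:
carbon_kg_per_ppm = moles_of_air * 1e-6 * M_C_G_PER_MOL / 1e3
print(f"~{carbon_kg_per_ppm / 1e12:.2f} Gt of carbon per ppm of CO2")
# ~2.13 Gt per ppm, matching the conversion factor quoted above.
```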
Carbon is also found in methane hydrates in polar regions and under the seas. Various estimates put this carbon at 500, 2,500, or 3,000 Gt.
According to one source, in the period from 1751 to 2008 about 347 gigatonnes of carbon were released as carbon dioxide to the atmosphere from burning of fossil fuels. Another source puts the amount added to the atmosphere for the period since 1750 at 879 Gt, and the total going to the atmosphere, sea, and land (such as peat bogs) at almost 2,000 Gt.
Carbon is a constituent (about 12% by mass) of the very large masses of carbonate rock (limestone, dolomite, marble, and others). Coal is very rich in carbon (anthracite contains 92–98%) and is the largest commercial source of mineral carbon, accounting for 4,000 gigatonnes or 80% of fossil fuel.
As for individual carbon allotropes, graphite is found in large quantities in the United States (mostly in New York and Texas), Russia, Mexico, Greenland, and India. Natural diamonds occur in the rock kimberlite, found in ancient volcanic "necks", or "pipes". Most diamond deposits are in Africa, notably in South Africa, Namibia, Botswana, the Republic of the Congo, and Sierra Leone. Diamond deposits have also been found in Arkansas, Canada, the Russian Arctic, Brazil, and in Northern and Western Australia. Diamonds are now also being recovered from the ocean floor off the Cape of Good Hope. Diamonds are found naturally, but about 30% of all industrial diamonds used in the U.S. are now manufactured.
Carbon-14 is formed in upper layers of the troposphere and the stratosphere at altitudes of 9–15 km by a reaction triggered by cosmic rays. Thermal neutrons are produced that collide with the nuclei of nitrogen-14, forming carbon-14 and a proton. As such, roughly one part per trillion (about 1.5×10⁻¹²) of atmospheric carbon dioxide contains carbon-14.
Carbon-rich asteroids are relatively abundant in the outer parts of the asteroid belt in the Solar System. These asteroids have not yet been directly sampled by scientists. They are potential targets for hypothetical space-based carbon mining, which may become possible in the future but is not yet technologically feasible.
Isotopes of carbon are atomic nuclei that contain six protons plus a number of neutrons (varying from 2 to 16). Carbon has two stable, naturally occurring isotopes. The isotope carbon-12 (¹²C) forms 98.93% of the carbon on Earth, while carbon-13 (¹³C) forms the remaining 1.07%. The concentration of ¹²C is further increased in biological materials because biochemical reactions discriminate against ¹³C. In 1961, the International Union of Pure and Applied Chemistry (IUPAC) adopted the isotope carbon-12 as the basis for atomic weights. Identification of carbon in nuclear magnetic resonance (NMR) experiments is done with the isotope ¹³C.
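As a quick consistency check, the sketch below combines the abundances quoted above with standard isotopic masses (the masses themselves are textbook values, not from this article) to recover carbon's standard atomic weight.

```python
# (mass in unified atomic mass units, natural abundance) for the stable isotopes
isotopes = {
    "12C": (12.000000, 0.9893),
    "13C": (13.003355, 0.0107),
}

atomic_weight = sum(mass * abundance for mass, abundance in isotopes.values())
print(f"Standard atomic weight of carbon ~ {atomic_weight:.4f} u")
# ~12.011 u, in line with the tabulated value.
```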
Carbon-14 (¹⁴C) is a naturally occurring radioisotope, created in the upper atmosphere (lower stratosphere and upper troposphere) by interaction of nitrogen with cosmic rays. It is found in trace amounts on Earth of 1 part per trillion (0.0000000001%) or more, mostly confined to the atmosphere and superficial deposits, particularly of peat and other organic materials. This isotope decays by 0.158 MeV β⁻ emission. Because of its relatively short half-life of 5730 years, ¹⁴C is virtually absent in ancient rocks. The amount of ¹⁴C in the atmosphere and in living organisms is almost constant, but decreases predictably in their bodies after death. This principle is used in radiocarbon dating, invented in 1949, which has been used extensively to determine the age of carbonaceous materials with ages up to about 40,000 years.
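The dating principle described above reduces to simple exponential decay with a 5730-year half-life; the hedged sketch below turns a measured ¹⁴C fraction (relative to the living level) into an age. The function name and the assumption of an undisturbed, well-mixed sample are illustrative simplifications.

```python
import math

HALF_LIFE_YEARS = 5730.0  # half-life of carbon-14, as quoted above

def radiocarbon_age(remaining_fraction: float) -> float:
    """Age in years of a sample whose 14C content is `remaining_fraction`
    of the level found in living organisms, assuming exponential decay."""
    return -HALF_LIFE_YEARS / math.log(2) * math.log(remaining_fraction)

for frac in (0.5, 0.25, 0.01):
    print(f"{frac:.0%} of original 14C -> ~{radiocarbon_age(frac):,.0f} years")
# 50% -> ~5,730 years; 25% -> ~11,460 years; 1% -> ~38,000 years,
# consistent with the ~40,000-year practical limit mentioned above.
```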
There are 15 known isotopes of carbon and the shortest-lived of these is ⁸C, which decays through proton emission and alpha decay and has a half-life of 1.98739 × 10⁻²¹ s. The exotic ¹⁹C exhibits a nuclear halo, which means its radius is appreciably larger than would be expected if the nucleus were a sphere of constant density.
Formation of the carbon atomic nucleus occurs within a giant or supergiant star through the triple-alpha process. This requires a nearly simultaneous collision of three alpha particles (helium nuclei), as the products of further nuclear fusion reactions of helium with hydrogen or another helium nucleus produce lithium-5 and beryllium-8 respectively, both of which are highly unstable and decay almost instantly back into smaller nuclei. The triple-alpha process requires temperatures over 100 megakelvins and helium concentrations that the rapid expansion and cooling of the early universe prohibited; therefore, no significant carbon was created during the Big Bang.
According to current physical cosmology theory, carbon is formed in the interiors of stars on the horizontal branch. When massive stars die as supernovae, the carbon is scattered into space as dust. This dust becomes component material for the formation of next-generation star systems with accreted planets. The Solar System is one such star system with an abundance of carbon, enabling the existence of life as we know it. Most scholars hold that all the carbon in the Solar System and the Milky Way comes from dying stars.
The CNO cycle is an additional hydrogen fusion mechanism that powers stars, wherein carbon operates as a catalyst.
Rotational transitions of various isotopic forms of carbon monoxide (for example, ¹²CO, ¹³CO, and C¹⁸O) are detectable in the submillimeter wavelength range, and are used in the study of newly forming stars in molecular clouds.
Under terrestrial conditions, conversion of one element to another is very rare. Therefore, the amount of carbon on Earth is effectively constant. Thus, processes that use carbon must obtain it from somewhere and dispose of it somewhere else. The paths of carbon in the environment form the carbon cycle. For example, photosynthetic plants draw carbon dioxide from the atmosphere (or seawater) and build it into biomass, as in the Calvin cycle, a process of carbon fixation. Some of this biomass is eaten by animals, while some carbon is exhaled by animals as carbon dioxide. The carbon cycle is considerably more complicated than this short loop; for example, some carbon dioxide is dissolved in the oceans; if bacteria do not consume it, dead plant or animal matter may become petroleum or coal, which releases carbon when burned.
Carbon can form very long chains of interconnecting carbon–carbon bonds, a property that is called catenation. Carbon-carbon bonds are strong and stable. Through catenation, carbon forms a countless number of compounds. A tally of unique compounds shows that more contain carbon than do not. A similar claim can be made for hydrogen because most organic compounds contain hydrogen chemically bonded to carbon or another common element like oxygen or nitrogen.
The simplest form of an organic molecule is the hydrocarbon—a large family of organic molecules that are composed of hydrogen atoms bonded to a chain of carbon atoms. A hydrocarbon backbone can be substituted by other atoms, known as heteroatoms. Common heteroatoms that appear in organic compounds include oxygen, nitrogen, sulfur, phosphorus, and the nonradioactive halogens, as well as the metals lithium and magnesium. Organic compounds containing bonds to metal are known as organometallic compounds (see below). Certain groupings of atoms, often including heteroatoms, recur in large numbers of organic compounds. These collections, known as functional groups, confer common reactivity patterns and allow for the systematic study and categorization of organic compounds. Chain length, shape and functional groups all affect the properties of organic molecules.
In most stable compounds of carbon (and nearly all stable organic compounds), carbon obeys the octet rule and is tetravalent, meaning that a carbon atom forms a total of four covalent bonds (which may include double and triple bonds). Exceptions include a small number of stabilized carbocations (three bonds, positive charge), radicals (three bonds, neutral), carbanions (three bonds, negative charge) and carbenes (two bonds, neutral), although these species are much more likely to be encountered as unstable, reactive intermediates.
Carbon occurs in all known organic life and is the basis of organic chemistry. When united with hydrogen, it forms various hydrocarbons that are important to industry as refrigerants, lubricants, solvents, as chemical feedstock for the manufacture of plastics and petrochemicals, and as fossil fuels.
When combined with oxygen and hydrogen, carbon can form many groups of important biological compounds including sugars, lignans, chitins, alcohols, fats, aromatic esters, carotenoids and terpenes. With nitrogen it forms alkaloids, and with the addition of sulfur it also forms antibiotics, amino acids, and rubber products. With the addition of phosphorus to these other elements, it forms DNA and RNA, the chemical-code carriers of life, and adenosine triphosphate (ATP), the most important energy-transfer molecule in all living cells. Norman Horowitz, head of the Mariner and Viking missions to Mars (1965-1976), considered that the unique characteristics of carbon made it unlikely that any other element could replace carbon, even on another planet, to generate the biochemistry necessary for life.
Carbon-containing compounds that are associated with minerals or that do not contain bonds to other carbon atoms, halogens, or hydrogen are commonly treated separately from classical organic compounds; the definition is not rigid, and the classification of some compounds can vary from author to author (see reference articles above). Among these are the simple oxides of carbon. The most prominent oxide is carbon dioxide (CO2). This was once the principal constituent of the paleoatmosphere, but is a minor component of the Earth's atmosphere today. Dissolved in water, it forms carbonic acid (H2CO3), but, like most compounds with multiple single-bonded oxygens on a single carbon atom, it is unstable. Through this intermediate, though, resonance-stabilized carbonate ions are produced. Some important minerals are carbonates, notably calcite. Carbon disulfide (CS2) is similar. Nevertheless, due to its physical properties and its association with organic synthesis, carbon disulfide is sometimes classified as an organic solvent.
The other common oxide is carbon monoxide (CO). It is formed by incomplete combustion, and is a colorless, odorless gas. The molecules each contain a triple bond and are fairly polar, resulting in a tendency to bind permanently to hemoglobin molecules, displacing oxygen, which has a lower binding affinity. Cyanide (CN⁻) has a similar structure, but behaves much like a halide ion (pseudohalogen). For example, it can form the cyanogen molecule ((CN)2), similar to diatomic halides. Likewise, the heavier analog of cyanide, cyaphide (CP⁻), is also considered inorganic, though most simple derivatives are highly unstable. Other uncommon oxides are carbon suboxide (C3O2), the unstable dicarbon monoxide (C2O), carbon trioxide (CO3), cyclopentanepentone (C5O5), cyclohexanehexone (C6O6), and mellitic anhydride (C12O9). However, mellitic anhydride is the triple acyl anhydride of mellitic acid; moreover, it contains a benzene ring. Thus, many chemists consider it to be organic.
With reactive metals, such as tungsten, carbon forms either carbides (C⁴⁻) or acetylides (C₂²⁻), giving alloys with high melting points. These anions are also associated with methane and acetylene, both very weak acids. With an electronegativity of 2.5, carbon prefers to form covalent bonds. A few carbides are covalent lattices, like carborundum (SiC), which resembles diamond. Nevertheless, even the most polar and salt-like of carbides are not completely ionic compounds.
Organometallic compounds by definition contain at least one carbon-metal covalent bond. A wide range of such compounds exist; major classes include simple alkyl-metal compounds (for example, tetraethyllead), η²-alkene compounds (for example, Zeise's salt), and η³-allyl compounds (for example, allylpalladium chloride dimer); metallocenes containing cyclopentadienyl ligands (for example, ferrocene); and transition metal carbene complexes. Many metal carbonyls and metal cyanides exist (for example, tetracarbonylnickel and potassium ferricyanide); some workers consider metal carbonyl and cyanide complexes without other carbon ligands to be purely inorganic, and not organometallic. However, most organometallic chemists consider metal complexes with any carbon ligand, even 'inorganic carbon' (e.g., carbonyls, cyanides, and certain types of carbides and acetylides) to be organometallic in nature. Metal complexes containing organic ligands without a carbon-metal covalent bond (e.g., metal carboxylates) are termed metalorganic compounds.
While carbon is understood to strongly prefer formation of four covalent bonds, other exotic bonding schemes are also known. Carboranes are highly stable dodecahedral derivatives of the [B12H12]²⁻ unit, with one BH replaced with a CH. Thus, the carbon is bonded to five boron atoms and one hydrogen atom. The cation [(Ph3PAu)6C]²⁺ contains an octahedral carbon bound to six phosphine-gold fragments. This phenomenon has been attributed to the aurophilicity of the gold ligands, which provide additional stabilization of an otherwise labile species. In nature, the iron-molybdenum cofactor (FeMoco) responsible for microbial nitrogen fixation likewise has an octahedral carbon center (formally a carbide, C(-IV)) bonded to six iron atoms. In 2016, it was confirmed that, in line with earlier theoretical predictions, the hexamethylbenzene dication contains a carbon atom with six bonds. More specifically, the dication could be described structurally by the formulation [MeC(η⁵-C5Me5)]²⁺, making it an "organic metallocene" in which a MeC fragment is bonded to an η⁵-C5Me5 fragment through all five of the carbons of the ring.
In the cases above, each of the bonds to carbon contains fewer than two formal electron pairs. Thus, the formal electron count of these species does not exceed an octet. This makes them hypercoordinate but not hypervalent. Even in cases of alleged 10-C-5 species (that is, a carbon with five ligands and a formal electron count of ten), as reported by Akiba and co-workers, electronic structure calculations conclude that the electron population around carbon is still less than eight, as is true for other compounds featuring four-electron three-center bonding.
The English name carbon comes from the Latin carbo for coal and charcoal, whence also comes the French charbon, meaning charcoal. In German, Dutch and Danish, the names for carbon are Kohlenstoff, koolstof, and kulstof respectively, all literally meaning coal-substance.
Carbon was discovered in prehistory and was known in the forms of soot and charcoal to the earliest human civilizations. Diamonds were known probably as early as 2500 BCE in China, while carbon in the form of charcoal was made around Roman times by the same chemistry as it is today, by heating wood in a pyramid covered with clay to exclude air.
In 1722, René Antoine Ferchault de Réaumur demonstrated that iron was transformed into steel through the absorption of some substance, now known to be carbon. In 1772, Antoine Lavoisier showed that diamonds are a form of carbon: he burned samples of charcoal and diamond and found that neither produced any water and that both released the same amount of carbon dioxide per gram. In 1779, Carl Wilhelm Scheele showed that graphite, which had been thought of as a form of lead, was instead identical with charcoal but with a small admixture of iron, and that it gave "aerial acid" (his name for carbon dioxide) when oxidized with nitric acid. In 1786, the French scientists Claude Louis Berthollet, Gaspard Monge and C. A. Vandermonde confirmed that graphite was mostly carbon by oxidizing it in oxygen in much the same way Lavoisier had done with diamond. Some iron again was left, which the French scientists thought was necessary to the graphite structure. In their publication they proposed the name carbone (Latin carbonum) for the element in graphite which was given off as a gas upon burning graphite. Antoine Lavoisier then listed carbon as an element in his 1789 textbook.
A new allotrope of carbon, fullerene, discovered in 1985, includes nanostructured forms such as buckyballs and nanotubes. Their discoverers – Robert Curl, Harold Kroto, and Richard Smalley – received the Nobel Prize in Chemistry in 1996. The resulting renewed interest in new forms led to the discovery of further exotic allotropes, including glassy carbon, and the realization that "amorphous carbon" is not strictly amorphous.
Commercially viable natural deposits of graphite occur in many parts of the world, but the most important sources economically are in China, India, Brazil, and North Korea. Graphite deposits are of metamorphic origin, found in association with quartz, mica, and feldspars in schists, gneisses, and metamorphosed sandstones and limestone as lenses or veins, sometimes of a metre or more in thickness. Deposits of graphite in Borrowdale, Cumberland, England were at first of sufficient size and purity that, until the 19th century, pencils were made by sawing blocks of natural graphite into strips before encasing the strips in wood. Today, smaller deposits of graphite are obtained by crushing the parent rock and floating the lighter graphite out on water.
There are three types of natural graphite—amorphous, flake or crystalline flake, and vein or lump. Amorphous graphite is the lowest quality and most abundant. In contrast with scientific usage, in industry "amorphous" refers to very small crystal size rather than a complete lack of crystal structure. Amorphous is used for lower value graphite products and is the lowest priced graphite. Large amorphous graphite deposits are found in China, Europe, Mexico and the United States. Flake graphite is less common and of higher quality than amorphous; it occurs as separate plates that crystallized in metamorphic rock. Flake graphite can be four times the price of amorphous. Good quality flakes can be processed into expandable graphite for many uses, such as flame retardants. The foremost deposits are found in Austria, Brazil, Canada, China, Germany and Madagascar. Vein or lump graphite is the rarest, most valuable, and highest quality type of natural graphite. It occurs in veins along intrusive contacts in solid lumps, and it is only commercially mined in Sri Lanka.
According to the USGS, world production of natural graphite was 1.1 million tonnes in 2010, to which China contributed 800,000 t, India 130,000 t, Brazil 76,000 t, North Korea 30,000 t and Canada 25,000 t. No natural graphite was reported mined in the United States, but 118,000 t of synthetic graphite with an estimated value of $998 million was produced in 2009.
The diamond supply chain is controlled by a limited number of powerful businesses, and is also highly concentrated in a small number of locations around the world (see figure).
Only a very small fraction of the diamond ore consists of actual diamonds. The ore is crushed, with care taken to prevent larger diamonds from being destroyed in the process, and the particles are then sorted by density. Today, diamonds are located in the diamond-rich density fraction with the help of X-ray fluorescence, after which the final sorting steps are done by hand. Before the use of X-rays became commonplace, the separation was done with grease belts; diamonds have a stronger tendency to stick to grease than the other minerals in the ore.
Historically diamonds were known to be found only in alluvial deposits in southern India. India led the world in diamond production from the time of their discovery in approximately the 9th century BC to the mid-18th century AD, but the commercial potential of these sources had been exhausted by the late 18th century and at that time India was eclipsed by Brazil where the first non-Indian diamonds were found in 1725.
Diamond production of primary deposits (kimberlites and lamproites) only started in the 1870s after the discovery of the diamond fields in South Africa. Production has increased over time and an accumulated total of over 4.5 billion carats have been mined since that date. Most commercially viable diamond deposits were in Russia, Botswana, Australia and the Democratic Republic of Congo. By 2005, Russia produced almost one-fifth of the global diamond output (mostly in Yakutia territory; for example, Mir pipe and Udachnaya pipe) but the Argyle mine in Australia became the single largest source, producing 14 million carats in 2018. New finds, the Canadian mines at Diavik and Ekati, are expected to become even more valuable owing to their production of gem quality stones.
In the United States, diamonds have been found in Arkansas, Colorado, and Montana. In 2004, a startling discovery of a microscopic diamond in the United States led to the January 2008 bulk-sampling of kimberlite pipes in a remote part of Montana.
Carbon is essential to all known living systems, and without it life as we know it could not exist (see alternative biochemistry). The major economic use of carbon other than food and wood is in the form of hydrocarbons, most notably the fossil fuel methane gas and crude oil (petroleum). Crude oil is distilled in refineries by the petrochemical industry to produce gasoline, kerosene, and other products. Cellulose is a natural, carbon-containing polymer produced by plants in the form of wood, cotton, linen, and hemp. Cellulose is used primarily for maintaining structure in plants. Commercially valuable carbon polymers of animal origin include wool, cashmere, and silk. Plastics are made from synthetic carbon polymers, often with oxygen and nitrogen atoms included at regular intervals in the main polymer chain. The raw materials for many of these synthetic substances come from crude oil.
The uses of carbon and its compounds are extremely varied. It can form alloys with iron, of which the most common is carbon steel. Graphite is combined with clays to form the 'lead' used in pencils for writing and drawing. It is also used as a lubricant and a pigment, as a molding material in glass manufacture, in electrodes for dry batteries and in electroplating and electroforming, in brushes for electric motors, and as a neutron moderator in nuclear reactors.
Charcoal is used as a drawing material in artwork, barbecue grilling, iron smelting, and in many other applications. Wood, coal and oil are used as fuel for production of energy and heating. Gem quality diamond is used in jewelry, and industrial diamonds are used in drilling, cutting and polishing tools for machining metals and stone. Plastics are made from fossil hydrocarbons, and carbon fiber, made by pyrolysis of synthetic polyester fibers, is used to reinforce plastics to form advanced, lightweight composite materials.
Carbon fiber is made by pyrolysis of extruded and stretched filaments of polyacrylonitrile (PAN) and other organic substances. The crystallographic structure and mechanical properties of the fiber depend on the type of starting material, and on the subsequent processing. Carbon fibers made from PAN have a structure resembling narrow filaments of graphite, but thermal processing may re-order the structure into a continuous rolled sheet. The result is fibers with higher specific tensile strength than steel.
Carbon black is used as the black pigment in printing ink, artist's oil paint and water colours, carbon paper, automotive finishes, India ink and laser printer toner. Carbon black is also used as a filler in rubber products such as tyres and in plastic compounds. Activated charcoal is used as an absorbent and adsorbent in filter material in applications as diverse as gas masks, water purification, and kitchen extractor hoods, and in medicine to absorb toxins, poisons, or gases from the digestive system. Carbon is used in chemical reduction at high temperatures. Coke is used to reduce iron ore into iron (smelting). Case hardening of steel is achieved by heating finished steel components in carbon powder. Carbides of silicon, tungsten, boron, and titanium are among the hardest known materials, and are used as abrasives in cutting and grinding tools. Carbon compounds make up most of the materials used in clothing, such as natural and synthetic textiles and leather, and almost all of the interior surfaces in the built environment other than glass, stone, drywall and metal.
The diamond industry falls into two categories: one dealing with gem-grade diamonds and the other, with industrial-grade diamonds. While a large trade in both types of diamonds exists, the two markets function dramatically differently.
Unlike precious metals such as gold or platinum, gem diamonds do not trade as a commodity: there is a substantial mark-up in the sale of diamonds, and there is not a very active market for resale of diamonds.
Industrial diamonds are valued mostly for their hardness and heat conductivity, with the gemological qualities of clarity and color being mostly irrelevant. About 80% of mined diamonds (equal to about 100 million carats or 20 tonnes annually) are unsuitable for use as gemstones and relegated for industrial use (known as bort). Synthetic diamonds, invented in the 1950s, found almost immediate industrial applications; 3 billion carats (600 tonnes) of synthetic diamond is produced annually.
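Since a metric carat is 0.2 g, the carat figures above convert directly to the quoted tonnages; the sketch below just checks that arithmetic.

```python
CARAT_IN_GRAMS = 0.2  # one metric carat

def carats_to_tonnes(carats: float) -> float:
    """Convert a mass in carats to tonnes (1 tonne = 1e6 g)."""
    return carats * CARAT_IN_GRAMS / 1e6

print(carats_to_tonnes(100e6))  # ~20 t: annual industrial-grade natural diamonds
print(carats_to_tonnes(3e9))    # ~600 t: annual synthetic diamond production
```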
The dominant industrial use of diamond is in cutting, drilling, grinding, and polishing. Most of these applications do not require large diamonds; in fact, most diamonds that are of gem quality except for their small size are used industrially. Diamonds are embedded in drill tips or saw blades, or ground into a powder for use in grinding and polishing applications. Specialized applications include use in laboratories as containment for high-pressure experiments (see diamond anvil cell), high-performance bearings, and limited use in specialized windows. With the continuing advances in the production of synthetic diamonds, new applications are becoming feasible. Garnering much excitement is the possible use of diamond as a semiconductor suitable for microchips, and because of its exceptional heat conductance property, as a heat sink in electronics.
Pure carbon has extremely low toxicity to humans and can be handled safely in the form of graphite or charcoal. It is resistant to dissolution or chemical attack, even in the acidic contents of the digestive tract. Consequently, once it enters into the body's tissues it is likely to remain there indefinitely. Carbon black was probably one of the first pigments to be used for tattooing, and Ötzi the Iceman was found to have carbon tattoos that survived during his life and for 5200 years after his death. Inhalation of coal dust or soot (carbon black) in large quantities can be dangerous, irritating lung tissues and causing the congestive lung disease, coalworker's pneumoconiosis. Diamond dust used as an abrasive can be harmful if ingested or inhaled. Microparticles of carbon are produced in diesel engine exhaust fumes, and may accumulate in the lungs. In these examples, the harm may result from contaminants (e.g., organic chemicals, heavy metals) rather than from the carbon itself.
Carbon generally has low toxicity to life on Earth, but carbon nanoparticles are deadly to Drosophila.
Carbon may burn vigorously and brightly in the presence of air at high temperatures. Large accumulations of coal, which have remained inert for hundreds of millions of years in the absence of oxygen, may spontaneously combust when exposed to air in coal mine waste tips, ship cargo holds and coal bunkers, and storage dumps.
In nuclear applications where graphite is used as a neutron moderator, accumulation of Wigner energy followed by a sudden, spontaneous release may occur. Annealing to at least 250 °C can release the energy safely, although in the Windscale fire the procedure went wrong, causing other reactor materials to combust.
The great variety of carbon compounds includes such lethal poisons as tetrodotoxin, the lectin ricin from seeds of the castor oil plant Ricinus communis, cyanide (CN⁻), and carbon monoxide; and such essentials to life as glucose and protein.
"title": "Characteristics"
},
{
"paragraph_id": 37,
"text": "Carbon can form very long chains of interconnecting carbon–carbon bonds, a property that is called catenation. Carbon-carbon bonds are strong and stable. Through catenation, carbon forms a countless number of compounds. A tally of unique compounds shows that more contain carbon than do not. A similar claim can be made for hydrogen because most organic compounds contain hydrogen chemically bonded to carbon or another common element like oxygen or nitrogen.",
"title": "Compounds"
},
{
"paragraph_id": 38,
"text": "The simplest form of an organic molecule is the hydrocarbon—a large family of organic molecules that are composed of hydrogen atoms bonded to a chain of carbon atoms. A hydrocarbon backbone can be substituted by other atoms, known as heteroatoms. Common heteroatoms that appear in organic compounds include oxygen, nitrogen, sulfur, phosphorus, and the nonradioactive halogens, as well as the metals lithium and magnesium. Organic compounds containing bonds to metal are known as organometallic compounds (see below). Certain groupings of atoms, often including heteroatoms, recur in large numbers of organic compounds. These collections, known as functional groups, confer common reactivity patterns and allow for the systematic study and categorization of organic compounds. Chain length, shape and functional groups all affect the properties of organic molecules.",
"title": "Compounds"
},
{
"paragraph_id": 39,
"text": "In most stable compounds of carbon (and nearly all stable organic compounds), carbon obeys the octet rule and is tetravalent, meaning that a carbon atom forms a total of four covalent bonds (which may include double and triple bonds). Exceptions include a small number of stabilized carbocations (three bonds, positive charge), radicals (three bonds, neutral), carbanions (three bonds, negative charge) and carbenes (two bonds, neutral), although these species are much more likely to be encountered as unstable, reactive intermediates.",
"title": "Compounds"
},
{
"paragraph_id": 40,
"text": "Carbon occurs in all known organic life and is the basis of organic chemistry. When united with hydrogen, it forms various hydrocarbons that are important to industry as refrigerants, lubricants, solvents, as chemical feedstock for the manufacture of plastics and petrochemicals, and as fossil fuels.",
"title": "Compounds"
},
{
"paragraph_id": 41,
"text": "When combined with oxygen and hydrogen, carbon can form many groups of important biological compounds including sugars, lignans, chitins, alcohols, fats, aromatic esters, carotenoids and terpenes. With nitrogen it forms alkaloids, and with the addition of sulfur also it forms antibiotics, amino acids, and rubber products. With the addition of phosphorus to these other elements, it forms DNA and RNA, the chemical-code carriers of life, and adenosine triphosphate (ATP), the most important energy-transfer molecule in all living cells. Norman Horowitz, head of the Mariner and Viking missions to Mars (1965-1976), considered that the unique characteristics of carbon made it unlikely that any other element could replace carbon, even on another planet, to generate the biochemistry necessary for life.",
"title": "Compounds"
},
{
"paragraph_id": 42,
"text": "Commonly carbon-containing compounds which are associated with minerals or which do not contain bonds to the other carbon atoms, halogens, or hydrogen, are treated separately from classical organic compounds; the definition is not rigid, and the classification of some compounds can vary from author to author (see reference articles above). Among these are the simple oxides of carbon. The most prominent oxide is carbon dioxide (CO2). This was once the principal constituent of the paleoatmosphere, but is a minor component of the Earth's atmosphere today. Dissolved in water, it forms carbonic acid (H2CO3), but as most compounds with multiple single-bonded oxygens on a single carbon it is unstable. Through this intermediate, though, resonance-stabilized carbonate ions are produced. Some important minerals are carbonates, notably calcite. Carbon disulfide (CS2) is similar. Nevertheless, due to its physical properties and its association with organic synthesis, carbon disulfide is sometimes classified as an organic solvent.",
"title": "Compounds"
},
{
"paragraph_id": 43,
"text": "The other common oxide is carbon monoxide (CO). It is formed by incomplete combustion, and is a colorless, odorless gas. The molecules each contain a triple bond and are fairly polar, resulting in a tendency to bind permanently to hemoglobin molecules, displacing oxygen, which has a lower binding affinity. Cyanide (CN), has a similar structure, but behaves much like a halide ion (pseudohalogen). For example, it can form the nitride cyanogen molecule ((CN)2), similar to diatomic halides. Likewise, the heavier analog of cyanide, cyaphide (CP), is also considered inorganic, though most simple derivatives are highly unstable. Other uncommon oxides are carbon suboxide (C3O2), the unstable dicarbon monoxide (C2O), carbon trioxide (CO3), cyclopentanepentone (C5O5), cyclohexanehexone (C6O6), and mellitic anhydride (C12O9). However, mellitic anhydride is the triple acyl anhydride of mellitic acid; moreover, it contains a benzene ring. Thus, many chemists consider it to be organic.",
"title": "Compounds"
},
{
"paragraph_id": 44,
"text": "With reactive metals, such as tungsten, carbon forms either carbides (C) or acetylides (C2) to form alloys with high melting points. These anions are also associated with methane and acetylene, both very weak acids. With an electronegativity of 2.5, carbon prefers to form covalent bonds. A few carbides are covalent lattices, like carborundum (SiC), which resembles diamond. Nevertheless, even the most polar and salt-like of carbides are not completely ionic compounds.",
"title": "Compounds"
},
{
"paragraph_id": 45,
"text": "Organometallic compounds by definition contain at least one carbon-metal covalent bond. A wide range of such compounds exist; major classes include simple alkyl-metal compounds (for example, tetraethyllead), η-alkene compounds (for example, Zeise's salt), and η-allyl compounds (for example, allylpalladium chloride dimer); metallocenes containing cyclopentadienyl ligands (for example, ferrocene); and transition metal carbene complexes. Many metal carbonyls and metal cyanides exist (for example, tetracarbonylnickel and potassium ferricyanide); some workers consider metal carbonyl and cyanide complexes without other carbon ligands to be purely inorganic, and not organometallic. However, most organometallic chemists consider metal complexes with any carbon ligand, even 'inorganic carbon' (e.g., carbonyls, cyanides, and certain types of carbides and acetylides) to be organometallic in nature. Metal complexes containing organic ligands without a carbon-metal covalent bond (e.g., metal carboxylates) are termed metalorganic compounds.",
"title": "Compounds"
},
{
"paragraph_id": 46,
"text": "While carbon is understood to strongly prefer formation of four covalent bonds, other exotic bonding schemes are also known. Carboranes are highly stable dodecahedral derivatives of the [B12H12] unit, with one BH replaced with a CH. Thus, the carbon is bonded to five boron atoms and one hydrogen atom. The cation [(Ph3PAu)6C] contains an octahedral carbon bound to six phosphine-gold fragments. This phenomenon has been attributed to the aurophilicity of the gold ligands, which provide additional stabilization of an otherwise labile species. In nature, the iron-molybdenum cofactor (FeMoco) responsible for microbial nitrogen fixation likewise has an octahedral carbon center (formally a carbide, C(-IV)) bonded to six iron atoms. In 2016, it was confirmed that, in line with earlier theoretical predictions, the hexamethylbenzene dication contains a carbon atom with six bonds. More specifically, the dication could be described structurally by the formulation [MeC(η-C5Me5)], making it an \"organic metallocene\" in which a MeC fragment is bonded to a η-C5Me5 fragment through all five of the carbons of the ring.",
"title": "Compounds"
},
{
"paragraph_id": 47,
"text": "It is important to note that in the cases above, each of the bonds to carbon contain less than two formal electron pairs. Thus, the formal electron count of these species does not exceed an octet. This makes them hypercoordinate but not hypervalent. Even in cases of alleged 10-C-5 species (that is, a carbon with five ligands and a formal electron count of ten), as reported by Akiba and co-workers, electronic structure calculations conclude that the electron population around carbon is still less than eight, as is true for other compounds featuring four-electron three-center bonding.",
"title": "Compounds"
},
{
"paragraph_id": 48,
"text": "The English name carbon comes from the Latin carbo for coal and charcoal, whence also comes the French charbon, meaning charcoal. In German, Dutch and Danish, the names for carbon are Kohlenstoff, koolstof, and kulstof respectively, all literally meaning coal-substance.",
"title": "History and etymology"
},
{
"paragraph_id": 49,
"text": "Carbon was discovered in prehistory and was known in the forms of soot and charcoal to the earliest human civilizations. Diamonds were known probably as early as 2500 BCE in China, while carbon in the form of charcoal was made around Roman times by the same chemistry as it is today, by heating wood in a pyramid covered with clay to exclude air.",
"title": "History and etymology"
},
{
"paragraph_id": 50,
"text": "In 1722, René Antoine Ferchault de Réaumur demonstrated that iron was transformed into steel through the absorption of some substance, now known to be carbon. In 1772, Antoine Lavoisier showed that diamonds are a form of carbon; when he burned samples of charcoal and diamond and found that neither produced any water and that both released the same amount of carbon dioxide per gram. In 1779, Carl Wilhelm Scheele showed that graphite, which had been thought of as a form of lead, was instead identical with charcoal but with a small admixture of iron, and that it gave \"aerial acid\" (his name for carbon dioxide) when oxidized with nitric acid. In 1786, the French scientists Claude Louis Berthollet, Gaspard Monge and C. A. Vandermonde confirmed that graphite was mostly carbon by oxidizing it in oxygen in much the same way Lavoisier had done with diamond. Some iron again was left, which the French scientists thought was necessary to the graphite structure. In their publication they proposed the name carbone (Latin carbonum) for the element in graphite which was given off as a gas upon burning graphite. Antoine Lavoisier then listed carbon as an element in his 1789 textbook.",
"title": "History and etymology"
},
{
"paragraph_id": 51,
"text": "A new allotrope of carbon, fullerene, that was discovered in 1985 includes nanostructured forms such as buckyballs and nanotubes. Their discoverers – Robert Curl, Harold Kroto, and Richard Smalley – received the Nobel Prize in Chemistry in 1996. The resulting renewed interest in new forms led to the discovery of further exotic allotropes, including glassy carbon, and the realization that \"amorphous carbon\" is not strictly amorphous.",
"title": "History and etymology"
},
{
"paragraph_id": 52,
"text": "Commercially viable natural deposits of graphite occur in many parts of the world, but the most important sources economically are in China, India, Brazil, and North Korea. Graphite deposits are of metamorphic origin, found in association with quartz, mica, and feldspars in schists, gneisses, and metamorphosed sandstones and limestone as lenses or veins, sometimes of a metre or more in thickness. Deposits of graphite in Borrowdale, Cumberland, England were at first of sufficient size and purity that, until the 19th century, pencils were made by sawing blocks of natural graphite into strips before encasing the strips in wood. Today, smaller deposits of graphite are obtained by crushing the parent rock and floating the lighter graphite out on water.",
"title": "Production"
},
{
"paragraph_id": 53,
"text": "There are three types of natural graphite—amorphous, flake or crystalline flake, and vein or lump. Amorphous graphite is the lowest quality and most abundant. Contrary to science, in industry \"amorphous\" refers to very small crystal size rather than complete lack of crystal structure. Amorphous is used for lower value graphite products and is the lowest priced graphite. Large amorphous graphite deposits are found in China, Europe, Mexico and the United States. Flake graphite is less common and of higher quality than amorphous; it occurs as separate plates that crystallized in metamorphic rock. Flake graphite can be four times the price of amorphous. Good quality flakes can be processed into expandable graphite for many uses, such as flame retardants. The foremost deposits are found in Austria, Brazil, Canada, China, Germany and Madagascar. Vein or lump graphite is the rarest, most valuable, and highest quality type of natural graphite. It occurs in veins along intrusive contacts in solid lumps, and it is only commercially mined in Sri Lanka.",
"title": "Production"
},
{
"paragraph_id": 54,
"text": "According to the USGS, world production of natural graphite was 1.1 million tonnes in 2010, to which China contributed 800,000 t, India 130,000 t, Brazil 76,000 t, North Korea 30,000 t and Canada 25,000 t. No natural graphite was reported mined in the United States, but 118,000 t of synthetic graphite with an estimated value of $998 million was produced in 2009.",
"title": "Production"
},
{
"paragraph_id": 55,
"text": "The diamond supply chain is controlled by a limited number of powerful businesses, and is also highly concentrated in a small number of locations around the world (see figure).",
"title": "Production"
},
{
"paragraph_id": 56,
"text": "Only a very small fraction of the diamond ore consists of actual diamonds. The ore is crushed, during which care has to be taken in order to prevent larger diamonds from being destroyed in this process and subsequently the particles are sorted by density. Today, diamonds are located in the diamond-rich density fraction with the help of X-ray fluorescence, after which the final sorting steps are done by hand. Before the use of X-rays became commonplace, the separation was done with grease belts; diamonds have a stronger tendency to stick to grease than the other minerals in the ore.",
"title": "Production"
},
{
"paragraph_id": 57,
"text": "Historically diamonds were known to be found only in alluvial deposits in southern India. India led the world in diamond production from the time of their discovery in approximately the 9th century BC to the mid-18th century AD, but the commercial potential of these sources had been exhausted by the late 18th century and at that time India was eclipsed by Brazil where the first non-Indian diamonds were found in 1725.",
"title": "Production"
},
{
"paragraph_id": 58,
"text": "Diamond production of primary deposits (kimberlites and lamproites) only started in the 1870s after the discovery of the diamond fields in South Africa. Production has increased over time and an accumulated total of over 4.5 billion carats have been mined since that date. Most commercially viable diamond deposits were in Russia, Botswana, Australia and the Democratic Republic of Congo. By 2005, Russia produced almost one-fifth of the global diamond output (mostly in Yakutia territory; for example, Mir pipe and Udachnaya pipe) but the Argyle mine in Australia became the single largest source, producing 14 million carats in 2018. New finds, the Canadian mines at Diavik and Ekati, are expected to become even more valuable owing to their production of gem quality stones.",
"title": "Production"
},
{
"paragraph_id": 59,
"text": "In the United States, diamonds have been found in Arkansas, Colorado, and Montana. In 2004, a startling discovery of a microscopic diamond in the United States led to the January 2008 bulk-sampling of kimberlite pipes in a remote part of Montana.",
"title": "Production"
},
{
"paragraph_id": 60,
"text": "Carbon is essential to all known living systems, and without it life as we know it could not exist (see alternative biochemistry). The major economic use of carbon other than food and wood is in the form of hydrocarbons, most notably the fossil fuel methane gas and crude oil (petroleum). Crude oil is distilled in refineries by the petrochemical industry to produce gasoline, kerosene, and other products. Cellulose is a natural, carbon-containing polymer produced by plants in the form of wood, cotton, linen, and hemp. Cellulose is used primarily for maintaining structure in plants. Commercially valuable carbon polymers of animal origin include wool, cashmere, and silk. Plastics are made from synthetic carbon polymers, often with oxygen and nitrogen atoms included at regular intervals in the main polymer chain. The raw materials for many of these synthetic substances come from crude oil.",
"title": "Applications"
},
{
"paragraph_id": 61,
"text": "The uses of carbon and its compounds are extremely varied. It can form alloys with iron, of which the most common is carbon steel. Graphite is combined with clays to form the 'lead' used in pencils used for writing and drawing. It is also used as a lubricant and a pigment, as a molding material in glass manufacture, in electrodes for dry batteries and in electroplating and electroforming, in brushes for electric motors, and as a neutron moderator in nuclear reactors.",
"title": "Applications"
},
{
"paragraph_id": 62,
"text": "Charcoal is used as a drawing material in artwork, barbecue grilling, iron smelting, and in many other applications. Wood, coal and oil are used as fuel for production of energy and heating. Gem quality diamond is used in jewelry, and industrial diamonds are used in drilling, cutting and polishing tools for machining metals and stone. Plastics are made from fossil hydrocarbons, and carbon fiber, made by pyrolysis of synthetic polyester fibers is used to reinforce plastics to form advanced, lightweight composite materials.",
"title": "Applications"
},
{
"paragraph_id": 63,
"text": "Carbon fiber is made by pyrolysis of extruded and stretched filaments of polyacrylonitrile (PAN) and other organic substances. The crystallographic structure and mechanical properties of the fiber depend on the type of starting material, and on the subsequent processing. Carbon fibers made from PAN have structure resembling narrow filaments of graphite, but thermal processing may re-order the structure into a continuous rolled sheet. The result is fibers with higher specific tensile strength than steel.",
"title": "Applications"
},
{
"paragraph_id": 64,
"text": "Carbon black is used as the black pigment in printing ink, artist's oil paint, and water colours, carbon paper, automotive finishes, India ink and laser printer toner. Carbon black is also used as a filler in rubber products such as tyres and in plastic compounds. Activated charcoal is used as an absorbent and adsorbent in filter material in applications as diverse as gas masks, water purification, and kitchen extractor hoods, and in medicine to absorb toxins, poisons, or gases from the digestive system. Carbon is used in chemical reduction at high temperatures. Coke is used to reduce iron ore into iron (smelting). Case hardening of steel is achieved by heating finished steel components in carbon powder. Carbides of silicon, tungsten, boron, and titanium are among the hardest known materials, and are used as abrasives in cutting and grinding tools. Carbon compounds make up most of the materials used in clothing, such as natural and synthetic textiles and leather, and almost all of the interior surfaces in the built environment other than glass, stone, drywall and metal.",
"title": "Applications"
},
{
"paragraph_id": 65,
"text": "The diamond industry falls into two categories: one dealing with gem-grade diamonds and the other, with industrial-grade diamonds. While a large trade in both types of diamonds exists, the two markets function dramatically differently.",
"title": "Applications"
},
{
"paragraph_id": 66,
"text": "Unlike precious metals such as gold or platinum, gem diamonds do not trade as a commodity: there is a substantial mark-up in the sale of diamonds, and there is not a very active market for resale of diamonds.",
"title": "Applications"
},
{
"paragraph_id": 67,
"text": "Industrial diamonds are valued mostly for their hardness and heat conductivity, with the gemological qualities of clarity and color being mostly irrelevant. About 80% of mined diamonds (equal to about 100 million carats or 20 tonnes annually) are unsuitable for use as gemstones and relegated for industrial use (known as bort). Synthetic diamonds, invented in the 1950s, found almost immediate industrial applications; 3 billion carats (600 tonnes) of synthetic diamond is produced annually.",
"title": "Applications"
},
{
"paragraph_id": 68,
"text": "The dominant industrial use of diamond is in cutting, drilling, grinding, and polishing. Most of these applications do not require large diamonds; in fact, most diamonds of gem-quality except for their small size can be used industrially. Diamonds are embedded in drill tips or saw blades, or ground into a powder for use in grinding and polishing applications. Specialized applications include use in laboratories as containment for high-pressure experiments (see diamond anvil cell), high-performance bearings, and limited use in specialized windows. With the continuing advances in the production of synthetic diamonds, new applications are becoming feasible. Garnering much excitement is the possible use of diamond as a semiconductor suitable for microchips, and because of its exceptional heat conductance property, as a heat sink in electronics.",
"title": "Applications"
},
{
"paragraph_id": 69,
"text": "Pure carbon has extremely low toxicity to humans and can be handled safely in the form of graphite or charcoal. It is resistant to dissolution or chemical attack, even in the acidic contents of the digestive tract. Consequently, once it enters into the body's tissues it is likely to remain there indefinitely. Carbon black was probably one of the first pigments to be used for tattooing, and Ötzi the Iceman was found to have carbon tattoos that survived during his life and for 5200 years after his death. Inhalation of coal dust or soot (carbon black) in large quantities can be dangerous, irritating lung tissues and causing the congestive lung disease, coalworker's pneumoconiosis. Diamond dust used as an abrasive can be harmful if ingested or inhaled. Microparticles of carbon are produced in diesel engine exhaust fumes, and may accumulate in the lungs. In these examples, the harm may result from contaminants (e.g., organic chemicals, heavy metals) rather than from the carbon itself.",
"title": "Precautions"
},
{
"paragraph_id": 70,
"text": "Carbon generally has low toxicity to life on Earth; but carbon nanoparticles are deadly to Drosophila.",
"title": "Precautions"
},
{
"paragraph_id": 71,
"text": "Carbon may burn vigorously and brightly in the presence of air at high temperatures. Large accumulations of coal, which have remained inert for hundreds of millions of years in the absence of oxygen, may spontaneously combust when exposed to air in coal mine waste tips, ship cargo holds and coal bunkers, and storage dumps.",
"title": "Precautions"
},
{
"paragraph_id": 72,
"text": "In nuclear applications where graphite is used as a neutron moderator, accumulation of Wigner energy followed by a sudden, spontaneous release may occur. Annealing to at least 250 °C can release the energy safely, although in the Windscale fire the procedure went wrong, causing other reactor materials to combust.",
"title": "Precautions"
},
{
"paragraph_id": 73,
"text": "The great variety of carbon compounds include such lethal poisons as tetrodotoxin, the lectin ricin from seeds of the castor oil plant Ricinus communis, cyanide (CN), and carbon monoxide; and such essentials to life as glucose and protein.",
"title": "Precautions"
}
] | Carbon is a chemical element; it has symbol C and atomic number 6. It is nonmetallic and tetravalent—meaning that its atoms are able to form up to four covalent bonds due to its valence shell exhibiting 4 electrons. It belongs to group 14 of the periodic table. Carbon makes up about 0.025 percent of Earth's crust. Three isotopes occur naturally, 12C and 13C being stable, while 14C is a radionuclide, decaying with a half-life of about 5,730 years. Carbon is one of the few elements known since antiquity. Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass after hydrogen, helium, and oxygen. Carbon's abundance, its unique diversity of organic compounds, and its unusual ability to form polymers at the temperatures commonly encountered on Earth, enables this element to serve as a common element of all known life. It is the second most abundant element in the human body by mass after oxygen. The atoms of carbon can bond together in diverse ways, resulting in various allotropes of carbon. Well-known allotropes include graphite, diamond, amorphous carbon, and fullerenes. The physical properties of carbon vary widely with the allotropic form. For example, graphite is opaque and black, while diamond is highly transparent. Graphite is soft enough to form a streak on paper, while diamond is the hardest naturally occurring material known. Graphite is a good electrical conductor while diamond has a low electrical conductivity. Under normal conditions, diamond, carbon nanotubes, and graphene have the highest thermal conductivities of all known materials. All carbon allotropes are solids under normal conditions, with graphite being the most thermodynamically stable form at standard temperature and pressure. They are chemically resistant and require high temperature to react even with oxygen. The most common oxidation state of carbon in inorganic compounds is +4, while +2 is found in carbon monoxide and transition metal carbonyl complexes. The largest sources of inorganic carbon are limestones, dolomites and carbon dioxide, but significant quantities occur in organic deposits of coal, peat, oil, and methane clathrates. Carbon forms a vast number of compounds, with about two hundred million having been described and indexed; and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions. | 2001-03-21T03:44:52Z | 2023-11-20T20:16:48Z | [
"Template:About",
"Template:Chem",
"Template:Cite web",
"Template:NUBASE 1997",
"Template:ISBN",
"Template:In Our Time",
"Template:Subject bar",
"Template:Allotropes of carbon",
"Template:Convert",
"Template:Redirect",
"Template:Good article",
"Template:Infobox carbon",
"Template:Cite press release",
"Template:Greenwood&Earnshaw2nd",
"Template:Carbon compounds",
"Template:Authority control",
"Template:Val",
"Template:Cite magazine",
"Template:Periodic table (navbox)",
"Template:ChemicalBondsToCarbon",
"Template:Sup",
"Template:Circa",
"Template:Div col end",
"Template:Reflist",
"Template:Nowrap",
"Template:Citation needed",
"Template:Cite news",
"Template:Main",
"Template:Chem2",
"Template:Cite book",
"Template:Cite journal",
"Template:Pp-semi-indef",
"Template:Etymology",
"Template:Sub",
"Template:CO2",
"Template:Clear",
"Template:Div col",
"Template:Webarchive"
] | https://en.wikipedia.org/wiki/Carbon |
5,300 | Computer data storage | Computer data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers.
The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally, the fast technologies are referred to as "memory", while slower persistent technologies are referred to as "storage".
Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data.
Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.
A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 0 or 1. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (40 million bits) with one byte per character.
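To make the byte arithmetic above concrete, here is a minimal Python sketch (the sample text is illustrative); it encodes a string as bytes and counts the corresponding bits at 8 bits per byte.

```python
# Minimal sketch: text becomes a string of bits, stored as bytes (8 bits per byte).
text = "To be, or not to be"           # any Unicode text; the sample is illustrative
data = text.encode("utf-8")            # the binary representation as bytes
n_bytes = len(data)
n_bits = n_bytes * 8                   # one byte = 8 bits

print(f"{n_bytes} bytes = {n_bits} bits")
# At one byte per character, about 5 million characters is roughly 5 MB,
# i.e. about 40 million bits, matching the figure quoted above for Shakespeare.
```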
Data are encoded by assigning a bit pattern to each character, digit, or multimedia object. Many standards exist for encoding (e.g. character encodings like ASCII, image encodings like JPEG, and video encodings like MPEG-4).
By adding bits to each encoded unit, redundancy allows the computer to detect errors in coded data and correct them based on mathematical algorithms. Errors generally occur in low probabilities due to random bit value flipping, or "physical bit fatigue", loss of the physical bit in the storage of its ability to maintain a distinguishable value (0 or 1), or due to errors in inter or intra-computer communication. A random bit flip (e.g. due to random radiation) is typically corrected upon detection. A bit or a group of malfunctioning physical bits (the specific defective bit is not always known; group definition depends on the specific storage device) is typically automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, where the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection. A detected error is then retried.
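As a hedged illustration of the error-detection idea above, the following Python sketch uses the standard-library zlib.crc32 function to show how a stored checksum exposes a single flipped bit; the payload bytes are made up for the example.

```python
import zlib

payload = bytearray(b"stored block of data")   # illustrative data block
checksum = zlib.crc32(payload)                 # CRC recorded when the block is written

payload[3] ^= 0x01                             # simulate a single flipped bit

if zlib.crc32(payload) != checksum:            # CRC recomputed when the block is read
    print("CRC mismatch: error detected, block would be re-read or reconstructed")
```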
Data compression methods allow in many cases (such as a database) to represent a string of bits by a shorter bit string ("compress") and reconstruct the original string ("decompress") when needed. This utilizes substantially less storage (tens of percent) for many types of data at the cost of more computation (compress and decompress when needed). Analysis of the trade-off between storage cost saving and costs of related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not.
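The storage-versus-computation trade-off can be illustrated with Python's standard zlib module; the repetitive sample data is chosen only to make the effect obvious and is not representative of all data types.

```python
import zlib

original = b"AB" * 50_000                      # repetitive data compresses very well
compressed = zlib.compress(original)           # costs CPU time, saves storage space
restored = zlib.decompress(compressed)         # costs CPU time again when data is needed

assert restored == original
print(f"{len(original)} bytes -> {len(compressed)} bytes "
      f"({100 * len(compressed) / len(original):.1f}% of original size)")
```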
For security reasons, certain types of data (e.g. credit card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots.
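A minimal sketch of keeping a sensitive value encrypted at rest, assuming the third-party Python cryptography package is available; the card number and the key handling shown here are purely illustrative, and a real system would need proper key management.

```python
# Assumes the third-party "cryptography" package is installed (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()                    # the key itself must be stored securely
cipher = Fernet(key)

card_number = b"4111 1111 1111 1111"           # illustrative value only
stored_blob = cipher.encrypt(card_number)      # this ciphertext is what lands on disk
recovered = cipher.decrypt(stored_blob)        # recovery is only possible with the key

assert recovered == card_number
```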
Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency is from the CPU. This traditional division of storage to primary, secondary, tertiary, and off-line storage is also guided by cost per bit.
In contemporary usage, memory is usually semiconductor storage read-write random-access memory, typically DRAM (dynamic RAM) or other forms of fast but temporary storage. Storage consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down).
Historically, memory has, depending on technology, been called central memory, core memory, core storage, drum, main memory, real storage, or internal memory. Meanwhile, slower persistent storage devices have been referred to as secondary storage, external memory, or auxiliary/peripheral storage.
Primary storage (also known as main memory, internal memory, or prime memory), often referred to simply as memory, is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic-core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive.
This led to modern random-access memory (RAM). It is small and light, but also relatively expensive. The particular types of RAM used for primary storage are volatile, meaning that they lose the information when not powered. Besides storing open programs, RAM serves as a disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it is not needed by running software. Spare memory can be utilized as a RAM drive for temporary high-speed data storage.
As shown in the diagram, traditionally there are two more sub-layers of the primary storage besides main large-capacity RAM: processor registers, located inside the CPU, and processor cache, a small, fast intermediate layer (itself usually split into multiple levels) between the registers and main memory.
Main memory is directly or indirectly connected to the central processing unit via a memory bus. It is actually two buses (not on the diagram): an address bus and a data bus. The CPU firstly sends a number through an address bus, a number called memory address, that indicates the desired location of data. Then it reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU) is a small device between CPU and RAM recalculating the actual memory address, for example to provide an abstraction of virtual memory or other tasks.
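The address-bus/data-bus cycle described above can be mimicked with a toy Python model; this is only an illustration of the concept, not how real hardware or an MMU is implemented.

```python
# Toy model of the read/write cycle described above: an address selects a memory cell
# and the data bus carries the value. Real hardware (and the MMU) works in silicon.
class ToyMemory:
    def __init__(self, size: int):
        self.cells = bytearray(size)           # one byte per memory cell

    def write(self, address: int, value: int) -> None:
        self.cells[address] = value            # address on address bus, value on data bus

    def read(self, address: int) -> int:
        return self.cells[address]             # address on address bus, value returned on data bus

ram = ToyMemory(1024)
ram.write(0x10, 0x2A)
print(hex(ram.read(0x10)))                     # prints 0x2a
```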
As the RAM types used for primary storage are volatile (uninitialized at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access).
Many types of "ROM" are not literally read only, as updates to them are possible; however it is slow and memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM, and rather, use large capacities of secondary storage, which is non-volatile as well, and not as costly.
Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage.
Secondary storage (also known as external memory or auxiliary storage) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage. Secondary storage is non-volatile (retaining data when its power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive.
In modern computers, hard disk drives (HDDs) or solid-state drives (SSDs) are usually used as secondary storage. The access time per byte for HDDs or SSDs is typically measured in milliseconds (thousandths of a second), while the access time per byte for primary storage is measured in nanoseconds (billionths of a second). Thus, secondary storage is significantly slower than primary storage. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. Other examples of secondary storage technologies include USB flash drives, floppy disks, magnetic tape, paper tape, punched cards, and RAM disks.
Once the disk read/write head on HDDs reaches the proper placement and the data, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based on sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel to increase the bandwidth between primary and secondary memory.
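A small Python sketch of reading data in large contiguous blocks rather than many small scattered reads, as described above; the file name and block size are illustrative assumptions.

```python
BLOCK_SIZE = 1024 * 1024                       # 1 MiB blocks amortize seek and rotational latency

total = 0
with open("large_dataset.bin", "rb") as f:     # hypothetical file on secondary storage
    while True:
        block = f.read(BLOCK_SIZE)             # one large sequential transfer per call
        if not block:
            break
        total += len(block)
print(f"read {total} bytes in {BLOCK_SIZE}-byte blocks")
```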
Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information.
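The kind of metadata a file system keeps can be inspected from Python with os.stat; the file name is hypothetical, and the owner field is meaningful mainly on Unix-like systems.

```python
import os
import stat
import time

info = os.stat("example.txt")                  # hypothetical file on a mounted file system
print("size (bytes):", info.st_size)
print("owner uid:   ", info.st_uid)            # meaningful mainly on Unix-like systems
print("permissions: ", stat.filemode(info.st_mode))
print("last access: ", time.ctime(info.st_atime))
print("last modified:", time.ctime(info.st_mtime))
```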
Most computer operating systems use the concept of virtual memory, allowing the utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded.
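A small Unix-only Python sketch that reports the page size used by the virtual memory system and the process's page-fault counts (major faults indicate pages that had to be fetched from secondary storage); the resource module is not available on Windows.

```python
import mmap
import resource                                # Unix-only standard-library module

print("page size:", mmap.PAGESIZE, "bytes")    # granularity at which memory is paged

usage = resource.getrusage(resource.RUSAGE_SELF)
print("minor page faults (served from RAM):", usage.ru_minflt)
print("major page faults (read from disk/swap):", usage.ru_majflt)
```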
Tertiary storage or tertiary memory is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes.
When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library.
Tertiary storage is also known as nearline storage because it is "near to online". The formal distinction between online, nearline, and offline storage is: online storage is immediately available for I/O; nearline storage is not immediately available, but can be brought online quickly without human intervention; offline storage is not immediately available and requires human intervention before it can be accessed.
For example, always-on spinning hard disk drives are online storage, while spinning drives that spin down automatically, such as in massive arrays of idle disks (MAID), are nearline storage. Removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage, while tape cartridges that must be manually loaded are offline storage.
Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction.
Off-line storage is used to transfer information since the detached medium can easily be physically transported. Additionally, it is useful for cases of disaster, where, for example, a fire destroys the original data, a medium in a remote location will be unaffected, enabling disaster recovery. Off-line storage increases general information security since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage.
In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are the most popular, and to a much lesser extent removable hard disk drives; older examples include floppy disks and Zip disks. In enterprise uses, magnetic tape cartridges are predominant; older examples include open-reel magnetic tape and punched cards.
Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance.
Non-volatile memory retains the stored information even if not constantly supplied with electric power. It is suitable for long-term storage of information. Volatile memory requires constant power to maintain the stored information. The fastest memory technologies are volatile ones, although that is not a universal rule. Since the primary storage is required to be very fast, it predominantly uses volatile memory.
Dynamic random-access memory is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed, otherwise it would vanish. Static random-access memory is a form of volatile memory similar to DRAM with the exception that it never needs to be refreshed as long as power is applied; it loses its content when the power supply is lost.
An uninterruptible power supply (UPS) can be used to give a computer a brief window of time to move information from primary volatile storage into non-volatile storage before the batteries are exhausted. Some systems, for example EMC Symmetrix, have integrated batteries that maintain volatile storage for several minutes.
Utilities such as hdparm and sar can be used to measure I/O performance in Linux.
Full disk encryption, volume and virtual disk encryption, and/or file/folder encryption is readily available for most storage devices.
Hardware memory encryption is available in Intel Architecture, supporting Total Memory Encryption (TME) and page-granular memory encryption with multiple keys (MKTME), and in the SPARC M7 generation since October 2015.
Distinct types of data storage have different points of failure and various methods of predictive failure analysis.
Vulnerabilities that can instantly lead to total loss are head crashing on mechanical hard drives and failure of electronic components on flash storage.
Impending failure on hard disk drives is estimable using S.M.A.R.T. diagnostic data that includes the hours of operation and the count of spin-ups, though its reliability is disputed.
Flash storage may experience downspiking transfer rates as a result of accumulating errors, which the flash memory controller attempts to correct.
The health of optical media can be determined by measuring correctable minor errors, of which high counts signify deteriorating and/or low-quality media. Too many consecutive minor errors can lead to data corruption. Not all vendors and models of optical drives support error scanning.
As of 2011, the most commonly used data storage media are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies, such as all-flash arrays (AFAs), are proposed for development.
Semiconductor memory uses semiconductor-based integrated circuit (IC) chips to store information. Data are typically stored in metal–oxide–semiconductor (MOS) memory cells. A semiconductor memory chip may contain millions of memory cells, consisting of tiny MOS field-effect transistors (MOSFETs) and/or MOS capacitors. Both volatile and non-volatile forms of semiconductor memory exist, the former using standard MOSFETs and the latter using floating-gate MOSFETs.
In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor random-access memory (RAM), particularly dynamic random-access memory (DRAM). Since the turn of the century, a type of non-volatile floating-gate semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers that are designed for them.
As early as 2006, notebook and desktop computer manufacturers started using flash-based solid-state drives (SSDs) as default configuration options for the secondary storage either in addition to or instead of the more traditional HDD.
Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads which may contain one or more recording transducers. A read/write head only covers a part of the surface, so the head or medium or both must be moved relative to one another in order to access data. In modern computers, magnetic storage takes these forms: hard disk drives used for secondary storage, magnetic tape used for tertiary and off-line storage, and (formerly) floppy disks used for off-line storage.
In early computers, magnetic storage was also used as primary storage, in the form of magnetic drum memory or magnetic-core memory.
Unlike flash storage and re-writeable optical media, magnetic storage has no definite limit on rewriting cycles, as altering magnetic fields causes no physical wear; instead, its life span is limited by its mechanical parts.
Optical storage, the typical optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile. The deformities may be permanent (read only media), formed once (write once media) or reversible (recordable or read/write media). The following forms are in common use as of 2009: read-only CD and DVD formats (such as CD-ROM and DVD-ROM), write-once formats (such as CD-R and DVD-R), and rewritable formats (such as CD-RW, DVD-RW, and DVD-RAM), along with Blu-ray discs in read-only, write-once, and rewritable variants.
Magneto-optical disc storage is optical disc storage where the magnetic state on a ferromagnetic surface stores information. The information is read optically and written by combining magnetic and optical methods. Magneto-optical disc storage is non-volatile, sequential access, slow write, fast read storage used for tertiary and off-line storage.
3D optical data storage has also been proposed.
Light-induced magnetization melting in magnetic photoconductors has also been proposed for high-speed, low-energy-consumption magneto-optical storage.
Paper data storage, typically in the form of paper tape or punched cards, has long been used to store information for automatic processing, particularly before general-purpose computers existed. Information was recorded by punching holes into the paper or cardboard medium and was read mechanically (or later optically) to determine whether a particular location on the medium was solid or contained a hole. Barcodes make it possible for objects that are sold or transported to have some computer-readable information securely attached.
Relatively small amounts of digital data (compared to other digital data storage) may be backed up on paper as a matrix barcode for very long-term storage, as the longevity of paper typically exceeds even magnetic data storage.
While the malfunction of a group of bits may be resolved by error detection and correction mechanisms (see above), storage device malfunction requires different solutions. The following solutions are commonly used and valid for most storage devices: device mirroring (replication) and RAID.
Device mirroring and typical RAID are designed to handle a single device failure in the RAID group of devices. However, if a second failure occurs before the RAID group is completely repaired from the first failure, then data can be lost. The probability of a single failure is typically small. Thus the probability of two failures in the same RAID group in time proximity is much smaller (approximately the probability squared, i.e., multiplied by itself). If a database cannot tolerate even such a smaller probability of data loss, then the RAID group itself is replicated (mirrored). In many cases such mirroring is done geographically remotely, in a different storage array, to handle recovery from disasters (see disaster recovery above).
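A back-of-the-envelope Python calculation of the "probability squared" remark above; the single-device failure probability used here is an assumed illustrative figure, not a measured rate.

```python
# Back-of-the-envelope version of the "probability squared" remark above.
p_single = 0.03                 # assumed chance that one device fails within a year (illustrative)
p_double = p_single ** 2        # two roughly independent failures close together in time

print(f"single-failure probability:  {p_single:.4f}")
print(f"double-failure probability: ~{p_double:.4f}")
```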
Secondary or tertiary storage may be connected to a computer through computer networks. This concept does not pertain to primary storage, which is shared between multiple processors to a lesser degree.
Large quantities of individual magnetic tapes and optical or magneto-optical discs may be stored in robotic tertiary storage devices. In the tape storage field these are known as tape libraries, and in the optical storage field as optical jukeboxes or, by analogy, optical disk libraries. The smallest forms of either technology, containing just one drive device, are referred to as autoloaders or autochangers.
Robotic-access storage devices may have a number of slots, each holding individual media, and usually one or more picking robots that traverse the slots and load media to built-in drives. The arrangement of the slots and picking devices affects performance. Important characteristics of such storage are possible expansion options: adding slots, modules, drives, robots. Tape libraries may have from 10 to more than 100,000 slots, and provide terabytes or petabytes of near-line information. Optical jukeboxes are somewhat smaller solutions, up to 1,000 slots.
Robotic storage is used for backups and for high-capacity archives in the imaging, medical, and video industries. Hierarchical storage management is the best-known archiving strategy, automatically migrating long-unused files from fast hard disk storage to libraries or jukeboxes. If the files are needed, they are retrieved back to disk.
This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 22 January 2022. | [
{
"paragraph_id": 0,
"text": "Computer data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally, the fast technologies are referred to as \"memory\", while slower persistent technologies are referred to as \"storage\".",
"title": ""
},
{
"paragraph_id": 2,
"text": "Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.",
"title": "Functionality"
},
{
"paragraph_id": 4,
"text": "A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 0 or 1. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (40 million bits) with one byte per character.",
"title": "Data organization and representation"
},
{
"paragraph_id": 5,
"text": "Data are encoded by assigning a bit pattern to each character, digit, or multimedia object. Many standards exist for encoding (e.g. character encodings like ASCII, image encodings like JPEG, and video encodings like MPEG-4).",
"title": "Data organization and representation"
},
{
"paragraph_id": 6,
"text": "By adding bits to each encoded unit, redundancy allows the computer to detect errors in coded data and correct them based on mathematical algorithms. Errors generally occur in low probabilities due to random bit value flipping, or \"physical bit fatigue\", loss of the physical bit in the storage of its ability to maintain a distinguishable value (0 or 1), or due to errors in inter or intra-computer communication. A random bit flip (e.g. due to random radiation) is typically corrected upon detection. A bit or a group of malfunctioning physical bits (the specific defective bit is not always known; group definition depends on the specific storage device) is typically automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, where the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection. A detected error is then retried.",
"title": "Data organization and representation"
},
{
"paragraph_id": 7,
"text": "Data compression methods allow in many cases (such as a database) to represent a string of bits by a shorter bit string (\"compress\") and reconstruct the original string (\"decompress\") when needed. This utilizes substantially less storage (tens of percent) for many types of data at the cost of more computation (compress and decompress when needed). Analysis of the trade-off between storage cost saving and costs of related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not.",
"title": "Data organization and representation"
},
{
"paragraph_id": 8,
"text": "For security reasons, certain types of data (e.g. credit card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots.",
"title": "Data organization and representation"
},
{
"paragraph_id": 9,
"text": "Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency is from the CPU. This traditional division of storage to primary, secondary, tertiary, and off-line storage is also guided by cost per bit.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 10,
"text": "In contemporary usage, memory is usually semiconductor storage read-write random-access memory, typically DRAM (dynamic RAM) or other forms of fast but temporary storage. Storage consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down).",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 11,
"text": "Historically, memory has, depending on technology, been called central memory, core memory, core storage, drum, main memory, real storage, or internal memory. Meanwhile, slower persistent storage devices have been referred to as secondary storage, external memory, or auxiliary/peripheral storage.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 12,
"text": "Primary storage (also known as main memory, internal memory, or prime memory), often referred to simply as memory, is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 13,
"text": "Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic-core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 14,
"text": "This led to modern random-access memory (RAM). It is small-sized, light, but quite expensive at the same time. The particular types of RAM used for primary storage are volatile, meaning that they lose the information when not powered. Besides storing opened programs, it serves as disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it's not needed by running software. Spare memory can be utilized as RAM drive for temporary high-speed data storage.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 15,
"text": "As shown in the diagram, traditionally there are two more sub-layers of the primary storage, besides main large-capacity RAM:",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 16,
"text": "Main memory is directly or indirectly connected to the central processing unit via a memory bus. It is actually two buses (not on the diagram): an address bus and a data bus. The CPU firstly sends a number through an address bus, a number called memory address, that indicates the desired location of data. Then it reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU) is a small device between CPU and RAM recalculating the actual memory address, for example to provide an abstraction of virtual memory or other tasks.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 17,
"text": "As the RAM types used for primary storage are volatile (uninitialized at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access).",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 18,
"text": "Many types of \"ROM\" are not literally read only, as updates to them are possible; however it is slow and memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM, and rather, use large capacities of secondary storage, which is non-volatile as well, and not as costly.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 19,
"text": "Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 20,
"text": "Secondary storage (also known as external memory or auxiliary storage) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage. Secondary storage is non-volatile (retaining data when its power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 21,
"text": "In modern computers, hard disk drives (HDDs) or solid-state drives (SSDs) are usually used as secondary storage. The access time per byte for HDDs or SSDs is typically measured in milliseconds (thousandths of a second), while the access time per byte for primary storage is measured in nanoseconds (billionths of a second). Thus, secondary storage is significantly slower than primary storage. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. Other examples of secondary storage technologies include USB flash drives, floppy disks, magnetic tape, paper tape, punched cards, and RAM disks.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 22,
"text": "Once the disk read/write head on HDDs reaches the proper placement and the data, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based on sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel to increase the bandwidth between primary and secondary memory.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 23,
"text": "Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 24,
"text": "Most computer operating systems use the concept of virtual memory, allowing the utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 25,
"text": "Tertiary storage or tertiary memory is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 26,
"text": "When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 27,
"text": "Tertiary storage is also known as nearline storage because it is \"near to online\". The formal distinction between online, nearline, and offline storage is:",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 28,
"text": "For example, always-on spinning hard disk drives are online storage, while spinning drives that spin down automatically, such as in massive arrays of idle disks (MAID), are nearline storage. Removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage, while tape cartridges that must be manually loaded are offline storage.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 29,
"text": "Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 30,
"text": "Off-line storage is used to transfer information since the detached medium can easily be physically transported. Additionally, it is useful for cases of disaster, where, for example, a fire destroys the original data, a medium in a remote location will be unaffected, enabling disaster recovery. Off-line storage increases general information security since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 31,
"text": "In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are the most popular, and to a much lesser extent removable hard disk drives; older examples include floppy disks and Zip disks. In enterprise uses, magnetic tape cartridges are predominant; older examples include open-reel magnetic tape and punched cards.",
"title": "Hierarchy of storage"
},
{
"paragraph_id": 32,
"text": "Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance.",
"title": "Characteristics of storage"
},
{
"paragraph_id": 33,
"text": "Non-volatile memory retains the stored information even if not constantly supplied with electric power. It is suitable for long-term storage of information. Volatile memory requires constant power to maintain the stored information. The fastest memory technologies are volatile ones, although that is not a universal rule. Since the primary storage is required to be very fast, it predominantly uses volatile memory.",
"title": "Characteristics of storage"
},
{
"paragraph_id": 34,
"text": "Dynamic random-access memory is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed, otherwise it would vanish. Static random-access memory is a form of volatile memory similar to DRAM with the exception that it never needs to be refreshed as long as power is applied; it loses its content when the power supply is lost.",
"title": "Characteristics of storage"
},
{
"paragraph_id": 35,
"text": "An uninterruptible power supply (UPS) can be used to give a computer a brief window of time to move information from primary volatile storage into non-volatile storage before the batteries are exhausted. Some systems, for example EMC Symmetrix, have integrated batteries that maintain volatile storage for several minutes.",
"title": "Characteristics of storage"
},
{
"paragraph_id": 36,
"text": "Utilities such as hdparm and sar can be used to measure IO performance in Linux.",
"title": "Characteristics of storage"
},
{
"paragraph_id": 37,
"text": "Full disk encryption, volume and virtual disk encryption, andor file/folder encryption is readily available for most storage devices.",
"title": "Characteristics of storage"
},
{
"paragraph_id": 38,
"text": "Hardware memory encryption is available in Intel Architecture, supporting Total Memory Encryption (TME) and page granular memory encryption with multiple keys (MKTME). and in SPARC M7 generation since October 2015.",
"title": "Characteristics of storage"
},
{
"paragraph_id": 39,
"text": "Distinct types of data storage have different points of failure and various methods of predictive failure analysis.",
"title": "Characteristics of storage"
},
{
"paragraph_id": 40,
"text": "Vulnerabilities that can instantly lead to total loss are head crashing on mechanical hard drives and failure of electronic components on flash storage.",
"title": "Characteristics of storage"
},
{
"paragraph_id": 41,
"text": "Impending failure on hard disk drives is estimable using S.M.A.R.T. diagnostic data that includes the hours of operation and the count of spin-ups, though its reliability is disputed.",
"title": "Characteristics of storage"
},
{
"paragraph_id": 42,
"text": "Flash storage may experience downspiking transfer rates as a result of accumulating errors, which the flash memory controller attempts to correct.",
"title": "Characteristics of storage"
},
{
"paragraph_id": 43,
"text": "The health of optical media can be determined by measuring correctable minor errors, of which high counts signify deteriorating and/or low-quality media. Too many consecutive minor errors can lead to data corruption. Not all vendors and models of optical drives support error scanning.",
"title": "Characteristics of storage"
},
{
"paragraph_id": 44,
"text": "As of 2011, the most commonly used data storage media are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies, such as all-flash arrays (AFAs) are proposed for development.",
"title": "Storage media"
},
{
"paragraph_id": 45,
"text": "Semiconductor memory uses semiconductor-based integrated circuit (IC) chips to store information. Data are typically stored in metal–oxide–semiconductor (MOS) memory cells. A semiconductor memory chip may contain millions of memory cells, consisting of tiny MOS field-effect transistors (MOSFETs) and/or MOS capacitors. Both volatile and non-volatile forms of semiconductor memory exist, the former using standard MOSFETs and the latter using floating-gate MOSFETs.",
"title": "Storage media"
},
{
"paragraph_id": 46,
"text": "In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor random-access memory (RAM), particularly dynamic random-access memory (DRAM). Since the turn of the century, a type of non-volatile floating-gate semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers that are designed for them.",
"title": "Storage media"
},
{
"paragraph_id": 47,
"text": "As early as 2006, notebook and desktop computer manufacturers started using flash-based solid-state drives (SSDs) as default configuration options for the secondary storage either in addition to or instead of the more traditional HDD.",
"title": "Storage media"
},
{
"paragraph_id": 48,
"text": "Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads which may contain one or more recording transducers. A read/write head only covers a part of the surface so that the head or medium or both must be moved relative to another in order to access data. In modern computers, magnetic storage will take these forms:",
"title": "Storage media"
},
{
"paragraph_id": 49,
"text": "In early computers, magnetic storage was also used as:",
"title": "Storage media"
},
{
"paragraph_id": 50,
"text": "Magnetic storage does not have a definite limit of rewriting cycles like flash storage and re-writeable optical media, as altering magnetic fields causes no physical wear. Rather, their life span is limited by mechanical parts.",
"title": "Storage media"
},
{
"paragraph_id": 51,
"text": "Optical storage, the typical optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile. The deformities may be permanent (read only media), formed once (write once media) or reversible (recordable or read/write media). The following forms are in common use as of 2009:",
"title": "Storage media"
},
{
"paragraph_id": 52,
"text": "Magneto-optical disc storage is optical disc storage where the magnetic state on a ferromagnetic surface stores information. The information is read optically and written by combining magnetic and optical methods. Magneto-optical disc storage is non-volatile, sequential access, slow write, fast read storage used for tertiary and off-line storage.",
"title": "Storage media"
},
{
"paragraph_id": 53,
"text": "3D optical data storage has also been proposed.",
"title": "Storage media"
},
{
"paragraph_id": 54,
"text": "Light induced magnetization melting in magnetic photoconductors has also been proposed for high-speed low-energy consumption magneto-optical storage.",
"title": "Storage media"
},
{
"paragraph_id": 55,
"text": "Paper data storage, typically in the form of paper tape or punched cards, has long been used to store information for automatic processing, particularly before general-purpose computers existed. Information was recorded by punching holes into the paper or cardboard medium and was read mechanically (or later optically) to determine whether a particular location on the medium was solid or contained a hole. Barcodes make it possible for objects that are sold or transported to have some computer-readable information securely attached.",
"title": "Storage media"
},
{
"paragraph_id": 56,
"text": "Relatively small amounts of digital data (compared to other digital data storage) may be backed up on paper as a matrix barcode for very long-term storage, as the longevity of paper typically exceeds even magnetic data storage.",
"title": "Storage media"
},
{
"paragraph_id": 57,
"text": "While a group of bits malfunction may be resolved by error detection and correction mechanisms (see above), storage device malfunction requires different solutions. The following solutions are commonly used and valid for most storage devices:",
"title": "Related technologies"
},
{
"paragraph_id": 58,
"text": "Device mirroring and typical RAID are designed to handle a single device failure in the RAID group of devices. However, if a second failure occurs before the RAID group is completely repaired from the first failure, then data can be lost. The probability of a single failure is typically small. Thus the probability of two failures in the same RAID group in time proximity is much smaller (approximately the probability squared, i.e., multiplied by itself). If a database cannot tolerate even such a smaller probability of data loss, then the RAID group itself is replicated (mirrored). In many cases such mirroring is done geographically remotely, in a different storage array, to handle recovery from disasters (see disaster recovery above).",
"title": "Related technologies"
},
{
"paragraph_id": 59,
"text": "A secondary or tertiary storage may connect to a computer utilizing computer networks. This concept does not pertain to the primary storage, which is shared between multiple processors to a lesser degree.",
"title": "Related technologies"
},
{
"paragraph_id": 60,
"text": "Large quantities of individual magnetic tapes, and optical or magneto-optical discs may be stored in robotic tertiary storage devices. In tape storage field they are known as tape libraries, and in optical storage field optical jukeboxes, or optical disk libraries per analogy. The smallest forms of either technology containing just one drive device are referred to as autoloaders or autochangers.",
"title": "Related technologies"
},
{
"paragraph_id": 61,
"text": "Robotic-access storage devices may have a number of slots, each holding individual media, and usually one or more picking robots that traverse the slots and load media to built-in drives. The arrangement of the slots and picking devices affects performance. Important characteristics of such storage are possible expansion options: adding slots, modules, drives, robots. Tape libraries may have from 10 to more than 100,000 slots, and provide terabytes or petabytes of near-line information. Optical jukeboxes are somewhat smaller solutions, up to 1,000 slots.",
"title": "Related technologies"
},
{
"paragraph_id": 62,
"text": "Robotic storage is used for backups, and for high-capacity archives in imaging, medical, and video industries. Hierarchical storage management is a most known archiving strategy of automatically migrating long-unused files from fast hard disk storage to libraries or jukeboxes. If the files are needed, they are retrieved back to disk.",
"title": "Related technologies"
},
{
"paragraph_id": 63,
"text": "This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 22 January 2022.",
"title": "References"
}
] | Computer data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers. The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally, the fast technologies are referred to as "memory", while slower persistent technologies are referred to as "storage". Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory. This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. | 2001-03-21T07:37:01Z | 2023-11-17T20:19:44Z | [
"Template:Cite web",
"Template:Cite tech report",
"Template:Cite journal",
"Template:Cite magazine",
"Template:As of",
"Template:Magnetic storage media",
"Template:Notelist",
"Template:Reflist",
"Template:Use dmy dates",
"Template:Primary storage technologies",
"Template:FS1037C",
"Template:Cite book",
"Template:Basic computer components",
"Template:See also",
"Template:Anchor",
"Template:Optical storage media",
"Template:Wikiversity",
"Template:Main",
"Template:Paper data storage media",
"Template:Cite news",
"Template:Authority control",
"Template:Short description",
"Template:Broader",
"Template:Rp",
"Template:Efn"
] | https://en.wikipedia.org/wiki/Computer_data_storage |
5,302 | Conditional | Conditional (if then) may refer to: | [
{
"paragraph_id": 0,
"text": "Conditional (if then) may refer to:",
"title": ""
}
] | Conditional may refer to: Causal conditional, if X then Y, where X is a cause of Y
Conditional probability, the probability of an event A given that another event B has occurred
Conditional proof, in logic: a proof that asserts a conditional, and proves that the antecedent leads to the consequent
Strict conditional, in philosophy, logic, and mathematics
Material conditional, in propositional calculus, or logical calculus in mathematics
Relevance conditional, in relevance logic
Conditional, a statement or expression in computer programming languages
A conditional expression in computer programming languages such as ?:
Conditions in a contract | 2001-03-21T22:40:02Z | 2023-09-16T05:37:51Z | [
"Template:Wiktionary",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/Conditional |
5,304 | Cone (disambiguation) | A cone is a basic geometrical shape.
Cone may also refer to: | [
{
"paragraph_id": 0,
"text": "A cone is a basic geometrical shape.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cone may also refer to:",
"title": ""
}
] | A cone is a basic geometrical shape. Cone may also refer to: | 2001-03-22T01:31:57Z | 2023-12-03T13:20:38Z | [
"Template:TOC right",
"Template:See also",
"Template:Intitle",
"Template:Lookfrom",
"Template:Disambiguation",
"Template:Wiktionary"
] | https://en.wikipedia.org/wiki/Cone_(disambiguation) |
5,306 | Chemical equilibrium | In a chemical reaction, chemical equilibrium is the state in which both the reactants and products are present in concentrations which have no further tendency to change with time, so that there is no observable change in the properties of the system. This state results when the forward reaction proceeds at the same rate as the reverse reaction. The reaction rates of the forward and backward reactions are generally not zero, but they are equal. Thus, there are no net changes in the concentrations of the reactants and products. Such a state is known as dynamic equilibrium.
The concept of chemical equilibrium was developed in 1803, after Berthollet found that some chemical reactions are reversible. For any reaction mixture to exist at equilibrium, the rates of the forward and backward (reverse) reactions must be equal. In the following chemical equation, arrows point both ways to indicate equilibrium. A and B are reactant chemical species, S and T are product species, and α, β, σ, and τ are the stoichiometric coefficients of the respective reactants and products:
The equilibrium concentration position of a reaction is said to lie "far to the right" if, at equilibrium, nearly all the reactants are consumed. Conversely the equilibrium position is said to be "far to the left" if hardly any product is formed from the reactants.
Guldberg and Waage (1865), building on Berthollet's ideas, proposed the law of mass action:
where A, B, S and T are active masses and k+ and k− are rate constants. Since at equilibrium forward and backward rates are equal:
and the ratio of the rate constants is also a constant, now known as an equilibrium constant.
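Spelled out in standard notation (a reconstruction from the surrounding definitions, using the activities {A}, {B}, {S}, {T} of the reaction written above):

\[ \text{rate}_{\text{forward}} = k_{+}\{A\}^{\alpha}\{B\}^{\beta}, \qquad \text{rate}_{\text{backward}} = k_{-}\{S\}^{\sigma}\{T\}^{\tau} \]

Setting the two rates equal at equilibrium and rearranging gives

\[ K = \frac{k_{+}}{k_{-}} = \frac{\{S\}^{\sigma}\{T\}^{\tau}}{\{A\}^{\alpha}\{B\}^{\beta}} \]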
By convention, the products form the numerator. However, the law of mass action is valid only for concerted one-step reactions that proceed through a single transition state and is not valid in general because rate equations do not, in general, follow the stoichiometry of the reaction as Guldberg and Waage had proposed (see, for example, nucleophilic aliphatic substitution by SN1 or reaction of hydrogen and bromine to form hydrogen bromide). Equality of forward and backward reaction rates, however, is a necessary condition for chemical equilibrium, though it is not sufficient to explain why equilibrium occurs.
Despite the limitations of this derivation, the equilibrium constant for a reaction is indeed a constant, independent of the activities of the various species involved, though it does depend on temperature as observed by the van 't Hoff equation. Adding a catalyst will affect both the forward reaction and the reverse reaction in the same way and will not have an effect on the equilibrium constant. The catalyst will speed up both reactions thereby increasing the speed at which equilibrium is reached.
Although the macroscopic equilibrium concentrations are constant in time, reactions do occur at the molecular level. For example, in the case of acetic acid dissolved in water and forming acetate and hydronium ions,
a proton may hop from one molecule of acetic acid onto a water molecule and then onto an acetate anion to form another molecule of acetic acid, leaving the number of acetic acid molecules unchanged. This is an example of dynamic equilibrium. Equilibria, like the rest of thermodynamics, are statistical phenomena, averages of microscopic behavior.
Le Châtelier's principle (1884) predicts the behavior of an equilibrium system when changes to its reaction conditions occur. If a dynamic equilibrium is disturbed by changing the conditions, the position of equilibrium moves to partially reverse the change. For example, adding more S (to the chemical reaction above) from the outside will cause an excess of products, and the system will try to counteract this by increasing the reverse reaction and pushing the equilibrium point backward (though the equilibrium constant will stay the same).
If mineral acid is added to the acetic acid mixture, increasing the concentration of hydronium ion, the amount of dissociation must decrease as the reaction is driven to the left in accordance with this principle. This can also be deduced from the equilibrium constant expression for the reaction:
If {H3O+} increases, {CH3CO2H} must increase and {CH3CO2−} must decrease. The H2O is left out, as it is the solvent and its concentration remains high and nearly constant.
A quantitative version is given by the reaction quotient.
J. W. Gibbs suggested in 1873 that equilibrium is attained when the Gibbs free energy of the system is at its minimum value (assuming the reaction is carried out at a constant temperature and pressure). What this means is that the derivative of the Gibbs energy with respect to reaction coordinate (a measure of the extent of reaction that has occurred, ranging from zero for all reactants to a maximum for all products) vanishes (because dG = 0), signaling a stationary point. This derivative is called the reaction Gibbs energy (or energy change) and corresponds to the difference between the chemical potentials of reactants and products at the composition of the reaction mixture. This criterion is both necessary and sufficient. If a mixture is not at equilibrium, the liberation of the excess Gibbs energy (or Helmholtz energy at constant volume reactions) is the "driving force" for the composition of the mixture to change until equilibrium is reached. The equilibrium constant can be related to the standard Gibbs free energy change for the reaction by the equation
where R is the universal gas constant and T the temperature.
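Written explicitly, with a small worked example (the numbers are illustrative only):

\[ \Delta_{r}G^{\ominus} = -RT\ln K_{\mathrm{eq}} \qquad\Longleftrightarrow\qquad K_{\mathrm{eq}} = e^{-\Delta_{r}G^{\ominus}/RT} \]

For instance, ΔrG° = −5.7 kJ mol⁻¹ at T = 298.15 K gives K ≈ exp(5700/(8.314 × 298.15)) ≈ 10, so each additional −5.7 kJ mol⁻¹ multiplies K by roughly ten at room temperature.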
When the reactants are dissolved in a medium of high ionic strength the quotient of activity coefficients may be taken to be constant. In that case the concentration quotient, Kc,
where [A] is the concentration of A, etc., is independent of the analytical concentration of the reactants. For this reason, equilibrium constants for solutions are usually determined in media of high ionic strength. Kc varies with ionic strength, temperature and pressure (or volume). Likewise Kp for gases depends on partial pressure. These constants are easier to measure, and they are the ones encountered in high-school chemistry courses.
At constant temperature and pressure, one must consider the Gibbs free energy, G, while at constant temperature and volume, one must consider the Helmholtz free energy, A, for the reaction; and at constant internal energy and volume, one must consider the entropy, S, for the reaction.
The constant volume case is important in geochemistry and atmospheric chemistry where pressure variations are significant. Note that, if reactants and products were in standard state (completely pure), then there would be no reversibility and no equilibrium. Indeed, they would necessarily occupy disjoint volumes of space. The mixing of the products and reactants contributes a large entropy increase (known as entropy of mixing) to states containing equal mixture of products and reactants and gives rise to a distinctive minimum in the Gibbs energy as a function of the extent of reaction. The standard Gibbs energy change, together with the Gibbs energy of mixing, determine the equilibrium state.
In this article only the constant pressure case is considered. The relation between the Gibbs free energy and the equilibrium constant can be found by considering chemical potentials.
At constant temperature and pressure in the absence of an applied voltage, the Gibbs free energy, G, for the reaction depends only on the extent of reaction: ξ (Greek letter xi), and can only decrease according to the second law of thermodynamics. It means that the derivative of G with respect to ξ must be negative if the reaction happens; at the equilibrium this derivative is equal to zero.
In order to meet the thermodynamic condition for equilibrium, the Gibbs energy must be stationary, meaning that the derivative of G with respect to the extent of reaction, ξ, must be zero. It can be shown that in this case, the sum of chemical potentials times the stoichiometric coefficients of the products is equal to the sum of those corresponding to the reactants. Therefore, the sum of the Gibbs energies of the reactants must be equal to the sum of the Gibbs energies of the products.
where μ is in this case a partial molar Gibbs energy, a chemical potential. The chemical potential of a reagent A is a function of the activity, {A} of that reagent.
(where μA is the standard chemical potential).
The definition of the Gibbs energy equation interacts with the fundamental thermodynamic relation to produce
Inserting dNi = νi dξ into the above equation gives a stoichiometric coefficient (νi) and a differential that denotes the reaction occurring to an infinitesimal extent (dξ). At constant pressure and temperature the above equations can be written as
which is the "Gibbs free energy change for the reaction. This results in:
By substituting the chemical potentials:
the relationship becomes:
which is the standard Gibbs energy change for the reaction that can be calculated using thermodynamical tables. The reaction quotient is defined as:
Therefore,
At equilibrium:
leading to:
and
Obtaining the value of the standard Gibbs energy change allows the calculation of the equilibrium constant.
For a reactional system at equilibrium: Qr = Keq; ξ = ξeq.
Note that activities and equilibrium constants are dimensionless numbers.
The expression for the equilibrium constant can be rewritten as the product of a concentration quotient, Kc and an activity coefficient quotient, Γ.
[A] is the concentration of reagent A, etc. It is possible in principle to obtain values of the activity coefficients, γ. For solutions, equations such as the Debye–Hückel equation, or extensions such as the Davies equation, specific ion interaction theory, or the Pitzer equations, may be used. However, this is not always possible. It is common practice to assume that Γ is a constant, and to use the concentration quotient in place of the thermodynamic equilibrium constant. It is also general practice to use the term equilibrium constant instead of the more accurate concentration quotient. This practice will be followed here.
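For the generic reaction introduced earlier, the factorisation of K into Kc and Γ referred to in the preceding two paragraphs can be written as:

\[ K = \frac{\{S\}^{\sigma}\{T\}^{\tau}}{\{A\}^{\alpha}\{B\}^{\beta}} = \frac{[S]^{\sigma}[T]^{\tau}}{[A]^{\alpha}[B]^{\beta}} \times \frac{\gamma_{S}^{\sigma}\gamma_{T}^{\tau}}{\gamma_{A}^{\alpha}\gamma_{B}^{\beta}} = K_{c}\,\Gamma \]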
For reactions in the gas phase partial pressure is used in place of concentration and fugacity coefficient in place of activity coefficient. In the real world, for example, when making ammonia in industry, fugacity coefficients must be taken into account. Fugacity, f, is the product of partial pressure and fugacity coefficient. The chemical potential of a species in the real gas phase is given by
so the general expression defining an equilibrium constant is valid for both solution and gas phases.
In aqueous solution, equilibrium constants are usually determined in the presence of an "inert" electrolyte such as sodium nitrate, NaNO3, or potassium perchlorate, KClO4. The ionic strength of a solution is given by
where ci and zi stand for the concentration and ionic charge of ion type i, and the sum is taken over all the N types of charged species in solution. When the concentration of dissolved salt is much higher than the analytical concentrations of the reagents, the ions originating from the dissolved salt determine the ionic strength, and the ionic strength is effectively constant. Since activity coefficients depend on ionic strength, the activity coefficients of the species are effectively independent of concentration. Thus, the assumption that Γ is constant is justified. The concentration quotient is a simple multiple of the equilibrium constant.
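The standard definition referred to above, in the notation of the preceding sentence, is:

\[ I = \tfrac{1}{2}\sum_{i=1}^{N} c_{i} z_{i}^{2} \]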
However, Kc will vary with ionic strength. If it is measured at a series of different ionic strengths, the value can be extrapolated to zero ionic strength. The concentration quotient obtained in this manner is known, paradoxically, as a thermodynamic equilibrium constant.
Before using a published value of an equilibrium constant in conditions of ionic strength different from the conditions used in its determination, the value should be adjusted.
A mixture may appear to have no tendency to change, though it is not at equilibrium. For example, a mixture of SO2 and O2 is metastable as there is a kinetic barrier to formation of the product, SO3.
The barrier can be overcome when a catalyst is also present in the mixture as in the contact process, but the catalyst does not affect the equilibrium concentrations.
Likewise, the formation of bicarbonate from carbon dioxide and water is very slow under normal conditions
but almost instantaneous in the presence of the catalytic enzyme carbonic anhydrase.
When pure substances (liquids or solids) are involved in equilibria their activities do not appear in the equilibrium constant because their numerical values are considered one.
Applying the general formula for an equilibrium constant to the specific case of a dilute solution of acetic acid in water one obtains
For all but very concentrated solutions, the water can be considered a "pure" liquid, and therefore it has an activity of one. The equilibrium constant expression is therefore usually written as
A particular case is the self-ionization of water
Because water is the solvent, and has an activity of one, the self-ionization constant of water is defined as
It is perfectly legitimate to write [H+] for the hydronium ion concentration, since the state of solvation of the proton is constant (in dilute solutions) and so does not affect the equilibrium concentrations. Kw varies with ionic strength and/or temperature.
The concentrations of H+ and OH− are not independent quantities. Most commonly [OH−] is replaced by Kw[H+]⁻¹ in equilibrium constant expressions which would otherwise include hydroxide ion.
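As a short worked example (using the commonly quoted value of Kw at 25 °C; the exact value depends on temperature and ionic strength):

\[ K_{w} = \{H^{+}\}\{OH^{-}\} \approx 1.0\times10^{-14} \quad (25\ ^{\circ}\mathrm{C}) \]

so that in a neutral solution [H+] = [OH−] = 1.0 × 10⁻⁷ mol dm⁻³ and pH = −log10[H+] = 7.0.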
Solids also do not appear in the equilibrium constant expression, if they are considered to be pure and thus their activities taken to be one. An example is the Boudouard reaction:
for which the equation (without solid carbon) is written as:
Consider the case of a dibasic acid H2A. When dissolved in water, the mixture will contain H2A, HA− and A2−. This equilibrium can be split into two steps, in each of which one proton is liberated.
K1 and K2 are examples of stepwise equilibrium constants. The overall equilibrium constant, βD, is the product of the stepwise constants.
Note that these constants are dissociation constants because the products on the right hand side of the equilibrium expression are dissociation products. In many systems, it is preferable to use association constants.
β1 and β2 are examples of association constants. Clearly β1 = 1/K2 and β2 = 1/βD; log β1 = pK2 and log β2 = pK2 + pK1 For multiple equilibrium systems, also see: theory of Response reactions.
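In explicit form (charges omitted, as is customary in these expressions), the constants discussed above are:

\[ K_{1} = \frac{[\mathrm{HA}][\mathrm{H}]}{[\mathrm{H_{2}A}]}, \qquad K_{2} = \frac{[\mathrm{A}][\mathrm{H}]}{[\mathrm{HA}]}, \qquad \beta_{D} = K_{1}K_{2} = \frac{[\mathrm{A}][\mathrm{H}]^{2}}{[\mathrm{H_{2}A}]} \]

\[ \beta_{1} = \frac{[\mathrm{HA}]}{[\mathrm{A}][\mathrm{H}]} = \frac{1}{K_{2}}, \qquad \beta_{2} = \frac{[\mathrm{H_{2}A}]}{[\mathrm{A}][\mathrm{H}]^{2}} = \frac{1}{K_{1}K_{2}} = \frac{1}{\beta_{D}} \]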
The effect of changing temperature on an equilibrium constant is given by the van 't Hoff equation
Thus, for exothermic reactions (ΔH is negative), K decreases with an increase in temperature, but for endothermic reactions (ΔH is positive), K increases with an increase in temperature. An alternative formulation is
At first sight this appears to offer a means of obtaining the standard molar enthalpy of the reaction by studying the variation of K with temperature. In practice, however, the method is unreliable because error propagation almost always gives very large errors on the values calculated in this way.
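For reference, the differential and integrated forms of the van 't Hoff equation discussed here are (the integrated form assumes ΔH° is roughly constant over the temperature interval):

\[ \frac{d\ln K}{dT} = \frac{\Delta H^{\ominus}}{RT^{2}}, \qquad \ln\frac{K_{2}}{K_{1}} = -\frac{\Delta H^{\ominus}}{R}\left(\frac{1}{T_{2}} - \frac{1}{T_{1}}\right) \]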
The effect of an electric field on equilibrium has been studied by Manfred Eigen, among others.
Equilibrium can be broadly classified as heterogeneous or homogeneous equilibrium. Homogeneous equilibrium consists of reactants and products in the same phase, whereas heterogeneous equilibrium involves reactants and products in different phases.
In these applications, terms such as stability constant, formation constant, binding constant, affinity constant, association constant and dissociation constant are used. In biochemistry, it is common to give units for binding constants, which serve to define the concentration units used when the constant's value was determined.
When the only equilibrium is the formation of a 1:1 adduct, there are many ways in which the composition of the mixture can be calculated. For example, see ICE table for a traditional method of calculating the pH of a solution of a weak acid.
There are three approaches to the general calculation of the composition of a mixture at equilibrium.
In general, the calculations are rather involved. For instance, in the case of a dibasic acid, H2A, dissolved in water, the two reactants can be specified as the conjugate base, A, and the proton, H. The following mass-balance equations could apply equally well to a base such as 1,2-diaminoethane, in which case the base itself is designated as the reactant A:
with TA the total concentration of species A. Note that it is customary to omit the ionic charges when writing and using these equations.
When the equilibrium constants are known and the total concentrations are specified, there are two equations in the two unknown "free concentrations" [A] and [H]. This follows from the fact that [HA] = β1[A][H], [H2A] = β2[A][H]² and [OH] = Kw[H]⁻¹
so the concentrations of the "complexes" are calculated from the free concentrations and the equilibrium constants. General expressions applicable to all systems with two reagents, A and B would be
It is easy to see how this can be extended to three or more reagents.
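As a numerical illustration of solving the two mass-balance equations in [A] and [H], the following sketch works at a trial value of [H]: the free [A] then follows in closed form from the mass balance in A, and the proton mass balance is solved for [H] by a one-dimensional root search. The constants β1 and β2 and the total concentrations are made-up values chosen only to demonstrate the method.

from scipy.optimize import brentq

# Hypothetical cumulative association constants for A + H = HA and
# A + 2H = H2A, plus the ionic product of water (illustrative values only).
beta1, beta2 = 1.0e9, 1.0e13
Kw = 1.0e-14
TA, TH = 1.0e-3, 1.5e-3          # made-up total concentrations of A and H

def free_A(H):
    # Mass balance in A:  TA = [A](1 + beta1*[H] + beta2*[H]^2)
    return TA / (1.0 + beta1 * H + beta2 * H**2)

def proton_balance(pH):
    H = 10.0 ** (-pH)
    A = free_A(H)
    # Mass balance in H:  TH = [H] + [HA] + 2[H2A] - [OH]
    return H + beta1 * A * H + 2.0 * beta2 * A * H**2 - Kw / H - TH

pH = brentq(proton_balance, 0.0, 14.0)   # root search over a wide pH bracket
print(f"pH = {pH:.2f}, free [A] = {free_A(10.0**-pH):.3e} M")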
The composition of solutions containing reactants A and H is easy to calculate as a function of p[H]. When [H] is known, the free concentration [A] is calculated from the mass-balance equation in A.
The diagram alongside, for the hydrolysis of the aluminium Lewis acid Al3+(aq), shows the species concentrations for a 5 × 10⁻⁶ M solution of an aluminium salt as a function of pH. Each concentration is shown as a percentage of the total aluminium.
The diagram above illustrates the point that a precipitate that is not one of the main species in the solution equilibrium may be formed. At pH just below 5.5 the main species present in a 5 μM solution of Al3+ are the aluminium hydroxides Al(OH)2+, AlOH+2 and Al13(OH)7+32, but on raising the pH Al(OH)3 precipitates from the solution. This occurs because Al(OH)3 has a very large lattice energy. As the pH rises, more and more Al(OH)3 comes out of solution. This is an example of Le Châtelier's principle in action: increasing the concentration of the hydroxide ion causes more aluminium hydroxide to precipitate, which removes hydroxide from the solution. When the hydroxide concentration becomes sufficiently high, the soluble aluminate, Al(OH)−4, is formed.
Another common instance where precipitation occurs is when a metal cation interacts with an anionic ligand to form an electrically neutral complex. If the complex is hydrophobic, it will precipitate out of water. This occurs with the nickel ion Ni2+ and dimethylglyoxime (dmgH2): in this case the lattice energy of the solid is not particularly large, but it greatly exceeds the energy of solvation of the molecule Ni(dmgH)2.
At equilibrium, at a specified temperature and pressure, and with no external forces, the Gibbs free energy G is at a minimum:
where μj is the chemical potential of molecular species j, and Nj is the amount of molecular species j. It may be expressed in terms of thermodynamic activity as:
where μj⊖ is the chemical potential in the standard state, R is the gas constant, T is the absolute temperature, and Aj is the activity.
For a closed system, no particles may enter or leave, although they may combine in various ways. The total number of atoms of each element will remain constant. This means that the minimization above must be subjected to the constraints:
where aij is the number of atoms of element i in molecule j and bi is the total number of atoms of element i, which is a constant, since the system is closed. If there are a total of k types of atoms in the system, then there will be k such equations. If ions are involved, an additional row is added to the aij matrix specifying the respective charge on each molecule which will sum to zero.
This is a standard problem in optimisation, known as constrained minimisation. The most common method of solving it is using the method of Lagrange multipliers (although other methods may be used).
Define:
where the λi are the Lagrange multipliers, one for each element. This allows each of the Nj and λj to be treated independently, and it can be shown using the tools of multivariate calculus that the equilibrium condition is given by
(For proof see Lagrange multipliers.) This is a set of (m + k) equations in (m + k) unknowns (the Nj and the λi) and may, therefore, be solved for the equilibrium concentrations Nj as long as the chemical activities are known as functions of the concentrations at the given temperature and pressure. (In the ideal case, activities are proportional to concentrations.) (See Thermodynamic databases for pure substances.) Note that the second equation is just the initial constraints for minimization.
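In symbols, the construction described in the preceding two paragraphs is (one common sign convention; see the cited discussion of Lagrange multipliers for the proof):

\[ \mathcal{G} = \sum_{j=1}^{m} N_{j}\mu_{j} + \sum_{i=1}^{k} \lambda_{i}\left(b_{i} - \sum_{j=1}^{m} a_{ij}N_{j}\right) \]

Setting the partial derivatives with respect to each Nj and each λi to zero gives the (m + k) equilibrium conditions

\[ \mu_{j} = \sum_{i=1}^{k} a_{ij}\lambda_{i} \quad (j = 1,\dots,m), \qquad \sum_{j=1}^{m} a_{ij}N_{j} = b_{i} \quad (i = 1,\dots,k) \]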
This method of calculating equilibrium chemical concentrations is useful for systems with a large number of different molecules. The use of k atomic element conservation equations for the mass constraint is straightforward, and replaces the use of the stoichiometric coefficient equations. The results are consistent with those specified by chemical equations. For example, if equilibrium is specified by a single chemical equation:
where νj is the stoichiometric coefficient for the j th molecule (negative for reactants, positive for products) and Rj is the symbol for the j th molecule, a properly balanced equation will obey:
Multiplying the first equilibrium condition by νj and using the above equation yields:
As above, defining ΔG
where Kc is the equilibrium constant, and ΔG will be zero at equilibrium.
Analogous procedures exist for the minimization of other thermodynamic potentials. | [
{
"paragraph_id": 0,
"text": "In a chemical reaction, chemical equilibrium is the state in which both the reactants and products are present in concentrations which have no further tendency to change with time, so that there is no observable change in the properties of the system. This state results when the forward reaction proceeds at the same rate as the reverse reaction. The reaction rates of the forward and backward reactions are generally not zero, but they are equal. Thus, there are no net changes in the concentrations of the reactants and products. Such a state is known as dynamic equilibrium.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The concept of chemical equilibrium was developed in 1803, after Berthollet found that some chemical reactions are reversible. For any reaction mixture to exist at equilibrium, the rates of the forward and backward (reverse) reactions must be equal. In the following chemical equation, arrows point both ways to indicate equilibrium. A and B are reactant chemical species, S and T are product species, and α, β, σ, and τ are the stoichiometric coefficients of the respective reactants and products:",
"title": "Historical introduction"
},
{
"paragraph_id": 2,
"text": "The equilibrium concentration position of a reaction is said to lie \"far to the right\" if, at equilibrium, nearly all the reactants are consumed. Conversely the equilibrium position is said to be \"far to the left\" if hardly any product is formed from the reactants.",
"title": "Historical introduction"
},
{
"paragraph_id": 3,
"text": "Guldberg and Waage (1865), building on Berthollet's ideas, proposed the law of mass action:",
"title": "Historical introduction"
},
{
"paragraph_id": 4,
"text": "where A, B, S and T are active masses and k+ and k− are rate constants. Since at equilibrium forward and backward rates are equal:",
"title": "Historical introduction"
},
{
"paragraph_id": 5,
"text": "and the ratio of the rate constants is also a constant, now known as an equilibrium constant.",
"title": "Historical introduction"
},
{
"paragraph_id": 6,
"text": "By convention, the products form the numerator. However, the law of mass action is valid only for concerted one-step reactions that proceed through a single transition state and is not valid in general because rate equations do not, in general, follow the stoichiometry of the reaction as Guldberg and Waage had proposed (see, for example, nucleophilic aliphatic substitution by SN1 or reaction of hydrogen and bromine to form hydrogen bromide). Equality of forward and backward reaction rates, however, is a necessary condition for chemical equilibrium, though it is not sufficient to explain why equilibrium occurs.",
"title": "Historical introduction"
},
{
"paragraph_id": 7,
"text": "Despite the limitations of this derivation, the equilibrium constant for a reaction is indeed a constant, independent of the activities of the various species involved, though it does depend on temperature as observed by the van 't Hoff equation. Adding a catalyst will affect both the forward reaction and the reverse reaction in the same way and will not have an effect on the equilibrium constant. The catalyst will speed up both reactions thereby increasing the speed at which equilibrium is reached.",
"title": "Historical introduction"
},
{
"paragraph_id": 8,
"text": "Although the macroscopic equilibrium concentrations are constant in time, reactions do occur at the molecular level. For example, in the case of acetic acid dissolved in water and forming acetate and hydronium ions,",
"title": "Historical introduction"
},
{
"paragraph_id": 9,
"text": "a proton may hop from one molecule of acetic acid onto a water molecule and then onto an acetate anion to form another molecule of acetic acid and leaving the number of acetic acid molecules unchanged. This is an example of dynamic equilibrium. Equilibria, like the rest of thermodynamics, are statistical phenomena, averages of microscopic behavior.",
"title": "Historical introduction"
},
{
"paragraph_id": 10,
"text": "Le Châtelier's principle (1884) predicts the behavior of an equilibrium system when changes to its reaction conditions occur. If a dynamic equilibrium is disturbed by changing the conditions, the position of equilibrium moves to partially reverse the change. For example, adding more S (to the chemical reaction above) from the outside will cause an excess of products, and the system will try to counteract this by increasing the reverse reaction and pushing the equilibrium point backward (though the equilibrium constant will stay the same).",
"title": "Historical introduction"
},
{
"paragraph_id": 11,
"text": "If mineral acid is added to the acetic acid mixture, increasing the concentration of hydronium ion, the amount of dissociation must decrease as the reaction is driven to the left in accordance with this principle. This can also be deduced from the equilibrium constant expression for the reaction:",
"title": "Historical introduction"
},
{
"paragraph_id": 12,
"text": "If {H3O} increases {CH3CO2H} must increase and CH3CO−2 must decrease. The H2O is left out, as it is the solvent and its concentration remains high and nearly constant.",
"title": "Historical introduction"
},
{
"paragraph_id": 13,
"text": "A quantitative version is given by the reaction quotient.",
"title": "Historical introduction"
},
{
"paragraph_id": 14,
"text": "J. W. Gibbs suggested in 1873 that equilibrium is attained when the Gibbs free energy of the system is at its minimum value (assuming the reaction is carried out at a constant temperature and pressure). What this means is that the derivative of the Gibbs energy with respect to reaction coordinate (a measure of the extent of reaction that has occurred, ranging from zero for all reactants to a maximum for all products) vanishes (because dG = 0), signaling a stationary point. This derivative is called the reaction Gibbs energy (or energy change) and corresponds to the difference between the chemical potentials of reactants and products at the composition of the reaction mixture. This criterion is both necessary and sufficient. If a mixture is not at equilibrium, the liberation of the excess Gibbs energy (or Helmholtz energy at constant volume reactions) is the \"driving force\" for the composition of the mixture to change until equilibrium is reached. The equilibrium constant can be related to the standard Gibbs free energy change for the reaction by the equation",
"title": "Historical introduction"
},
{
"paragraph_id": 15,
"text": "where R is the universal gas constant and T the temperature.",
"title": "Historical introduction"
},
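As a minimal illustration of the relation just described between the standard Gibbs energy change and the equilibrium constant (ΔG° = −RT ln K, so K = exp(−ΔG°/RT)), the following Python sketch converts one into the other; the numerical values are placeholders, not data from the article.

```python
import math

R = 8.314462618  # universal gas constant, J/(mol*K)

def equilibrium_constant(delta_g_standard, temperature):
    """K from the standard Gibbs energy change (J/mol) at temperature T (K)."""
    return math.exp(-delta_g_standard / (R * temperature))

# Illustrative numbers only: standard Gibbs energy change of -20 kJ/mol at 298.15 K
print(equilibrium_constant(-20_000, 298.15))  # about 3.2e3
```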
{
"paragraph_id": 16,
"text": "When the reactants are dissolved in a medium of high ionic strength the quotient of activity coefficients may be taken to be constant. In that case the concentration quotient, Kc,",
"title": "Historical introduction"
},
{
"paragraph_id": 17,
"text": "where [A] is the concentration of A, etc., is independent of the analytical concentration of the reactants. For this reason, equilibrium constants for solutions are usually determined in media of high ionic strength. Kc varies with ionic strength, temperature and pressure (or volume). Likewise Kp for gases depends on partial pressure. These constants are easier to measure and encountered in high-school chemistry courses.",
"title": "Historical introduction"
},
{
"paragraph_id": 18,
"text": "At constant temperature and pressure, one must consider the Gibbs free energy, G, while at constant temperature and volume, one must consider the Helmholtz free energy, A, for the reaction; and at constant internal energy and volume, one must consider the entropy, S, for the reaction.",
"title": "Thermodynamics"
},
{
"paragraph_id": 19,
"text": "The constant volume case is important in geochemistry and atmospheric chemistry where pressure variations are significant. Note that, if reactants and products were in standard state (completely pure), then there would be no reversibility and no equilibrium. Indeed, they would necessarily occupy disjoint volumes of space. The mixing of the products and reactants contributes a large entropy increase (known as entropy of mixing) to states containing equal mixture of products and reactants and gives rise to a distinctive minimum in the Gibbs energy as a function of the extent of reaction. The standard Gibbs energy change, together with the Gibbs energy of mixing, determine the equilibrium state.",
"title": "Thermodynamics"
},
{
"paragraph_id": 20,
"text": "In this article only the constant pressure case is considered. The relation between the Gibbs free energy and the equilibrium constant can be found by considering chemical potentials.",
"title": "Thermodynamics"
},
{
"paragraph_id": 21,
"text": "At constant temperature and pressure in the absence of an applied voltage, the Gibbs free energy, G, for the reaction depends only on the extent of reaction: ξ (Greek letter xi), and can only decrease according to the second law of thermodynamics. It means that the derivative of G with respect to ξ must be negative if the reaction happens; at the equilibrium this derivative is equal to zero.",
"title": "Thermodynamics"
},
{
"paragraph_id": 22,
"text": "In order to meet the thermodynamic condition for equilibrium, the Gibbs energy must be stationary, meaning that the derivative of G with respect to the extent of reaction, ξ, must be zero. It can be shown that in this case, the sum of chemical potentials times the stoichiometric coefficients of the products is equal to the sum of those corresponding to the reactants. Therefore, the sum of the Gibbs energies of the reactants must be the equal to the sum of the Gibbs energies of the products.",
"title": "Thermodynamics"
},
{
"paragraph_id": 23,
"text": "where μ is in this case a partial molar Gibbs energy, a chemical potential. The chemical potential of a reagent A is a function of the activity, {A} of that reagent.",
"title": "Thermodynamics"
},
{
"paragraph_id": 24,
"text": "(where μA is the standard chemical potential).",
"title": "Thermodynamics"
},
{
"paragraph_id": 25,
"text": "The definition of the Gibbs energy equation interacts with the fundamental thermodynamic relation to produce",
"title": "Thermodynamics"
},
{
"paragraph_id": 26,
"text": "Inserting dNi = νi dξ into the above equation gives a stoichiometric coefficient ( ν i {\\displaystyle \\nu _{i}~} ) and a differential that denotes the reaction occurring to an infinitesimal extent (dξ). At constant pressure and temperature the above equations can be written as",
"title": "Thermodynamics"
},
{
"paragraph_id": 27,
"text": "which is the \"Gibbs free energy change for the reaction. This results in:",
"title": "Thermodynamics"
},
{
"paragraph_id": 28,
"text": "By substituting the chemical potentials:",
"title": "Thermodynamics"
},
{
"paragraph_id": 29,
"text": "the relationship becomes:",
"title": "Thermodynamics"
},
{
"paragraph_id": 30,
"text": "which is the standard Gibbs energy change for the reaction that can be calculated using thermodynamical tables. The reaction quotient is defined as:",
"title": "Thermodynamics"
},
{
"paragraph_id": 31,
"text": "Therefore,",
"title": "Thermodynamics"
},
{
"paragraph_id": 32,
"text": "At equilibrium:",
"title": "Thermodynamics"
},
{
"paragraph_id": 33,
"text": "leading to:",
"title": "Thermodynamics"
},
{
"paragraph_id": 34,
"text": "and",
"title": "Thermodynamics"
},
{
"paragraph_id": 35,
"text": "Obtaining the value of the standard Gibbs energy change, allows the calculation of the equilibrium constant.",
"title": "Thermodynamics"
},
{
"paragraph_id": 36,
"text": "For a reactional system at equilibrium: Qr = Keq; ξ = ξeq.",
"title": "Thermodynamics"
},
{
"paragraph_id": 37,
"text": "Note that activities and equilibrium constants are dimensionless numbers.",
"title": "Thermodynamics"
},
{
"paragraph_id": 38,
"text": "The expression for the equilibrium constant can be rewritten as the product of a concentration quotient, Kc and an activity coefficient quotient, Γ.",
"title": "Thermodynamics"
},
{
"paragraph_id": 39,
"text": "[A] is the concentration of reagent A, etc. It is possible in principle to obtain values of the activity coefficients, γ. For solutions, equations such as the Debye–Hückel equation or extensions such as Davies equation Specific ion interaction theory or Pitzer equations may be used. However this is not always possible. It is common practice to assume that Γ is a constant, and to use the concentration quotient in place of the thermodynamic equilibrium constant. It is also general practice to use the term equilibrium constant instead of the more accurate concentration quotient. This practice will be followed here.",
"title": "Thermodynamics"
},
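One of the activity-coefficient models mentioned above is the Davies equation, an empirical extension of Debye–Hückel theory. A minimal sketch follows, using the common 25 °C parametrisation (A ≈ 0.509 and the 0.3·I correction term); treat those values as assumptions for the example.

```python
import math

def davies_log_gamma(charge, ionic_strength, a=0.509):
    """log10 of a single-ion activity coefficient from the Davies equation."""
    sqrt_i = math.sqrt(ionic_strength)
    return -a * charge**2 * (sqrt_i / (1 + sqrt_i) - 0.3 * ionic_strength)

# Activity coefficient of a doubly charged ion at I = 0.1 mol/L
print(10 ** davies_log_gamma(charge=2, ionic_strength=0.1))  # roughly 0.37
```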
{
"paragraph_id": 40,
"text": "For reactions in the gas phase partial pressure is used in place of concentration and fugacity coefficient in place of activity coefficient. In the real world, for example, when making ammonia in industry, fugacity coefficients must be taken into account. Fugacity, f, is the product of partial pressure and fugacity coefficient. The chemical potential of a species in the real gas phase is given by",
"title": "Thermodynamics"
},
{
"paragraph_id": 41,
"text": "so the general expression defining an equilibrium constant is valid for both solution and gas phases.",
"title": "Thermodynamics"
},
{
"paragraph_id": 42,
"text": "In aqueous solution, equilibrium constants are usually determined in the presence of an \"inert\" electrolyte such as sodium nitrate, NaNO3, or potassium perchlorate, KClO4. The ionic strength of a solution is given by",
"title": "Thermodynamics"
},
{
"paragraph_id": 43,
"text": "where ci and zi stand for the concentration and ionic charge of ion type i, and the sum is taken over all the N types of charged species in solution. When the concentration of dissolved salt is much higher than the analytical concentrations of the reagents, the ions originating from the dissolved salt determine the ionic strength, and the ionic strength is effectively constant. Since activity coefficients depend on ionic strength, the activity coefficients of the species are effectively independent of concentration. Thus, the assumption that Γ is constant is justified. The concentration quotient is a simple multiple of the equilibrium constant.",
"title": "Thermodynamics"
},
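The ionic-strength formula just given, I = ½ Σ ci zi², is straightforward to evaluate directly; a short sketch:

```python
def ionic_strength(species):
    """species: iterable of (concentration in mol/L, integer charge) pairs."""
    return 0.5 * sum(c * z**2 for c, z in species)

# 0.10 M NaNO3 as the inert electrolyte: Na+ and NO3- both at 0.10 M
print(ionic_strength([(0.10, +1), (0.10, -1)]))  # 0.10
# A 2:1 salt such as 0.10 M CaCl2 gives a larger value
print(ionic_strength([(0.10, +2), (0.20, -1)]))  # 0.30
```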
{
"paragraph_id": 44,
"text": "However, Kc will vary with ionic strength. If it is measured at a series of different ionic strengths, the value can be extrapolated to zero ionic strength. The concentration quotient obtained in this manner is known, paradoxically, as a thermodynamic equilibrium constant.",
"title": "Thermodynamics"
},
{
"paragraph_id": 45,
"text": "Before using a published value of an equilibrium constant in conditions of ionic strength different from the conditions used in its determination, the value should be adjusted.",
"title": "Thermodynamics"
},
{
"paragraph_id": 46,
"text": "A mixture may appear to have no tendency to change, though it is not at equilibrium. For example, a mixture of SO2 and O2 is metastable as there is a kinetic barrier to formation of the product, SO3.",
"title": "Thermodynamics"
},
{
"paragraph_id": 47,
"text": "The barrier can be overcome when a catalyst is also present in the mixture as in the contact process, but the catalyst does not affect the equilibrium concentrations.",
"title": "Thermodynamics"
},
{
"paragraph_id": 48,
"text": "Likewise, the formation of bicarbonate from carbon dioxide and water is very slow under normal conditions",
"title": "Thermodynamics"
},
{
"paragraph_id": 49,
"text": "but almost instantaneous in the presence of the catalytic enzyme carbonic anhydrase.",
"title": "Thermodynamics"
},
{
"paragraph_id": 50,
"text": "When pure substances (liquids or solids) are involved in equilibria their activities do not appear in the equilibrium constant because their numerical values are considered one.",
"title": "Pure substances"
},
{
"paragraph_id": 51,
"text": "Applying the general formula for an equilibrium constant to the specific case of a dilute solution of acetic acid in water one obtains",
"title": "Pure substances"
},
{
"paragraph_id": 52,
"text": "For all but very concentrated solutions, the water can be considered a \"pure\" liquid, and therefore it has an activity of one. The equilibrium constant expression is therefore usually written as",
"title": "Pure substances"
},
{
"paragraph_id": 53,
"text": "A particular case is the self-ionization of water",
"title": "Pure substances"
},
{
"paragraph_id": 54,
"text": "Because water is the solvent, and has an activity of one, the self-ionization constant of water is defined as",
"title": "Pure substances"
},
{
"paragraph_id": 55,
"text": "It is perfectly legitimate to write [H] for the hydronium ion concentration, since the state of solvation of the proton is constant (in dilute solutions) and so does not affect the equilibrium concentrations. Kw varies with variation in ionic strength and/or temperature.",
"title": "Pure substances"
},
{
"paragraph_id": 56,
"text": "The concentrations of H and OH are not independent quantities. Most commonly [OH] is replaced by Kw[H] in equilibrium constant expressions which would otherwise include hydroxide ion.",
"title": "Pure substances"
},
{
"paragraph_id": 57,
"text": "Solids also do not appear in the equilibrium constant expression, if they are considered to be pure and thus their activities taken to be one. An example is the Boudouard reaction:",
"title": "Pure substances"
},
{
"paragraph_id": 58,
"text": "for which the equation (without solid carbon) is written as:",
"title": "Pure substances"
},
{
"paragraph_id": 59,
"text": "Consider the case of a dibasic acid H2A. When dissolved in water, the mixture will contain H2A, HA and A. This equilibrium can be split into two steps in each of which one proton is liberated.",
"title": "Multiple equilibria"
},
{
"paragraph_id": 60,
"text": "K1 and K2 are examples of stepwise equilibrium constants. The overall equilibrium constant, βD, is product of the stepwise constants.",
"title": "Multiple equilibria"
},
{
"paragraph_id": 61,
"text": "Note that these constants are dissociation constants because the products on the right hand side of the equilibrium expression are dissociation products. In many systems, it is preferable to use association constants.",
"title": "Multiple equilibria"
},
{
"paragraph_id": 62,
"text": "β1 and β2 are examples of association constants. Clearly β1 = 1/K2 and β2 = 1/βD; log β1 = pK2 and log β2 = pK2 + pK1 For multiple equilibrium systems, also see: theory of Response reactions.",
"title": "Multiple equilibria"
},
{
"paragraph_id": 63,
"text": "The effect of changing temperature on an equilibrium constant is given by the van 't Hoff equation",
"title": "Effect of temperature"
},
{
"paragraph_id": 64,
"text": "Thus, for exothermic reactions (ΔH is negative), K decreases with an increase in temperature, but, for endothermic reactions, (ΔH is positive) K increases with an increase temperature. An alternative formulation is",
"title": "Effect of temperature"
},
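A sketch of the integrated van 't Hoff equation, assuming ΔH° is constant over the temperature interval, so that ln(K2/K1) = −(ΔH°/R)(1/T2 − 1/T1); the numbers below are illustrative only.

```python
import math

R = 8.314462618  # J/(mol*K)

def k_at_temperature(k1, t1, t2, delta_h):
    """Extrapolate an equilibrium constant from T1 to T2 (temperatures in K, delta_h in J/mol)."""
    return k1 * math.exp(-(delta_h / R) * (1.0 / t2 - 1.0 / t1))

# Exothermic example: K falls as the temperature rises
print(k_at_temperature(k1=1.0e3, t1=298.15, t2=350.0, delta_h=-50_000))  # about 5e1
```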
{
"paragraph_id": 65,
"text": "At first sight this appears to offer a means of obtaining the standard molar enthalpy of the reaction by studying the variation of K with temperature. In practice, however, the method is unreliable because error propagation almost always gives very large errors on the values calculated in this way.",
"title": "Effect of temperature"
},
{
"paragraph_id": 66,
"text": "The effect of electric field on equilibrium has been studied by Manfred Eigen among others.",
"title": "Effect of electric and magnetic fields"
},
{
"paragraph_id": 67,
"text": "Equilibrium can be broadly classified as heterogeneous and homogeneous equilibrium. Homogeneous equilibrium consists of reactants and products belonging in the same phase whereas heterogeneous equilibrium comes into play for reactants and products in different phases.",
"title": "Types of equilibrium"
},
{
"paragraph_id": 68,
"text": "In these applications, terms such as stability constant, formation constant, binding constant, affinity constant, association constant and dissociation constant are used. In biochemistry, it is common to give units for binding constants, which serve to define the concentration units used when the constant's value was determined.",
"title": "Types of equilibrium"
},
{
"paragraph_id": 69,
"text": "When the only equilibrium is that of the formation of a 1:1 adduct as the composition of a mixture, there are many ways that the composition of a mixture can be calculated. For example, see ICE table for a traditional method of calculating the pH of a solution of a weak acid.",
"title": "Composition of a mixture"
},
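For a weak monoprotic acid HA, the ICE-table calculation referred to here reduces to a quadratic in x = [H3O+], with Ka = x²/(c − x). A minimal sketch, using an acetic-acid-like Ka as a placeholder value:

```python
import math

def weak_acid_ph(ka, c):
    """pH of a weak monoprotic acid with dissociation constant ka and analytical concentration c."""
    # Positive root of x**2 + ka*x - ka*c = 0
    x = (-ka + math.sqrt(ka**2 + 4 * ka * c)) / 2
    return -math.log10(x)

print(weak_acid_ph(1.75e-5, 0.10))  # about 2.88
```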
{
"paragraph_id": 70,
"text": "There are three approaches to the general calculation of the composition of a mixture at equilibrium.",
"title": "Composition of a mixture"
},
{
"paragraph_id": 71,
"text": "In general, the calculations are rather complicated or complex. For instance, in the case of a dibasic acid, H2A dissolved in water the two reactants can be specified as the conjugate base, A, and the proton, H. The following equations of mass-balance could apply equally well to a base such as 1,2-diaminoethane, in which case the base itself is designated as the reactant A:",
"title": "Composition of a mixture"
},
{
"paragraph_id": 72,
"text": "with TA the total concentration of species A. Note that it is customary to omit the ionic charges when writing and using these equations.",
"title": "Composition of a mixture"
},
{
"paragraph_id": 73,
"text": "When the equilibrium constants are known and the total concentrations are specified there are two equations in two unknown \"free concentrations\" [A] and [H]. This follows from the fact that [HA] = β1[A][H], [H2A] = β2[A][H] and [OH] = Kw[H]",
"title": "Composition of a mixture"
},
{
"paragraph_id": 74,
"text": "so the concentrations of the \"complexes\" are calculated from the free concentrations and the equilibrium constants. General expressions applicable to all systems with two reagents, A and B would be",
"title": "Composition of a mixture"
},
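As a rough sketch of how the two mass-balance equations can be solved numerically for the free concentrations, the code below eliminates [A] analytically from the A balance and then finds [H] by bisection on the proton balance (taken here as TH = [H] + β1[A][H] + 2β2[A][H]² − Kw/[H]). The β values, the totals and the assumption that the residual changes sign once inside the bracket are placeholders for the example, not values from the article.

```python
def free_concentrations(TA, TH, beta1, beta2, Kw=1e-14):
    """Return the free concentrations ([A], [H]) for a dibasic acid system."""
    def free_A(h):
        # From TA = [A] * (1 + beta1*[H] + beta2*[H]**2)
        return TA / (1 + beta1 * h + beta2 * h * h)

    def proton_residual(h):
        a = free_A(h)
        return h + beta1 * a * h + 2 * beta2 * a * h * h - Kw / h - TH

    lo, hi = 1e-14, 1.0          # bracket for [H] in mol/L
    for _ in range(100):         # plain bisection
        mid = (lo + hi) / 2
        if proton_residual(lo) * proton_residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    h = (lo + hi) / 2
    return free_A(h), h

# Illustrative constants only: log beta1 = 9.3, log beta2 = 14.0, 1 mM fully protonated acid
print(free_concentrations(TA=1e-3, TH=2e-3, beta1=10**9.3, beta2=10**14.0))
```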
{
"paragraph_id": 75,
"text": "It is easy to see how this can be extended to three or more reagents.",
"title": "Composition of a mixture"
},
{
"paragraph_id": 76,
"text": "The composition of solutions containing reactants A and H is easy to calculate as a function of p[H]. When [H] is known, the free concentration [A] is calculated from the mass-balance equation in A.",
"title": "Composition of a mixture"
},
{
"paragraph_id": 77,
"text": "The diagram alongside, shows an example of the hydrolysis of the aluminium Lewis acid Al(aq) shows the species concentrations for a 5 × 10 M solution of an aluminium salt as a function of pH. Each concentration is shown as a percentage of the total aluminium.",
"title": "Composition of a mixture"
},
{
"paragraph_id": 78,
"text": "The diagram above illustrates the point that a precipitate that is not one of the main species in the solution equilibrium may be formed. At pH just below 5.5 the main species present in a 5 μM solution of Al are aluminium hydroxides Al(OH), AlOH+2 and Al13(OH)7+32, but on raising the pH Al(OH)3 precipitates from the solution. This occurs because Al(OH)3 has a very large lattice energy. As the pH rises more and more Al(OH)3 comes out of solution. This is an example of Le Châtelier's principle in action: Increasing the concentration of the hydroxide ion causes more aluminium hydroxide to precipitate, which removes hydroxide from the solution. When the hydroxide concentration becomes sufficiently high the soluble aluminate, Al(OH)−4, is formed.",
"title": "Composition of a mixture"
},
{
"paragraph_id": 79,
"text": "Another common instance where precipitation occurs is when a metal cation interacts with an anionic ligand to form an electrically neutral complex. If the complex is hydrophobic, it will precipitate out of water. This occurs with the nickel ion Ni and dimethylglyoxime, (dmgH2): in this case the lattice energy of the solid is not particularly large, but it greatly exceeds the energy of solvation of the molecule Ni(dmgH)2.",
"title": "Composition of a mixture"
},
{
"paragraph_id": 80,
"text": "At equilibrium, at a specified temperature and pressure, and with no external forces, the Gibbs free energy G is at a minimum:",
"title": "Composition of a mixture"
},
{
"paragraph_id": 81,
"text": "where μj is the chemical potential of molecular species j, and Nj is the amount of molecular species j. It may be expressed in terms of thermodynamic activity as:",
"title": "Composition of a mixture"
},
{
"paragraph_id": 82,
"text": "where μ j ⊖ {\\displaystyle \\mu _{j}^{\\ominus }} is the chemical potential in the standard state, R is the gas constant T is the absolute temperature, and Aj is the activity.",
"title": "Composition of a mixture"
},
{
"paragraph_id": 83,
"text": "For a closed system, no particles may enter or leave, although they may combine in various ways. The total number of atoms of each element will remain constant. This means that the minimization above must be subjected to the constraints:",
"title": "Composition of a mixture"
},
{
"paragraph_id": 84,
"text": "where aij is the number of atoms of element i in molecule j and bi is the total number of atoms of element i, which is a constant, since the system is closed. If there are a total of k types of atoms in the system, then there will be k such equations. If ions are involved, an additional row is added to the aij matrix specifying the respective charge on each molecule which will sum to zero.",
"title": "Composition of a mixture"
},
{
"paragraph_id": 85,
"text": "This is a standard problem in optimisation, known as constrained minimisation. The most common method of solving it is using the method of Lagrange multipliers (although other methods may be used).",
"title": "Composition of a mixture"
},
{
"paragraph_id": 86,
"text": "Define:",
"title": "Composition of a mixture"
},
{
"paragraph_id": 87,
"text": "where the λi are the Lagrange multipliers, one for each element. This allows each of the Nj and λj to be treated independently, and it can be shown using the tools of multivariate calculus that the equilibrium condition is given by",
"title": "Composition of a mixture"
},
{
"paragraph_id": 88,
"text": "(For proof see Lagrange multipliers.) This is a set of (m + k) equations in (m + k) unknowns (the Nj and the λi) and may, therefore, be solved for the equilibrium concentrations Nj as long as the chemical activities are known as functions of the concentrations at the given temperature and pressure. (In the ideal case, activities are proportional to concentrations.) (See Thermodynamic databases for pure substances.) Note that the second equation is just the initial constraints for minimization.",
"title": "Composition of a mixture"
},
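A minimal sketch of this constrained-minimisation view, using SciPy's SLSQP solver with explicit element-balance constraints instead of writing out the Lagrange multipliers by hand. The species, the dimensionless standard potentials g0 and the element totals are invented for illustration (an ideal N2O4/NO2 mixture), not data from the article.

```python
import numpy as np
from scipy.optimize import minimize

species = ["N2O4", "NO2"]
g0 = np.array([-1.0, 0.2])       # assumed dimensionless standard chemical potentials
A = np.array([[2.0, 1.0],        # N atoms per molecule
              [4.0, 2.0]])       # O atoms per molecule
b = np.array([2.0, 4.0])         # element totals for 1 mol of pure N2O4

def gibbs(n):
    """Dimensionless ideal-mixture Gibbs energy: sum_j n_j*(g0_j + ln(n_j/n_total))."""
    n = np.clip(n, 1e-12, None)  # keep the logarithms defined
    return float(np.sum(n * (g0 + np.log(n / n.sum()))))

result = minimize(
    gibbs,
    x0=np.array([0.5, 1.0]),     # feasible starting composition (satisfies A @ x0 = b)
    method="SLSQP",
    bounds=[(1e-12, None)] * len(species),
    constraints=[{"type": "eq", "fun": lambda n: A @ n - b}],
)
print(dict(zip(species, result.x)))
```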
{
"paragraph_id": 89,
"text": "This method of calculating equilibrium chemical concentrations is useful for systems with a large number of different molecules. The use of k atomic element conservation equations for the mass constraint is straightforward, and replaces the use of the stoichiometric coefficient equations. The results are consistent with those specified by chemical equations. For example, if equilibrium is specified by a single chemical equation:,",
"title": "Composition of a mixture"
},
{
"paragraph_id": 90,
"text": "where νj is the stoichiometric coefficient for the j th molecule (negative for reactants, positive for products) and Rj is the symbol for the j th molecule, a properly balanced equation will obey:",
"title": "Composition of a mixture"
},
{
"paragraph_id": 91,
"text": "Multiplying the first equilibrium condition by νj and using the above equation yields:",
"title": "Composition of a mixture"
},
{
"paragraph_id": 92,
"text": "As above, defining ΔG",
"title": "Composition of a mixture"
},
{
"paragraph_id": 93,
"text": "where Kc is the equilibrium constant, and ΔG will be zero at equilibrium.",
"title": "Composition of a mixture"
},
{
"paragraph_id": 94,
"text": "Analogous procedures exist for the minimization of other thermodynamic potentials.",
"title": "Composition of a mixture"
}
] | In a chemical reaction, chemical equilibrium is the state in which both the reactants and products are present in concentrations which have no further tendency to change with time, so that there is no observable change in the properties of the system. This state results when the forward reaction proceeds at the same rate as the reverse reaction. The reaction rates of the forward and backward reactions are generally not zero, but they are equal. Thus, there are no net changes in the concentrations of the reactants and products. Such a state is known as dynamic equilibrium. | 2001-06-01T15:17:46Z | 2023-11-22T17:26:14Z | [
"Template:Spaces",
"Template:Su",
"Template:Cite book",
"Template:GoldBookRef",
"Template:Commons category-inline",
"Template:Citation needed",
"Template:Math",
"Template:Quote box",
"Template:Cite journal",
"Template:Cite web",
"Template:Chem2",
"Template:Chemical equilibria",
"Template:Short description",
"Template:See also",
"Template:Eqm",
"Template:Col div",
"Template:Colend",
"Template:Reflist",
"Template:Cite encyclopedia",
"Template:Library resources box",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Chemical_equilibrium |
5,308 | Combination | In mathematics, a combination is a selection of items from a set that has distinct members, such that the order of selection does not matter (unlike permutations). For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally, a k-combination of a set S is a subset of k distinct elements of S. So, two combinations are identical if and only if each combination has the same members. (The arrangement of the members in each set does not matter.) If the set has n elements, the number of k-combinations, denoted by C ( n , k ) {\displaystyle C(n,k)} or C k n {\displaystyle C_{k}^{n}} , is equal to the binomial coefficient
which can be written using factorials as n ! k ! ( n − k ) ! {\displaystyle \textstyle {\frac {n!}{k!(n-k)!}}} whenever k ≤ n {\displaystyle k\leq n} , and which is zero when k > n {\displaystyle k>n} . This formula can be derived from the fact that each k-combination of a set S of n members has k ! {\displaystyle k!} permutations so P k n = C k n × k ! {\displaystyle P_{k}^{n}=C_{k}^{n}\times k!} or C k n = P k n / k ! {\displaystyle C_{k}^{n}=P_{k}^{n}/k!} . The set of all k-combinations of a set S is often denoted by ( S k ) {\displaystyle \textstyle {\binom {S}{k}}} .
A combination is a combination of n things taken k at a time without repetition. To refer to combinations in which repetition is allowed, the terms k-combination with repetition, k-multiset, or k-selection, are often used. If, in the above example, it were possible to have two of any one kind of fruit there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears.
Although the set of three fruits was small enough to write a complete list of combinations, this becomes impractical as the size of the set increases. For example, a poker hand can be described as a 5-combination (k = 5) of cards from a 52 card deck (n = 52). The 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1 / 2,598,960.
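The figures above can be checked directly with Python's built-in binomial function (a quick sanity check, not part of the original text):

```python
import math

hands = math.comb(52, 5)
print(hands)      # 2598960
print(1 / hands)  # chance of drawing any one particular 5-card hand
```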
The number of k-combinations from a given set S of n elements is often denoted in elementary combinatorics texts by C ( n , k ) {\displaystyle C(n,k)} , or by a variation such as C k n {\displaystyle C_{k}^{n}} , n C k {\displaystyle {}_{n}C_{k}} , n C k {\displaystyle {}^{n}C_{k}} , C n , k {\displaystyle C_{n,k}} or even C n k {\displaystyle C_{n}^{k}} (the last form is standard in French, Romanian, Russian, Chinese and Polish texts). The same number however occurs in many other mathematical contexts, where it is denoted by ( n k ) {\displaystyle {\tbinom {n}{k}}} (often read as "n choose k"); notably it occurs as a coefficient in the binomial formula, hence its name binomial coefficient. One can define ( n k ) {\displaystyle {\tbinom {n}{k}}} for all natural numbers k at once by the relation
from which it is clear that
and further,
for k > n.
To see that these coefficients count k-combinations from S, one can first consider a collection of n distinct variables Xs labeled by the elements s of S, and expand the product over all elements of S:
it has 2^n distinct terms corresponding to all the subsets of S, each subset giving the product of the corresponding variables Xs. Now setting all of the Xs equal to the unlabeled variable X, so that the product becomes (1 + X)^n, the term for each k-combination from S becomes X^k, so that the coefficient of that power in the result equals the number of such k-combinations.
Binomial coefficients can be computed explicitly in various ways. To get all of them for the expansions up to (1 + X), one can use (in addition to the basic cases already given) the recursion relation
for 0 < k < n, which follows from (1 + X)^n = (1 + X)^(n − 1)(1 + X); this leads to the construction of Pascal's triangle.
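A short sketch of the recursion just stated, building the rows of Pascal's triangle one at a time so that every C(n, k) up to a chosen n is obtained:

```python
def pascal_rows(n_max):
    """Rows 0..n_max of Pascal's triangle via C(n, k) = C(n-1, k-1) + C(n-1, k)."""
    row = [1]
    rows = [row]
    for _ in range(n_max):
        row = [1] + [row[i - 1] + row[i] for i in range(1, len(row))] + [1]
        rows.append(row)
    return rows

for r in pascal_rows(5):
    print(r)   # last row printed: [1, 5, 10, 10, 5, 1]
```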
For determining an individual binomial coefficient, it is more practical to use the formula
The numerator gives the number of k-permutations of n, i.e., of sequences of k distinct elements of S, while the denominator gives the number of such k-permutations that give the same k-combination when the order is ignored.
When k exceeds n/2, the above formula contains factors common to the numerator and the denominator, and canceling them out gives the relation
for 0 ≤ k ≤ n. This expresses a symmetry that is evident from the binomial formula, and can also be understood in terms of k-combinations by taking the complement of such a combination, which is an (n − k)-combination.
Finally there is a formula which exhibits this symmetry directly, and has the merit of being easy to remember:
where n! denotes the factorial of n. It is obtained from the previous formula by multiplying denominator and numerator by (n − k)!, so it is certainly computationally less efficient than that formula.
The last formula can be understood directly, by considering the n! permutations of all the elements of S. Each such permutation gives a k-combination by selecting its first k elements. There are many duplicate selections: any combined permutation of the first k elements among each other, and of the final (n − k) elements among each other produces the same combination; this explains the division in the formula.
From the above formulas follow relations between adjacent numbers in Pascal's triangle in all three directions:
Together with the basic cases ( n 0 ) = 1 = ( n n ) {\displaystyle {\tbinom {n}{0}}=1={\tbinom {n}{n}}} , these allow successive computation of respectively all numbers of combinations from the same set (a row in Pascal's triangle), of k-combinations of sets of growing sizes, and of combinations with a complement of fixed size n − k.
As a specific example, one can compute the number of five-card hands possible from a standard fifty-two card deck as:
Alternatively one may use the formula in terms of factorials and cancel the factors in the numerator against parts of the factors in the denominator, after which only multiplication of the remaining factors is required:
Another alternative computation, equivalent to the first, is based on writing
which gives
When evaluated in the following order, 52 ÷ 1 × 51 ÷ 2 × 50 ÷ 3 × 49 ÷ 4 × 48 ÷ 5, this can be computed using only integer arithmetic. The reason is that when each division occurs, the intermediate result that is produced is itself a binomial coefficient, so no remainders ever occur.
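A sketch of that evaluation order in code: after step i the running value is exactly C(n, i), so the integer division is exact at every step and only integer arithmetic is used.

```python
def comb_multiplicative(n, k):
    """Binomial coefficient via n/1 * (n-1)/2 * ... evaluated left to right."""
    result = 1
    for i in range(1, k + 1):
        result = result * (n - i + 1) // i   # intermediate value equals C(n, i)
    return result

print(comb_multiplicative(52, 5))  # 2598960
```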
Using the symmetric formula in terms of factorials without performing simplifications gives a rather extensive calculation:
One can enumerate all k-combinations of a given set S of n elements in some fixed order, which establishes a bijection from an interval of ( n k ) {\displaystyle {\tbinom {n}{k}}} integers with the set of those k-combinations. Assuming S is itself ordered, for instance S = { 1, 2, ..., n }, there are two natural possibilities for ordering its k-combinations: by comparing their smallest elements first (as in the illustrations above) or by comparing their largest elements first. The latter option has the advantage that adding a new largest element to S will not change the initial part of the enumeration, but just add the new k-combinations of the larger set after the previous ones. Repeating this process, the enumeration can be extended indefinitely with k-combinations of ever larger sets. If moreover the intervals of the integers are taken to start at 0, then the k-combination at a given place i in the enumeration can be computed easily from i, and the bijection so obtained is known as the combinatorial number system. It is also known as "rank"/"ranking" and "unranking" in computational mathematics.
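A sketch of the "unranking" direction of the combinatorial number system mentioned here: mapping an index 0 ≤ i < C(n, k) to the k-combination at that position, with elements drawn from {0, ..., n − 1} and largest elements compared first.

```python
from math import comb

def unrank_combination(i, n, k):
    """Return the i-th k-combination of {0, ..., n-1} as a strictly decreasing list."""
    assert 0 <= i < comb(n, k)
    combo = []
    for j in range(k, 0, -1):
        c = j - 1
        while comb(c + 1, j) <= i:   # find the largest c with comb(c, j) <= i
            c += 1
        combo.append(c)
        i -= comb(c, j)
    return combo

print(unrank_combination(0, 5, 3))   # [2, 1, 0]  -- first in the enumeration
print(unrank_combination(9, 5, 3))   # [4, 3, 2]  -- last of the C(5,3) = 10 combinations
```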
There are many ways to enumerate k combinations. One way is to visit all the binary numbers less than 2^n and choose those numbers having k nonzero bits, although this is very inefficient even for small n (e.g. n = 20 would require visiting about one million numbers while the maximum number of allowed k combinations is about 186 thousand for k = 10). The positions of these 1 bits in such a number form a specific k-combination of the set { 1, ..., n }. Another simple, faster way is to track k index numbers of the elements selected, starting with {0 .. k−1} (zero-based) or {1 .. k} (one-based) as the first allowed k-combination, and then repeatedly moving to the next allowed k-combination by incrementing the last index number (if it is lower than n-1 (zero-based) or n (one-based)), or otherwise incrementing the last index number x that is less than the index number following it minus one, if such an index exists, and resetting the index numbers after x to {x+1, x+2, ...}.
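A sketch of the second (index-tracking) method described above, written as a zero-based generator; it yields the combinations in the same order as Python's itertools.combinations.

```python
def k_combinations(n, k):
    """Yield all k-combinations of {0, ..., n-1} by advancing index numbers."""
    indices = list(range(k))
    while True:
        yield tuple(indices)
        # find the rightmost index that can still be incremented
        for pos in reversed(range(k)):
            if indices[pos] < n - k + pos:
                break
        else:
            return
        indices[pos] += 1
        for later in range(pos + 1, k):
            indices[later] = indices[later - 1] + 1

print(list(k_combinations(4, 2)))
# [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```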
A k-combination with repetitions, or k-multicombination, or multisubset of size k from a set S of size n is given by a set of k not necessarily distinct elements of S, where order is not taken into account: two sequences define the same multiset if one can be obtained from the other by permuting the terms. In other words, it is a sample of k elements from a set of n elements allowing for duplicates (i.e., with replacement) but disregarding different orderings (e.g. {2,1,2} = {1,2,2}). Associate an index to each element of S and think of the elements of S as types of objects, then we can let x i {\displaystyle x_{i}} denote the number of elements of type i in a multisubset. The number of multisubsets of size k is then the number of nonnegative integer (so allowing zero) solutions of the Diophantine equation:
If S has n elements, the number of such k-multisubsets is denoted by
a notation that is analogous to the binomial coefficient which counts k-subsets. This expression, n multichoose k, can also be given in terms of binomial coefficients:
This relationship can be easily proved using a representation known as stars and bars.
A solution of the above Diophantine equation can be represented by x 1 {\displaystyle x_{1}} stars, a separator (a bar), then x 2 {\displaystyle x_{2}} more stars, another separator, and so on. The total number of stars in this representation is k and the number of bars is n - 1 (since a separation into n parts needs n-1 separators). Thus, a string of k + n - 1 (or n + k - 1) symbols (stars and bars) corresponds to a solution if there are k stars in the string. Any solution can be represented by choosing k out of k + n − 1 positions to place stars and filling the remaining positions with bars. For example, the solution x 1 = 3 , x 2 = 2 , x 3 = 0 , x 4 = 5 {\displaystyle x_{1}=3,x_{2}=2,x_{3}=0,x_{4}=5} of the equation x 1 + x 2 + x 3 + x 4 = 10 {\displaystyle x_{1}+x_{2}+x_{3}+x_{4}=10} (n = 4 and k = 10) can be represented by
The number of such strings is the number of ways to place 10 stars in 13 positions, ( 13 10 ) = ( 13 3 ) = 286 , {\textstyle {\binom {13}{10}}={\binom {13}{3}}=286,} which is the number of 10-multisubsets of a set with 4 elements.
As with binomial coefficients, there are several relationships between these multichoose expressions. For example, for n ≥ 1 , k ≥ 0 {\displaystyle n\geq 1,k\geq 0} ,
This identity follows from interchanging the stars and bars in the above representation.
For example, if you have four types of donuts (n = 4) on a menu to choose from and you want three donuts (k = 3), the number of ways to choose the donuts with repetition can be calculated as
This result can be verified by listing all the 3-multisubsets of the set S = {1,2,3,4}. This is displayed in the following table. The second column lists the donuts you actually chose, the third column shows the nonnegative integer solutions [ x 1 , x 2 , x 3 , x 4 ] {\displaystyle [x_{1},x_{2},x_{3},x_{4}]} of the equation x 1 + x 2 + x 3 + x 4 = 3 {\displaystyle x_{1}+x_{2}+x_{3}+x_{4}=3} and the last column gives the stars and bars representation of the solutions.
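The donut count can also be checked two ways, via the formula C(n + k − 1, k) and by direct enumeration with Python's standard library:

```python
from math import comb
from itertools import combinations_with_replacement

n, k = 4, 3                      # four donut types, choose three with repetition
print(comb(n + k - 1, k))        # 20

choices = list(combinations_with_replacement(range(1, n + 1), k))
print(len(choices))              # 20
print(choices[:3])               # [(1, 1, 1), (1, 1, 2), (1, 1, 3)]
```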
The number of k-combinations for all k is the number of subsets of a set of n elements. There are several ways to see that this number is 2^n. In terms of combinations, ∑ 0 ≤ k ≤ n ( n k ) = 2 n {\textstyle \sum _{0\leq {k}\leq {n}}{\binom {n}{k}}=2^{n}} , which is the sum of the nth row (counting from 0) of the binomial coefficients in Pascal's triangle. These combinations (subsets) are enumerated by the 1 digits of the set of base 2 numbers counting from 0 to 2^n − 1, where each digit position is an item from the set of n.
Given 3 cards numbered 1 to 3, there are 8 distinct combinations (subsets), including the empty set:
Representing these subsets (in the same order) as base 2 numerals:
There are various algorithms to pick out a random combination from a given set or list. Rejection sampling is extremely slow for large sample sizes. One way to select a k-combination efficiently from a population of size n is to iterate across each element of the population, and at each step pick that element with a dynamically changing probability of k − # samples chosen n − # samples visited {\textstyle {\frac {k-\#{\text{samples chosen}}}{n-\#{\text{samples visited}}}}} (see Reservoir sampling). Another is to pick a random non-negative integer less than ( n k ) {\displaystyle \textstyle {\binom {n}{k}}} and convert it into a combination using the combinatorial number system.
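A sketch of the dynamically changing probability method described here (often called selection sampling): walk through the population once and keep each element with probability (samples still needed) / (elements still unseen), which returns exactly k elements and gives every k-combination the same chance.

```python
import random

def random_combination(population, k):
    """Select a uniformly random k-combination in a single pass."""
    chosen, needed, remaining = [], k, len(population)
    for item in population:
        if random.random() < needed / remaining:
            chosen.append(item)
            needed -= 1
        remaining -= 1
    return chosen

print(random_combination(list(range(52)), 5))  # e.g. a random 5-card hand, by card index
```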
A combination can also be thought of as a selection of two sets of items: those that go into the chosen bin and those that go into the unchosen bin. This can be generalized to any number of bins with the constraint that every item must go to exactly one bin. The number of ways to put objects into bins is given by the multinomial coefficient
where n is the number of items, m is the number of bins, and k i {\displaystyle k_{i}} is the number of items that go into bin i.
One way to see why this equation holds is to first number the objects arbitrarily from 1 to n and put the objects with numbers 1 , 2 , … , k 1 {\displaystyle 1,2,\ldots ,k_{1}} into the first bin in order, the objects with numbers k 1 + 1 , k 1 + 2 , … , k 2 {\displaystyle k_{1}+1,k_{1}+2,\ldots ,k_{2}} into the second bin in order, and so on. There are n ! {\displaystyle n!} distinct numberings, but many of them are equivalent, because only the set of items in a bin matters, not their order in it. Every combined permutation of each bins' contents produces an equivalent way of putting items into bins. As a result, every equivalence class consists of k 1 ! k 2 ! ⋯ k m ! {\displaystyle k_{1}!\,k_{2}!\cdots k_{m}!} distinct numberings, and the number of equivalence classes is n ! k 1 ! k 2 ! ⋯ k m ! {\displaystyle \textstyle {\frac {n!}{k_{1}!\,k_{2}!\cdots k_{m}!}}} .
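A sketch computing the multinomial coefficient as a product of binomial coefficients, which keeps every intermediate value an integer:

```python
from math import comb

def multinomial(*ks):
    """n! / (k1! k2! ... km!) for n = k1 + k2 + ... + km."""
    total, result = 0, 1
    for k in ks:
        total += k
        result *= comb(total, k)
    return result

print(multinomial(2, 3, 4))   # 9! / (2! 3! 4!) = 1260
print(multinomial(5, 47))     # the two-bin special case below: equals C(52, 5) = 2598960
```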
The binomial coefficient is the special case where k items go into the chosen bin and the remaining n − k {\displaystyle n-k} items go into the unchosen bin: | [
{
"paragraph_id": 0,
"text": "In mathematics, a combination is a selection of items from a set that has distinct members, such that the order of selection does not matter (unlike permutations). For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally, a k-combination of a set S is a subset of k distinct elements of S. So, two combinations are identical if and only if each combination has the same members. (The arrangement of the members in each set does not matter.) If the set has n elements, the number of k-combinations, denoted by C ( n , k ) {\\displaystyle C(n,k)} or C k n {\\displaystyle C_{k}^{n}} , is equal to the binomial coefficient",
"title": ""
},
{
"paragraph_id": 1,
"text": "which can be written using factorials as n ! k ! ( n − k ) ! {\\displaystyle \\textstyle {\\frac {n!}{k!(n-k)!}}} whenever k ≤ n {\\displaystyle k\\leq n} , and which is zero when k > n {\\displaystyle k>n} . This formula can be derived from the fact that each k-combination of a set S of n members has k ! {\\displaystyle k!} permutations so P k n = C k n × k ! {\\displaystyle P_{k}^{n}=C_{k}^{n}\\times k!} or C k n = P k n / k ! {\\displaystyle C_{k}^{n}=P_{k}^{n}/k!} . The set of all k-combinations of a set S is often denoted by ( S k ) {\\displaystyle \\textstyle {\\binom {S}{k}}} .",
"title": ""
},
{
"paragraph_id": 2,
"text": "A combination is a combination of n things taken k at a time without repetition. To refer to combinations in which repetition is allowed, the terms k-combination with repetition, k-multiset, or k-selection, are often used. If, in the above example, it were possible to have two of any one kind of fruit there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Although the set of three fruits was small enough to write a complete list of combinations, this becomes impractical as the size of the set increases. For example, a poker hand can be described as a 5-combination (k = 5) of cards from a 52 card deck (n = 52). The 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1 / 2,598,960.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The number of k-combinations from a given set S of n elements is often denoted in elementary combinatorics texts by C ( n , k ) {\\displaystyle C(n,k)} , or by a variation such as C k n {\\displaystyle C_{k}^{n}} , n C k {\\displaystyle {}_{n}C_{k}} , n C k {\\displaystyle {}^{n}C_{k}} , C n , k {\\displaystyle C_{n,k}} or even C n k {\\displaystyle C_{n}^{k}} (the last form is standard in French, Romanian, Russian, Chinese and Polish texts). The same number however occurs in many other mathematical contexts, where it is denoted by ( n k ) {\\displaystyle {\\tbinom {n}{k}}} (often read as \"n choose k\"); notably it occurs as a coefficient in the binomial formula, hence its name binomial coefficient. One can define ( n k ) {\\displaystyle {\\tbinom {n}{k}}} for all natural numbers k at once by the relation",
"title": "Number of k-combinations"
},
{
"paragraph_id": 5,
"text": "from which it is clear that",
"title": "Number of k-combinations"
},
{
"paragraph_id": 6,
"text": "and further,",
"title": "Number of k-combinations"
},
{
"paragraph_id": 7,
"text": "for k > n.",
"title": "Number of k-combinations"
},
{
"paragraph_id": 8,
"text": "To see that these coefficients count k-combinations from S, one can first consider a collection of n distinct variables Xs labeled by the elements s of S, and expand the product over all elements of S:",
"title": "Number of k-combinations"
},
{
"paragraph_id": 9,
"text": "it has 2 distinct terms corresponding to all the subsets of S, each subset giving the product of the corresponding variables Xs. Now setting all of the Xs equal to the unlabeled variable X, so that the product becomes (1 + X), the term for each k-combination from S becomes X, so that the coefficient of that power in the result equals the number of such k-combinations.",
"title": "Number of k-combinations"
},
{
"paragraph_id": 10,
"text": "Binomial coefficients can be computed explicitly in various ways. To get all of them for the expansions up to (1 + X), one can use (in addition to the basic cases already given) the recursion relation",
"title": "Number of k-combinations"
},
{
"paragraph_id": 11,
"text": "for 0 < k < n, which follows from (1 + X) = (1 + X)(1 + X); this leads to the construction of Pascal's triangle.",
"title": "Number of k-combinations"
},
{
"paragraph_id": 12,
"text": "For determining an individual binomial coefficient, it is more practical to use the formula",
"title": "Number of k-combinations"
},
{
"paragraph_id": 13,
"text": "The numerator gives the number of k-permutations of n, i.e., of sequences of k distinct elements of S, while the denominator gives the number of such k-permutations that give the same k-combination when the order is ignored.",
"title": "Number of k-combinations"
},
{
"paragraph_id": 14,
"text": "When k exceeds n/2, the above formula contains factors common to the numerator and the denominator, and canceling them out gives the relation",
"title": "Number of k-combinations"
},
{
"paragraph_id": 15,
"text": "for 0 ≤ k ≤ n. This expresses a symmetry that is evident from the binomial formula, and can also be understood in terms of k-combinations by taking the complement of such a combination, which is an (n − k)-combination.",
"title": "Number of k-combinations"
},
{
"paragraph_id": 16,
"text": "Finally there is a formula which exhibits this symmetry directly, and has the merit of being easy to remember:",
"title": "Number of k-combinations"
},
{
"paragraph_id": 17,
"text": "where n! denotes the factorial of n. It is obtained from the previous formula by multiplying denominator and numerator by (n − k)!, so it is certainly computationally less efficient than that formula.",
"title": "Number of k-combinations"
},
{
"paragraph_id": 18,
"text": "The last formula can be understood directly, by considering the n! permutations of all the elements of S. Each such permutation gives a k-combination by selecting its first k elements. There are many duplicate selections: any combined permutation of the first k elements among each other, and of the final (n − k) elements among each other produces the same combination; this explains the division in the formula.",
"title": "Number of k-combinations"
},
{
"paragraph_id": 19,
"text": "From the above formulas follow relations between adjacent numbers in Pascal's triangle in all three directions:",
"title": "Number of k-combinations"
},
{
"paragraph_id": 20,
"text": "Together with the basic cases ( n 0 ) = 1 = ( n n ) {\\displaystyle {\\tbinom {n}{0}}=1={\\tbinom {n}{n}}} , these allow successive computation of respectively all numbers of combinations from the same set (a row in Pascal's triangle), of k-combinations of sets of growing sizes, and of combinations with a complement of fixed size n − k.",
"title": "Number of k-combinations"
},
{
"paragraph_id": 21,
"text": "As a specific example, one can compute the number of five-card hands possible from a standard fifty-two card deck as:",
"title": "Number of k-combinations"
},
{
"paragraph_id": 22,
"text": "Alternatively one may use the formula in terms of factorials and cancel the factors in the numerator against parts of the factors in the denominator, after which only multiplication of the remaining factors is required:",
"title": "Number of k-combinations"
},
{
"paragraph_id": 23,
"text": "Another alternative computation, equivalent to the first, is based on writing",
"title": "Number of k-combinations"
},
{
"paragraph_id": 24,
"text": "which gives",
"title": "Number of k-combinations"
},
{
"paragraph_id": 25,
"text": "When evaluated in the following order, 52 ÷ 1 × 51 ÷ 2 × 50 ÷ 3 × 49 ÷ 4 × 48 ÷ 5, this can be computed using only integer arithmetic. The reason is that when each division occurs, the intermediate result that is produced is itself a binomial coefficient, so no remainders ever occur.",
"title": "Number of k-combinations"
},
{
"paragraph_id": 26,
"text": "Using the symmetric formula in terms of factorials without performing simplifications gives a rather extensive calculation:",
"title": "Number of k-combinations"
},
{
"paragraph_id": 27,
"text": "One can enumerate all k-combinations of a given set S of n elements in some fixed order, which establishes a bijection from an interval of ( n k ) {\\displaystyle {\\tbinom {n}{k}}} integers with the set of those k-combinations. Assuming S is itself ordered, for instance S = { 1, 2, ..., n }, there are two natural possibilities for ordering its k-combinations: by comparing their smallest elements first (as in the illustrations above) or by comparing their largest elements first. The latter option has the advantage that adding a new largest element to S will not change the initial part of the enumeration, but just add the new k-combinations of the larger set after the previous ones. Repeating this process, the enumeration can be extended indefinitely with k-combinations of ever larger sets. If moreover the intervals of the integers are taken to start at 0, then the k-combination at a given place i in the enumeration can be computed easily from i, and the bijection so obtained is known as the combinatorial number system. It is also known as \"rank\"/\"ranking\" and \"unranking\" in computational mathematics.",
"title": "Number of k-combinations"
},
{
"paragraph_id": 28,
"text": "There are many ways to enumerate k combinations. One way is to visit all the binary numbers less than 2. Choose those numbers having k nonzero bits, although this is very inefficient even for small n (e.g. n = 20 would require visiting about one million numbers while the maximum number of allowed k combinations is about 186 thousand for k = 10). The positions of these 1 bits in such a number is a specific k-combination of the set { 1, ..., n }. Another simple, faster way is to track k index numbers of the elements selected, starting with {0 .. k−1} (zero-based) or {1 .. k} (one-based) as the first allowed k-combination and then repeatedly moving to the next allowed k-combination by incrementing the last index number if it is lower than n-1 (zero-based) or n (one-based) or the last index number x that is less than the index number following it minus one if such an index exists and resetting the index numbers after x to {x+1, x+2, ...}.",
"title": "Number of k-combinations"
},
{
"paragraph_id": 29,
"text": "A k-combination with repetitions, or k-multicombination, or multisubset of size k from a set S of size n is given by a set of k not necessarily distinct elements of S, where order is not taken into account: two sequences define the same multiset if one can be obtained from the other by permuting the terms. In other words, it is a sample of k elements from a set of n elements allowing for duplicates (i.e., with replacement) but disregarding different orderings (e.g. {2,1,2} = {1,2,2}). Associate an index to each element of S and think of the elements of S as types of objects, then we can let x i {\\displaystyle x_{i}} denote the number of elements of type i in a multisubset. The number of multisubsets of size k is then the number of nonnegative integer (so allowing zero) solutions of the Diophantine equation:",
"title": "Number of combinations with repetition"
},
{
"paragraph_id": 30,
"text": "If S has n elements, the number of such k-multisubsets is denoted by",
"title": "Number of combinations with repetition"
},
{
"paragraph_id": 31,
"text": "a notation that is analogous to the binomial coefficient which counts k-subsets. This expression, n multichoose k, can also be given in terms of binomial coefficients:",
"title": "Number of combinations with repetition"
},
{
"paragraph_id": 32,
"text": "This relationship can be easily proved using a representation known as stars and bars.",
"title": "Number of combinations with repetition"
},
{
"paragraph_id": 33,
"text": "A solution of the above Diophantine equation can be represented by x 1 {\\displaystyle x_{1}} stars, a separator (a bar), then x 2 {\\displaystyle x_{2}} more stars, another separator, and so on. The total number of stars in this representation is k and the number of bars is n - 1 (since a separation into n parts needs n-1 separators). Thus, a string of k + n - 1 (or n + k - 1) symbols (stars and bars) corresponds to a solution if there are k stars in the string. Any solution can be represented by choosing k out of k + n − 1 positions to place stars and filling the remaining positions with bars. For example, the solution x 1 = 3 , x 2 = 2 , x 3 = 0 , x 4 = 5 {\\displaystyle x_{1}=3,x_{2}=2,x_{3}=0,x_{4}=5} of the equation x 1 + x 2 + x 3 + x 4 = 10 {\\displaystyle x_{1}+x_{2}+x_{3}+x_{4}=10} (n = 4 and k = 10) can be represented by",
"title": "Number of combinations with repetition"
},
{
"paragraph_id": 34,
"text": "The number of such strings is the number of ways to place 10 stars in 13 positions, ( 13 10 ) = ( 13 3 ) = 286 , {\\textstyle {\\binom {13}{10}}={\\binom {13}{3}}=286,} which is the number of 10-multisubsets of a set with 4 elements.",
"title": "Number of combinations with repetition"
},
{
"paragraph_id": 35,
"text": "As with binomial coefficients, there are several relationships between these multichoose expressions. For example, for n ≥ 1 , k ≥ 0 {\\displaystyle n\\geq 1,k\\geq 0} ,",
"title": "Number of combinations with repetition"
},
{
"paragraph_id": 36,
"text": "This identity follows from interchanging the stars and bars in the above representation.",
"title": "Number of combinations with repetition"
},
{
"paragraph_id": 37,
"text": "For example, if you have four types of donuts (n = 4) on a menu to choose from and you want three donuts (k = 3), the number of ways to choose the donuts with repetition can be calculated as",
"title": "Number of combinations with repetition"
},
{
"paragraph_id": 38,
"text": "This result can be verified by listing all the 3-multisubsets of the set S = {1,2,3,4}. This is displayed in the following table. The second column lists the donuts you actually chose, the third column shows the nonnegative integer solutions [ x 1 , x 2 , x 3 , x 4 ] {\\displaystyle [x_{1},x_{2},x_{3},x_{4}]} of the equation x 1 + x 2 + x 3 + x 4 = 3 {\\displaystyle x_{1}+x_{2}+x_{3}+x_{4}=3} and the last column gives the stars and bars representation of the solutions.",
"title": "Number of combinations with repetition"
},
{
"paragraph_id": 39,
"text": "The number of k-combinations for all k is the number of subsets of a set of n elements. There are several ways to see that this number is 2. In terms of combinations, ∑ 0 ≤ k ≤ n ( n k ) = 2 n {\\textstyle \\sum _{0\\leq {k}\\leq {n}}{\\binom {n}{k}}=2^{n}} , which is the sum of the nth row (counting from 0) of the binomial coefficients in Pascal's triangle. These combinations (subsets) are enumerated by the 1 digits of the set of base 2 numbers counting from 0 to 2 − 1, where each digit position is an item from the set of n.",
"title": "Number of k-combinations for all k"
},
{
"paragraph_id": 40,
"text": "Given 3 cards numbered 1 to 3, there are 8 distinct combinations (subsets), including the empty set:",
"title": "Number of k-combinations for all k"
},
{
"paragraph_id": 41,
"text": "Representing these subsets (in the same order) as base 2 numerals:",
"title": "Number of k-combinations for all k"
},
{
"paragraph_id": 42,
"text": "There are various algorithms to pick out a random combination from a given set or list. Rejection sampling is extremely slow for large sample sizes. One way to select a k-combination efficiently from a population of size n is to iterate across each element of the population, and at each step pick that element with a dynamically changing probability of k − # samples chosen n − # samples visited {\\textstyle {\\frac {k-\\#{\\text{samples chosen}}}{n-\\#{\\text{samples visited}}}}} (see Reservoir sampling). Another is to pick a random non-negative integer less than ( n k ) {\\displaystyle \\textstyle {\\binom {n}{k}}} and convert it into a combination using the combinatorial number system.",
"title": "Probability: sampling a random combination"
},
{
"paragraph_id": 43,
"text": "A combination can also be thought of as a selection of two sets of items: those that go into the chosen bin and those that go into the unchosen bin. This can be generalized to any number of bins with the constraint that every item must go to exactly one bin. The number of ways to put objects into bins is given by the multinomial coefficient",
"title": "Number of ways to put objects into bins"
},
{
"paragraph_id": 44,
"text": "where n is the number of items, m is the number of bins, and k i {\\displaystyle k_{i}} is the number of items that go into bin i.",
"title": "Number of ways to put objects into bins"
},
{
"paragraph_id": 45,
"text": "One way to see why this equation holds is to first number the objects arbitrarily from 1 to n and put the objects with numbers 1 , 2 , … , k 1 {\\displaystyle 1,2,\\ldots ,k_{1}} into the first bin in order, the objects with numbers k 1 + 1 , k 1 + 2 , … , k 2 {\\displaystyle k_{1}+1,k_{1}+2,\\ldots ,k_{2}} into the second bin in order, and so on. There are n ! {\\displaystyle n!} distinct numberings, but many of them are equivalent, because only the set of items in a bin matters, not their order in it. Every combined permutation of each bins' contents produces an equivalent way of putting items into bins. As a result, every equivalence class consists of k 1 ! k 2 ! ⋯ k m ! {\\displaystyle k_{1}!\\,k_{2}!\\cdots k_{m}!} distinct numberings, and the number of equivalence classes is n ! k 1 ! k 2 ! ⋯ k m ! {\\displaystyle \\textstyle {\\frac {n!}{k_{1}!\\,k_{2}!\\cdots k_{m}!}}} .",
"title": "Number of ways to put objects into bins"
},
{
"paragraph_id": 46,
"text": "The binomial coefficient is the special case where k items go into the chosen bin and the remaining n − k {\\displaystyle n-k} items go into the unchosen bin:",
"title": "Number of ways to put objects into bins"
}
] | In mathematics, a combination is a selection of items from a set that has distinct members, such that the order of selection does not matter (unlike permutations). For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally, a k-combination of a set S is a subset of k distinct elements of S. So, two combinations are identical if and only if each combination has the same members. (The arrangement of the members in each set does not matter.) If the set has n elements, the number of k-combinations, denoted by C ( n , k ) or C k n , is equal to the binomial coefficient which can be written using factorials as n ! k ! ( n − k ) ! whenever k ≤ n , and which is zero when k > n . This formula can be derived from the fact that each k-combination of a set S of n members has k ! permutations so P k n = C k n × k ! or C k n = P k n / k ! . The set of all k-combinations of a set S is often denoted by ( S k ) . A combination is a combination of n things taken k at a time without repetition. To refer to combinations in which repetition is allowed, the terms k-combination with repetition, k-multiset, or k-selection, are often used. If, in the above example, it were possible to have two of any one kind of fruit there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears. Although the set of three fruits was small enough to write a complete list of combinations, this becomes impractical as the size of the set increases. For example, a poker hand can be described as a 5-combination (k = 5) of cards from a 52 card deck (n = 52). The 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1 / 2,598,960. | 2001-07-26T16:24:13Z | 2023-11-28T14:40:19Z | [
"Template:Redirect-multi",
"Template:See also",
"Template:Hidden end",
"Template:Portal",
"Template:Div col",
"Template:Cite web",
"Template:Citation",
"Template:Use dmy dates",
"Template:Nobreak",
"Template:Reflist",
"Template:Harv",
"Template:Mvar",
"Template:Nowrap",
"Template:Main",
"Template:Div col end",
"Template:Cite book",
"Template:Harvnb",
"Template:Short description",
"Template:Citation needed",
"Template:Math",
"Template:Hidden begin",
"Template:Ugc",
"Template:Dead link",
"Template:About"
] | https://en.wikipedia.org/wiki/Combination |
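To make the multinomial-coefficient formula in the record above concrete, here is a minimal computational sketch in Python; the function name `multinomial` and the example bin sizes are illustrative and not part of the source article.

```python
from math import factorial, prod

def multinomial(*counts):
    """Number of ways to put sum(counts) labelled objects into bins
    of the given sizes: n! / (k_1! * k_2! * ... * k_m!)."""
    n = sum(counts)
    return factorial(n) // prod(factorial(k) for k in counts)

# The binomial coefficient is the two-bin special case:
assert multinomial(2, 1) == 3          # C(3, 2): the three fruit pairs
assert multinomial(5, 47) == 2598960   # C(52, 5): distinct poker hands
print(multinomial(3, 2, 2))            # 7 objects into bins of 3, 2, 2 -> 210
```

The two-bin assertions reproduce the binomial special case discussed in the record, including the 2,598,960 poker hands mentioned in the abstract.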
5,309 | Software | Software is a set of computer programs and associated documentation and data. This is in contrast to hardware, from which the system is built and which actually performs the work.
At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit (CPU) or a graphics processing unit (GPU). Machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example, displaying some text on a computer screen, causing state changes that should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to "jump" to a different instruction or is interrupted by the operating system. As of 2023, most personal computers, smartphone devices, and servers have processors with multiple execution units, or multiple processors performing computation together, so computing has become a much more concurrent activity than in the past.
The majority of software is written in high-level programming languages. They are easier and more efficient for programmers because they are closer to natural languages than machine languages. High-level languages are translated into machine language using a compiler, an interpreter, or a combination of the two. Software may also be written in a low-level assembly language that has a strong correspondence to the computer's machine language instructions and is translated into machine language using an assembler.
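As a rough illustration of the translation step described above, the sketch below uses Python's standard `dis` module to show the lower-level instructions an interpreter actually runs. Note the hedge: CPython compiles to bytecode for a virtual machine rather than native machine code, and the example function is invented for illustration.

```python
import dis

def add_tax(price, rate):
    return price + price * rate

# Print the interpreter-level instructions CPython compiles this into.
# (Bytecode, not native machine code, but it illustrates how a
# high-level expression becomes a sequence of simple instructions.)
dis.dis(add_tax)
```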
An algorithm for what would have been the first piece of software was written by Ada Lovelace in the 19th century, for the planned Analytical Engine. She created proofs to show how the engine would calculate Bernoulli numbers. Because of the proofs and the algorithm, she is considered the first computer programmer.
The first theory about software, prior to the creation of computers as we know them today, was proposed by Alan Turing in his 1936 essay, On Computable Numbers, with an Application to the Entscheidungsproblem (decision problem). This eventually led to the creation of the academic fields of computer science and software engineering; both fields study software and its creation. Computer science is the theoretical study of computers and software (Turing's essay is an example of computer science), whereas software engineering is the application of engineering principles to the development of software.
In 2000, Fred Shapiro, a librarian at the Yale Law School, published a letter revealing that John Wilder Tukey's 1958 paper "The Teaching of Concrete Mathematics" contained the earliest known usage of the term "software" found in a search of JSTOR's electronic archives, predating the Oxford English Dictionary's citation by two years. This led many to credit Tukey with coining the term, particularly in obituaries published that same year, although Tukey never claimed credit for any such coinage. In 1995, Paul Niquette claimed he had originally coined the term in October 1953, although he could not find any documents supporting his claim. The earliest known publication of the term "software" in an engineering context was in August 1953 by Richard R. Carhart, in a Rand Corporation Research Memorandum.
On virtually all computer platforms, software can be grouped into a few broad categories.
Based on the goal, computer software can be divided into:
Programming tools are also software in the form of programs or applications that developers use to create, debug, maintain, or otherwise support software.
Software is written in one or more programming languages; there are many programming languages in existence, and each has at least one implementation, each of which consists of its own set of programming tools. These tools may be relatively self-contained programs such as compilers, debuggers, interpreters, linkers, and text editors, that can be combined to accomplish a task; or they may form an integrated development environment (IDE), which combines much or all of the functionality of such self-contained tools. IDEs may do this by either invoking the relevant individual tools or by re-implementing their functionality in a new way. An IDE can make it easier to do specific tasks, such as searching in files in a particular project. Many programming language implementations provide the option of using both individual tools or an IDE.
People who use modern general purpose computers (as opposed to embedded systems, analog computers and supercomputers) usually see three layers of software performing a variety of tasks: platform, application, and user software.
Computer software has to be "loaded" into the computer's storage (such as the hard drive or memory). Once the software has loaded, the computer is able to execute the software. This involves passing instructions from the application software, through the system software, to the hardware which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation—moving data, carrying out a computation, or altering the control flow of instructions.
Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly; this is sometimes avoided by using "pointers" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together.
Software quality is very important, especially for commercial and system software. If software is faulty, it can delete a person's work, crash the computer and do other unexpected things. Faults and errors are called "bugs", which are often discovered during alpha and beta testing. Software is often also a victim of what is known as software aging, the progressive performance degradation resulting from a combination of unseen bugs.
Many bugs are discovered and fixed through software testing. However, software testing rarely—if ever—eliminates every bug; some programmers say that "every program has at least one more bug" (Lubarsky's Law). In the waterfall method of software development, separate testing teams are typically employed, but in newer approaches, collectively termed agile software development, developers often do all their own testing, and demonstrate the software to users/clients regularly to obtain feedback. Software can be tested through unit testing, regression testing and other methods, which are done manually, or most commonly, automatically, since the amount of code to be tested can be large. Programs containing command software enable hardware engineering and system operations to work together much more easily.
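As a minimal sketch of the unit testing mentioned above, assuming Python and its standard `unittest` module; the function and test names are hypothetical, not taken from the text.

```python
import unittest

def discounted_price(price, percent):
    """Apply a percentage discount; reject percentages outside 0-100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discounted_price(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discounted_price(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Run automatically (for example on every commit), tests like these also act as regression tests: a later change that breaks the discount logic fails the suite immediately.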
The software's license gives the user the right to use the software in the licensed environment, and in the case of free software licenses, also grants other rights such as the right to make copies.
Proprietary software can be divided into two types:
Open-source software comes with a free software license, granting the recipient the rights to modify and redistribute the software.
Software patents, like other types of patents, are theoretically supposed to give an inventor an exclusive, time-limited license for a detailed idea (e.g. an algorithm) on how to implement a piece of software, or a component of a piece of software. Ideas for useful things that software could do, and user requirements, are not supposed to be patentable, and concrete implementations (i.e. the actual software packages implementing the patent) are not supposed to be patentable either—the latter are already covered by copyright, generally automatically. So software patents are supposed to cover the middle area, between requirements and concrete implementation. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid—although since all useful software has effects on the physical world, this requirement may be open to debate. Meanwhile, American copyright law was applied to various aspects of the writing of the software code.
Software patents are controversial in the software industry with many people holding different views about them. One of the sources of controversy is that the aforementioned split between initial ideas and patent does not seem to be honored in practice by patent lawyers—for example the patent for aspect-oriented programming (AOP), which purported to claim rights over any programming tool implementing the idea of AOP, howsoever implemented. Another source of controversy is the effect on innovation, with many distinguished experts and companies arguing that software is such a fast-moving field that software patents merely create vast additional litigation costs and risks, and actually retard innovation. In the case of debates about software patents outside the United States, the argument has been made that large American corporations and patent lawyers are likely to be the primary beneficiaries of allowing or continuing to allow software patents.
Design and implementation of software vary depending on the complexity of the software. For instance, the design and creation of Microsoft Word took much more time than designing and developing Microsoft Notepad because the former has much more functionality.
Software is usually developed in integrated development environments (IDE) like Eclipse, IntelliJ and Microsoft Visual Studio that can simplify the process and compile the software. As noted in a different section, software is usually created on top of existing software and the application programming interface (API) that the underlying software provides like GTK+, JavaBeans or Swing. Libraries (APIs) can be categorized by their purpose. For instance, the Spring Framework is used for implementing enterprise applications, the Windows Forms library is used for designing graphical user interface (GUI) applications like Microsoft Word, and Windows Communication Foundation is used for designing web services. When a program is designed, it relies upon the API. For instance, a Microsoft Windows desktop application might call API functions in the .NET Windows Forms library like Form1.Close() and Form1.Show() to close or open the application. Without these APIs, the programmer needs to write these functionalities entirely themselves. Companies like Oracle and Microsoft provide their own APIs so that many applications are written using their software libraries that usually have numerous APIs in them.
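The Form1.Show() and Form1.Close() calls above belong to the .NET Windows Forms API; as a hedged analogue only, the sketch below uses Python's standard tkinter library to show the same pattern of building a small application on top of a GUI library's API (the window title and the two-second timeout are arbitrary choices for the example).

```python
import tkinter as tk

# Rough analogue of the Form1.Show()/Form1.Close() idea, built on the
# standard-library tkinter toolkit instead of .NET Windows Forms.
window = tk.Tk()
window.title("Hello")
tk.Label(window, text="Built on a GUI library's API").pack()

# Close the window automatically after 2 seconds, then run the event loop.
window.after(2000, window.destroy)
window.mainloop()
```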
Data structures such as hash tables, arrays, and binary trees, and algorithms such as quicksort, can be useful for creating software.
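For example, a minimal quicksort sketch in Python; this is one simple (not in-place) formulation among many, not a canonical implementation.

```python
def quicksort(items):
    """Return a sorted copy of items (simple, not in-place, version)."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    larger  = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([7, 2, 9, 2, 5]))  # [2, 2, 5, 7, 9]
```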
Computer software has special economic characteristics that make its design, creation, and distribution different from most other economic goods.
A person who creates software is called a programmer, software engineer or software developer, terms that all have a similar meaning. More informal terms for programmer also exist such as "coder" and "hacker" – although use of the latter word may cause confusion, because it is more often used to mean someone who illegally breaks into computer systems. | [
{
"paragraph_id": 0,
"text": "Software is a set of computer programs and associated documentation and data. This is in contrast to hardware, from which the system is built and which actually performs the work.",
"title": ""
},
{
"paragraph_id": 1,
"text": "At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit (CPU) or a graphics processing unit (GPU). Machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example, displaying some text on a computer screen, causing state changes that should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to \"jump\" to a different instruction or is interrupted by the operating system. As of 2023, most personal computers, smartphone devices, and servers have processors with multiple execution units, or multiple processors performing computation together, so computing has become a much more concurrent activity than in the past.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The majority of software is written in high-level programming languages. They are easier and more efficient for programmers because they are closer to natural languages than machine languages. High-level languages are translated into machine language using a compiler, an interpreter, or a combination of the two. Software may also be written in a low-level assembly language that has a strong correspondence to the computer's machine language instructions and is translated into machine language using an assembler.",
"title": ""
},
{
"paragraph_id": 3,
"text": "An algorithm for what would have been the first piece of software was written by Ada Lovelace in the 19th century, for the planned Analytical Engine. She created proofs to show how the engine would calculate Bernoulli numbers. Because of the proofs and the algorithm, she is considered the first computer programmer.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The first theory about software, prior to the creation of computers as we know them today, was proposed by Alan Turing in his 1936 essay, On Computable Numbers, with an Application to the Entscheidungsproblem (decision problem). This eventually led to the creation of the academic fields of computer science and software engineering; both fields study software and its creation. Computer science is the theoretical study of computer and software (Turing's essay is an example of computer science), whereas software engineering is the application of engineering principles to development of software.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In 2000, Fred Shapiro, a librarian at the Yale Law School, published a letter revealing that John Wilder Tukey's 1958 paper \"The Teaching of Concrete Mathematics\" contained the earliest known usage of the term \"software\" found in a search of JSTOR's electronic archives, predating the Oxford English Dictionary's citation by two years. This led many to credit Tukey with coining the term, particularly in obituaries published that same year, although Tukey never claimed credit for any such coinage. In 1995, Paul Niquette claimed he had originally coined the term in October 1953, although he could not find any documents supporting his claim. The earliest known publication of the term \"software\" in an engineering context was in August 1953 by Richard R. Carhart, in a Rand Corporation Research Memorandum.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "On virtually all computer platforms, software can be grouped into a few broad categories.",
"title": "Types"
},
{
"paragraph_id": 7,
"text": "Based on the goal, computer software can be divided into:",
"title": "Types"
},
{
"paragraph_id": 8,
"text": "Programming tools are also software in the form of programs or applications that developers use to create, debug, maintain, or otherwise support software.",
"title": "Types"
},
{
"paragraph_id": 9,
"text": "Software is written in one or more programming languages; there are many programming languages in existence, and each has at least one implementation, each of which consists of its own set of programming tools. These tools may be relatively self-contained programs such as compilers, debuggers, interpreters, linkers, and text editors, that can be combined to accomplish a task; or they may form an integrated development environment (IDE), which combines much or all of the functionality of such self-contained tools. IDEs may do this by either invoking the relevant individual tools or by re-implementing their functionality in a new way. An IDE can make it easier to do specific tasks, such as searching in files in a particular project. Many programming language implementations provide the option of using both individual tools or an IDE.",
"title": "Types"
},
{
"paragraph_id": 10,
"text": "People who use modern general purpose computers (as opposed to embedded systems, analog computers and supercomputers) usually see three layers of software performing a variety of tasks: platform, application, and user software.",
"title": "Topics"
},
{
"paragraph_id": 11,
"text": "Computer software has to be \"loaded\" into the computer's storage (such as the hard drive or memory). Once the software has loaded, the computer is able to execute the software. This involves passing instructions from the application software, through the system software, to the hardware which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation—moving data, carrying out a computation, or altering the control flow of instructions.",
"title": "Topics"
},
{
"paragraph_id": 12,
"text": "Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly; this is sometimes avoided by using \"pointers\" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together.",
"title": "Topics"
},
{
"paragraph_id": 13,
"text": "Software quality is very important, especially for commercial and system software. If software is faulty, it can delete a person's work, crash the computer and do other unexpected things. Faults and errors are called \"bugs\" which are often discovered during alpha and beta testing. Software is often also a victim to what is known as software aging, the progressive performance degradation resulting from a combination of unseen bugs.",
"title": "Topics"
},
{
"paragraph_id": 14,
"text": "Many bugs are discovered and fixed through software testing. However, software testing rarely—if ever—eliminates every bug; some programmers say that \"every program has at least one more bug\" (Lubarsky's Law). In the waterfall method of software development, separate testing teams are typically employed, but in newer approaches, collectively termed agile software development, developers often do all their own testing, and demonstrate the software to users/clients regularly to obtain feedback. Software can be tested through unit testing, regression testing and other methods, which are done manually, or most commonly, automatically, since the amount of code to be tested can be large. Programs containing command software enable hardware engineering and system operations to function much easier together.",
"title": "Topics"
},
{
"paragraph_id": 15,
"text": "The software's license gives the user the right to use the software in the licensed environment, and in the case of free software licenses, also grants other rights such as the right to make copies.",
"title": "Topics"
},
{
"paragraph_id": 16,
"text": "Proprietary software can be divided into two types:",
"title": "Topics"
},
{
"paragraph_id": 17,
"text": "Open-source software comes with a free software license, granting the recipient the rights to modify and redistribute the software.",
"title": "Topics"
},
{
"paragraph_id": 18,
"text": "Software patents, like other types of patents, are theoretically supposed to give an inventor an exclusive, time-limited license for a detailed idea (e.g. an algorithm) on how to implement a piece of software, or a component of a piece of software. Ideas for useful things that software could do, and user requirements, are not supposed to be patentable, and concrete implementations (i.e. the actual software packages implementing the patent) are not supposed to be patentable either—the latter are already covered by copyright, generally automatically. So software patents are supposed to cover the middle area, between requirements and concrete implementation. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid—although since all useful software has effects on the physical world, this requirement may be open to debate. Meanwhile, American copyright law was applied to various aspects of the writing of the software code.",
"title": "Topics"
},
{
"paragraph_id": 19,
"text": "Software patents are controversial in the software industry with many people holding different views about them. One of the sources of controversy is that the aforementioned split between initial ideas and patent does not seem to be honored in practice by patent lawyers—for example the patent for aspect-oriented programming (AOP), which purported to claim rights over any programming tool implementing the idea of AOP, howsoever implemented. Another source of controversy is the effect on innovation, with many distinguished experts and companies arguing that software is such a fast-moving field that software patents merely create vast additional litigation costs and risks, and actually retard innovation. In the case of debates about software patents outside the United States, the argument has been made that large American corporations and patent lawyers are likely to be the primary beneficiaries of allowing or continue to allow software patents.",
"title": "Topics"
},
{
"paragraph_id": 20,
"text": "Design and implementation of software vary depending on the complexity of the software. For instance, the design and creation of Microsoft Word took much more time than designing and developing Microsoft Notepad because the former has much more basic functionality.",
"title": "Design and implementation"
},
{
"paragraph_id": 21,
"text": "Software is usually developed in integrated development environments (IDE) like Eclipse, IntelliJ and Microsoft Visual Studio that can simplify the process and compile the software. As noted in a different section, software is usually created on top of existing software and the application programming interface (API) that the underlying software provides like GTK+, JavaBeans or Swing. Libraries (APIs) can be categorized by their purpose. For instance, the Spring Framework is used for implementing enterprise applications, the Windows Forms library is used for designing graphical user interface (GUI) applications like Microsoft Word, and Windows Communication Foundation is used for designing web services. When a program is designed, it relies upon the API. For instance, a Microsoft Windows desktop application might call API functions in the .NET Windows Forms library like Form1.Close() and Form1.Show() to close or open the application. Without these APIs, the programmer needs to write these functionalities entirely themselves. Companies like Oracle and Microsoft provide their own APIs so that many applications are written using their software libraries that usually have numerous APIs in them.",
"title": "Design and implementation"
},
{
"paragraph_id": 22,
"text": "Data structures such as hash tables, arrays, and binary trees, and algorithms such as quicksort, can be useful for creating software.",
"title": "Design and implementation"
},
{
"paragraph_id": 23,
"text": "Computer software has special economic characteristics that make its design, creation, and distribution different from most other economic goods.",
"title": "Design and implementation"
},
{
"paragraph_id": 24,
"text": "A person who creates software is called a programmer, software engineer or software developer, terms that all have a similar meaning. More informal terms for programmer also exist such as \"coder\" and \"hacker\" – although use of the latter word may cause confusion, because it is more often used to mean someone who illegally breaks into computer systems.",
"title": "Design and implementation"
},
{
"paragraph_id": 25,
"text": "",
"title": "External links"
}
] | Software is a set of computer programs and associated documentation and data. This is in contrast to hardware, from which the system is built and which actually performs the work. At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit (CPU) or a graphics processing unit (GPU). Machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example, displaying some text on a computer screen, causing state changes that should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to "jump" to a different instruction or is interrupted by the operating system. As of 2023, most personal computers, smartphone devices, and servers have processors with multiple execution units, or multiple processors performing computation together, so computing has become a much more concurrent activity than in the past. The majority of software is written in high-level programming languages. They are easier and more efficient for programmers because they are closer to natural languages than machine languages. High-level languages are translated into machine language using a compiler, an interpreter, or a combination of the two. Software may also be written in a low-level assembly language that has a strong correspondence to the computer's machine language instructions and is translated into machine language using an assembler. | 2001-10-18T13:51:09Z | 2023-12-21T23:34:30Z | [
"Template:Curlie",
"Template:Main",
"Template:Citation needed",
"Template:Specify",
"Template:Reflist",
"Template:Cite news",
"Template:Cite book",
"Template:Pp-protected",
"Template:As of",
"Template:Sfn",
"Template:Cite web",
"Template:Cite journal",
"Template:Software digital distribution platforms",
"Template:Short description",
"Template:Other uses",
"Template:Better source needed",
"Template:Subject bar",
"Template:Authority control",
"Template:Use dmy dates",
"Template:More citations needed",
"Template:See also",
"Template:Spaced ndash"
] | https://en.wikipedia.org/wiki/Software |
5,311 | Computer programming | Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks. It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic.
Auxiliary tasks accompanying and related to programming include analyzing requirements, testing, debugging (investigating and fixing problems), implementation of build systems, and management of derived artifacts, such as programs' machine code. While these are sometimes considered programming, often the term software development is used for this larger overall process – with the terms programming, implementation, and coding reserved for the writing and editing of code per se. Sometimes software development is known as software engineering, especially when it employs formal methods or follows an engineering design process.
Programmable devices have existed for centuries. As early as the 9th century, a programmable music sequencer was invented by the Persian Banu Musa brothers, who described an automated mechanical flute player in the Book of Ingenious Devices. In 1206, the Arab engineer Al-Jazari invented a programmable drum machine where a musical mechanical automaton could be made to play different rhythms and drum patterns, via pegs and cams. In 1801, the Jacquard loom could produce entirely different weaves by changing the "program" – a series of pasteboard cards with holes punched in them.
Code-breaking algorithms have also existed for centuries. In the 9th century, the Arab mathematician Al-Kindi described a cryptographic algorithm for deciphering encrypted code, in A Manuscript on Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest code-breaking algorithm.
The first computer program is generally dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. However, Charles Babbage had already written his first program for the Analytical Engine in 1837.
In the 1880s, Herman Hollerith invented the concept of storing data in machine-readable form. Later a control panel (plug board) added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604, were programmed by control panels in a similar way, as were the first electronic computers. However, with the concept of the stored-program computer introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory.
Machine code was the language of early programs, written in the instruction set of the particular machine, often in binary notation. Assembly languages were soon developed that let the programmer specify instructions in a text format (e.g., ADD X, TOTAL), with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, two machines with different instruction sets also have different assembly languages.
High-level languages made the process of developing a program simpler and more understandable, and less bound to the underlying hardware. The first compiler related tool, the A-0 System, was developed in 1952 by Grace Hopper, who also coined the term 'compiler'. FORTRAN, the first widely used high-level language to have a functional implementation, came out in 1957, and many other languages were soon developed—in particular, COBOL aimed at commercial data processing, and Lisp for computer research.
These compiled languages allow the programmer to write programs in terms that are syntactically richer, and more capable of abstracting the code, making it easy to target varying machine instruction sets via compilation declarations and heuristics. Compilers harnessed the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula using infix notation.
Programs were mostly entered using punched cards or paper tape. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were also developed that allowed changes and corrections to be made much more easily than with punched cards.
Whatever the approach to development may be, the final program must satisfy some fundamental properties. The following properties are among the most important:
In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability and most importantly maintainability.
Readability is important because programmers spend the majority of their time reading, trying to understand, reusing and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study found that a few simple readability transformations made code shorter and drastically reduced the time to understand it.
Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability. Some of these factors include:
The presentation aspects of this (such as indents, line breaks, color highlighting, and so on) are often handled by the source code editor, but the content aspects reflect the programmer's talent and skills.
Various visual programming languages have also been developed with the intent to resolve readability concerns by adopting non-traditional approaches to code structure and display. Integrated development environments (IDEs) aim to integrate all such help. Techniques like Code refactoring can enhance readability.
The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problems. For this purpose, algorithms are classified into orders using so-called Big O notation, which expresses resource use, such as execution time or memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances.
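A small Python sketch of that trade-off: both functions below solve the same membership problem on sorted data, but linear search is O(n) while binary search (here via the standard bisect module) is O(log n). The data set and function names are illustrative, not drawn from the text.

```python
from bisect import bisect_left

def linear_search(sorted_values, target):      # O(n)
    for i, value in enumerate(sorted_values):
        if value == target:
            return i
    return -1

def binary_search(sorted_values, target):      # O(log n)
    i = bisect_left(sorted_values, target)
    if i < len(sorted_values) and sorted_values[i] == target:
        return i
    return -1

data = list(range(0, 1_000_000, 2))            # sorted even numbers
print(linear_search(data, 999_998), binary_search(data, 999_998))  # 499999 499999
```

On a list of half a million elements the linear version scans essentially the whole list, while the binary version needs only about twenty comparisons, which is exactly the kind of difference Big O notation is meant to capture.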
The first step in most formal software development processes is requirements analysis, followed by modeling, implementation, testing, and failure elimination (debugging). There exist many different approaches for each of those tasks. One approach popular for requirements analysis is Use Case analysis. Many programmers use forms of Agile software development where the various stages of formal software development are more integrated together into short cycles that take a few weeks rather than years. There are many approaches to the Software development process.
Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both the OOAD and MDA.
A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).
Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic languages.
It is very difficult to determine which modern programming languages are most popular. Methods of measuring programming language popularity include: counting the number of job advertisements that mention the language, the number of books sold and courses teaching the language (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).
Some languages are very popular for particular kinds of applications, while some languages are regularly used to write many different kinds of applications. For example, COBOL is still strong in corporate data centers often on large mainframe computers, Fortran in engineering applications, scripting languages in Web development, and C in embedded software. Many applications use a mix of several languages in their construction and use. New languages are generally designed around the syntax of a prior language with new functionality added, (for example C++ adds object-orientation to C, and Java adds memory management and bytecode to C++, but as a result, loses efficiency and the ability for low-level manipulation).
Debugging is a very important task in the software development process since having defects in a program can have significant consequences for its users. Some languages are more prone to some kinds of faults because their specification does not require compilers to perform as much checking as other languages. Use of a static code analysis tool can help detect some possible problems. Normally the first step in debugging is to attempt to reproduce the problem. This can be a non-trivial task, for example as with parallel processes or some unusual software bugs. Also, specific user environment and usage history can make it difficult to reproduce the problem.
After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, when a bug in a compiler makes it crash when parsing some large source file, a simplification of the test case that results in only a few lines from the original source file can be sufficient to reproduce the same crash. Trial-and-error/divide-and-conquer is needed: the programmer will try to remove some parts of the original test case and check if the problem still exists. When debugging the problem in a GUI, the programmer can try to skip some user interaction from the original problem description and check if the remaining actions are sufficient for bugs to appear. Scripting and breakpointing are also part of this process.
Debugging is often done with IDEs. Standalone debuggers like GDB are also used, and these often provide less of a visual environment, usually using a command line. Some text editors such as Emacs allow GDB to be invoked through them, to provide a visual environment.
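GDB targets compiled programs; as a language-neutral illustration of the breakpointing idea, the Python sketch below marks where the built-in breakpoint() call would drop into the standard pdb debugger. The function and the empty-input bug are invented for illustration.

```python
def average(values):
    total = 0
    for v in values:
        total += v
    # Drop into the interactive debugger (pdb) here to inspect `total`
    # and `values` before the division that fails on empty input.
    # breakpoint()          # uncomment while debugging
    return total / len(values)

print(average([4, 8, 15]))   # 9.0
# average([])                # reproducing the bug: ZeroDivisionError
```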
Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute. Languages form an approximate spectrum from "low-level" to "high-level"; "low-level" languages are typically more machine-oriented and faster to execute, whereas "high-level" languages are more abstract and easier to use but execute less quickly. It is usually easier to code in "high-level" languages than in "low-level" ones. Programming languages are essential for software development. They are the building blocks for all software, from the simplest applications to the most sophisticated ones.
Allen Downey, in his book How To Think Like A Computer Scientist, writes:
Many computer languages provide a mechanism to call functions provided by shared libraries. Provided the functions in a library follow the appropriate run-time conventions (e.g., method of passing arguments), then these functions may be written in any other language.
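One concrete mechanism for this, assuming Python is the calling language, is the standard ctypes module. The sketch below calls the C math library's cos function, declaring its run-time convention explicitly; the library lookup is platform-dependent (it works as written on typical Linux and macOS systems) and may need adjusting elsewhere.

```python
import ctypes
import ctypes.util

# Locate and load the C math library (the name/path is platform-dependent).
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path)

# Declare the run-time convention: double cos(double).
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0 -- a C function called from Python
```

The argtypes/restype declarations are exactly the "run-time conventions" the paragraph refers to: without them the foreign function's arguments and return value would be marshalled incorrectly.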
Computer programmers are those who write computer software. Their jobs usually involve:
Although programming has been presented in the media as a somewhat mathematical subject, some research shows that good programmers have strong skills in natural human languages, and that learning to code is similar to learning a foreign language. | [
{
"paragraph_id": 0,
"text": "Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks. It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Auxiliary tasks accompanying and related to programming include analyzing requirements, testing, debugging (investigating and fixing problems), implementation of build systems, and management of derived artifacts, such as programs' machine code. While these are sometimes considered programming, often the term software development is used for this larger overall process – with the terms programming, implementation, and coding reserved for the writing and editing of code per se. Sometimes software development is known as software engineering, especially when it employs formal methods or follows an engineering design process.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Programmable devices have existed for centuries. As early as the 9th century, a programmable music sequencer was invented by the Persian Banu Musa brothers, who described an automated mechanical flute player in the Book of Ingenious Devices. In 1206, the Arab engineer Al-Jazari invented a programmable drum machine where a musical mechanical automaton could be made to play different rhythms and drum patterns, via pegs and cams. In 1801, the Jacquard loom could produce entirely different weaves by changing the \"program\" – a series of pasteboard cards with holes punched in them.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "Code-breaking algorithms have also existed for centuries. In the 9th century, the Arab mathematician Al-Kindi described a cryptographic algorithm for deciphering encrypted code, in A Manuscript on Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest code-breaking algorithm.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The first computer program is generally dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. However, Charles Babbage had already written his first program for the Analytical Engine in 1837.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In the 1880s, Herman Hollerith invented the concept of storing data in machine-readable form. Later a control panel (plug board) added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604, were programmed by control panels in a similar way, as were the first electronic computers. However, with the concept of the stored-program computer introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Machine code was the language of early programs, written in the instruction set of the particular machine, often in binary notation. Assembly languages were soon developed that let the programmer specify instruction in a text format (e.g., ADD X, TOTAL), with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, two machines with different instruction sets also have different assembly languages.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "High-level languages made the process of developing a program simpler and more understandable, and less bound to the underlying hardware. The first compiler related tool, the A-0 System, was developed in 1952 by Grace Hopper, who also coined the term 'compiler'. FORTRAN, the first widely used high-level language to have a functional implementation, came out in 1957, and many other languages were soon developed—in particular, COBOL aimed at commercial data processing, and Lisp for computer research.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "These compiled languages allow the programmer to write programs in terms that are syntactically richer, and more capable of abstracting the code, making it easy to target varying machine instruction sets via compilation declarations and heuristics. Compilers harnessed the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula using infix notation.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Programs were mostly entered using punched cards or paper tape. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were also developed that allowed changes and corrections to be made much more easily than with punched cards.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Whatever the approach to development may be, the final program must satisfy some fundamental properties. The following properties are among the most important:",
"title": "Modern programming"
},
{
"paragraph_id": 11,
"text": "In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability and most importantly maintainability.",
"title": "Modern programming"
},
{
"paragraph_id": 12,
"text": "Readability is important because programmers spend the majority of their time reading, trying to understand, reusing and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study found that a few simple readability transformations made code shorter and drastically reduced the time to understand it.",
"title": "Modern programming"
},
{
"paragraph_id": 13,
"text": "Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability. Some of these factors include:",
"title": "Modern programming"
},
{
"paragraph_id": 14,
"text": "The presentation aspects of this (such as indents, line breaks, color highlighting, and so on) are often handled by the source code editor, but the content aspects reflect the programmer's talent and skills.",
"title": "Modern programming"
},
{
"paragraph_id": 15,
"text": "Various visual programming languages have also been developed with the intent to resolve readability concerns by adopting non-traditional approaches to code structure and display. Integrated development environments (IDEs) aim to integrate all such help. Techniques like Code refactoring can enhance readability.",
"title": "Modern programming"
},
{
"paragraph_id": 16,
"text": "The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problems. For this purpose, algorithms are classified into orders using so-called Big O notation, which expresses resource use, such as execution time or memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances.",
"title": "Modern programming"
},
{
"paragraph_id": 17,
"text": "The first step in most formal software development processes is requirements analysis, followed by testing to determine value modeling, implementation, and failure elimination (debugging). There exist a lot of different approaches for each of those tasks. One approach popular for requirements analysis is Use Case analysis. Many programmers use forms of Agile software development where the various stages of formal software development are more integrated together into short cycles that take a few weeks rather than years. There are many approaches to the Software development process.",
"title": "Modern programming"
},
{
"paragraph_id": 18,
"text": "Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both the OOAD and MDA.",
"title": "Modern programming"
},
{
"paragraph_id": 19,
"text": "A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).",
"title": "Modern programming"
},
{
"paragraph_id": 20,
"text": "Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic languages.",
"title": "Modern programming"
},
{
"paragraph_id": 21,
"text": "It is very difficult to determine what are the most popular modern programming languages. Methods of measuring programming language popularity include: counting the number of job advertisements that mention the language, the number of books sold and courses teaching the language (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).",
"title": "Modern programming"
},
{
"paragraph_id": 22,
"text": "Some languages are very popular for particular kinds of applications, while some languages are regularly used to write many different kinds of applications. For example, COBOL is still strong in corporate data centers often on large mainframe computers, Fortran in engineering applications, scripting languages in Web development, and C in embedded software. Many applications use a mix of several languages in their construction and use. New languages are generally designed around the syntax of a prior language with new functionality added, (for example C++ adds object-orientation to C, and Java adds memory management and bytecode to C++, but as a result, loses efficiency and the ability for low-level manipulation).",
"title": "Modern programming"
},
{
"paragraph_id": 23,
"text": "Debugging is a very important task in the software development process since having defects in a program can have significant consequences for its users. Some languages are more prone to some kinds of faults because their specification does not require compilers to perform as much checking as other languages. Use of a static code analysis tool can help detect some possible problems. Normally the first step in debugging is to attempt to reproduce the problem. This can be a non-trivial task, for example as with parallel processes or some unusual software bugs. Also, specific user environment and usage history can make it difficult to reproduce the problem.",
"title": "Modern programming"
},
{
"paragraph_id": 24,
"text": "After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, when a bug in a compiler can make it crash when parsing some large source file, a simplification of the test case that results in only few lines from the original source file can be sufficient to reproduce the same crash. Trial-and-error/divide-and-conquer is needed: the programmer will try to remove some parts of the original test case and check if the problem still exists. When debugging the problem in a GUI, the programmer can try to skip some user interaction from the original problem description and check if remaining actions are sufficient for bugs to appear. Scripting and breakpointing is also part of this process.",
"title": "Modern programming"
},
{
"paragraph_id": 25,
"text": "Debugging is often done with IDEs. Standalone debuggers like GDB are also used, and these often provide less of a visual environment, usually using a command line. Some text editors such as Emacs allow GDB to be invoked through them, to provide a visual environment.",
"title": "Modern programming"
},
{
"paragraph_id": 26,
"text": "Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute. Languages form an approximate spectrum from \"low-level\" to \"high-level\"; \"low-level\" languages are typically more machine-oriented and faster to execute, whereas \"high-level\" languages are more abstract and easier to use but execute less quickly. It is usually easier to code in \"high-level\" languages than in \"low-level\" ones. Programming languages are essential for software development. They are the building blocks for all software, from the simplest applications to the most sophisticated ones.",
"title": "Programming languages"
},
{
"paragraph_id": 27,
"text": "Allen Downey, in his book How To Think Like A Computer Scientist, writes:",
"title": "Programming languages"
},
{
"paragraph_id": 28,
"text": "Many computer languages provide a mechanism to call functions provided by shared libraries. Provided the functions in a library follow the appropriate run-time conventions (e.g., method of passing arguments), then these functions may be written in any other language.",
"title": "Programming languages"
},
{
"paragraph_id": 29,
"text": "Computer programmers are those who write computer software. Their jobs usually involve:",
"title": "Programmers"
},
{
"paragraph_id": 30,
"text": "Although programming has been presented in the media as a somewhat mathematical subject, some research shows that good programmers have strong skills in natural human languages, and that learning to code is similar to learning a foreign language.",
"title": "Programmers"
}
] | Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks. It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic. Auxiliary tasks accompanying and related to programming include analyzing requirements, testing, debugging, implementation of build systems, and management of derived artifacts, such as programs' machine code. While these are sometimes considered programming, often the term software development is used for this larger overall process – with the terms programming, implementation, and coding reserved for the writing and editing of code per se. Sometimes software development is known as software engineering, especially when it employs formal methods or follows an engineering design process. | 2001-10-25T21:53:15Z | 2023-12-30T04:33:24Z | [
"Template:Cite journal",
"Template:Webarchive",
"Template:Cite magazine",
"Template:Wikiversity",
"Template:Commons category-inline",
"Template:Reflist",
"Template:Cite web",
"Template:Software quality",
"Template:Curlie",
"Template:Computer science",
"Template:Software engineering",
"Template:Anchor",
"Template:Cite arXiv",
"Template:Library resources box",
"Template:Wikiquote-inline",
"Template:Wikibooks",
"Template:Authority control",
"Template:Short description",
"Template:Use mdy dates",
"Template:See also",
"Template:Main",
"Template:Div col",
"Template:Cite news",
"Template:Use American English",
"Template:Software development process",
"Template:Div col end",
"Template:Portal",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Computer_programming |
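A minimal sketch, in Python, of the low-level/high-level spectrum described in the "Programming languages" paragraph above. The function name fahrenheit is only an illustrative assumption; the dis module from Python's standard library prints the lower-level, machine-oriented instructions (CPython bytecode) into which the high-level line is translated.

import dis

def fahrenheit(celsius):
    # High-level: one readable arithmetic expression.
    return celsius * 9 / 5 + 32

print(fahrenheit(100))  # -> 212.0

# Lower-level view: the stack-machine instructions the interpreter executes.
# (The exact instruction names vary between CPython versions.)
dis.dis(fahrenheit)

The trade-off described in the paragraph is visible here: the one-line expression is easy to read and write, while the bytecode listing is closer to what the machine actually steps through.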
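A minimal sketch, also in Python, of the shared-library mechanism described above: a function written in C is called from another language once both sides agree on the run-time conventions (argument and return types). It assumes a POSIX-like system where the standard C math library can be located by name; the library name differs by platform.

import ctypes
import ctypes.util
import math

# Locate and load the C math library (e.g. "libm.so.6" on Linux).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C calling convention so arguments are passed in the form
# the compiled function expects.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))   # 1.0, computed by compiled C code
print(math.cos(0.0))   # 1.0, the same result from Python's own library

Setting argtypes and restype is the "appropriate run-time convention" the paragraph refers to; without those declarations, ctypes would not know how to pass the floating-point argument or how to interpret the result.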
5,312 | On the Consolation of Philosophy | On the Consolation of Philosophy (Latin: De consolatione philosophiae), often titled as The Consolation of Philosophy or simply the Consolation, is a philosophical work by the Roman philosopher Boethius. Written in 523 while he was imprisoned and awaiting execution by the Ostrogothic King Theodoric, it is often described as the last great Western work of the Classical Period. Boethius' Consolation heavily influenced the philosophy of late antiquity, as well as Medieval and early Renaissance Christianity.
On the Consolation of Philosophy was written in AD 523 during a one-year imprisonment Boethius served while awaiting trial—and eventual execution—for the alleged crime of treason under the Ostrogothic King Theodoric the Great. Boethius was at the very heights of power in Rome, holding the prestigious office of magister officiorum, and was brought down by treachery. This experience inspired the text, which reflects on how evil can exist in a world governed by God (the problem of theodicy), and how happiness is still attainable amidst fickle fortune, while also considering the nature of happiness and God. In 1891, the academic Hugh Fraser Stewart described the work as "by far the most interesting example of prison literature the world has ever seen."
Boethius writes the book as a conversation between himself and a female personification of philosophy, referred to as "Lady Philosophy". Philosophy consoles Boethius by discussing the transitory nature of wealth, fame, and power ("no man can ever truly be secure until he has been forsaken by Fortune"), and the ultimate superiority of things of the mind, which she calls the "one true good". She contends that happiness comes from within, and that virtue is all that one truly has because it is not imperiled by the vicissitudes of fortune.
Boethius engages with the nature of predestination and free will, the problem of evil and the "problem of desert", human nature, virtue, and justice. He takes up free will and determinism by asking whether, if God knows and sees all, man can truly have free will. On human nature, Boethius says that humans are essentially good, and only when they give in to "wickedness" do they "sink to the level of being an animal." On justice, he says criminals are not to be abused, but rather treated with sympathy and respect, using the analogy of doctor and patient to illustrate the ideal relationship between prosecutor and criminal.
On the Consolation of Philosophy is laid out as follows:
In the Consolation, Boethius answered religious questions without reference to Christianity, relying solely on natural philosophy and the Classical Greek tradition. He believed in the correspondence between faith and reason. The truths found in Christianity would be no different from the truths found in philosophy. In the words of Henry Chadwick, "If the Consolation contains nothing distinctively Christian, it is also relevant that it contains nothing specifically pagan either...[it] is a work written by a Platonist who is also a Christian."
Boethius repeats the Macrobius model of the Earth in the center of a spherical cosmos.
The philosophical message of the book fits well with the religious piety of the Middle Ages. Boethius encouraged readers not to pursue worldly goods such as money and power, but to seek internalized virtues. Evil had a purpose: to provide a lesson that helps one change for the good, while suffering from evil was seen as virtuous. Because God ruled the universe through Love, prayer to God and the application of Love would lead to true happiness. The Middle Ages, with their vivid sense of an overruling fate, found in Boethius an interpretation of life closely akin to the spirit of Christianity. The Consolation stands, by its note of fatalism and its affinities with the Christian doctrine of humility, midway between the pagan philosophy of Seneca the Younger and the later Christian philosophy of consolation represented by Thomas à Kempis.
The book is heavily influenced by Plato and his dialogues (as was Boethius himself). Its popularity can in part be explained by its Neoplatonic and Christian ethical messages, although current scholarly research is still far from clear exactly why and how the work became so vastly popular in the Middle Ages.
From the Carolingian epoch to the end of the Middle Ages and beyond, The Consolation of Philosophy was one of the most popular and influential philosophical works, read by statesmen, poets, historians, philosophers, and theologians. It is through Boethius that much of the thought of the Classical period was made available to the Western Medieval world. It has often been said Boethius was the "last of the Romans and the first of the Scholastics".
Translations into the vernacular were done by famous notables, including King Alfred (Old English), Jean de Meun (Old French), Geoffrey Chaucer (Middle English), Queen Elizabeth I (Early Modern English) and Notker Labeo (Old High German). Boethius's Consolation of Philosophy was translated into Italian by Alberto della Piagentina (1332), Anselmo Tanso (Milan, 1520), Lodovico Domenichi (Florence, 1550), Benedetto Varchi (Florence, 1551), Cosimo Bartoli (Florence, 1551) and Tommaso Tamburini (Palermo, 1657).
Found within the Consolation are themes that have echoed throughout the Western canon: the female figure of wisdom that informs Dante, the ascent through the layered universe that is shared with Milton, the reconciliation of opposing forces that find their way into Chaucer in The Knight's Tale, and the Wheel of Fortune so popular throughout the Middle Ages.
Citations from it occur frequently in Dante's Divina Commedia. Of Boethius, Dante remarked: "The blessed soul who exposes the deceptive world to anyone who gives ear to him."
Boethian influence can be found nearly everywhere in Geoffrey Chaucer's poetry, e.g. in Troilus and Criseyde, The Knight's Tale, The Clerk's Tale, The Franklin's Tale, The Parson's Tale and The Tale of Melibee, in the character of Lady Nature in The Parliament of Fowls and some of the shorter poems, such as Truth, The Former Age and Lak of Stedfastnesse. Chaucer translated the work in his Boece.
The Italian composer Luigi Dallapiccola used some of the text in his choral work Canti di prigionia (1938). The Australian composer Peter Sculthorpe quoted parts of it in his opera or music theatre work Rites of Passage (1972–73), which was commissioned for the opening of the Sydney Opera House but was not ready in time.
Tom Shippey, in The Road to Middle-earth, describes how "Boethian" much of the treatment of evil in Tolkien's The Lord of the Rings is. Shippey notes that Tolkien knew King Alfred's translation of Boethius well, and he quotes some "Boethian" remarks from Frodo, Treebeard, and Elrond.
Boethius and Consolatio Philosophiae are cited frequently by the main character Ignatius J. Reilly in the Pulitzer Prize-winning A Confederacy of Dunces (1980).
It is a prosimetrical text, meaning that it is written in alternating sections of prose and metered verse. In the course of the text, Boethius displays a virtuosic command of the forms of Latin poetry. It is classified as a Menippean satire, a fusion of allegorical tale, platonic dialogue, and lyrical poetry.
Edward Gibbon described the work as "a golden volume not unworthy of the leisure of Plato or Tully."
In the 20th century, there were close to four hundred manuscripts still surviving, a testament to its popularity.
Of the work, C. S. Lewis wrote: "To acquire a taste for it is almost to become naturalised in the Middle Ages."
Hundreds of Latin songs were recorded in neumes from the ninth century through to the thirteenth century, including settings of the poetic passages from Boethius's The Consolation of Philosophy. The music of this song repertory had long been considered irretrievably lost because the notational signs indicated only melodic outlines, relying on now-lapsed oral traditions to fill in the missing details. However, research conducted by Sam Barrett at the University of Cambridge, extended in collaboration with medieval music ensemble Sequentia, has shown that principles of musical setting for this period can be identified, providing crucial information to enable modern realisations. Sequentia performed the world premiere of the reconstructed songs from Boethius's The Consolation of Philosophy at Pembroke College, Cambridge, in April 2016, bringing to life music not heard in over 1,000 years; a number of the songs were subsequently recorded on the CD Boethius: Songs of Consolation. Metra from 11th-Century Canterbury (Glossa, 2018). The detective story behind the recovery of these lost songs is told in a documentary film, and a website launched by the University of Cambridge in 2018 provides further details of the reconstruction process, bringing together manuscripts, reconstructions, and video resources. | [
{
"paragraph_id": 0,
"text": "On the Consolation of Philosophy (Latin: De consolatione philosophiae), often titled as The Consolation of Philosophy or simply the Consolation, is a philosophical work by the Roman philosopher Boethius. Written in 523 while he was imprisoned and awaiting execution by the Ostrogothic King Theodoric, it is often described as the last great Western work of the Classical Period. Boethius' Consolation heavily influenced the philosophy of late antiquity, as well as Medieval and early Renaissance Christianity.",
"title": ""
},
{
"paragraph_id": 1,
"text": "On the Consolation of Philosophy was written in AD 523 during a one-year imprisonment Boethius served while awaiting trial—and eventual execution—for the alleged crime of treason under the Ostrogothic King Theodoric the Great. Boethius was at the very heights of power in Rome, holding the prestigious office of magister officiorum, and was brought down by treachery. This experience inspired the text, which reflects on how evil can exist in a world governed by God (the problem of theodicy), and how happiness is still attainable amidst fickle fortune, while also considering the nature of happiness and God. In 1891, the academic Hugh Fraser Stewart described the work as \"by far the most interesting example of prison literature the world has ever seen.\"",
"title": "Description"
},
{
"paragraph_id": 2,
"text": "Boethius writes the book as a conversation between himself and a female personification of philosophy, referred to as \"Lady Philosophy\". Philosophy consoles Boethius by discussing the transitory nature of wealth, fame, and power (\"no man can ever truly be secure until he has been forsaken by Fortune\"), and the ultimate superiority of things of the mind, which she calls the \"one true good\". She contends that happiness comes from within, and that virtue is all that one truly has because it is not imperiled by the vicissitudes of fortune.",
"title": "Description"
},
{
"paragraph_id": 3,
"text": "Boethius engages with the nature of predestination and free will, the problem of evil and the \"problem of desert\", human nature, virtue, and justice. He speaks about the nature of free will and determinism when he asks if God knows and sees all, or does man have free will. On human nature, Boethius says that humans are essentially good, and only when they give in to \"wickedness\" do they \"sink to the level of being an animal.\" On justice, he says criminals are not to be abused, but rather treated with sympathy and respect, using the analogy of doctor and patient to illustrate the ideal relationship between prosecutor and criminal.",
"title": "Description"
},
{
"paragraph_id": 4,
"text": "On the Consolation of Philosophy is laid out as follows:",
"title": "Description"
},
{
"paragraph_id": 5,
"text": "In the Consolation, Boethius answered religious questions without reference to Christianity, relying solely on natural philosophy and the Classical Greek tradition. He believed in the correspondence between faith and reason. The truths found in Christianity would be no different from the truths found in philosophy. In the words of Henry Chadwick, \"If the Consolation contains nothing distinctively Christian, it is also relevant that it contains nothing specifically pagan either...[it] is a work written by a Platonist who is also a Christian.\"",
"title": "Interpretation"
},
{
"paragraph_id": 6,
"text": "Boethius repeats the Macrobius model of the Earth in the center of a spherical cosmos.",
"title": "Interpretation"
},
{
"paragraph_id": 7,
"text": "The philosophical message of the book fits well with the religious piety of the Middle Ages. Boethius encouraged readers not to pursue worldly goods such as money and power, but to seek internalized virtues. Evil had a purpose, to provide a lesson to help change for good; while suffering from evil was seen as virtuous. Because God ruled the universe through Love, prayer to God and the application of Love would lead to true happiness. The Middle Ages, with their vivid sense of an overruling fate, found in Boethius an interpretation of life closely akin to the spirit of Christianity. The Consolation stands, by its note of fatalism and its affinities with the Christian doctrine of humility, midway between the pagan philosophy of Seneca the Younger and the later Christian philosophy of consolation represented by Thomas à Kempis.",
"title": "Interpretation"
},
{
"paragraph_id": 8,
"text": "The book is heavily influenced by Plato and his dialogues (as was Boethius himself). Its popularity can in part be explained by its Neoplatonic and Christian ethical messages, although current scholarly research is still far from clear exactly why and how the work became so vastly popular in the Middle Ages.",
"title": "Interpretation"
},
{
"paragraph_id": 9,
"text": "From the Carolingian epoch to the end of the Middle Ages and beyond, The Consolation of Philosophy was one of the most popular and influential philosophical works, read by statesmen, poets, historians, philosophers, and theologians. It is through Boethius that much of the thought of the Classical period was made available to the Western Medieval world. It has often been said Boethius was the \"last of the Romans and the first of the Scholastics\".",
"title": "Influence"
},
{
"paragraph_id": 10,
"text": "Translations into the vernacular were done by famous notables, including King Alfred (Old English), Jean de Meun (Old French), Geoffrey Chaucer (Middle English), Queen Elizabeth I (Early Modern English) and Notker Labeo (Old High German). Boethius's Consolation of Philosophy was translated into Italian by Alberto della Piagentina (1332), Anselmo Tanso (Milan, 1520), Lodovico Domenichi (Florence, 1550), Benedetto Varchi (Florence, 1551), Cosimo Bartoli (Florence, 1551) and Tommaso Tamburini (Palermo, 1657).",
"title": "Influence"
},
{
"paragraph_id": 11,
"text": "Found within the Consolation are themes that have echoed throughout the Western canon: the female figure of wisdom that informs Dante, the ascent through the layered universe that is shared with Milton, the reconciliation of opposing forces that find their way into Chaucer in The Knight's Tale, and the Wheel of Fortune so popular throughout the Middle Ages.",
"title": "Influence"
},
{
"paragraph_id": 12,
"text": "Citations from it occur frequently in Dante's Divina Commedia. Of Boethius, Dante remarked: \"The blessed soul who exposes the deceptive world to anyone who gives ear to him.\"",
"title": "Influence"
},
{
"paragraph_id": 13,
"text": "Boethian influence can be found nearly everywhere in Geoffrey Chaucer's poetry, e.g. in Troilus and Criseyde, The Knight's Tale, The Clerk's Tale, The Franklin's Tale, The Parson's Tale and The Tale of Melibee, in the character of Lady Nature in The Parliament of Fowls and some of the shorter poems, such as Truth, The Former Age and Lak of Stedfastnesse. Chaucer translated the work in his Boece.",
"title": "Influence"
},
{
"paragraph_id": 14,
"text": "The Italian composer Luigi Dallapiccola used some of the text in his choral work Canti di prigionia (1938). The Australian composer Peter Sculthorpe quoted parts of it in his opera or music theatre work Rites of Passage (1972–73), which was commissioned for the opening of the Sydney Opera House but was not ready in time.",
"title": "Influence"
},
{
"paragraph_id": 15,
"text": "Tom Shippey in The Road to Middle-earth says how \"Boethian\" much of the treatment of evil is in Tolkien's The Lord of the Rings. Shippey says that Tolkien knew well the translation of Boethius that was made by King Alfred and he quotes some \"Boethian\" remarks from Frodo, Treebeard, and Elrond.",
"title": "Influence"
},
{
"paragraph_id": 16,
"text": "Boethius and Consolatio Philosophiae are cited frequently by the main character Ignatius J. Reilly in the Pulitzer Prize-winning A Confederacy of Dunces (1980).",
"title": "Influence"
},
{
"paragraph_id": 17,
"text": "It is a prosimetrical text, meaning that it is written in alternating sections of prose and metered verse. In the course of the text, Boethius displays a virtuosic command of the forms of Latin poetry. It is classified as a Menippean satire, a fusion of allegorical tale, platonic dialogue, and lyrical poetry.",
"title": "Influence"
},
{
"paragraph_id": 18,
"text": "Edward Gibbon described the work as \"a golden volume not unworthy of the leisure of Plato or Tully.\"",
"title": "Influence"
},
{
"paragraph_id": 19,
"text": "In the 20th century, there were close to four hundred manuscripts still surviving, a testament to its popularity.",
"title": "Influence"
},
{
"paragraph_id": 20,
"text": "Of the work, C. S. Lewis wrote: \"To acquire a taste for it is almost to become naturalised in the Middle Ages.\"",
"title": "Influence"
},
{
"paragraph_id": 21,
"text": "Hundreds of Latin songs were recorded in neumes from the ninth century through to the thirteenth century, including settings of the poetic passages from Boethius's The Consolation of Philosophy. The music of this song repertory had long been considered irretrievably lost because the notational signs indicated only melodic outlines, relying on now-lapsed oral traditions to fill in the missing details. However, research conducted by Sam Barrett at the University of Cambridge, extended in collaboration with medieval music ensemble Sequentia, has shown that principles of musical setting for this period can be identified, providing crucial information to enable modern realisations. Sequentia performed the world premiere of the reconstructed songs from Boethius's The Consolation of Philosophy at Pembroke College, Cambridge, in April 2016, bringing to life music not heard in over 1,000 years; a number of the songs were subsequently recorded on the CD Boethius: Songs of Consolation. Metra from 11th-Century Canterbury (Glossa, 2018). The detective story behind the recovery of these lost songs is told in a documentary film, and a website launched by the University of Cambridge in 2018 provides further details of the reconstruction process, bringing together manuscripts, reconstructions, and video resources.",
"title": "Influence"
}
] | On the Consolation of Philosophy, often titled as The Consolation of Philosophy or simply the Consolation, is a philosophical work by the Roman philosopher Boethius. Written in 523 while he was imprisoned and awaiting execution by the Ostrogothic King Theodoric, it is often described as the last great Western work of the Classical Period. Boethius' Consolation heavily influenced the philosophy of late antiquity, as well as Medieval and early Renaissance Christianity. | 2001-10-15T20:13:02Z | 2023-12-28T05:02:08Z | [
"Template:Authority control",
"Template:Italic title",
"Template:Infobox book",
"Template:StandardEbooks",
"Template:Cite book",
"Template:Cite web",
"Template:Cite CE1913",
"Template:Commons category",
"Template:Neoplatonism",
"Template:Citation Needed",
"Template:Reflist",
"Template:Citation",
"Template:For",
"Template:Lang-la",
"Template:Cite encyclopedia",
"Template:Cite journal",
"Template:Wikisourcelang",
"Template:Librivox book",
"Template:Short description",
"Template:Use dmy dates",
"Template:ISBN"
] | https://en.wikipedia.org/wiki/On_the_Consolation_of_Philosophy |
5,313 | Crouching Tiger, Hidden Dragon | Crouching Tiger, Hidden Dragon is a 2000 Mandarin-language wuxia martial arts adventure film directed by Ang Lee and written for the screen by Wang Hui-ling, James Schamus, and Tsai Kuo-jung. The film stars Chow Yun-fat, Michelle Yeoh, Zhang Ziyi, and Chang Chen. It is based on the Chinese novel of the same name serialized between 1941 and 1942 by Wang Dulu, the fourth part of his Crane Iron pentalogy.
A multinational venture, the film was made on a US$17 million budget, and was produced by Edko Films and Zoom Hunt Productions in collaboration with China Film Co-productions Corporation and Asian Union Film & Entertainment for Columbia Pictures Film Production Asia in association with Good Machine International. The film premiered at the Cannes Film Festival on 18 May 2000, and was theatrically released in the United States on 8 December. With dialogue in Standard Chinese, subtitled for various markets, Crouching Tiger, Hidden Dragon became a surprise international success, grossing $213.5 million worldwide. It grossed US$128 million in the United States, becoming the highest-grossing foreign-language film produced overseas in American history. The film was the first foreign-language film to break the $100 million mark in the United States.
The film received universal acclaim from critics, praised for its story, direction, cinematography, and martial arts sequences. Crouching Tiger, Hidden Dragon won over 40 awards and was nominated for 10 Academy Awards in 2001, including Best Picture, and won Best Foreign Language Film, Best Art Direction, Best Original Score, and Best Cinematography, receiving the most nominations ever for a non-English-language film at the time, until 2018's Roma tied this record. The film also won four BAFTAs and two Golden Globe Awards, each of them for Best Foreign Film. In the years since its release, Crouching Tiger has often been cited as one of the finest wuxia films ever made and is widely regarded as one of the greatest films of the 21st century.
In Qing dynasty China, Li Mu Bai is a renowned Wudang swordsman, and his friend Yu Shu Lien, a female warrior, heads a private security company. Shu Lien and Mu Bai have long had feelings for each other, but because Shu Lien had been engaged to Mu Bai's close friend, Meng Sizhao before his death, Shu Lien and Mu Bai feel bound by loyalty to Meng Sizhao and have not revealed their feelings to each other. Mu Bai, choosing to retire from the life of a swordsman, asks Shu Lien to give his fabled 400-year-old sword "Green Destiny" to their benefactor Sir Te in Beijing. Long ago, Mu Bai's teacher was killed by Jade Fox, a woman who sought to learn Wudang secrets. While at Sir Te's place, Shu Lien meets Yu Jiaolong, or Jen, who is the daughter of the rich and powerful Governor Yu and is about to get married.
One evening, a masked thief sneaks into Sir Te's estate and steals the Green Destiny. Sir Te's servant Master Bo and Shu Lien trace the theft to Governor Yu's compound, where Jade Fox had been posing as Jen's governess for many years. Soon after, Mu Bai arrives in Beijing and discusses the theft with Shu Lien. Master Bo makes the acquaintance of Inspector Tsai, a police investigator from the provinces, and his daughter May, who have come to Beijing in pursuit of Fox. Fox challenges the pair and Master Bo to a showdown that night. Following a protracted battle, the group is on the verge of defeat when Mu Bai arrives and outmaneuvers Fox. She reveals that she killed Mu Bai's teacher because he would sleep with her, but refuse to take a woman as a disciple, and she felt it poetic justice for him to die at a woman's hand. Just as Mu Bai is about to kill her, the masked thief reappears and helps Fox. Fox kills Tsai before fleeing with the thief (who is revealed to be Jen). After seeing Jen fight Mu Bai, Fox realizes Jen had been secretly studying the Wudang manual. Fox is illiterate and could only follow the diagrams, whereas Jen's ability to read the manual allowed her to surpass her teacher in martial arts.
At night, a bandit named Lo breaks into Jen's bedroom and asks her to leave with him. In the past, when Governor Yu and his family were traveling in the western deserts of Xinjiang, Lo and his bandits raided Jen's caravan and Lo stole her comb. She pursued him to his desert cave to retrieve her comb. However, the pair soon fell in love. Lo eventually convinced Jen to return to her family, though not before telling her a legend of a man who jumped off a mountain to make his wishes come true. Because the man's heart was pure, his wish was granted and he was unharmed, but flew away never to be seen again. Lo has come now to Beijing to persuade Jen not to go through with her arranged marriage. However, Jen refuses to leave with him. Later, Lo interrupts Jen's wedding procession, begging her to leave with him. Shu Lien and Mu Bai convince Lo to wait for Jen at Mount Wudang, where he will be safe from Jen's family, who are furious with him. Jen runs away from her husband on their wedding night before the marriage can be consummated. Disguised in men's clothing, she is accosted at an inn by a large group of warriors; armed with the Green Destiny and her own superior combat skills, she emerges victorious.
Jen visits Shu Lien, who tells her that Lo is waiting for her at Mount Wudang. After an angry exchange, the two women engage in a duel. Shu Lien is the superior fighter, but Jen wields the Green Destiny and is able to destroy each weapon that Shu Lien wields, until Shu Lien finally manages to defeat Jen with a broken sword. When Shu Lien shows mercy, Jen wounds Shu Lien in the arm. Mu Bai arrives and pursues Jen into a bamboo forest, where he offers to take her as his student. Jen agrees if he can take Green Destiny from her in three moves. Mu Bai is able to take the sword in only one move, but Jen reneges on her promise, and Mu Bai throws the sword over a waterfall. Jen dives after the sword and is rescued by Fox. Fox puts Jen into a drugged sleep and places her in a cavern, where Mu Bai and Shu Lien discover her. Fox suddenly attacks them with poisoned needles. Mu Bai mortally wounds Fox, only to realize that one of the needles has hit him in the neck. Before dying, Fox confesses that her goal had been to kill Jen because Jen had hidden the secrets of Wudang's fighting techniques from her.
Contrite, Jen leaves to prepare an antidote for the poisoned dart. With his last breath, Mu Bai finally confesses his love for Shu Lien. He dies in her arms as Jen returns. Shu Lien forgives Jen, telling her to go to Lo and always be true to herself. The Green Destiny is returned to Sir Te. Jen goes to Mount Wudang and spends the night with Lo. The next morning, Lo finds Jen standing on a bridge overlooking the edge of the mountain. In an echo of the legend that they spoke about in the desert, she asks him to make a wish. Lo wishes for them to be together again, back in the desert. Jen leaps from the bridge, falling into the mists below.
Credits from British Film Institute:
The title "Crouching Tiger, Hidden Dragon" is a literal translation of the Chinese idiom "臥虎藏龍" which describes a place or situation that is full of unnoticed masters. It is from a poem of the ancient Chinese poet Yu Xin (513–581) that reads "暗石疑藏虎,盤根似臥龍", which means "behind the rock in the dark probably hides a tiger, and the coiling giant root resembles a crouching dragon". The title also has several other layers of meaning. On one level, the Chinese characters in the title connect to the narrative that the last character in Xiaohu and Jiaolong's names mean "tiger" and "dragon", respectively. On another level, the Chinese idiomatic phrase is an expression referring to the undercurrents of emotion, passion, and secret desire that lie beneath the surface of polite society and civil behavior, which alludes to the film's storyline.
The success of the Disney animated feature Mulan (1998) popularized the image of the Chinese woman warrior in the West. The storyline of Crouching Tiger, Hidden Dragon is mostly driven by the three female characters. In particular, Jen is driven by her desire to be free from the gender role imposed on her, while Shu Lien, herself oppressed by the gender role, tries to lead Jen back into the role deemed appropriate for her. Some prominent martial arts disciplines are traditionally held to have been originated by women, e.g., Wing Chun. The film's title refers to masters whom one does not notice, a group that would necessarily include mostly women, and it therefore suggests the advantage of a female bodyguard.
Poison is also a significant theme in the film. The Chinese word "毒" (dú) means not only physical poison but also cruelty and sinfulness. In the world of martial arts, the use of poison is considered an act of one who is too cowardly and dishonorable to fight; and indeed, the only character who explicitly fits these characteristics is Jade Fox. The poison is a weapon of her bitterness and quest for vengeance: she poisons the master of Wudang, attempts to poison Jen, and succeeds in killing Mu Bai using a poisoned needle. In further play on this theme by the director, Jade Fox, as she dies, refers to the poison from a young child, "the deceit of an eight-year-old girl", referring to what she considers her own spiritual poisoning by her young apprentice Jen. Li Mu Bai himself warns that, without guidance, Jen could become a "poison dragon".
The story is set during the Qing dynasty (1644–1912), but it does not specify an exact time. Lee sought to present a "China of the imagination" rather than an accurate vision of Chinese history. At the same time, Lee also wanted to make a film that Western audiences would want to see. Thus, the film is shot for a balance between Eastern and Western aesthetics. There are some scenes showing uncommon artistry for the typical martial arts film such as an airborne battle among wispy bamboo plants.
The film was adapted from the novel Crouching Tiger, Hidden Dragon by Wang Dulu, serialized between 1941 and 1942 in Qingdao Xinmin News. The novel is the fourth in a sequence of five. In the contract reached between Columbia Pictures and Ang Lee and Hsu Li-kong, the parties agreed to invest US$6 million in the production, but the stipulated amount had to be recouped more than six times over before the two parties would start to pay out dividends.
Shu Qi was Ang Lee's first choice for the role of Jen, but she turned it down.
Although its Academy Award for Best Foreign Language Film was presented to Taiwan, Crouching Tiger, Hidden Dragon was in fact an international co-production between companies in four regions: the Chinese company China Film Co-production Corporation, the American companies Columbia Pictures Film Production Asia, Sony Pictures Classics, and Good Machine, the Hong Kong company Edko Films, and the Taiwanese Zoom Hunt Productions, as well as the unspecified United China Vision and Asia Union Film & Entertainment, created solely for this film.
The film was made in Beijing, with location shooting in Urumchi, Western Provinces, Taklamakan Plateau, Shanghai and Anji of China. The first phase of shooting was in the Gobi Desert where it consistently rained. Director Ang Lee noted, "I didn't take one break in eight months, not even for half a day. I was miserable—I just didn't have the extra energy to be happy. Near the end, I could hardly breathe. I thought I was about to have a stroke." The stunt work was mostly performed by the actors themselves and Ang Lee stated in an interview that computers were used "only to remove the safety wires that held the actors" aloft. "Most of the time you can see their faces," he added. "That's really them in the trees."
Another compounding issue was the difference between the accents of the four lead actors: Chow Yun-fat is from Hong Kong and speaks Cantonese natively; Michelle Yeoh is from Malaysia and grew up speaking English and Malay, so she learned the Standard Chinese lines phonetically; Chang Chen is from Taiwan and speaks Standard Chinese with a Taiwanese accent. Only Zhang Ziyi spoke with the native Mandarin accent that Ang Lee wanted. Chow Yun-fat said that on "the first day [of shooting], I had to do 28 takes just because of the language. That's never happened before in my life."
The film specifically targeted Western audiences rather than the domestic audiences who were already used to Wuxia films. As a result, high-quality English subtitles were needed. Ang Lee, who was educated in the West, personally edited the subtitles to ensure they were satisfactory for Western audiences.
The score was composed by Tan Dun in 1999. It was performed for the film by the Shanghai Symphony Orchestra, the Shanghai National Orchestra and the Shanghai Percussion Ensemble. It features solo passages for cello played by Yo-Yo Ma. The final track, "A Love Before Time", features Coco Lee, who later sang it at the Academy Awards. The composer Chen Yuanlin also collaborated on the project. The music for the entire film was produced in two weeks. The following year (2000), Tan adapted his film score into a cello concerto called simply "Crouching Tiger."
The film was adapted into a video game and a series of comics, and it led to the original novel being adapted into a 34-episode Taiwanese television series. The latter was released in North America in 2004 as New Crouching Tiger, Hidden Dragon.
The film was released on VHS and DVD on 5 June 2001 by Columbia TriStar Home Entertainment. It was also released on UMD on 26 June 2005. In the United Kingdom, it was watched by 3.5 million viewers on television in 2004, making it the year's most-watched foreign-language film on television.
The film was re-released in a 4K restoration by Sony Pictures Classics in 2023.
The film premiered in cinemas on 8 December 2000, in limited release within the United States. During its opening weekend, the film opened in 15th place, grossing $663,205 in business, showing at 16 locations. On 12 January 2001, Crouching Tiger, Hidden Dragon premiered in cinemas in wide release throughout the U.S., grossing $8,647,295 in business, ranking in sixth place. The film Save the Last Dance came in first place during that weekend, grossing $23,444,930. The film's revenue dropped by almost 30% in its second week of release, earning $6,080,357. For that particular weekend, the film fell to eighth place, screening in 837 theaters. Save the Last Dance remained unchanged in first place, grossing $15,366,047 in box-office revenue. During its final week in release, Crouching Tiger, Hidden Dragon opened in a distant 50th place with $37,233 in revenue. The film went on to top out domestically at $128,078,872 in total ticket sales through a 31-week theatrical run. Internationally, the film took in an additional $85,446,864 in box-office business for a combined worldwide total of $213,525,736. For 2000 as a whole, the film cumulatively ranked at a worldwide box-office performance position of 19.
Crouching Tiger, Hidden Dragon, which is based on an early 20th century novel by Wang Dulu, unfolds much like a comic book, with the characters and their circumstances being painted using wide brush strokes. Subtlety is not part of Lee's palette; he is going for something grand and melodramatic, and that's what he gets.
Crouching Tiger, Hidden Dragon was widely acclaimed in the Western world, receiving numerous awards. On Rotten Tomatoes, the film holds an approval rating of 98% based on 168 reviews, with an average rating of 8.6/10. The site's critical consensus states: "The movie that catapulted Ang Lee into the ranks of upper echelon Hollywood filmmakers, Crouching Tiger, Hidden Dragon features a deft mix of amazing martial arts battles, beautiful scenery, and tasteful drama." Metacritic reported the film had an average score of 94 out of 100, based on 32 reviews, indicating "universal acclaim".
Some Chinese-speaking viewers were bothered by the accents of the leading actors. Neither Chow (a native Cantonese speaker) nor Yeoh (who was born and raised in Malaysia) spoke Mandarin Chinese as a mother tongue. All four main actors spoke Standard Chinese with vastly different accents: Chow speaks with a Cantonese accent, Yeoh with a Malaysian accent, Chang Chen with a Taiwanese accent, and Zhang Ziyi with a Beijing accent. Yeoh responded to this complaint in a 28 December 2000, interview with Cinescape. She argued, "My character lived outside of Beijing, and so I didn't have to do the Beijing accent." When the interviewer, Craig Reid, remarked, "My mother-in-law has this strange Sichuan-Mandarin accent that's hard for me to understand," Yeoh responded: "Yes, provinces all have their very own strong accents. When we first started the movie, Cheng Pei Pei was going to have her accent, and Chang Zhen was going to have his accent, and this person would have that accent. And in the end nobody could understand what they were saying. Forget about us, even the crew from Beijing thought this was all weird."
The film led to a boost in the popularity of Chinese wuxia films in the Western world, where they were previously little known, and led to films such as Hero and House of Flying Daggers, both directed by Zhang Yimou, being marketed towards Western audiences. The film also provided the breakthrough role of Zhang Ziyi's career; she noted:
Because of movies like Crouching Tiger, Hidden Dragon, Hero, and Memoirs of a Geisha, a lot of people in the United States have become interested not only in me but in Chinese and Asian actors in general. Because of these movies, maybe there will be more opportunities for Asian actors.
Film Journal noted that Crouching Tiger, Hidden Dragon "pulled off the rare trifecta of critical acclaim, boffo box-office and gestalt shift", in reference to its ground-breaking success for a subtitled film in the American market.
Gathering widespread critical acclaim at the Toronto and New York film festivals, the film also became a favorite when Academy Awards nominations were announced in 2001. The film was screened out of competition at the 2000 Cannes Film Festival. The film received ten Academy Award nominations, which was the highest ever for a non-English language film, up until it was tied by Roma (2018).
The film is ranked at number 497 on Empire's 2008 list of the 500 greatest movies of all time, and at number 66 in the magazine's 100 Best Films of World Cinema, published in 2010. In 2010, the Independent Film & Television Alliance selected the film as one of the 30 Most Significant Independent Films of the last 30 years. In 2016, it was voted the 35th-best film of the 21st century as picked by 177 film critics from around the world in a poll conducted by BBC. The film was included in BBC's 2018 list of the 100 greatest foreign-language films, ranked by 209 critics from 43 countries around the world. In 2019, The Guardian ranked the film 51st in its 100 best films of the 21st century list.
In 2001, it was reported that director Ang Lee was planning to make a sequel to the film. The sequel, Crouching Tiger, Hidden Dragon: Sword of Destiny, was released in 2016. It was directed by Yuen Wo-ping, who was the action choreographer for the first film. It is a co-production between Pegasus Media, China Film Group Corporation, and the Weinstein Company. Unlike the original film, the sequel was filmed in English for international release and dubbed into Chinese for Chinese releases.
Sword of Destiny is based on Iron Knight, Silver Vase, the next (and last) novel in the Crane–Iron Pentalogy. It features a mostly new cast, headed by Donnie Yen. Michelle Yeoh reprised her role from the original. Zhang Ziyi was also approached to appear in Sword of Destiny but refused, stating that she would only appear in a sequel if Ang Lee were directing it.
In the West, the sequel was for the most part not shown in theaters, instead being distributed direct-to-video by the streaming service Netflix.
MTV News related the theme of Janet Jackson's song "China Love" to the film: in the song, Jackson sings of an emperor's daughter who is in love with a warrior but unable to sustain the relationship when forced to marry into royalty.
The names of the pterosaur genus Kryptodrakon and the ceratopsian genus Yinlong (both meaning "hidden dragon" in Greek and Chinese respectively) allude to the film.
The character of Lo, or "Dark Cloud" the desert bandit, influenced the development of the protagonist of the Prince of Persia series of video games.
In the video game Def Jam Fight for NY: The Takeover, there are two hybrid fighting styles that pay homage to the film: Crouching Tiger (martial arts + street fighting + submissions) and Hidden Dragon (martial arts + street fighting + kickboxing). | [
{
"paragraph_id": 0,
"text": "Crouching Tiger, Hidden Dragon is a 2000 Mandarin-language wuxia martial arts adventure film directed by Ang Lee and written for the screen by Wang Hui-ling, James Schamus, and Tsai Kuo-jung. The film stars Chow Yun-fat, Michelle Yeoh, Zhang Ziyi, and Chang Chen. It is based on the Chinese novel of the same name serialized between 1941 and 1942 by Wang Dulu, the fourth part of his Crane Iron pentalogy.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A multinational venture, the film was made on a US$17 million budget, and was produced by Edko Films and Zoom Hunt Productions in collaboration with China Film Co-productions Corporation and Asian Union Film & Entertainment for Columbia Pictures Film Production Asia in association with Good Machine International. The film premiered at the Cannes Film Festival on 18 May 2000, and was theatrically released in the United States on 8 December. With dialogue in Standard Chinese, subtitled for various markets, Crouching Tiger, Hidden Dragon became a surprise international success, grossing $213.5 million worldwide. It grossed US$128 million in the United States, becoming the highest-grossing foreign-language film produced overseas in American history. The film was the first foreign-language film to break the $100 million mark in the United States.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The film received universal acclaim from critics, praised for its story, direction, cinematography, and martial arts sequences. Crouching Tiger, Hidden Dragon won over 40 awards and was nominated for 10 Academy Awards in 2001, including Best Picture, and won Best Foreign Language Film, Best Art Direction, Best Original Score, and Best Cinematography, receiving the most nominations ever for a non-English-language film at the time, until 2018's Roma tied this record. The film also won four BAFTAs and two Golden Globe Awards, each of them for Best Foreign Film. For retrospective years, Crouching Tiger is often cited as one of the finest wuxia films ever made and has been widely regarded one of the greatest films in the 21st century.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In Qing dynasty China, Li Mu Bai is a renowned Wudang swordsman, and his friend Yu Shu Lien, a female warrior, heads a private security company. Shu Lien and Mu Bai have long had feelings for each other, but because Shu Lien had been engaged to Mu Bai's close friend, Meng Sizhao before his death, Shu Lien and Mu Bai feel bound by loyalty to Meng Sizhao and have not revealed their feelings to each other. Mu Bai, choosing to retire from the life of a swordsman, asks Shu Lien to give his fabled 400-year-old sword \"Green Destiny\" to their benefactor Sir Te in Beijing. Long ago, Mu Bai's teacher was killed by Jade Fox, a woman who sought to learn Wudang secrets. While at Sir Te's place, Shu Lien meets Yu Jiaolong, or Jen, who is the daughter of the rich and powerful Governor Yu and is about to get married.",
"title": "Plot"
},
{
"paragraph_id": 4,
"text": "One evening, a masked thief sneaks into Sir Te's estate and steals the Green Destiny. Sir Te's servant Master Bo and Shu Lien trace the theft to Governor Yu's compound, where Jade Fox had been posing as Jen's governess for many years. Soon after, Mu Bai arrives in Beijing and discusses the theft with Shu Lien. Master Bo makes the acquaintance of Inspector Tsai, a police investigator from the provinces, and his daughter May, who have come to Beijing in pursuit of Fox. Fox challenges the pair and Master Bo to a showdown that night. Following a protracted battle, the group is on the verge of defeat when Mu Bai arrives and outmaneuvers Fox. She reveals that she killed Mu Bai's teacher because he would sleep with her, but refuse to take a woman as a disciple, and she felt it poetic justice for him to die at a woman's hand. Just as Mu Bai is about to kill her, the masked thief reappears and helps Fox. Fox kills Tsai before fleeing with the thief (who is revealed to be Jen). After seeing Jen fight Mu Bai, Fox realizes Jen had been secretly studying the Wudang manual. Fox is illiterate and could only follow the diagrams, whereas Jen's ability to read the manual allowed her to surpass her teacher in martial arts.",
"title": "Plot"
},
{
"paragraph_id": 5,
"text": "At night, a bandit named Lo breaks into Jen's bedroom and asks her to leave with him. In the past, when Governor Yu and his family were traveling in the western deserts of Xinjiang, Lo and his bandits raided Jen's caravan and Lo stole her comb. She pursued him to his desert cave to retrieve her comb. However, the pair soon fell in love. Lo eventually convinced Jen to return to her family, though not before telling her a legend of a man who jumped off a mountain to make his wishes come true. Because the man's heart was pure, his wish was granted and he was unharmed, but flew away never to be seen again. Lo has come now to Beijing to persuade Jen not to go through with her arranged marriage. However, Jen refuses to leave with him. Later, Lo interrupts Jen's wedding procession, begging her to leave with him. Shu Lien and Mu Bai convince Lo to wait for Jen at Mount Wudang, where he will be safe from Jen's family, who are furious with him. Jen runs away from her husband on their wedding night before the marriage can be consummated. Disguised in men's clothing, she is accosted at an inn by a large group of warriors; armed with the Green Destiny and her own superior combat skills, she emerges victorious.",
"title": "Plot"
},
{
"paragraph_id": 6,
"text": "Jen visits Shu Lien, who tells her that Lo is waiting for her at Mount Wudang. After an angry exchange, the two women engage in a duel. Shu Lien is the superior fighter, but Jen wields the Green Destiny and is able to destroy each weapon that Shu Lien wields, until Shu Lien finally manages to defeat Jen with a broken sword. When Shu Lien shows mercy, Jen wounds Shu Lien in the arm. Mu Bai arrives and pursues Jen into a bamboo forest, where he offers to take her as his student. Jen agrees if he can take Green Destiny from her in three moves. Mu Bai is able to take the sword in only one move, but Jen reneges on her promise, and Mu Bai throws the sword over a waterfall. Jen dives after the sword and is rescued by Fox. Fox puts Jen into a drugged sleep and places her in a cavern, where Mu Bai and Shu Lien discover her. Fox suddenly attacks them with poisoned needles. Mu Bai mortally wounds Fox, only to realize that one of the needles has hit him in the neck. Before dying, Fox confesses that her goal had been to kill Jen because Jen had hidden the secrets of Wudang's fighting techniques from her.",
"title": "Plot"
},
{
"paragraph_id": 7,
"text": "Contrite, Jen leaves to prepare an antidote for the poisoned dart. With his last breath, Mu Bai finally confesses his love for Shu Lien. He dies in her arms as Jen returns. Shu Lien forgives Jen, telling her to go to Lo and always be true to herself. The Green Destiny is returned to Sir Te. Jen goes to Mount Wudang and spends the night with Lo. The next morning, Lo finds Jen standing on a bridge overlooking the edge of the mountain. In an echo of the legend that they spoke about in the desert, she asks him to make a wish. Lo wishes for them to be together again, back in the desert. Jen leaps from the bridge, falling into the mists below.",
"title": "Plot"
},
{
"paragraph_id": 8,
"text": "Credits from British Film Institute:",
"title": "Cast"
},
{
"paragraph_id": 9,
"text": "The title \"Crouching Tiger, Hidden Dragon\" is a literal translation of the Chinese idiom \"臥虎藏龍\" which describes a place or situation that is full of unnoticed masters. It is from a poem of the ancient Chinese poet Yu Xin (513–581) that reads \"暗石疑藏虎,盤根似臥龍\", which means \"behind the rock in the dark probably hides a tiger, and the coiling giant root resembles a crouching dragon\". The title also has several other layers of meaning. On one level, the Chinese characters in the title connect to the narrative that the last character in Xiaohu and Jiaolong's names mean \"tiger\" and \"dragon\", respectively. On another level, the Chinese idiomatic phrase is an expression referring to the undercurrents of emotion, passion, and secret desire that lie beneath the surface of polite society and civil behavior, which alludes to the film's storyline.",
"title": "Themes and interpretations"
},
{
"paragraph_id": 10,
"text": "The success of the Disney animated feature Mulan (1998) popularized the image of the Chinese woman warrior in the west. The storyline of Crouching Tiger, Hidden Dragon is mostly driven by the three female characters. In particular, Jen is driven by her desire to be free from the gender role imposed on her, while Shu Lien, herself oppressed by the gender role, tries to lead Jen back into the role deemed appropriate for her. Some prominent martial arts disciplines are traditionally held to have been originated by women, e.g., Wing Chun. The film's title refers to masters one does not notice, which necessarily includes mostly women, and therefore suggests the advantage of a female bodyguard.",
"title": "Themes and interpretations"
},
{
"paragraph_id": 11,
"text": "Poison is also a significant theme in the film. The Chinese word \"毒\" (dú) means not only physical poison but also cruelty and sinfulness. In the world of martial arts, the use of poison is considered an act of one who is too cowardly and dishonorable to fight; and indeed, the only character who explicitly fits these characteristics is Jade Fox. The poison is a weapon of her bitterness and quest for vengeance: she poisons the master of Wudang, attempts to poison Jen, and succeeds in killing Mu Bai using a poisoned needle. In further play on this theme by the director, Jade Fox, as she dies, refers to the poison from a young child, \"the deceit of an eight-year-old girl\", referring to what she considers her own spiritual poisoning by her young apprentice Jen. Li Mu Bai himself warns that, without guidance, Jen could become a \"poison dragon\".",
"title": "Themes and interpretations"
},
{
"paragraph_id": 12,
"text": "The story is set during the Qing dynasty (1644–1912), but it does not specify an exact time. Lee sought to present a \"China of the imagination\" rather than an accurate vision of Chinese history. At the same time, Lee also wanted to make a film that Western audiences would want to see. Thus, the film is shot for a balance between Eastern and Western aesthetics. There are some scenes showing uncommon artistry for the typical martial arts film such as an airborne battle among wispy bamboo plants.",
"title": "Themes and interpretations"
},
{
"paragraph_id": 13,
"text": "The film was adapted from the novel Crouching Tiger, Hidden Dragon by Wang Dulu, serialized between 1941 and 1942 in Qingdao Xinmin News. The novel is the fourth in a sequence of five. In the contract reached between Columbia Pictures and Ang Lee and Hsu Li-kong, they agreed to invest US$6 million in filming, but the stipulated recovery amount must be more than six times before the two parties will start to pay dividends.",
"title": "Production"
},
{
"paragraph_id": 14,
"text": "Shu Qi was Ang Lee's first choice for the role of Jen, but she turned it down.",
"title": "Production"
},
{
"paragraph_id": 15,
"text": "Although its Academy Award for Best Foreign Language Film was presented to Taiwan, Crouching Tiger, Hidden Dragon was in fact an international co-production between companies in four regions: the Chinese company China Film Co-production Corporation, the American companies Columbia Pictures Film Production Asia, Sony Pictures Classics, and Good Machine, the Hong Kong company Edko Films, and the Taiwanese Zoom Hunt Productions, as well as the unspecified United China Vision and Asia Union Film & Entertainment, created solely for this film.",
"title": "Production"
},
{
"paragraph_id": 16,
"text": "The film was made in Beijing, with location shooting in Urumchi, Western Provinces, Taklamakan Plateau, Shanghai and Anji of China. The first phase of shooting was in the Gobi Desert where it consistently rained. Director Ang Lee noted, \"I didn't take one break in eight months, not even for half a day. I was miserable—I just didn't have the extra energy to be happy. Near the end, I could hardly breathe. I thought I was about to have a stroke.\" The stunt work was mostly performed by the actors themselves and Ang Lee stated in an interview that computers were used \"only to remove the safety wires that held the actors\" aloft. \"Most of the time you can see their faces,\" he added. \"That's really them in the trees.\"",
"title": "Production"
},
{
"paragraph_id": 17,
"text": "Another compounding issue was the difference between accents of the four lead actors: Chow Yun-fat is from Hong Kong and speaks Cantonese natively; Michelle Yeoh is from Malaysia and grew up speaking English and Malay, so she learned the Standard Chinese lines phonetically; Chang Chen is from Taiwan and he speaks Standard Chinese in a Taiwanese accent. Only Zhang Ziyi spoke with a native Mandarin accent that Ang Lee wanted. Chow Yun Fat said, on \"the first day [of shooting], I had to do 28 takes just because of the language. That's never happened before in my life.\"",
"title": "Production"
},
{
"paragraph_id": 18,
"text": "The film specifically targeted Western audiences rather than the domestic audiences who were already used to Wuxia films. As a result, high-quality English subtitles were needed. Ang Lee, who was educated in the West, personally edited the subtitles to ensure they were satisfactory for Western audiences.",
"title": "Production"
},
{
"paragraph_id": 19,
"text": "The score was composed by Dun TAN in 1999. It was played for the movie by the Shanghai Symphony Orchestra, the Shanghai National Orchestra and the Shanghai Percussion Ensemble. It features solo passages for cello played by Yo-Yo Ma. The \"last track\" (\"A Love Before Time\") features Coco Lee, who later sang it at the Academy Awards. The composer Chen Yuanlin also collaborated in the project. The music for the entire film was produced in two weeks. Tan the next year (2000) adapted his filmscore as a cello concerto called simply \"Crouching Tiger.\"",
"title": "Production"
},
{
"paragraph_id": 20,
"text": "The film was adapted into a video game and a series of comics, and it led to the original novel being adapted into a 34-episode Taiwanese television series. The latter was released in 2004 as New Crouching Tiger, Hidden Dragon for Northern American release.",
"title": "Release"
},
{
"paragraph_id": 21,
"text": "The film was released on VHS and DVD on 5 June 2001 by Columbia TriStar Home Entertainment. It was also released on UMD on 26 June 2005. In the United Kingdom, it was watched by 3.5 million viewers on television in 2004, making it the year's most-watched foreign-language film on television.",
"title": "Release"
},
{
"paragraph_id": 22,
"text": "The film was re-released in a 4K restoration by Sony Pictures Classics in 2023.",
"title": "Release"
},
{
"paragraph_id": 23,
"text": "The film premiered in cinemas on 8 December 2000, in limited release within the United States. During its opening weekend, the film opened in 15th place, grossing $663,205 in business, showing at 16 locations. On 12 January 2001, Crouching Tiger, Hidden Dragon premiered in cinemas in wide release throughout the U.S., grossing $8,647,295 in business, ranking in sixth place. The film Save the Last Dance came in first place during that weekend, grossing $23,444,930. The film's revenue dropped by almost 30% in its second week of release, earning $6,080,357. For that particular weekend, the film fell to eighth place, screening in 837 theaters. Save the Last Dance remained unchanged in first place, grossing $15,366,047 in box-office revenue. During its final week in release, Crouching Tiger, Hidden Dragon opened in a distant 50th place with $37,233 in revenue. The film went on to top out domestically at $128,078,872 in total ticket sales through a 31-week theatrical run. Internationally, the film took in an additional $85,446,864 in box-office business for a combined worldwide total of $213,525,736. For 2000 as a whole, the film cumulatively ranked at a worldwide box-office performance position of 19.",
"title": "Reception"
},
{
"paragraph_id": 24,
"text": "Crouching Tiger, Hidden Dragon, which is based on an early 20th century novel by Wang Dulu, unfolds much like a comic book, with the characters and their circumstances being painted using wide brush strokes. Subtlety is not part of Lee's palette; he is going for something grand and melodramatic, and that's what he gets.",
"title": "Reception"
},
{
"paragraph_id": 25,
"text": "Crouching Tiger, Hidden Dragon was widely acclaimed in the Western world, receiving numerous awards. On Rotten Tomatoes, the film holds an approval rating of 98% based on 168 reviews, with an average rating of 8.6/10. The site's critical consensus states: \"The movie that catapulted Ang Lee into the ranks of upper echelon Hollywood filmmakers, Crouching Tiger, Hidden Dragon features a deft mix of amazing martial arts battles, beautiful scenery, and tasteful drama.\" Metacritic reported the film had an average score of 94 out of 100, based on 32 reviews, indicating \"universal acclaim\".",
"title": "Reception"
},
{
"paragraph_id": 26,
"text": "Some Chinese-speaking viewers were bothered by the accents of the leading actors. Neither Chow (a native Cantonese speaker) nor Yeoh (who was born and raised in Malaysia) spoke Mandarin Chinese as a mother tongue. All four main actors spoke Standard Chinese with vastly different accents: Chow speaks with a Cantonese accent, Yeoh with a Malaysian accent, Chang Chen with a Taiwanese accent, and Zhang Ziyi with a Beijing accent. Yeoh responded to this complaint in a 28 December 2000, interview with Cinescape. She argued, \"My character lived outside of Beijing, and so I didn't have to do the Beijing accent.\" When the interviewer, Craig Reid, remarked, \"My mother-in-law has this strange Sichuan-Mandarin accent that's hard for me to understand,\" Yeoh responded: \"Yes, provinces all have their very own strong accents. When we first started the movie, Cheng Pei Pei was going to have her accent, and Chang Zhen was going to have his accent, and this person would have that accent. And in the end nobody could understand what they were saying. Forget about us, even the crew from Beijing thought this was all weird.\"",
"title": "Reception"
},
{
"paragraph_id": 27,
"text": "The film led to a boost in popularity of Chinese wuxia films in the western world, where they were previously little known, and led to films such as Hero and House of Flying Daggers, both directed by Zhang Yimou, being marketed towards Western audiences. The film also provided the breakthrough role for Zhang Ziyi's career, who noted:",
"title": "Reception"
},
{
"paragraph_id": 28,
"text": "Because of movies like Crouching Tiger, Hidden Dragon, Hero, and Memoirs of a Geisha, a lot of people in the United States have become interested not only in me but in Chinese and Asian actors in general. Because of these movies, maybe there will be more opportunities for Asian actors.",
"title": "Reception"
},
{
"paragraph_id": 29,
"text": "Film Journal noted that Crouching Tiger, Hidden Dragon \"pulled off the rare trifecta of critical acclaim, boffo box-office and gestalt shift\", in reference to its ground-breaking success for a subtitled film in the American market.",
"title": "Reception"
},
{
"paragraph_id": 30,
"text": "Gathering widespread critical acclaim at the Toronto and New York film festivals, the film also became a favorite when Academy Awards nominations were announced in 2001. The film was screened out of competition at the 2000 Cannes Film Festival. The film received ten Academy Award nominations, which was the highest ever for a non-English language film, up until it was tied by Roma (2018).",
"title": "Reception"
},
{
"paragraph_id": 31,
"text": "The film is ranked at number 497 on Empire's 2008 list of the 500 greatest movies of all time. and at number 66 in the magazine's 100 Best Films of World Cinema, published in 2010. In 2010, the Independent Film & Television Alliance selected the film as one of the 30 Most Significant Independent Films of the last 30 years. In 2016, it was voted the 35th-best film of the 21st century as picked by 177 film critics from around the world in a poll conducted by BBC. The film was included in BBC's 2018 list of The 100 greatest foreign language films ranked by 209 critics from 43 countries around the world. In 2019, The Guardian ranked the film 51st in its 100 best films of the 21st century list.",
"title": "Reception"
},
{
"paragraph_id": 32,
"text": "In 2001, it was reported that director Ang Lee was planning to make a sequel to the film. Crouching Tiger, Hidden Dragon: Sword of Destiny, was released in 2016. It was directed by Yuen Wo-ping, who was the action choreographer for the first film. It is a co-production between Pegasus Media, China Film Group Corporation, and the Weinstein Company. Unlike the original film, the sequel was filmed in English for international release and dubbed into Chinese for Chinese releases.",
"title": "Sequel"
},
{
"paragraph_id": 33,
"text": "Sword of Destiny is based on Iron Knight, Silver Vase, the next (and last) novel in the Crane–Iron Pentalogy. It features a mostly new cast, headed by Donnie Yen. Michelle Yeoh reprised her role from the original. Zhang Ziyi was also approached to appear in Sword of Destiny but refused, stating that she would only appear in a sequel if Ang Lee were directing it.",
"title": "Sequel"
},
{
"paragraph_id": 34,
"text": "In the West, the sequel was for the most part not shown in theaters, instead being distributed direct-to-video by the streaming service Netflix.",
"title": "Sequel"
},
{
"paragraph_id": 35,
"text": "The theme of Janet Jackson's song \"China Love\" was related to the film by MTV News, in which Jackson sings of the daughter of an emperor in love with a warrior, unable to sustain relations when forced to marry into royalty.",
"title": "Posterity"
},
{
"paragraph_id": 36,
"text": "The names of the pterosaur genus Kryptodrakon and the ceratopsian genus Yinlong (both meaning \"hidden dragon\" in Greek and Chinese respectively) allude to the film.",
"title": "Posterity"
},
{
"paragraph_id": 37,
"text": "The character of Lo, or \"Dark Cloud\" the desert bandit, influenced the development of the protagonist of the Prince of Persia series of video games.",
"title": "Posterity"
},
{
"paragraph_id": 38,
"text": "In the video game Def Jam Fight for NY: The Takeover there are two hybrid fighting styles that pay homage to this movie. Which have the following combinations: Crouching tiger (Martial Arts + Streetfighting + Submissions) and Hidden Dragon (Martial Arts + Streetfighting + Kickboxing).",
"title": "Posterity"
}
] | Crouching Tiger, Hidden Dragon is a 2000 Mandarin-language wuxia martial arts adventure film directed by Ang Lee and written for the screen by Wang Hui-ling, James Schamus, and Tsai Kuo-jung. The film stars Chow Yun-fat, Michelle Yeoh, Zhang Ziyi, and Chang Chen. It is based on the Chinese novel of the same name serialized between 1941 and 1942 by Wang Dulu, the fourth part of his Crane Iron pentalogy. A multinational venture, the film was made on a US$17 million budget, and was produced by Edko Films and Zoom Hunt Productions in collaboration with China Film Co-productions Corporation and Asian Union Film & Entertainment for Columbia Pictures Film Production Asia in association with Good Machine International. The film premiered at the Cannes Film Festival on 18 May 2000, and was theatrically released in the United States on 8 December. With dialogue in Standard Chinese, subtitled for various markets, Crouching Tiger, Hidden Dragon became a surprise international success, grossing $213.5 million worldwide. It grossed US$128 million in the United States, becoming the highest-grossing foreign-language film produced overseas in American history. The film was the first foreign-language film to break the $100 million mark in the United States. The film received universal acclaim from critics, praised for its story, direction, cinematography, and martial arts sequences. Crouching Tiger, Hidden Dragon won over 40 awards and was nominated for 10 Academy Awards in 2001, including Best Picture, and won Best Foreign Language Film, Best Art Direction, Best Original Score, and Best Cinematography, receiving the most nominations ever for a non-English-language film at the time, until 2018's Roma tied this record. The film also won four BAFTAs and two Golden Globe Awards, including Best Foreign Film. In the years since its release, Crouching Tiger has often been cited as one of the finest wuxia films ever made and has been widely regarded as one of the greatest films of the 21st century. | 2001-11-02T11:16:20Z | 2023-12-08T14:03:51Z | [
"Template:Infobox Chinese",
"Template:Div col end",
"Template:Cite web",
"Template:Ang Lee",
"Template:Tan Dun",
"Template:TIFF People's Choice Award",
"Template:Hatnote group",
"Template:Refh",
"Template:Reflist",
"Template:Cite news",
"Template:Cite magazine",
"Template:Academy Award Best Foreign Language Film",
"Template:BAFTA Best Foreign Language Film",
"Template:MTV Movie Award for Best Fight",
"Template:Fact",
"Template:Ill",
"Template:Blockquote",
"Template:Nom",
"Template:Zh",
"Template:Official website",
"Template:Short description",
"Template:Infobox film",
"Template:Lang",
"Template:Nowrap",
"Template:Use dmy dates",
"Template:Cite journal",
"Template:In lang",
"Template:Wang Dulu",
"Template:Authority control",
"Template:Main",
"Template:Metacritic film",
"Template:Golden Globe Award for Best Foreign Language Film",
"Template:Mojo title",
"Template:Anchor",
"Template:Won",
"Template:Draw",
"Template:Webarchive",
"Template:IMDb title",
"Template:Portal bar",
"Template:Div col",
"Template:Cite book",
"Template:Wikiquote",
"Template:Rotten Tomatoes",
"Template:Taiwanese submissions for the Academy Award",
"Template:Transliteration"
] | https://en.wikipedia.org/wiki/Crouching_Tiger,_Hidden_Dragon |
5,314 | Charlemagne | Charlemagne (/ˈʃɑːrləmeɪn, ˌʃɑːrləˈmeɪn/ SHAR-lə-mayn, -MAYN; 2 April 748 – 28 January 814) was King of the Franks from 768, King of the Lombards from 774, and Emperor from 800, all until his death. Charlemagne succeeded in uniting the majority of Western and Central Europe, and he was the first recognized emperor to rule Western Europe after the fall of the Western Roman Empire approximately three centuries earlier. Charlemagne's rule saw a program of political and societal changes that had a lasting impact on Europe in the Middle Ages.
A member of the Frankish Carolingian dynasty, Charlemagne was the eldest son of Pepin the Short and Bertrada of Laon. With his brother Carloman I, he became king of the Franks in 768 following Pepin's death, and became sole ruler in 771. As king, he continued his father's policy towards the protection of the papacy and became its chief defender, removing the Lombards from power in northern Italy in 774. Charlemagne's reign saw a period of expansion that led to conquests of Bavaria, Saxony, and northern Spain, as well as other campaigns that led Charles to extend his rule over a vast area of Europe. He spread Christianity to his new conquests, often by force, as seen at the Massacre of Verden against the Saxons.
In 800, Charlemagne was crowned as emperor in Rome by Pope Leo III. While historians debate about the exact significance of the coronation, the title represented the height of prestige and authority he had achieved. Charlemagne's position as the first emperor in the West since Romulus Augustulus brought him into conflict with the contemporary Eastern Roman Empire based in Constantinople. As king and emperor, Charlemagne engaged in a series of reforms in administration, law, education, military organization, and religion which shaped Europe for centuries. The stability of his reign saw the beginning of a period of significant cultural activity known as the Carolingian Renaissance.
Charlemagne died in 814, and was laid to rest in the Aachen Cathedral, in his imperial capital city of Aachen. He was succeeded by his only surviving legitimate son, Louis the Pious. After Louis, the Frankish kingdom would be divided, eventually coalescing into West and East Francia, which would respectively become France and the Holy Roman Empire. Charlemagne's profound impact on the Middle Ages and his influence over the vast territory he ruled have led him to be called the "Father of Europe". He is seen as a founding figure by multiple European states, and many historical royal houses of Europe trace their lineage back to him. Charlemagne has been the subject of artwork, monuments, and literature since the medieval period, and has received veneration in the Catholic Church.
Various languages were spoken in Charlemagne's world, and he would have been known to contemporaries as: Karlus in the Germanic dialect he spoke; Karlo to Romance speakers; or Carolus (or an alternative form, Karolus) in Latin, the formal language of writing and diplomacy. Charles is the modern English form of these names.
The name Charlemagne, by which the emperor is normally known in English, comes from the French Charles-le-magne, meaning "Charles the Great". In modern German, he is known as Karl der Große. The nickname magnus (great) may have been associated with him already in his lifetime, but this is not certain. The contemporary Latin Royal Frankish Annals routinely call him Carolus magnus rex, "Charles the great king". As a nickname, it is certainly attested in the works of the Poeta Saxo around 900 and it only became standard in all the lands of his former empire around 1000.
Charles's achievements gave a new meaning to his name. In many Slavic, Baltic and Turkic languages, the very word for "king" derives from his name; e.g., Polish: król, Ukrainian: король (korol'), Czech: král, Slovak: kráľ, Lithuanian: karalius, Latvian: karalis, Russian: король, Macedonian: крал, Bulgarian: крал, Serbo-Croatian: краљ/kralj, Turkish: kral. This development parallels that of the name of the Caesars in the original Roman Empire, which became kaiser and tsar (or czar), among others.
By the 6th century, the western Germanic tribe of the Franks had been Christianised, due in considerable measure to the Catholic conversion of Clovis I. Francia, ruled by the Merovingians, was the most powerful of the kingdoms that succeeded the Western Roman Empire, encompassing nearly all of modern France and Switzerland, along with parts of modern Germany and the Low Countries. Francia was often divided into several sub-kingdoms under different Merovingian kings, due to ill-defined succession laws. The late 7th century saw a period of war and instability following the murder of King Childeric II, which led to factional struggles among the Frankish aristocrats.
In 687, Pepin of Herstal, mayor of the palace of the Frankish sub-kingdom Austrasia, ended the strife between various kings and their mayors with his victory at Tertry. Pepin was the grandson of two important figures of the Austrasian Kingdom: Saint Arnulf of Metz and Pepin of Landen. Pepin's position as mayor of the palace saw him gain power as the Merovingian kings' own waned. Pepin was eventually succeeded by his son Charles, later known as Charles Martel. Charles did not support a Merovingian successor upon the death of King Theuderic IV in 737, leaving the throne vacant. Charles was able to pass on power and be succeeded in 741 by his sons Carloman and Pepin the Short, the father of Charlemagne.
The brothers placed Childeric III on the throne in 743. Carloman abdicated his position in 747 to travel to Rome and entered a monastery, and his son Drogo took his place. By 751 or 752, Pepin moved to depose Childeric and replace him as king. Early Carolingian-influenced sources claim that Pepin's seizure of the throne was sanctioned by Pope Stephen II, but modern historians dispute this. It is possible that papal approval only came when Stephen traveled to Francia in 754, apparently to request Pepin's aid against the Lombards, and on this trip anointed Pepin as king, legitimizing his rule. This papal visit is the earliest appearance of Charlemagne in the historical record, as he was sent to greet and escort the Pope, and he and his brother Carloman were anointed along with their father. Around the same time, Pepin moved to sideline Drogo, sending him and his brother to a monastery.
Charlemagne's birth date is uncertain, but he was most likely born in 748. An older tradition, following the 9th-century biographer Einhard's report that Charlemagne was 72 at death, gives a birth year of 742. Einhard, not knowing the emperor's true age, based this on the Roman emperor Augustus' age as reported in Suetonius' biography. The German scholar Karl Werner challenged the acceptance of Einhard's date and cited a near-contemporary addition to annals which recorded Charlemagne's birth in 747. Lorsch Abbey commemorated Charlemagne's date of birth as 2 April from the mid-9th century, and this date is likely genuine. As the annalists recorded the start of the year from Easter rather than 1 January, Matthias Becher built on Werner's work and showed that 2 April in the year recorded would actually have fallen in 748. 2 April 748 has therefore become the accepted date among scholars. The date of 742 has led to the belief that Charlemagne may have been an illegitimate child, as Pepin and Bertrada were bound by a private contract at the time of his birth but did not marry until 744. Charlemagne's place of birth is also unknown, but it may have been at the Frankish palaces of Vaires-sur-Marne or Quierzy.
Charlemagne appears only sparsely in the Frankish annals from his anointing by Pope Stephen until the death of his father.
Charlemagne began issuing charters in his own name in 760, and is recorded as joining his father on campaign in 761. During Pepin's reign, Aquitaine was constantly in rebellion against his rule. Pepin fell ill on campaign in Aquitaine and died on 24 September 768, and Charlemagne and Carloman succeeded their father. While the brothers maintained separate palaces and separate spheres of influence, it was still a joint rulership. The brothers' immediate concern was the ongoing uprising in Aquitaine. Although they marched into Aquitaine together, Carloman abandoned the campaign and Charlemagne completed it on his own. Charlemagne's capture of Duke Hunald marked the end of ten years of war to bring Aquitaine into line.
Carloman's refusal to participate in the war against Aquitaine led to a rift between the two kings. It is uncertain why Carloman did not join Charlemagne; the brothers may have disagreed over control of the territory, or Carloman may have been focused on securing his rule in the north of Francia. The brothers reported to Pope Stephen III that their relations had returned to normal, though it is unclear whether this was true. Regardless of potential strife between the kings, they still maintained a joint rule out of practicality, and both worked to win the support of the clergy and local elites to secure their positions.
The political affairs of Italy became a focus of Charlemagne's attention. The Papacy had sought the protection of the Franks from the aggression of the Lombards since the time of Charles Martel, as the ability of the Byzantine Empire to control Central Italy was fading. Charlemagne and Carloman apparently both had troops in Rome, indicating a joint policy in Italy. Bertrada, mother of the Frankish kings, went to broker a betrothal between one of her sons and a daughter of the Lombard king Desiderius in 770. It is traditionally reported that this daughter was named Desiderata and married Charlemagne, though she may have been named Gerperga.
Carloman died suddenly on 4 December 771, leaving Charlemagne as sole King of the Franks. Carloman's wife Gerberga and their children fled to the court of Desiderius, as Charlemagne moved immediately to secure his hold on his brother's territory. As part of this effort, Charlemagne married Hildegard, daughter of a powerful magnate in Carloman's lands; in doing so, he put aside his marriage to Desiderius' daughter.
At his succession in 772, Pope Adrian I demanded the return of certain cities in the former exarchate of Ravenna in accordance with a promise at the succession of Desiderius. Instead, Desiderius took over certain papal cities and invaded the Pentapolis, heading for Rome. Adrian sent ambassadors to Charlemagne in autumn requesting he enforce the policies of his father, Pepin. Desiderius sent his own ambassadors denying the pope's charges. The ambassadors met at Thionville, and Charlemagne upheld the pope's side. Charlemagne demanded what the pope had requested, but Desiderius swore never to comply. Charlemagne and his uncle Bernard crossed the Alps in 773 and chased the Lombards back to Pavia, which they then besieged. Charlemagne temporarily left the siege to deal with Adelchis, son of Desiderius, who was raising an army at Verona. The young prince was chased to the Adriatic littoral and fled to Constantinople to plead for assistance from Constantine V, who was waging war with Bulgaria.
The siege lasted until the spring of 774 when Charlemagne visited the pope in Rome. There he confirmed his father's grants of land, with some later chronicles falsely claiming that he also expanded them, granting Tuscany, Emilia, Venice and Corsica. The pope granted him the title patrician. He then returned to Pavia, where the Lombards were on the verge of surrendering. In return for their lives, the Lombards surrendered and opened the gates in early summer. Desiderius was sent to the abbey of Corbie, and his son Adelchis died in Constantinople, a patrician. Charles, unusually, had himself crowned with the Iron Crown and made the magnates of Lombardy pay homage to him at Pavia. Only Duke Arechis II of Benevento refused to submit and proclaimed independence. Charlemagne was then master of Italy as king of the Lombards. He left Italy with a garrison in Pavia and a few Frankish counts in place the same year.
Instability continued in Italy. In 776, Dukes Hrodgaud of Friuli and Hildeprand of Spoleto rebelled. Charlemagne rushed back from Saxony and defeated the Duke of Friuli in battle; the Duke was slain. The Duke of Spoleto signed a treaty. Their co-conspirator, Arechis, was not subdued, and Adelchis, their candidate in Byzantium, never left that city. Northern Italy was now faithfully his.
In 787, Charlemagne directed his attention towards the Duchy of Benevento, where Arechis II was reigning independently with the self-given title of Princeps. Charlemagne's siege of Salerno forced Arechis into submission, and in return for peace, Arechis recognized Charlemagne's suzerainty and handed his son Grimoald III over as a hostage. After Arechis' death in 787, Grimoald was allowed to return to Benevento. In 788, the principality was invaded by Byzantine troops led by Adelchis, but his attempts were thwarted by Grimoald. The Franks assisted in the repulsion of Adelchis, but, in turn, attacked Benevento's territories several times, obtaining small gains, notably the annexation of Chieti to the duchy of Spoleto. Later, Grimoald tried to throw off Frankish suzerainty, but Charles' sons, Pepin of Italy and Charles the Younger, forced him to submit in 792.
The destructive war led by Pepin in Aquitaine, although brought to a satisfactory conclusion for the Franks, proved the Frankish power structure south of the Loire was feeble and unreliable. After the defeat and death of Waifer in 768, while Aquitaine submitted again to the Carolingian dynasty, a new rebellion broke out in 769 led by Hunald II, a possible son of Waifer. He took refuge with the ally Duke Lupus II of Gascony, but probably out of fear of Charlemagne's reprisal, Lupus handed him over to the new King of the Franks to whom he pledged loyalty, which seemed to confirm the peace in the Basque area south of the Garonne. In the campaign of 769, Charlemagne seems to have followed a policy of "overwhelming force" and avoided a major pitched battle.
Wary of new Basque uprisings, Charlemagne seems to have tried to contain Duke Lupus's power by appointing Seguin as the Count of Bordeaux (778) and other counts of Frankish background in bordering areas (Toulouse, County of Fézensac). The Basque Duke, in turn, seems to have contributed decisively to, or even orchestrated, the Battle of Roncevaux Pass (referred to as "Basque treachery"). The defeat of Charlemagne's army in Roncevaux (778) confirmed his determination to rule directly by establishing the Kingdom of Aquitaine (ruled by Louis the Pious) based on a power base of Frankish officials, distributing lands among colonisers and allocating lands to the Church, which he took as an ally. A Christianisation programme was put in place across the high Pyrenees (778).
The new political arrangement for Vasconia did not sit well with local lords. In 788, Adalric fought and captured Chorson, the Carolingian Count of Toulouse. Chorson was eventually released, but Charlemagne, enraged at the compromise, decided to depose him and appointed his trustee William of Gellone in his place. William, in turn, fought the Basques and defeated them after banishing Adalric (790).
From 781 (Pallars, Ribagorça) to 806 (Pamplona under Frankish influence), taking the County of Toulouse for a power base, Charlemagne asserted Frankish authority over the Pyrenees by subduing the south-western marches of Toulouse (790) and establishing vassal counties on the southern Pyrenees that were to make up the Marca Hispanica. As of 794, a Frankish vassal, the Basque lord Belasko (al-Galashki, 'the Gaul') ruled Álava, but Pamplona remained under Cordovan and local control up to 806. Belasko and the counties in the Marca Hispánica provided the necessary base to attack the Andalusians (an expedition led by William Count of Toulouse and Louis the Pious to capture Barcelona in 801). Events in the Duchy of Vasconia (rebellion in Pamplona, count overthrown in Aragon, Duke Seguin of Bordeaux deposed, uprising of the Basque lords, etc.) were to prove it ephemeral upon Charlemagne's death.
According to the Muslim historian Ibn al-Athir, the Diet of Paderborn had received the representatives of the Muslim rulers of Zaragoza, Girona, Barcelona and Huesca. Their masters had been cornered in the Iberian peninsula by Abd ar-Rahman I, the Umayyad emir of Cordova. These "Saracen" (Moorish and Muwallad) rulers offered their homage to the king of the Franks in return for military support. Seeing an opportunity to extend Christendom and his own power, and believing the Saxons to be a fully conquered nation, Charlemagne agreed to go to Spain.
In 778, he led the Neustrian army across the Western Pyrenees, while the Austrasians, Lombards, and Burgundians passed over the Eastern Pyrenees. The armies met at Saragossa and Charlemagne received the homage of the Muslim rulers, Sulayman al-Arabi and Kasmin ibn Yusuf, but the city did not fall to him. Indeed, Charlemagne faced the toughest battle of his career. The Muslims forced him to retreat, so he decided to go home, as he could not trust the Basques, whom he had subdued by conquering Pamplona. He turned to leave Iberia, but as his army was crossing back through the Pass of Roncesvalles, one of the most famous events of his reign occurred: the Basques attacked and destroyed his rearguard and baggage train. The Battle of Roncevaux Pass, though less a battle than a skirmish, left many famous dead, including the seneschal Eggihard, the count of the palace Anselm, and the warden of the Breton March, Roland, inspiring the subsequent creation of The Song of Roland (La Chanson de Roland), regarded as the first major work in the French language.
The conquest of Italy brought Charlemagne in contact with Muslims who, at the time, controlled the Mediterranean. Charlemagne's eldest son, Pepin the Hunchback, was much occupied with Muslims in Italy. Charlemagne conquered Corsica and Sardinia at an unknown date and in 799 the Balearic Islands. The islands were often attacked by Muslim pirates, but the counts of Genoa and Tuscany (Boniface) controlled them with large fleets until the end of Charlemagne's reign. Charlemagne even had contact with the caliphal court in Baghdad. In 797 (or possibly 801), the caliph of Baghdad, Harun al-Rashid, presented Charlemagne with an Asian elephant named Abul-Abbas and a clock.
In Hispania, the struggle against Islam continued unabated throughout the latter half of his reign. Louis was in charge of the Spanish border. In 785, his men captured Girona permanently and extended Frankish control into the Catalan littoral for the duration of Charlemagne's reign (the area remained nominally Frankish until the Treaty of Corbeil in 1258). The Muslim chiefs in the northeast of Islamic Spain were constantly rebelling against Cordovan authority, and they often turned to the Franks for help. The Frankish border was slowly extended until 795, when Girona, Cardona, Ausona and Urgell were united into the new Spanish March, within the old duchy of Septimania.
In 797, Barcelona, the greatest city of the region, fell to the Franks when Zeid, its governor, rebelled against Cordova and, failing, handed it to them. The Umayyad authority recaptured it in 799. However, Louis of Aquitaine marched the entire army of his kingdom over the Pyrenees and besieged it for two years, wintering there from 800 to 801, when it capitulated. The Franks continued to press forward against the emir. They probably took Tarragona and forced the submission of Tortosa in 809. The last conquest brought them to the mouth of the Ebro and gave them raiding access to Valencia, prompting the Emir al-Hakam I to recognise their conquests in 813.
Charlemagne was engaged in almost constant warfare throughout his reign, often at the head of his elite scara bodyguard squadrons. In the Saxon Wars, spanning thirty years and eighteen battles, he conquered Saxony and proceeded to convert it to Christianity.
The Germanic Saxons were divided into four subgroups in four regions. Nearest to Austrasia was Westphalia and farthest away was Eastphalia. Between them was Engria and north of these three, at the base of the Jutland peninsula, was Nordalbingia.
In his first campaign, in 773, Charlemagne forced the Engrians to submit and cut down an Irminsul pillar near Paderborn. The campaign was cut short by his first expedition to Italy. He returned in 775, marching through Westphalia and conquering the Saxon fort at Sigiburg. He then crossed Engria, where he defeated the Saxons again. Finally, in Eastphalia, he defeated a Saxon force, and its leader Hessi converted to Christianity. Charlemagne returned through Westphalia, leaving encampments at Sigiburg and Eresburg, which had been important Saxon bastions. He then controlled Saxony with the exception of Nordalbingia, but Saxon resistance had not ended.
Following his subjugation of the Dukes of Friuli and Spoleto, Charlemagne returned rapidly to Saxony in 776, where a rebellion had destroyed his fortress at Eresburg. The Saxons were once again defeated, but their main leader, Widukind, escaped to Denmark, his wife's home. Charlemagne built a new camp at Karlstadt. In 777, he called a national diet at Paderborn to integrate Saxony fully into the Frankish kingdom. Many Saxons were baptised as Christians.
In the summer of 779, he again invaded Saxony and reconquered Eastphalia, Engria and Westphalia. At a diet near Lippe, he divided the land into missionary districts and himself assisted in several mass baptisms (780). He then returned to Italy and, for the first time, the Saxons did not immediately revolt. Saxony was peaceful from 780 to 782.
He returned to Saxony in 782 and instituted a code of law and appointed counts, both Saxon and Frankish. The laws were draconian on religious issues; for example, the Capitulatio de partibus Saxoniae prescribed death to Saxon pagans who refused to convert to Christianity. This led to renewed conflict. That year, in autumn, Widukind returned and led a new revolt. In response, at Verden in Lower Saxony, Charlemagne is recorded as having ordered the execution of 4,500 Saxon prisoners by beheading, known as the Massacre of Verden ("Verdener Blutgericht"). The killings triggered three years of renewed bloody warfare. During this war, the East Frisians between the Lauwers and the Weser joined the Saxons in revolt and were finally subdued. The war ended with Widukind accepting baptism. The Frisians afterwards asked for missionaries to be sent to them and a bishop of their own nation, Ludger, was sent. Charlemagne also promulgated a law code, the Lex Frisonum, as he did for most subject peoples.
Thereafter, the Saxons maintained the peace for seven years, but in 792, Westphalia again rebelled. The Eastphalians and Nordalbingians joined them in 793, but the insurrection was unpopular and was put down by 794. An Engrian rebellion followed in 796, but the presence of Charlemagne, Christian Saxons and Slavs quickly crushed it. The last insurrection occurred in 804, more than thirty years after Charlemagne's first campaign against them, but also failed. According to Einhard:
The war that had lasted so many years was at length ended by their acceding to the terms offered by the King; which were renunciation of their national religious customs and the worship of devils, acceptance of the sacraments of the Christian faith and religion, and union with the Franks to form one people.
By 774, Charlemagne had invaded the Kingdom of Lombardy, and he later annexed the Lombard territories and assumed the Lombard crown, placing the Papal States under Frankish protection. The Duchy of Spoleto south of Rome was acquired in 774, while in the central western parts of Europe the Duchy of Bavaria was absorbed and the Bavarian policy of establishing tributary marches (borders protected in return for tribute or taxes) among the Slavic Sorbs and Czechs was continued. The remaining power confronting the Franks in the east was the Avars. However, Charlemagne acquired other Slavic areas, including Bohemia, Moravia, Austria and Croatia.
In 789, Charlemagne turned to Bavaria. He claimed that Tassilo III, Duke of Bavaria, was an unfit ruler due to his oath-breaking. The charges were exaggerated, but Tassilo was deposed anyway and put in the monastery of Jumièges. In 794, Tassilo was made to renounce any claim to Bavaria for himself and his family (the Agilolfings) at the synod of Frankfurt; he formally handed over to the king all of the rights he had held. Bavaria was subdivided into Frankish counties, as had been done with Saxony.
In 788, the Avars, an Asian nomadic group that had settled down in what is today Hungary (Einhard called them Huns), invaded Friuli and Bavaria. Charlemagne was preoccupied with other matters until 790 when he marched down the Danube and ravaged Avar territory to the Győr. A Lombard army under Pippin then marched into the Drava valley and ravaged Pannonia. The campaigns ended when the Saxons revolted again in 792.
For the next two years, Charlemagne was occupied, along with the Slavs, against the Saxons. Pippin and Duke Eric of Friuli continued, however, to assault the Avars' ring-shaped strongholds. The great Ring of the Avars, their capital fortress, was taken twice. The booty was sent to Charlemagne at his capital, Aachen, and redistributed to his followers and to foreign rulers, including King Offa of Mercia. Soon the Avar tuduns had lost the will to fight and travelled to Aachen to become vassals to Charlemagne and to become Christians. Charlemagne accepted their surrender and sent one native chief, baptised Abraham, back to Avaria with the ancient title of khagan. Abraham kept his people in line, but in 800, the Bulgarians under Khan Krum attacked the remains of the Avar state.
In 803, Charlemagne sent a Bavarian army into Pannonia, defeating and bringing an end to the Avar confederation.
In November of the same year, Charlemagne went to Regensburg where the Avar leaders acknowledged him as their ruler. In 805, the Avar khagan, who had already been baptised, went to Aachen to ask permission to settle with his people south-eastward from Vienna. The Transdanubian territories became integral parts of the Frankish realm and remained so until the Magyar conquest of 899–900.
In 789, in recognition of his new pagan neighbours, the Slavs, Charlemagne marched an Austrasian-Saxon army across the Elbe into Obotrite territory. The Slavs ultimately submitted, led by their leader Witzin. Charlemagne then accepted the surrender of the Veleti under Dragovit and demanded many hostages. He also demanded permission to send missionaries into this pagan region unmolested. The army marched to the Baltic before turning around and marching to the Rhine, winning much booty with no harassment. The tributary Slavs became loyal allies. In 795, when the Saxons broke the peace, the Abotrites and Veleti rebelled with their new ruler against the Saxons. Witzin died in battle and Charlemagne avenged him by harrying the Eastphalians on the Elbe. Thrasuco, his successor, led his men to conquest over the Nordalbingians and handed their leaders over to Charlemagne, who honoured him. The Abotrites remained loyal until Charles' death and fought later against the Danes.
When Charlemagne incorporated much of Central Europe, he brought the Frankish state face to face with the Avars and Slavs in the southeast. The Franks' southeasternmost neighbours were the Croats, who had settled in Lower Pannonia and the Duchy of Croatia. While fighting the Avars, the Franks had called for Croat support, and in 796 Charlemagne won a major victory over the Avars. Duke Vojnomir of Lower Pannonia aided Charlemagne, and the Franks made themselves overlords over the Croats of northern Dalmatia, Slavonia and Pannonia.
The Frankish commander Eric of Friuli wanted to extend his dominion by conquering the Littoral Croat Duchy. During that time, Dalmatian Croatia was ruled by Duke Višeslav of Croatia. In the Battle of Trsat, the forces of Eric fled their positions and were routed by the forces of Višeslav. Eric was among those killed, which was a great blow for the Carolingian Empire.
Charlemagne also directed his attention to the Slavs to the west of the Avar khaganate: the Carantanians and Carniolans. These people were subdued by the Lombards and Bavarii and made tributaries, but were never fully incorporated into the Frankish state.
In April 799, Pope Leo III, who had faced difficulties since his accession in 795, was attacked in Rome and accused of various crimes by political enemies. Leo escaped and fled north to seek Charlemagne's help. Charlemagne continued his campaign against the Saxons before breaking off to meet Leo at Paderborn in September. Charlemagne, hearing evidence from both the Pope and his enemies, sent Leo back to Rome along with royal legates, who had instructions to reinstate the Pope and investigate the matter further. It was not until August of the next year that Charlemagne himself made plans to go to Rome, after an extensive tour of his lands in Neustria. Charlemagne met Leo in November near Mentana, at the twelfth milestone outside Rome, the traditional location where Roman emperors began their formal entry to the city. In Rome, Leo stood trial before the king and swore his innocence of all charges made against him. On 25 December 800, at mass on Christmas Day, Leo acclaimed Charlemagne as emperor and crowned him. In doing so, Charlemagne became the first reigning emperor in the west since the deposition of Romulus Augustulus in 476. His son Charles the Younger was anointed as king by Leo at the same time.
Historians differ as to the intentions behind the imperial coronation, the extent to which Charlemagne was aware of it or participated in its planning, and the significance of the events to those present and to Charlemagne's reign. Contemporary Frankish and papal sources differ in their emphasis and representation of events. The insistence of Charlemagne's 9th-century biographer Einhard that Charlemagne would not have entered the church had he known of the Pope's plan has variously been taken as truthful or as a "literary device" used as a sign of Charlemagne's humility. Roger Collins argues that the actions surrounding the coronation indicate that it was planned by Charlemagne as early as his meeting with Leo in 799, and Johannes Fried argues Charlemagne planned to adopt the title of emperor by 798 "at the latest." In the years before the coronation, Charlemagne's courtier Alcuin had referred to Charlemagne's realm as an Imperium Christianum ("Christian Empire"), wherein, "just as the inhabitants of the Roman Empire had been united by a common Roman citizenship", presumably this new empire would be united by a common Christian faith. This is the view of Henri Pirenne when he says "Charles was the Emperor of the ecclesia as the Pope conceived it, of the Roman Church, regarded as the universal Church".
For both Leo and Charlemagne, the Roman Empire remained a significant contemporary power in European politics, especially in Italy. The Byzantines continued to hold a substantial portion of Italy, with borders not far south of Rome. In sitting in judgment of the Pope, Charlemagne could have been seen as usurping the prerogatives of the emperor in Constantinople. One of the earliest narrative sources, the Annals of Lorsch, presents the position that the presence of Empress Irene, a woman, on the throne left the imperial title effectively vacant, a vacancy that Leo and Charlemagne could therefore fill. Pirenne disputes this, saying that the coronation "was not in any sense explained by the fact that at this moment a woman was reigning in Constantinople." Leo's main motivations may have been the desire to increase his own standing after his political difficulties, showing himself as a king-maker and securing Charlemagne as his powerful ally and protector. The Byzantine Empire's inability to influence events in Italy and support the papacy was also important in Leo's position. The act of Leo crowning Charlemagne can also be viewed as showing the Pope's spiritual power over Charlemagne as a temporal ruler. The Royal Frankish Annals, on the other hand, record Leo prostrating himself before Charlemagne after crowning him, an act of submission standard in Roman coronation rituals from the time of Diocletian. This account represents Leo, rather than being the superior of Charlemagne, as merely acting as an agent of the Roman people in recognizing their acclamation of Charlemagne as emperor.
Henry Mayr-Harting argues that the assumption of the imperial title by Charlemagne was an effort to incorporate the Saxons into the Frankish realm, as they did not have a native tradition of kingship. However, Costambeys, Innes, and MacLean note in The Carolingian World that "since Saxony had not been in the Roman empire it is hard to see on what basis an emperor would have been any more welcomed." These authors argue that the decision to take the title of emperor was more aimed at furthering Charlemagne's influence in Italy, as an appeal to traditional authority recognized by Italian elites both within and especially outside his current control.
Collins concurs that becoming emperor gave Charlemagne "the right to try to impose his rule over the whole of [Italy]", and regards this as a motivator for the coronation. He also notes the "element of political and military risk" inherent in the affair, due to the opposition of the Byzantine Empire as well as potential opposition from the Frankish elite as the imperial title could draw him further into Mediterranean politics. Collins sees several actions of Charlemagne as attempts to ensure his new title was cast in a distinctly Frankish context.
Charlemagne's coronation led to a centuries-long ideological conflict between his successors and Constantinople, termed the problem of two emperors, as it could be seen as a repudiation of the Byzantines' singular claim to the imperial title as preeminent among Christian rulers. Charlemagne may have had a more limited view of his role, seeing the title as simply representing dominion over the lands he already ruled. Still, the title of emperor gave Charlemagne enhanced prestige and ideological authority. He immediately incorporated his new title into documents issued, adopting the formula Charles, most serene Augustus, crowned by God, great peaceful emperor governing the Roman empire, and who is by the mercy of God king of the Franks and the Lombards as opposed to the earlier form Charles, by the grace of God king of the Franks and Lombards and patrician of the Romans. The avoidance of the specific claim of being a "Roman emperor" as opposed to the more neutral "emperor governing the Roman empire" may have been to improve relations with the Byzantines. This phrasing, alongside the continuation of his earlier royal titles, may also represent a view of his role as emperor as merely being the ruler of the people of the city of Rome, just as he was of the Franks and the Lombards.
Charlemagne left Italy in the summer of 801 after judging several ecclesiastical disputes in Rome and making further stops in Ravenna, Pavia, and Bologna. He would not return to Rome again. Although the trends of his later reign began in the 790s, the period of Charlemagne's reign from 801 onward marks a "distinct phase" characterised by a more stationary rule from the palace at Aachen. Expansion of the realm largely ended, marked by the establishment of marches to defend the empire's frontiers. While there continued to be conflict until the end of Charlemagne's reign, the relative peace of the imperial period saw an increased focus on internal governance through the issuing of laws and capitularies.
Charlemagne did not campaign in either 802 or 803. The Capitulare missorum generale, issued in 802 and called the "programmatic capitulary", was an expansive piece of legislation, with provisions governing the conduct of royal officials and requiring a loyalty oath to the emperor to be taken by all free men under his rule. The capitulary reformed the institution of the missi dominici, officials who would now be assigned in pairs (a cleric and a lay aristocrat) to administer justice and oversee governance in defined territories. The emperor also ordered revisions of the Lombard and Frankish law codes.
In addition to the missi, Charlemagne also ruled the empire through his sons as sub-kings. Pepin and Louis had been appointed kings of Italy and Aquitaine respectively in 781, though both were children at the time, with regents governing during their minority. Though both had some devolved authority as kings in adulthood, Charlemagne still had ultimate authority and intervened in matters directly. Charles, the eldest son, had been given rule over realms in Neustria in 789 or 790, and had been made a king in 800.
The 806 charter Divisio Regnorum ("division of the realm") set the terms of succession of the empire in the event of Charlemagne's death. Charles, as eldest son, was given the largest share of the inheritance, with rule of Francia proper along with Saxony, Nordgau, and parts of Alemannia. The two younger sons were confirmed in their kingdoms and gained additional territories, with most of Bavaria and Alemannia given to Pepin and Provence, Septimania, and parts of Burgundy to Louis. Charlemagne did not address the inheritance of the imperial title. The Divisio also addressed the possible death of any of the brothers, and urged peace between them and between any of their nephews who might inherit.
The iconoclasm of the Byzantine Isaurian Dynasty was endorsed by the Franks. The Second Council of Nicaea reintroduced the veneration of icons under Empress Irene. The council was not recognised by Charlemagne since no Frankish emissaries had been invited, even though Charlemagne ruled more than three provinces of the classical Roman empire and was considered equal in rank to the Byzantine emperor. And while the Pope supported the reintroduction of the veneration of icons, he distanced himself politically from Byzantium. He certainly desired to increase the influence of the papacy, to honour his saviour Charlemagne, and to solve the constitutional issues then most troubling to European jurists in an era when Rome was not in the hands of an emperor. Thus, Charlemagne's assumption of the imperial title was not a usurpation in the eyes of the Franks or Italians. It was, however, seen as such in Byzantium, where it was protested by Irene and her successor Nikephoros I—neither of whom had any great effect in enforcing their protests.
The East Romans, however, still held several territories in Italy: Venice (what was left of the Exarchate of Ravenna), Reggio (in Calabria), Otranto (in Apulia), and Naples (the Ducatus Neapolitanus). These regions remained outside of Frankish hands until 804, when the Venetians, torn by infighting, transferred their allegiance to the Iron Crown of Pippin, Charles' son. The Pax Nicephori ended. Nicephorus ravaged the coasts with a fleet, initiating the only instance of war between the Byzantines and the Franks. The conflict lasted until 810 when the pro-Byzantine party in Venice gave their city back to the Byzantine Emperor, and the two emperors of Europe made peace: Charlemagne received the Istrian peninsula and in 812 the emperor Michael I Rangabe recognised his status as Emperor, although not necessarily as "Emperor of the Romans".
After the conquest of Nordalbingia, the Frankish frontier was brought into contact with Scandinavia. The pagan Danes, "a race almost unknown to his ancestors, but destined to be only too well known to his sons" as Charles Oman described them, inhabiting the Jutland peninsula, had heard many stories from Widukind and his allies who had taken refuge with them about the dangers of the Franks and the fury which their Christian king could direct against pagan neighbours.
In 808, the king of the Danes, Godfred, expanded the vast Danevirke across the isthmus of Schleswig. This defence, last employed in the Danish-Prussian War of 1864, was at its beginning a 30 km (19 mi) long earthwork rampart. The Danevirke protected Danish land and gave Godfred the opportunity to harass Frisia and Flanders with pirate raids. He also subdued the Frank-allied Veleti and fought the Abotrites.
Godfred invaded Frisia, joked of visiting Aachen, but was murdered before he could do any more, either by a Frankish assassin or by one of his own men. Godfred was succeeded by his nephew Hemming, who concluded the Treaty of Heiligen with Charlemagne in late 811.
In 813, Charlemagne called Louis the Pious, king of Aquitaine, his only surviving legitimate son, to his court. There Charlemagne crowned his son as co-emperor and sent him back to Aquitaine. He then spent the autumn hunting before returning to Aachen on 1 November. In January, he fell ill with pleurisy. In deep depression (mostly because many of his plans were not yet realised), he took to his bed on 21 January and as Einhard tells it:
He died January twenty-eighth, the seventh day from the time that he took to his bed, at nine o'clock in the morning, after partaking of the Holy Communion, in the seventy-second year of his age and the forty-seventh of his reign.
He was buried that same day, in Aachen Cathedral. The earliest surviving planctus, the Planctus de obitu Karoli, was composed by a monk of Bobbio, which he had patronised. A later story, told by Otho of Lomello, Count of the Palace at Aachen in the time of Emperor Otto III, would claim that he and Otto had discovered Charlemagne's tomb: Charlemagne, they claimed, was seated upon a throne, wearing a crown and holding a sceptre, his flesh almost entirely incorrupt. In 1165, Emperor Frederick I reopened the tomb and placed the emperor in a sarcophagus beneath the floor of the cathedral. In 1215 Emperor Frederick II re-interred him in a casket made of gold and silver known as the Karlsschrein.
Charlemagne's death emotionally affected many of his subjects, particularly those of the literary clique who had surrounded him at Aachen. An anonymous monk of Bobbio lamented:
From the lands where the sun rises to western shores, people are crying and wailing ... the Franks, the Romans, all Christians, are stung with mourning and great worry ... the young and old, glorious nobles, all lament the loss of their Caesar ... the world laments the death of Charles ... O Christ, you who govern the heavenly host, grant a peaceful place to Charles in your kingdom. Alas for miserable me.
Louis succeeded him as Charles had intended. Charlemagne had left a testament in 811 allocating his assets, which was not updated prior to his death. He left most of his wealth to the Church, to be used for charity. His empire lasted only another generation in its entirety; its division, according to custom, between Louis's own sons after their father's death laid the foundation for the modern states of Germany and France.
The Carolingian king exercised the bannum, the right to rule and command. Under the Franks, it was a royal prerogative but could be delegated. He had supreme jurisdiction in judicial matters, made legislation, led the army, and protected both the Church and the poor. His administration was an attempt to organise the kingdom, church and nobility around him. As an administrator, Charlemagne stands out for his many reforms: monetary, governmental, military, cultural and ecclesiastical. He is the main protagonist of the "Carolingian Renaissance".
Charlemagne's success rested primarily on novel siege technologies and excellent logistics rather than the long-claimed "cavalry revolution" led by Charles Martel in the 730s. However, the stirrup, which made the "shock cavalry" lance charge possible, was not introduced to the Frankish kingdom until the late eighth century.
Horses were used extensively by the Frankish military because they provided a quick, long-distance method of transporting troops, which was critical to building and maintaining the large empire.
Charlemagne had an important role in determining Europe's immediate economic future. Pursuing his father's reforms, he abolished the monetary system based on the gold sou. Instead, he and the Anglo-Saxon King Offa of Mercia took up Pepin's system for pragmatic reasons, notably a shortage of gold.
The gold shortage was a direct consequence of the conclusion of peace with Byzantium, which resulted in ceding Venice and Sicily to the East and losing their trade routes to Africa. The resulting standardisation economically harmonised and unified the complex array of currencies that had been in use at the commencement of his reign, thus simplifying trade and commerce.
Charlemagne established a new standard, the livre carolinienne (from the Latin libra, the modern pound), which was based upon a pound of silver—a unit of both money and weight—worth 20 sous (from the Latin solidus [which was primarily an accounting device and never actually minted], the modern shilling) or 240 deniers (from the Latin denarius, the modern penny). During this period, the livre and the sou were counting units; only the denier was a coin of the realm.
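As a purely illustrative aside, the arithmetic of this accounting system can be sketched in a few lines of Python; the function names and the example sum below are hypothetical and not drawn from the historical record, and only the rates (1 livre = 20 sous, 1 sou = 12 deniers) come from the description above.

# Carolingian money of account: 1 livre = 20 sous = 240 deniers; only the denier was minted.
SOUS_PER_LIVRE = 20
DENIERS_PER_SOU = 12
DENIERS_PER_LIVRE = SOUS_PER_LIVRE * DENIERS_PER_SOU  # 240

def to_deniers(livres: int, sous: int = 0, deniers: int = 0) -> int:
    """Express a sum of account entirely in deniers, the coin actually struck."""
    return livres * DENIERS_PER_LIVRE + sous * DENIERS_PER_SOU + deniers

def to_account_units(deniers: int) -> tuple[int, int, int]:
    """Break a quantity of deniers back into livres, sous and deniers of account."""
    livres, remainder = divmod(deniers, DENIERS_PER_LIVRE)
    sous, deniers = divmod(remainder, DENIERS_PER_SOU)
    return livres, sous, deniers

# Hypothetical example: 3 livres, 5 sous and 7 deniers comes to 787 deniers.
assert to_deniers(3, 5, 7) == 787
assert to_account_units(787) == (3, 5, 7)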
Charlemagne instituted principles for accounting practice by means of the Capitulare de villis of 802, which laid down strict rules for the way in which incomes and expenses were to be recorded.
Charlemagne applied this system to much of the European continent, and Offa's standard was voluntarily adopted by much of England. After Charlemagne's death, continental coinage degraded, and most of Europe resorted to using the continued high-quality English coin until about 1100.
Early in Charlemagne's rule he tacitly allowed Jews to monopolise money lending. He invited Italian Jews to immigrate, as royal clients independent of the feudal landowners, and form trading communities in the agricultural regions of Provence and the Rhineland. Their trading activities augmented the otherwise almost exclusively agricultural economies of these regions. His personal physician was Jewish, and he employed a Jew named Isaac as his personal representative to the Muslim caliphate of Baghdad.
Part of Charlemagne's success as a warrior, an administrator and ruler can be traced to his admiration for learning and education. His reign is often referred to as the Carolingian Renaissance because of the flowering of scholarship, literature, art and architecture that characterise it. Charlemagne came into contact with the culture and learning of other countries (especially Moorish Spain, Anglo-Saxon England, and Lombard Italy) due to his vast conquests. He greatly increased the provision of monastic schools and scriptoria (centres for book-copying) in Francia.
Charlemagne was a lover of books, sometimes having them read to him during meals. He was thought to enjoy the works of Augustine of Hippo. His court played a key role in producing books that taught elementary Latin and different aspects of the church. It also played a part in creating a royal library that contained in-depth works on language and Christian faith.
Charlemagne encouraged clerics to translate Christian creeds and prayers into their respective vernaculars, as well as to teach grammar and music. Due to the increased interest in intellectual pursuits and the urging of their king, the monks accomplished so much copying that almost every manuscript from that time was preserved. At the same time, scholars were producing more secular books on many subjects, including history, poetry, art, music, law and theology. The increased number of titles allowed private libraries to flourish; these were mainly supported by aristocrats and churchmen who could afford to sustain them. At Charlemagne's court a library was founded, and a number of copies of books were produced to be distributed by Charlemagne. Books were produced slowly by hand, mainly in large monastic libraries. They were in such demand during Charlemagne's time that these libraries lent some out, but only if the borrower offered valuable collateral in return.
Most of the surviving works of classical Latin were copied and preserved by Carolingian scholars. Indeed, the earliest manuscripts available for many ancient texts are Carolingian. It is almost certain that a text which survived to the Carolingian age survives still.
The pan-European nature of Charlemagne's influence is indicated by the origins of many of the men who worked for him: Alcuin, an Anglo-Saxon from York; Theodulf, a Visigoth, probably from Septimania; Paul the Deacon, a Lombard; the Italians Peter of Pisa and Paulinus of Aquileia; and the Franks Angilbert, Angilram, Einhard and Waldo of Reichenau.
Charlemagne promoted the liberal arts at court, ordering that his children and grandchildren be well-educated, and even studying himself (in a time when even leaders who promoted education did not take time to learn themselves) under the tutelage of Peter of Pisa, from whom he learned grammar; Alcuin, with whom he studied rhetoric, dialectic (logic), and astronomy (he was particularly interested in the movements of the stars); and Einhard, who tutored him in arithmetic.
His great scholarly failure, as Einhard relates, was his inability to write: when in his old age he attempted to learn, practising the formation of letters in his bed during his free time on books and wax tablets he hid under his pillow, "his effort came too late in life and achieved little success". His ability to read, about which Einhard is silent and which no contemporary source supports, has also been called into question.
In 800, Charlemagne enlarged the hostel at the Muristan in Jerusalem and added a library to it. He certainly never visited Jerusalem in person.
Unlike his father, Pepin, and his uncle, Carloman, Charlemagne expanded the Church's reform programme. The deepening of the spiritual life was later seen as central to public policy and royal governance. His reform focused on strengthening the Church's power structure, improving the clergy's skill and moral quality, standardising liturgical practices, reinforcing the basic tenets of the faith and rooting out paganism. His authority extended over church and state: he could discipline clerics, control ecclesiastical property and define orthodox doctrine. Despite the harsh legislation and sudden change, he developed support from clergy who approved of his desire to deepen the piety and morals of his subjects.
In 809–810, Charlemagne called a church council in Aachen, which confirmed the unanimous belief in the West that the Holy Spirit proceeds from the Father and the Son (ex Patre Filioque) and sanctioned inclusion in the Nicene Creed of the phrase Filioque (and the Son). For this Charlemagne sought the approval of Pope Leo III. The Pope, while affirming the doctrine and approving its use in teaching, opposed its inclusion in the text of the Creed as adopted in the 381 First Council of Constantinople. This spoke of the procession of the Holy Spirit from the Father, without adding phrases such as "and the Son", "through the Son", or "alone". Stressing his opposition, the Pope had the original text inscribed in Greek and Latin on two heavy shields that were displayed in Saint Peter's Basilica.
During Charles' reign, the Roman half uncial script and its cursive version, which had given rise to various continental minuscule scripts, were combined with features from the insular scripts in use in Irish and English monasteries. Carolingian minuscule was created partly under the patronage of Charlemagne. Alcuin, who ran the palace school and scriptorium at Aachen, was probably a chief influence.
The revolutionary character of the Carolingian reform, however, can be overemphasised; efforts at taming Merovingian and Germanic influence had been underway before Alcuin arrived at Aachen. The new minuscule was disseminated first from Aachen and later from the influential scriptorium at Tours, where Alcuin retired as an abbot.
Einhard tells in his twenty-fourth chapter:
Charles was temperate in eating, and particularly so in drinking, for he abominated drunkenness in anybody, much more in himself and those of his household; but he could not easily abstain from food, and often complained that fasts injured his health. He very rarely gave entertainments, only on great feast-days, and then to large numbers of people. His meals ordinarily consisted of four courses, not counting the roast, which his huntsmen used to bring in on the spit; he was more fond of this than of any other dish. While at table, he listened to reading or music. The subjects of the readings were the stories and deeds of olden time: he was fond, too, of St. Augustine's books, and especially of the one titled "The City of God".
Charlemagne threw grand banquets and feasts for special occasions such as religious holidays and four of his weddings. When he was not working, he loved Christian books, horseback riding, swimming, bathing in natural hot springs with his friends and family, and hunting. The Franks were well known for their horsemanship and hunting skills. Charles was a light sleeper and, after restless nights, would sometimes stay in his bedchamber for entire days at a time. On such days he would not get out of bed when a quarrel occurred in his kingdom, instead summoning all parties to the dispute into his bedroom to be given orders. Einhard tells again in the twenty-fourth chapter: "In summer after the midday meal, he would eat some fruit, drain a single cup, put off his clothes and shoes, just as he did for the night, and rest for two or three hours. He was in the habit of awaking and rising from bed four or five times during the night."
Einhard speaks of Charlemagne's patrius sermo, his "father" or "native tongue". Most scholars have identified this as a form of Old High German, probably a Rhenish Franconian dialect. Einhard wrote from his experiences at Charlemagne's court from the 790s onward. Due to the prevalence in Francia of the "rustic Roman" language that was rapidly developing into Old French, Charlemagne was probably functionally bilingual in Germanic and Romance dialects from a young age. He also spoke Latin, and according to Einhard could understand and perhaps speak some Greek. Fried considers it likely that Charlemagne would have been literate, though Einhard recorded that he only attempted to learn to write later in life.
The largely fictional account of Charlemagne's Iberian campaigns by Pseudo-Turpin, written some three centuries after his death, gave rise to a legend that the king also spoke Arabic.
Charlemagne's personal appearance is known from a good description given by Einhard after the emperor's death in his biography Vita Karoli Magni. Einhard states:
He was heavily built, sturdy, and of considerable stature, although not exceptionally so, since his height was seven times the length of his own foot. He had a round head, large and lively eyes, a slightly larger nose than usual, white but still attractive hair, a bright and cheerful expression, a short and fat neck, and he enjoyed good health, except for the fevers that affected him in the last few years of his life. Towards the end, he dragged one leg. Even then, he stubbornly did what he wanted and refused to listen to doctors, indeed he detested them, because they wanted to persuade him to stop eating roast meat, as was his wont, and to be content with boiled meat.
The physical portrait provided by Einhard is confirmed by contemporary depictions such as coins and the 8-inch (20 cm) bronze statuette of him kept in the Louvre. In 1861, Charlemagne's tomb was opened by scientists, who reconstructed his skeleton and estimated it to measure 1.95 metres (6 ft 5 in). A 2010 estimate of his height from an X-ray and CT scan of his tibia was 1.84 metres (6 ft 0 in). This puts him in the 99th percentile of height for his period, given that the average male height of his time was 1.69 metres (5 ft 7 in). The width of the bone suggested he was slim in build.
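As a rough plausibility check only (the normal model and the standard deviation of about 6.5 cm are assumptions chosen for illustration, not figures from the cited studies), the percentile claim can be reproduced as follows:

    from statistics import NormalDist

    # Assumed height distribution for adult men of the period: the mean of
    # 169 cm comes from the text above, but the standard deviation is a
    # hypothetical value used only for illustration.
    heights_cm = NormalDist(mu=169, sigma=6.5)

    percentile = heights_cm.cdf(184) * 100
    print(f"about the {percentile:.0f}th percentile")  # prints roughly 99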
Charlemagne wore the traditional costume of the Frankish people, described by Einhard thus:
He used to wear the national, that is to say, the Frank, dress—next his skin a linen shirt and linen breeches, and above these a tunic fringed with silk; while hose fastened by bands covered his lower limbs, and shoes his feet, and he protected his shoulders and chest in winter by a close-fitting coat of otter or marten skins.
He wore a blue cloak and always carried a sword, typically with a gold or silver hilt. He wore intricately jewelled swords to banquets and ambassadorial receptions. Nevertheless:
He despised foreign costumes, however handsome, and never allowed himself to be robed in them, except twice in Rome, when he donned the Roman tunic, chlamys, and shoes; the first time at the request of Pope Hadrian, the second to gratify Leo, Hadrian's successor.
On great feast days, he wore embroidery and jewels on his clothing and shoes. He had a golden buckle for his cloak on such occasions and would appear with his great diadem, but, according to Einhard, he despised such apparel and usually dressed like the common people.
Charlemagne had residences across his kingdom, including numerous private estates that were governed in accordance with the Capitulare de villis. A 9th-century document detailing the inventory of an estate at Asnapium listed amounts of livestock, plants and vegetables and kitchenware including cauldrons, drinking cups, brass kettles and firewood. The manor contained seventeen houses built inside the courtyard for nobles and family members and was separated from its supporting villas.
Charlemagne had eighteen children with seven of his ten known wives or concubines. Nonetheless, he had only four legitimate grandsons, the four sons of his fourth son, Louis. In addition, he had a grandson (Bernard of Italy, the only son of his third son, Pepin of Italy) who was illegitimate but included in the line of inheritance. Among his descendants are several royal dynasties, including the Habsburg and Capetian dynasties. As a consequence, most if not all established European noble families can trace some part of their ancestry back to Charlemagne.
During the first peace of any substantial length (780–782), Charles began to appoint his sons to positions of authority. In 781, during a visit to Rome, he made his two youngest sons kings, crowned by the Pope. The elder of these two, Carloman, was made the king of Italy, taking the Iron Crown that his father had first worn in 774, and in the same ceremony was renamed "Pepin" (not to be confused with Charlemagne's eldest, possibly illegitimate son, Pepin the Hunchback). The younger of the two, Louis, became King of Aquitaine. Charlemagne ordered Pepin and Louis to be raised in the customs of their kingdoms, and he gave their regents some control of their subkingdoms, but kept the real power, though he intended his sons to inherit their realms. He did not tolerate insubordination in his sons: in 792, he banished Pepin the Hunchback to Prüm Abbey because the young man had joined a rebellion against him.
Charles was determined to have his children educated, including his daughters, as his parents had instilled the importance of learning in him at an early age. His children were also taught skills in accord with their aristocratic status, which included training in riding and weaponry for his sons, and embroidery, spinning and weaving for his daughters.
The sons fought many wars on behalf of their father. Charles the Younger was mostly preoccupied with the Bretons, whose border he shared and who rose in revolt on at least two occasions before being easily put down. He also fought the Saxons on multiple occasions. In 805 and 806, he was sent into the Böhmerwald (modern Bohemia) to deal with the Slavs living there (Bohemian tribes, ancestors of the modern Czechs). He subjected them to Frankish authority and devastated the valley of the Elbe, forcing tribute from them. Pepin had to hold the Avar and Beneventan borders and fought the Slavs to his north. He was uniquely poised to fight the Byzantine Empire when that conflict arose after Charlemagne's imperial coronation and a Venetian rebellion. Finally, Louis was in charge of the Spanish March and fought the Duke of Benevento in southern Italy on at least one occasion. He took Barcelona in a great siege in 801.
Charlemagne kept his daughters at home with him and refused to allow them to contract sacramental marriages (though he originally condoned an engagement between his eldest daughter Rotrude and Constantine VI of Byzantium, this engagement was annulled when Rotrude was 11). Charlemagne's opposition to his daughters' marriages may have been intended to prevent the creation of cadet branches of the family that could challenge the main line, as had been the case with Tassilo of Bavaria. However, he tolerated their extramarital relationships, even rewarding their common-law husbands and treasuring the illegitimate grandchildren they produced for him. He also refused to believe stories of their wild behaviour. After his death the surviving daughters were banished from the court by their brother, the pious Louis, to take up residence in the convents they had been bequeathed by their father. At least one of them, Bertha, had a recognised relationship, if not a marriage, with Angilbert, a member of Charlemagne's court circle.
The stability and peace of Charlemagne's reign would not long outlast him. Louis's reign was marked by strife, including multiple rebellions by his own sons. Following Louis's death, the empire was divided between West, East, and Middle Francia. Middle Francia saw several more divisions over subsequent generations. Carolingians would rule, with some interruptions, in East Francia until 911 and in West Francia (which would become France) until 987. After 887, the imperial title was held sporadically by a series of non-dynastic Italian rulers before lapsing in 924. The East Frankish king Otto the Great conquered Italy and was crowned emperor in 962. The Holy Roman Empire founded by Otto would last until its dissolution in 1806.
Charlemagne served as a model for medieval rulership "at least until the final end of empire in the West in the early nineteenth century." Charlemagne is often given the epithet "the father of Europe" because of the influence of his reign, and the legacy he left across the large area of the continent he ruled. The political structures Charlemagne established remained in place through his Carolingian successors, and continued to have influence into the eleventh century. During his reign, groundwork was laid for the process of concentration of power in military aristocrats that would characterize the later Middle Ages.
Despite the end of the ruling Carolingian lines, Charlemagne is considered a direct ancestor of European ruling houses, including the Capetian dynasty, the Ottonian dynasty, the House of Luxembourg, the House of Ivrea and the House of Habsburg. The Ottonians and Capetians, direct successors of the Carolingians, drew on the legacy of Charlemagne to bolster their legitimacy and prestige. The Ottonians and later emperors would continue to hold their German coronations at Aachen through the Middle Ages. The marriage of Philip II of France to Isabella of Hainault, a direct descendant of Charlemagne, was seen as a sign of increased legitimacy for their son Louis VIII, and association with Charlemagne by French kings continued until the monarchy's end. Frederick Barbarossa, Charles V, and Napoleon all directly cited the influence of, and associated themselves with, Charlemagne.
The city of Aachen has, since 1949, awarded an international prize (called the Karlspreis der Stadt Aachen) in honour of Charlemagne. It is awarded annually to those who have promoted the idea of European unity. Winners of the prize include Richard von Coudenhove-Kalergi, the founder of the pan-European movement, Alcide De Gasperi, and Winston Churchill.
Charlemagne was a frequent subject of and inspiration for medieval writers after his death. Einhard's Vita Karoli Magni "can be said to have revived the defunct literary genre of the secular biography." Einhard drew on classical sources such as Suetonius' De vita Caesarum, the orations of Cicero, and Tacitus' Agricola to frame the structure and style of his work. The Carolingian period also saw a revival of the mirrors for princes genre. The author of the Visio Karoli Magni, written around 865, uses facts apparently gathered from Einhard, together with his own observations on the decline of Charlemagne's family after the civil war of 840–843, as the basis for a visionary tale of Charles' meeting with a prophetic spectre in a dream. Notker's Gesta Karoli Magni, written for Charlemagne's great-grandson Charles the Fat, presents moral anecdotes to highlight the emperor's qualities as a ruler.
Charlemagne was depicted as one of the Nine Worthies, becoming a fixture in medieval literature and art as an exemplar of a Christian king. He is the main figure of the medieval literary cycle known as the Matter of France. Works of this cycle, which originated during the period of the Crusades, centre on depictions of the emperor as a leader of Christian knights in wars against Muslims. The cycle includes chansons de geste (epic poems) such as The Song of Roland, and chronicles such as the Historia Caroli Magni. Geoffrey of Monmouth's legends of King Arthur and his knights may have drawn on the legendary depiction of Charlemagne and his knights as a source and archetype.
In the Divine Comedy, the spirit of Charlemagne appears to Dante in the Heaven of Mars, among the other "warriors of the faith".
Emperor Otto III attempted to have Charlemagne canonized as a saint in 1000. In 1165, Frederick Barbarossa convinced the Antipope Paschal III to elevate him to sainthood. As Paschal's acts were not considered valid, Charlemagne was not recognized as a saint by the Holy See in Rome. He is not enumerated among the 28 saints named "Charles" in the Roman Martyrology. Despite this lack of recognition, Charlemagne's cult came to be observed in Aachen, Reims, Frankfurt am Main, Zurich, and Regensburg, and he has been venerated in France since the reign of Charles V. Pope Benedict XIV recognized his cult, beatifying him, in the eighteenth century. Benedict also quoted Charlemagne's capitularies in his apostolic constitution 'Providas' against freemasonry: "For in no way are we able to understand how they can be faithful to us, who have shown themselves unfaithful to God and disobedient to their Priests".
"title": "Reign as emperor"
},
{
"paragraph_id": 56,
"text": "The 806 charter Divisio Regnorum (\"division of the realm\"), set the terms of succession of the empire in the event of Charlemagne's death. Charles, as eldest son, was given the largest share of the inheritance, with rule of Francia proper along with Saxony, Nordgau, and parts of Alemannia. The two younger sons were confirmed in their kingdoms and gained additional territories, with most of Bavaria and Alemmannia given to Pepin and Provence, Septimania, and parts of Burgundy to Louis. Charlemagne did not address the inheritance of the imperial title. The Divisio also addressed the event of any of the brothers, and urged peace between them and between any of their nephews who might inherit.",
"title": "Reign as emperor"
},
{
"paragraph_id": 57,
"text": "The iconoclasm of the Byzantine Isaurian Dynasty was endorsed by the Franks. The Second Council of Nicaea reintroduced the veneration of icons under Empress Irene. The council was not recognised by Charlemagne since no Frankish emissaries had been invited, even though Charlemagne ruled more than three provinces of the classical Roman empire and was considered equal in rank to the Byzantine emperor. And while the Pope supported the reintroduction of the iconic veneration, he politically digressed from Byzantium. He certainly desired to increase the influence of the papacy, to honour his saviour Charlemagne, and to solve the constitutional issues then most troubling to European jurists in an era when Rome was not in the hands of an emperor. Thus, Charlemagne's assumption of the imperial title was not a usurpation in the eyes of the Franks or Italians. It was, however, seen as such in Byzantium, where it was protested by Irene and her successor Nikephoros I—neither of whom had any great effect in enforcing their protests.",
"title": "Reign as emperor"
},
{
"paragraph_id": 58,
"text": "The East Romans, however, still held several territories in Italy: Venice (what was left of the Exarchate of Ravenna), Reggio (in Calabria), Otranto (in Apulia), and Naples (the Ducatus Neapolitanus). These regions remained outside of Frankish hands until 804, when the Venetians, torn by infighting, transferred their allegiance to the Iron Crown of Pippin, Charles' son. The Pax Nicephori ended. Nicephorus ravaged the coasts with a fleet, initiating the only instance of war between the Byzantines and the Franks. The conflict lasted until 810 when the pro-Byzantine party in Venice gave their city back to the Byzantine Emperor, and the two emperors of Europe made peace: Charlemagne received the Istrian peninsula and in 812 the emperor Michael I Rangabe recognised his status as Emperor, although not necessarily as \"Emperor of the Romans\".",
"title": "Reign as emperor"
},
{
"paragraph_id": 59,
"text": "After the conquest of Nordalbingia, the Frankish frontier was brought into contact with Scandinavia. The pagan Danes, \"a race almost unknown to his ancestors, but destined to be only too well known to his sons\" as Charles Oman described them, inhabiting the Jutland peninsula, had heard many stories from Widukind and his allies who had taken refuge with them about the dangers of the Franks and the fury which their Christian king could direct against pagan neighbours.",
"title": "Reign as emperor"
},
{
"paragraph_id": 60,
"text": "In 808, the king of the Danes, Godfred, expanded the vast Danevirke across the isthmus of Schleswig. This defence, last employed in the Danish-Prussian War of 1864, was at its beginning a 30 km (19 mi) long earthenwork rampart. The Danevirke protected Danish land and gave Godfred the opportunity to harass Frisia and Flanders with pirate raids. He also subdued the Frank-allied Veleti and fought the Abotrites.",
"title": "Reign as emperor"
},
{
"paragraph_id": 61,
"text": "Godfred invaded Frisia, joked of visiting Aachen, but was murdered before he could do any more, either by a Frankish assassin or by one of his own men. Godfred was succeeded by his nephew Hemming, who concluded the Treaty of Heiligen with Charlemagne in late 811.",
"title": "Reign as emperor"
},
{
"paragraph_id": 62,
"text": "In 813, Charlemagne called Louis the Pious, king of Aquitaine, his only surviving legitimate son, to his court. There Charlemagne crowned his son as co-emperor and sent him back to Aquitaine. He then spent the autumn hunting before returning to Aachen on 1 November. In January, he fell ill with pleurisy. In deep depression (mostly because many of his plans were not yet realised), he took to his bed on 21 January and as Einhard tells it:",
"title": "Reign as emperor"
},
{
"paragraph_id": 63,
"text": "He died January twenty-eighth, the seventh day from the time that he took to his bed, at nine o'clock in the morning, after partaking of the Holy Communion, in the seventy-second year of his age and the forty-seventh of his reign.",
"title": "Reign as emperor"
},
{
"paragraph_id": 64,
"text": "He was buried that same day, in Aachen Cathedral. The earliest surviving planctus, the Planctus de obitu Karoli, was composed by a monk of Bobbio, which he had patronised. A later story, told by Otho of Lomello, Count of the Palace at Aachen in the time of Emperor Otto III, would claim that he and Otto had discovered Charlemagne's tomb: Charlemagne, they claimed, was seated upon a throne, wearing a crown and holding a sceptre, his flesh almost entirely incorrupt. In 1165, Emperor Frederick I re-opened the tomb again and placed the emperor in a sarcophagus beneath the floor of the cathedral. In 1215 Emperor Frederick II re-interred him in a casket made of gold and silver known as the Karlsschrein.",
"title": "Reign as emperor"
},
{
"paragraph_id": 65,
"text": "Charlemagne's death emotionally affected many of his subjects, particularly those of the literary clique who had surrounded him at Aachen. An anonymous monk of Bobbio lamented:",
"title": "Reign as emperor"
},
{
"paragraph_id": 66,
"text": "From the lands where the sun rises to western shores, people are crying and wailing ... the Franks, the Romans, all Christians, are stung with mourning and great worry ... the young and old, glorious nobles, all lament the loss of their Caesar ... the world laments the death of Charles ... O Christ, you who govern the heavenly host, grant a peaceful place to Charles in your kingdom. Alas for miserable me.",
"title": "Reign as emperor"
},
{
"paragraph_id": 67,
"text": "Louis succeeded him as Charles had intended. He left a testament allocating his assets in 811 that was not updated prior to his death. He left most of his wealth to the Church, to be used for charity. His empire lasted only another generation in its entirety; its division, according to custom, between Louis's own sons after their father's death laid the foundation for the modern states of Germany and France.",
"title": "Reign as emperor"
},
{
"paragraph_id": 68,
"text": "The Carolingian king exercised the bannum, the right to rule and command. Under the Franks, it was a royal prerogative but could be delegated. He had supreme jurisdiction in judicial matters, made legislation, led the army, and protected both the Church and the poor. His administration was an attempt to organise the kingdom, church and nobility around him. As an administrator, Charlemagne stands out for his many reforms: monetary, governmental, military, cultural and ecclesiastical. He is the main protagonist of the \"Carolingian Renaissance\".",
"title": "Administration"
},
{
"paragraph_id": 69,
"text": "Charlemagne's success rested primarily on novel siege technologies and excellent logistics rather than the long-claimed \"cavalry revolution\" led by Charles Martel in 730s. However, the stirrup, which made the \"shock cavalry\" lance charge possible, was not introduced to the Frankish kingdom until the late eighth century.",
"title": "Administration"
},
{
"paragraph_id": 70,
"text": "Horses were used extensively by the Frankish military because they provided a quick, long-distance method of transporting troops, which was critical to building and maintaining the large empire.",
"title": "Administration"
},
{
"paragraph_id": 71,
"text": "Charlemagne had an important role in determining Europe's immediate economic future. Pursuing his father's reforms, Charlemagne abolished the monetary system based on the gold sou. Instead, he and the Anglo-Saxon King Offa of Mercia took up Pippin's system for pragmatic reasons, notably a shortage of the metal.",
"title": "Administration"
},
{
"paragraph_id": 72,
"text": "The gold shortage was a direct consequence of the conclusion of peace with Byzantium, which resulted in ceding Venice and Sicily to the East and losing their trade routes to Africa. The resulting standardisation economically harmonised and unified the complex array of currencies that had been in use at the commencement of his reign, thus simplifying trade and commerce.",
"title": "Administration"
},
{
"paragraph_id": 73,
"text": "Charlemagne established a new standard, the livre carolinienne (from the Latin libra, the modern pound), which was based upon a pound of silver—a unit of both money and weight—worth 20 sous (from the Latin solidus [which was primarily an accounting device and never actually minted], the modern shilling) or 240 deniers (from the Latin denarius, the modern penny). During this period, the livre and the sou were counting units; only the denier was a coin of the realm.",
"title": "Administration"
},
{
"paragraph_id": 74,
"text": "Charlemagne instituted principles for accounting practice by means of the Capitulare de villis of 802, which laid down strict rules for the way in which incomes and expenses were to be recorded.",
"title": "Administration"
},
{
"paragraph_id": 75,
"text": "Charlemagne applied this system to much of the European continent, and Offa's standard was voluntarily adopted by much of England. After Charlemagne's death, continental coinage degraded, and most of Europe resorted to using the continued high-quality English coin until about 1100.",
"title": "Administration"
},
{
"paragraph_id": 76,
"text": "Early in Charlemagne's rule he tacitly allowed Jews to monopolise money lending. He invited Italian Jews to immigrate, as royal clients independent of the feudal landowners, and form trading communities in the agricultural regions of Provence and the Rhineland. Their trading activities augmented the otherwise almost exclusively agricultural economies of these regions. His personal physician was Jewish, and he employed a Jew named Isaac as his personal representative to the Muslim caliphate of Baghdad.",
"title": "Administration"
},
{
"paragraph_id": 77,
"text": "Part of Charlemagne's success as a warrior, an administrator and ruler can be traced to his admiration for learning and education. His reign is often referred to as the Carolingian Renaissance because of the flowering of scholarship, literature, art and architecture that characterise it. Charlemagne came into contact with the culture and learning of other countries (especially Moorish Spain, Anglo-Saxon England, and Lombard Italy) due to his vast conquests. He greatly increased the provision of monastic schools and scriptoria (centres for book-copying) in Francia.",
"title": "Administration"
},
{
"paragraph_id": 78,
"text": "Charlemagne was a lover of books, sometimes having them read to him during meals. He was thought to enjoy the works of Augustine of Hippo. His court played a key role in producing books that taught elementary Latin and different aspects of the church. It also played a part in creating a royal library that contained in-depth works on language and Christian faith.",
"title": "Administration"
},
{
"paragraph_id": 79,
"text": "Charlemagne encouraged clerics to translate Christian creeds and prayers into their respective vernaculars as well to teach grammar and music. Due to the increased interest of intellectual pursuits and the urging of their king, the monks accomplished so much copying that almost every manuscript from that time was preserved. At the same time, at the urging of their king, scholars were producing more secular books on many subjects, including history, poetry, art, music, law, theology, etc. Due to the increased number of titles, private libraries flourished. These were mainly supported by aristocrats and churchmen who could afford to sustain them. At Charlemagne's court, a library was founded and a number of copies of books were produced, to be distributed by Charlemagne. Book production was completed slowly by hand and took place mainly in large monastic libraries. Books were so in demand during Charlemagne's time that these libraries lent out some books, but only if that borrower offered valuable collateral in return.",
"title": "Administration"
},
{
"paragraph_id": 80,
"text": "Most of the surviving works of classical Latin were copied and preserved by Carolingian scholars. Indeed, the earliest manuscripts available for many ancient texts are Carolingian. It is almost certain that a text which survived to the Carolingian age survives still.",
"title": "Administration"
},
{
"paragraph_id": 81,
"text": "The pan-European nature of Charlemagne's influence is indicated by the origins of many of the men who worked for him: Alcuin, an Anglo-Saxon from York; Theodulf, a Visigoth, probably from Septimania; Paul the Deacon, Lombard; Italians Peter of Pisa and Paulinus of Aquileia; and Franks Angilbert, Angilram, Einhard and Waldo of Reichenau.",
"title": "Administration"
},
{
"paragraph_id": 82,
"text": "Charlemagne promoted the liberal arts at court, ordering that his children and grandchildren be well-educated, and even studying himself (in a time when even leaders who promoted education did not take time to learn themselves) under the tutelage of Peter of Pisa, from whom he learned grammar; Alcuin, with whom he studied rhetoric, dialectic (logic), and astronomy (he was particularly interested in the movements of the stars); and Einhard, who tutored him in arithmetic.",
"title": "Administration"
},
{
"paragraph_id": 83,
"text": "His great scholarly failure, as Einhard relates, was his inability to write: when in his old age he attempted to learn—practising the formation of letters in his bed during his free time on books and wax tablets he hid under his pillow—\"his effort came too late in life and achieved little success\", and his ability to read—which Einhard is silent about, and which no contemporary source supports—has also been called into question.",
"title": "Administration"
},
{
"paragraph_id": 84,
"text": "In 800, Charlemagne enlarged the hostel at the Muristan in Jerusalem and added a library to it. He certainly had not been personally in Jerusalem.",
"title": "Administration"
},
{
"paragraph_id": 85,
"text": "Charlemagne expanded the reform Church's programme unlike his father, Pippin, and uncle, Carloman. The deepening of the spiritual life was later to be seen as central to public policy and royal governance. His reform focused on strengthening the church's power structure, improving clergy's skill and moral quality, standardising liturgical practices, improvements on the basic tenets of the faith and the rooting out of paganism. His authority extended over church and state. He could discipline clerics, control ecclesiastical property and define orthodox doctrine. Despite the harsh legislation and sudden change, he had developed support from clergy who approved his desire to deepen the piety and morals of his subjects.",
"title": "Administration"
},
{
"paragraph_id": 86,
"text": "In 809–810, Charlemagne called a church council in Aachen, which confirmed the unanimous belief in the West that the Holy Spirit proceeds from the Father and the Son (ex Patre Filioque) and sanctioned inclusion in the Nicene Creed of the phrase Filioque (and the Son). For this Charlemagne sought the approval of Pope Leo III. The Pope, while affirming the doctrine and approving its use in teaching, opposed its inclusion in the text of the Creed as adopted in the 381 First Council of Constantinople. This spoke of the procession of the Holy Spirit from the Father, without adding phrases such as \"and the Son\", \"through the Son\", or \"alone\". Stressing his opposition, the Pope had the original text inscribed in Greek and Latin on two heavy shields that were displayed in Saint Peter's Basilica.",
"title": "Administration"
},
{
"paragraph_id": 87,
"text": "During Charles' reign, the Roman half uncial script and its cursive version, which had given rise to various continental minuscule scripts, were combined with features from the insular scripts in use in Irish and English monasteries. Carolingian minuscule was created partly under the patronage of Charlemagne. Alcuin, who ran the palace school and scriptorium at Aachen, was probably a chief influence.",
"title": "Administration"
},
{
"paragraph_id": 88,
"text": "The revolutionary character of the Carolingian reform, however, can be overemphasised; efforts at taming Merovingian and Germanic influence had been underway before Alcuin arrived at Aachen. The new minuscule was disseminated first from Aachen and later from the influential scriptorium at Tours, where Alcuin retired as an abbot.",
"title": "Administration"
},
{
"paragraph_id": 89,
"text": "Einhard tells in his twenty-fourth chapter:",
"title": "Appearance"
},
{
"paragraph_id": 90,
"text": "Charles was temperate in eating, and particularly so in drinking, for he abominated drunkenness in anybody, much more in himself and those of his household; but he could not easily abstain from food, and often complained that fasts injured his health. He very rarely gave entertainments, only on great feast-days, and then to large numbers of people. His meals ordinarily consisted of four courses, not counting the roast, which his huntsmen used to bring in on the spit; he was more fond of this than of any other dish. While at table, he listened to reading or music. The subjects of the readings were the stories and deeds of olden time: he was fond, too, of St. Augustine's books, and especially of the one titled \"The City of God\".",
"title": "Appearance"
},
{
"paragraph_id": 91,
"text": "Charlemagne threw grand banquets and feasts for special occasions such as religious holidays and four of his weddings. When he was not working, he loved Christian books, horseback riding, swimming, bathing in natural hot springs with his friends and family, and hunting. Franks were well known for horsemanship and hunting skills. Charles was a light sleeper and would stay in his bed chambers for entire days at a time due to restless nights. During these days, he would not get out of bed when a quarrel occurred in his kingdom, instead summoning all members of the situation into his bedroom to be given orders. Einhard tells again in the twenty-fourth chapter: \"In summer after the midday meal, he would eat some fruit, drain a single cup, put off his clothes and shoes, just as he did for the night, and rest for two or three hours. He was in the habit of awaking and rising from bed four or five times during the night.\"",
"title": "Appearance"
},
{
"paragraph_id": 92,
"text": "Einhard speaks of Charlemagne's patrius sermo, \"father\" or \"native toungue\". Most scholars have identified this as a form of Old High German, probably a Rhenish Franconian dialect. Einhard wrote from his experiences in Charlemagne's court in the 790s onward. Due to the prevalence in Francia of the \"rustic Roman\" language that was rapidly developing into Old French, he was probably functionally bilingual in both Germanic and Romance dialects from a young age. Charlemagne also spoke Latin, and according to Einhard could understand and perhaps speak some Greek. Fried considers it likely that Charlemagne would have been literate, though Einhard recorded that he only attempted to learn to write later in life.",
"title": "Appearance"
},
{
"paragraph_id": 93,
"text": "The largely fictional account of Charlemagne's Iberian campaigns by Pseudo-Turpin, written some three centuries after his death, gave rise to a legend that the king also spoke Arabic.",
"title": "Appearance"
},
{
"paragraph_id": 94,
"text": "Charlemagne's personal appearance is known from a good description by Einhard after his death in the biography Vita Karoli Magni. Einhard states:",
"title": "Appearance"
},
{
"paragraph_id": 95,
"text": "He was heavily built, sturdy, and of considerable stature, although not exceptionally so, since his height was seven times the length of his own foot. He had a round head, large and lively eyes, a slightly larger nose than usual, white but still attractive hair, a bright and cheerful expression, a short and fat neck, and he enjoyed good health, except for the fevers that affected him in the last few years of his life. Towards the end, he dragged one leg. Even then, he stubbornly did what he wanted and refused to listen to doctors, indeed he detested them, because they wanted to persuade him to stop eating roast meat, as was his wont, and to be content with boiled meat.",
"title": "Appearance"
},
{
"paragraph_id": 96,
"text": "The physical portrait provided by Einhard is confirmed by contemporary depictions such as coins and his 8-inch (20 cm) bronze statuette kept in the Louvre. In 1861, Charlemagne's tomb was opened by scientists who reconstructed his skeleton and estimated it to be measured 1.95 metres (6 ft 5 in). A 2010 estimate of his height from an X-ray and CT scan of his tibia was 1.84 metres (6 ft 0 in). This puts him in the 99th percentile of height for his period, given that average male height of his time was 1.69 metres (5 ft 7 in). The width of the bone suggested he was slim in build.",
"title": "Appearance"
},
{
"paragraph_id": 97,
"text": "Charlemagne wore the traditional costume of the Frankish people, described by Einhard thus:",
"title": "Appearance"
},
{
"paragraph_id": 98,
"text": "He used to wear the national, that is to say, the Frank, dress—next his skin a linen shirt and linen breeches, and above these a tunic fringed with silk; while hose fastened by bands covered his lower limbs, and shoes his feet, and he protected his shoulders and chest in winter by a close-fitting coat of otter or marten skins.",
"title": "Appearance"
},
{
"paragraph_id": 99,
"text": "He wore a blue cloak and always carried a sword typically of a golden or silver hilt. He wore intricately jeweled swords to banquets or ambassadorial receptions. Nevertheless:",
"title": "Appearance"
},
{
"paragraph_id": 100,
"text": "He despised foreign costumes, however handsome, and never allowed himself to be robed in them, except twice in Rome, when he donned the Roman tunic, chlamys, and shoes; the first time at the request of Pope Hadrian, the second to gratify Leo, Hadrian's successor.",
"title": "Appearance"
},
{
"paragraph_id": 101,
"text": "On great feast days, he wore embroidery and jewels on his clothing and shoes. He had a golden buckle for his cloak on such occasions and would appear with his great diadem, but he despised such apparel according to Einhard, and usually dressed like the common people.",
"title": "Appearance"
},
{
"paragraph_id": 102,
"text": "Charlemagne had residences across his kingdom, including numerous private estates that were governed in accordance with the Capitulare de villis. A 9th-century document detailing the inventory of an estate at Asnapium listed amounts of livestock, plants and vegetables and kitchenware including cauldrons, drinking cups, brass kettles and firewood. The manor contained seventeen houses built inside the courtyard for nobles and family members and was separated from its supporting villas.",
"title": "Appearance"
},
{
"paragraph_id": 103,
"text": "Charlemagne had eighteen children with seven of his ten known wives or concubines. Nonetheless, he had only four legitimate grandsons, the four sons of his fourth son, Louis. In addition, he had a grandson (Bernard of Italy, the only son of his third son, Pepin of Italy), who was illegitimate but included in the line of inheritance. Among his descendants are several royal dynasties, including the Habsburg, and Capetian dynasties. By consequence, most if not all established European noble families ever since can genealogically trace some of their background to Charlemagne.",
"title": "Wives, concubines, and children"
},
{
"paragraph_id": 104,
"text": "During the first peace of any substantial length (780–782), Charles began to appoint his sons to positions of authority. In 781, during a visit to Rome, he made his two youngest sons kings, crowned by the Pope. The elder of these two, Carloman, was made the king of Italy, taking the Iron Crown that his father had first worn in 774, and in the same ceremony was renamed \"Pepin\" (not to be confused with Charlemagne's eldest, possibly illegitimate son, Pepin the Hunchback). The younger of the two, Louis, became King of Aquitaine. Charlemagne ordered Pepin and Louis to be raised in the customs of their kingdoms, and he gave their regents some control of their subkingdoms, but kept the real power, though he intended his sons to inherit their realms. He did not tolerate insubordination in his sons: in 792, he banished Pepin the Hunchback to Prüm Abbey because the young man had joined a rebellion against him.",
"title": "Wives, concubines, and children"
},
{
"paragraph_id": 105,
"text": "Charles was determined to have his children educated, including his daughters, as his parents had instilled the importance of learning in him at an early age. His children were also taught skills in accord with their aristocratic status, which included training in riding and weaponry for his sons, and embroidery, spinning and weaving for his daughters.",
"title": "Wives, concubines, and children"
},
{
"paragraph_id": 106,
"text": "The sons fought many wars on behalf of their father. Charles was mostly preoccupied with the Bretons, whose border he shared and who insurrected on at least two occasions and were easily put down. He also fought the Saxons on multiple occasions. In 805 and 806, he was sent into the Böhmerwald (modern Bohemia) to deal with the Slavs living there (Bohemian tribes, ancestors of the modern Czechs). He subjected them to Frankish authority and devastated the valley of the Elbe, forcing tribute from them. Pippin had to hold the Avar and Beneventan borders and fought the Slavs to his north. He was uniquely poised to fight the Byzantine Empire when that conflict arose after Charlemagne's imperial coronation and a Venetian rebellion. Finally, Louis was in charge of the Spanish March and fought the Duke of Benevento in southern Italy on at least one occasion. He took Barcelona in a great siege in 801.",
"title": "Wives, concubines, and children"
},
{
"paragraph_id": 107,
"text": "Charlemagne kept his daughters at home with him and refused to allow them to contract sacramental marriages (though he originally condoned an engagement between his eldest daughter Rotrude and Constantine VI of Byzantium, this engagement was annulled when Rotrude was 11). Charlemagne's opposition to his daughters' marriages may possibly have intended to prevent the creation of cadet branches of the family to challenge the main line, as had been the case with Tassilo of Bavaria. However, he tolerated their extramarital relationships, even rewarding their common-law husbands and treasuring the illegitimate grandchildren they produced for him. He also refused to believe stories of their wild behaviour. After his death the surviving daughters were banished from the court by their brother, the pious Louis, to take up residence in the convents they had been bequeathed by their father. At least one of them, Bertha, had a recognised relationship, if not a marriage, with Angilbert, a member of Charlemagne's court circle.",
"title": "Wives, concubines, and children"
},
{
"paragraph_id": 108,
"text": "The stability and peace of Charlemagne's reign would not long outlast him. Louis' reign was marked by strife, including multiple rebellions of his own sons. Following Louis' death, the empire was divided between West, East, and Middle Francia. Middle Francia saw several more divisions over subsequent generations. Carolingians would rule with some interruptions in East Francia until 911 and in West Francia (which would become France) until 987. After 887, the imperial title was held sporadically by a series of non-dynastic Italian rulers before lapsing in 924. East Francian King Otto the Great conquered Italy and was crowned emperor in 962. The Holy Roman Empire founded by Otto would last until its dissolution in 1806.",
"title": "Legacy"
},
{
"paragraph_id": 109,
"text": "Charlemagne served as a model for medieval rulership \"at least until the final end of empire in the West in the early nineteenth century.\" Charlemagne is often given the epithet \"the father of Europe\" because of the influence of his reign, and the legacy he left across the large area of the continent he ruled. The political structures Charlemagne established remained in place through his Carolingian successors, and continued to have influence into the eleventh century. During his reign, groundwork was laid for the process of concentration of power in military aristocrats that would characterize the later Middle Ages.",
"title": "Legacy"
},
{
"paragraph_id": 110,
"text": "Despite the end of ruling Carolingian lines, Charlemagne is considered a direct ancestor of European ruling houses, including the Capetian dynasty, the Ottonian dynasty, the House of Luxembourg, the House of Ivrea and the House of Habsburg. The Ottonians and Capetians, direct successors of the Carolingans, drew on the legacy of Charlemagne to bolster their legitimacy and prestige. Ottonians and future emperors would continue to hold their German coronations at Aachen through the Middle Ages. The marriage of Philip II of France to Isabella of Hainault, a direct descendant of Charlemagne was seen as a sign of increased legitimacy for their son Louis VIII, and association with Charlemagne by French kings continue until the monarchy's end. Frederick Barbarossa, Charles V, and Napoleon all directly cited the influence of and associated themselves with Charlemagne.",
"title": "Legacy"
},
{
"paragraph_id": 111,
"text": "The city of Aachen has, since 1949, awarded an international prize (called the Karlspreis der Stadt Aachen) in honour of Charlemagne. It is awarded annually to those who have promoted the idea of European unity. Winners of the prize include Richard von Coudenhove-Kalergi, the founder of the pan-European movement, Alcide De Gasperi, and Winston Churchill.",
"title": "Legacy"
},
{
"paragraph_id": 112,
"text": "Charlemagne was a frequent subject of and inspiration for medieval writers after his death. Einhard's Vita Karoli Magni \"can be said to have revived the defunct literary genre of the secular biography.\" Einhard drew on classical sources such as Suetonius' De vita Caesarum, the orations of Cicero, and Tacitus' Agricola to frame the structure and style of his work. The Carolingian period also saw an revival in the genre of mirrors for princes genre. The author of the Visio Karoli Magni written around 865 uses facts gathered apparently from Einhard and his own observations on the decline of Charlemagne's family after the dissensions war (840–43) as the basis for a visionary tale of Charles' meeting with a prophetic spectre in a dream. Notker's Gesta Karoli Magni, written for Charlemagne's great-grandson Charles the Fat, presents moral anecdotes to highlight the emperor's qualities as a ruler.",
"title": "Legacy"
},
{
"paragraph_id": 113,
"text": "Charlemagne was depicted as one of the Nine Worthies, becoming a fixture in medieval literature and art as an exemplar of a Christian king. Charlemagne is the main figure of the medieval literary cycle known as Matter of France. Works of this cycle, which originated during the period of the Crusades centre depictions of the emperor as a leader of Christian knights in wars against Muslims. The cycle includes chansons de geste (epic poems) such as the Roland, and chronicles such as the Historia Caroli Magni. Geoffrey of Monmouth's legends of King Arthur and his knights may have drawn on the legendary depiction of Charlemagne and his knights as a source and archetype.",
"title": "Legacy"
},
{
"paragraph_id": 114,
"text": "In the Divine Comedy, the spirit of Charlemagne appears to Dante in the Heaven of Mars, among the other \"warriors of the faith\".",
"title": "Legacy"
},
{
"paragraph_id": 115,
"text": "Emperor Otto III attempted to have Charlemagne canonized as a saint in 1000. In 1165, Frederick Barbossa convinced the Antipope Paschal III to elevate him to sainthood. As Paschal's acts were not considered valid, Charlemagne was not recognized as a saint by the Holy See in Rome. He is not enumerated among the 28 saints named \"Charles\" in the Roman Martyrology. Despite this lack of recognition, Charlemagne's cult became observed in Aachen, Reims, Frankfurt am Main, Zurich, and Regensburg, and he has been venerated in France since the reign of Charles V. Pope Benedict XIV recognized his cult, beatifying him, in the eighteenth century Benedict also quoted Charlemagne's capitularies in his apostolic constitution 'Providas' against freemasonry: \"For in no way are we able to understand how they can be faithful to us, who have shown themselves unfaithful to God and disobedient to their Priests\".",
"title": "Legacy"
}
] | Charlemagne was King of the Franks from 768, King of the Lombards from 774, and Emperor from 800, all until his death. Charlemagne succeeded in uniting the majority of Western and Central Europe, and he was the first recognized emperor to rule Western Europe after the fall of the Western Roman Empire approximately three centuries earlier. Charlemagne's rule saw a program of political and societal changes that had a lasting impact on Europe in the Middle Ages. A member of the Frankish Carolingian dynasty, Charlemagne was the eldest son of Pepin the Short and Bertrada of Laon. With his brother Carloman I, he became king of the Franks in 768 following Pepins's death, and became sole ruler in 771. As king, he continued his father's policy towards the protection of the papacy and became its chief defender, removing the Lombards from power in northern Italy in 774. Charlemagne's reign saw a period of expansion that led to conquests of Bavaria, Saxony, and northern Spain, as well as other campaigns that led Charles to extend his rule over a vast area of Europe. He spread Christianity to his new conquests, often by force, as seen at the Massacre of Verden against the Saxons. In 800, Charlemagne was crowned as emperor in Rome by Pope Leo III. While historians debate about the exact significance of the coronation, the title represented the height of prestige and authority he had achieved. Charlemagne's position as the first emperor in the West since Romulus Augustulus brought him into conflict with the contemporary Eastern Roman Empire based in Constantinople. As king and emperor, Charlemagne engaged in a series of reforms in administration, law, education, military organization, and religion which shaped Europe for centuries. The stability of his reign saw the beginning of a period of significant cultural activity known as the Carolingian Renaissance. Charlemagne died in 814, and was laid to rest in the Aachen Cathedral, in his imperial capital city of Aachen. He was succeeded by his only surviving son Louis the Pious. After Louis, the Frankish kingdom would be divided, eventually coalescing into West and East Francia, which would respectively become France and the Holy Roman Empire. Charlemagne's profound impact on the Middle Ages, and the influence on the vast territory he ruled has led him to be called the "Father of Europe". He is seen as a founding figure by multiple European states, and many historical royal houses of Europe trace their lineage back to him. Charlemagne has been the subject of artwork, monuments, and literature since the medieval period, and has received veneration in the Catholic Church. | 2001-10-11T23:24:15Z | 2023-12-30T12:44:52Z | [
"Template:Internet Archive author",
"Template:Cite EB1911",
"Template:Unreferenced section",
"Template:Cite encyclopedia",
"Template:Notelist",
"Template:S-ttl",
"Template:Lang-ru",
"Template:Blockquote",
"Template:Webarchive",
"Template:S-hou",
"Template:S-bef",
"Template:S-end",
"Template:Infobox royalty",
"Template:Cite journal",
"Template:Convert",
"Template:S-break",
"Template:Holy Roman Emperors",
"Template:Lang-pl",
"Template:Centre",
"Template:Sister project links",
"Template:S-start",
"Template:Lang-bg",
"Template:Cite book",
"Template:Carolingians",
"Template:Sfn",
"Template:Infobox saint",
"Template:Refend",
"Template:History of the Catholic Church",
"Template:Short description",
"Template:Use dmy dates",
"Template:Lang-uk",
"Template:Lang-lv",
"Template:Harv",
"Template:S-new",
"Template:Antique Kings of Italy",
"Template:Other uses",
"Template:Pp-move",
"Template:Further",
"Template:Reflist",
"Template:Geschichtsquellen Person",
"Template:Lang",
"Template:See also",
"Template:Refbegin",
"Template:S-reg",
"Template:Monarchs of France",
"Template:Efn",
"Template:Lang-sh",
"Template:Lang-mk",
"Template:Col-begin",
"Template:Carolingians footer",
"Template:EngvarB",
"Template:IPAc-en",
"Template:Lang-lt",
"Template:Lang-tr",
"Template:Matter of France",
"Template:Pp-pc",
"Template:Lang-sk",
"Template:Main",
"Template:Col-end",
"Template:Librivox author",
"Template:S-aft",
"Template:Lang-cs",
"Template:Ill",
"Template:Cite web",
"Template:Sfn whitelist",
"Template:Respell",
"Template:Circa",
"Template:Authority control",
"Template:Citation needed",
"Template:Col-2"
] | https://en.wikipedia.org/wiki/Charlemagne |
5,315 | Character encodings in HTML | While Hypertext Markup Language (HTML) has been in use since 1991, HTML 4.0 from December 1997 was the first standardized version where international characters were given reasonably complete treatment. When an HTML document includes special characters outside the range of seven-bit ASCII, two goals are worth considering: the information's integrity, and universal browser display.
There are two general ways to specify which character encoding is used in the document.
First, the web server can include the character encoding or "charset" in the Hypertext Transfer Protocol (HTTP) Content-Type header, which would typically look like this:
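```http
Content-Type: text/html; charset=utf-8
```

(UTF-8 is used as the example encoding here and in the declarations below; any supported charset label can take its place.)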
This method gives the HTTP server a convenient way to alter a document's encoding according to content negotiation; certain HTTP server software can do it, for example Apache with the module mod_charset_lite.
Second, a declaration can be included within the document itself.
For HTML it is possible to include this information inside the head element near the top of the document:
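```html
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
```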
HTML5 also allows the following syntax to mean exactly the same:
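```html
<meta charset="utf-8">
```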
XHTML documents have a third option: to express the character encoding via XML declaration, as follows:
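```xml
<?xml version="1.0" encoding="UTF-8"?>
```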
With this second approach, because the character encoding cannot be known until the declaration is parsed, there is a problem knowing which character encoding is used in the document up to and including the declaration itself. If the character encoding is an ASCII extension then the content up to and including the declaration itself should be pure ASCII and this will work correctly. For character encodings that are not ASCII extensions (i.e. not a superset of ASCII), such as UTF-16BE and UTF-16LE, a processor of HTML, such as a web browser, should be able to parse the declaration in some cases through the use of heuristics.
As of HTML5 the recommended charset is UTF-8. An "encoding sniffing algorithm" is defined in the specification to determine the character encoding of the document based on multiple sources of input, including:
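Among other inputs, these include a byte order mark at the start of the byte stream, any explicit user override, the charset passed on the transport layer (the HTTP Content-Type header), a prescan of the first bytes for a meta declaration, and, failing those, autodetection heuristics or a locale-dependent default. The sketch below is a much-simplified illustration of that precedence; the function name, the 1024-byte prescan window and the UTF-8 fallback are choices made for this sketch, not the specification's exact algorithm.

```python
import re

def sniff_encoding(raw, http_charset=None):
    """Illustrative only: a drastically simplified take on encoding detection."""
    # 1. A byte order mark takes precedence.
    if raw.startswith(b"\xef\xbb\xbf"):
        return "utf-8"
    if raw.startswith(b"\xfe\xff"):
        return "utf-16-be"
    if raw.startswith(b"\xff\xfe"):
        return "utf-16-le"
    # 2. Then any charset supplied on the transport layer (HTTP header).
    if http_charset:
        return http_charset.lower()
    # 3. Then a prescan of the first bytes for a meta declaration, decoded
    #    permissively because the real encoding is not yet known.
    head = raw[:1024].decode("ascii", errors="ignore")
    match = re.search(r'charset\s*=\s*["\']?([A-Za-z0-9_-]+)', head, re.IGNORECASE)
    if match:
        return match.group(1).lower()
    # 4. Otherwise fall back to a default; real browsers may apply further
    #    heuristics or locale-dependent defaults here.
    return "utf-8"

print(sniff_encoding(b'<!DOCTYPE html><meta charset="UTF-8"><p>...</p>'))  # utf-8
```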
Characters outside of the printable ASCII range (32 to 126) usually appear incorrectly. This presents few problems for English-speaking users, but other languages regularly—in some cases, always—require characters outside that range. In Chinese, Japanese, and Korean (CJK) language environments where there are several different multi-byte encodings in use, auto-detection is also often employed. Finally, browsers usually permit the user to override incorrect charset label manually as well.
It is increasingly common for multilingual websites and websites in non-Western languages to use UTF-8, which allows use of the same encoding for all languages. UTF-16 or UTF-32, which can be used for all languages as well, are less widely used because they can be harder to handle in programming languages that assume a byte-oriented ASCII superset encoding, and they are less efficient for text with a high frequency of ASCII characters, which is usually the case for HTML documents.
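As a rough illustration of the size difference for ASCII-heavy markup, the snippet below encodes one short sample string; the exact byte counts depend on the text and on the fact that Python's utf-16 and utf-32 codecs prepend a byte order mark.

```python
sample = "<p>Hello, world!</p>"        # 20 ASCII characters
print(len(sample.encode("utf-8")))     # 20 bytes: 1 byte per ASCII character
print(len(sample.encode("utf-16")))    # 42 bytes: 2 per character plus a 2-byte BOM
print(len(sample.encode("utf-32")))    # 84 bytes: 4 per character plus a 4-byte BOM
```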
Successful viewing of a page is not necessarily an indication that its encoding is specified correctly. If the page's creator and reader are both assuming some platform-specific character encoding, and the server does not send any identifying information, then the reader will nonetheless see the page as the creator intended, but other readers on different platforms or with different native languages will not see the page as intended.
The WHATWG Encoding Standard, referenced by recent HTML standards (the current WHATWG HTML Living Standard, as well as the formerly competing W3C HTML 5.0 and 5.1) specifies a list of encodings which browsers must support. The HTML standards forbid support of other encodings. The Encoding Standard further stipulates that new formats, new protocols (even when existing formats are used) and authors of new documents are required to use UTF-8 exclusively.
Besides UTF-8, the following encodings are explicitly listed in the HTML standard itself, with reference to the Encoding Standard:
The following additional encodings are listed in the Encoding Standard, and support for them is therefore also required:
The following encodings are listed as explicit examples of forbidden encodings:
The standard also defines a "replacement" decoder, which maps all content labelled as certain encodings to the replacement character (�), refusing to process it at all. This is intended to prevent attacks (e.g. cross site scripting) which may exploit a difference between the client and server in what encodings are supported in order to mask malicious content. Although the same security concern applies to ISO-2022-JP and UTF-16, which also allow sequences of ASCII bytes to be interpreted differently, this approach was not seen as feasible for them since they are comparatively more frequently used in deployed content. The following encodings receive this treatment:
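(Labels mapped to this decoder include, among others, ISO-2022-KR and HZ-GB-2312.) A minimal sketch of the observable behaviour follows; the function is a stand-in written for illustration, not a real browser or codec API: any non-empty input labelled with such an encoding decodes to a single replacement character.

```python
def replacement_decode(raw: bytes) -> str:
    # The "replacement" decoder never interprets the bytes: it yields one
    # U+FFFD for any non-empty stream and nothing for an empty one.
    return "\ufffd" if raw else ""

print(replacement_decode(b"\x1b$)C arbitrary bytes"))  # prints the single character U+FFFD
```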
In addition to native character encodings, characters can also be encoded as character references, which can be numeric character references (decimal or hexadecimal) or character entity references. Character entity references are also sometimes referred to as named entities, or HTML entities for HTML. HTML's usage of character references derives from SGML.
A numeric character reference in HTML refers to a character by its Universal Character Set/Unicode code point, and uses the format &#nnnn; or &#xhhhh;,
where nnnn is the code point in decimal form, and hhhh is the code point in hexadecimal form. The x must be lowercase in XML documents. The nnnn or hhhh may be any number of digits and may include leading zeros. The hhhh may mix uppercase and lowercase, though uppercase is the usual style.
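For illustration, a small helper (hypothetical, not part of any standard library) can build both forms of reference for a given character:

```python
def numeric_refs(ch):
    # Return the decimal and hexadecimal numeric character references.
    cp = ord(ch)
    return "&#{};".format(cp), "&#x{:X};".format(cp)

print(numeric_refs("é"))  # ('&#233;', '&#xE9;')
```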
Not all web browsers or email clients used by receivers of HTML documents, or text editors used by authors of HTML documents, will be able to render all HTML characters. Most modern software is able to display most or all of the characters for the user's language, and will draw a box or other clear indicator for characters they cannot render.
For codes from 0 to 127, the original 7-bit ASCII standard set, most of these characters can be used without a character reference. Codes from 160 to 255 can all be created using character entity names. Only a few higher-numbered codes can be created using entity names, but all can be created by decimal number character reference.
Character entity references can also have the format &name; where name is a case-sensitive alphanumeric string. For example, "λ" can also be encoded as &lambda; in an HTML document. The character entity references &lt;, &gt;, &quot; and &amp; are predefined in HTML and SGML, because <, >, " and & are already used to delimit markup. This notably did not include XML's &apos; (') entity prior to HTML5. For a list of all named HTML character entity references along with the versions in which they were introduced, see List of XML and HTML character entity references.
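Python's standard html module can be used to illustrate the round trip between characters and references; this is offered purely as an illustration of the concept, not as part of the HTML specification:

```python
import html

# Named and numeric references all resolve to the same character.
print(html.unescape("&lambda; &#955; &#x3BB;"))  # λ λ λ
# Escaping replaces the markup-delimiting characters with entity references.
print(html.escape('<a href="x">&</a>'))  # &lt;a href=&quot;x&quot;&gt;&amp;&lt;/a&gt;
```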
Unnecessary use of HTML character references may significantly reduce HTML readability. If the character encoding for a web page is chosen appropriately, then HTML character references are usually only required for markup delimiting characters as mentioned above, and for a few special characters (or none at all if a native Unicode encoding like UTF-8 is used). Incorrect HTML entity escaping may also open up security vulnerabilities for injection attacks such as cross-site scripting. If HTML attributes are left unquoted, certain characters, most importantly whitespace, such as space and tab, must be escaped using entities. Other languages related to HTML have their own methods of escaping characters.
Unlike traditional HTML with its large range of character entity references, in XML there are only five predefined character entity references. These are used to escape characters that are markup sensitive in certain contexts:
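The five predefined entities are &amp; for the ampersand (&), &lt; for the less-than sign (<), &gt; for the greater-than sign (>), &quot; for the double quotation mark (") and &apos; for the apostrophe (').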
All other character entity references have to be defined before they can be used. For example, use of &eacute; (which gives é, Latin lower-case E with acute accent, U+00E9 in Unicode) in an XML document will generate an error unless the entity has already been defined. XML also requires that the x in hexadecimal numeric references be in lowercase: for example &#xa1b; rather than &#Xa1b;. XHTML, which is an XML application, supports the HTML entity set, along with XML's predefined entities. | [
{
"paragraph_id": 0,
"text": "While Hypertext Markup Language (HTML) has been in use since 1991, HTML 4.0 from December 1997 was the first standardized version where international characters were given reasonably complete treatment. When an HTML document includes special characters outside the range of seven-bit ASCII, two goals are worth considering: the information's integrity, and universal browser display.",
"title": ""
},
{
"paragraph_id": 1,
"text": "There are two general ways to specify which character encoding is used in the document.",
"title": "Specifying the document's character encoding"
},
{
"paragraph_id": 2,
"text": "First, the web server can include the character encoding or \"charset\" in the Hypertext Transfer Protocol (HTTP) Content-Type header, which would typically look like this:",
"title": "Specifying the document's character encoding"
},
{
"paragraph_id": 3,
"text": "This method gives the HTTP server a convenient way to alter document's encoding according to content negotiation; certain HTTP server software can do it, for example Apache with the module mod_charset_lite.",
"title": "Specifying the document's character encoding"
},
{
"paragraph_id": 4,
"text": "Second, a declaration can be included within the document itself.",
"title": "Specifying the document's character encoding"
},
{
"paragraph_id": 5,
"text": "For HTML it is possible to include this information inside the head element near the top of the document:",
"title": "Specifying the document's character encoding"
},
{
"paragraph_id": 6,
"text": "HTML5 also allows the following syntax to mean exactly the same:",
"title": "Specifying the document's character encoding"
},
{
"paragraph_id": 7,
"text": "XHTML documents have a third option: to express the character encoding via XML declaration, as follows:",
"title": "Specifying the document's character encoding"
},
{
"paragraph_id": 8,
"text": "With this second approach, because the character encoding cannot be known until the declaration is parsed, there is a problem knowing which character encoding is used in the document up to and including the declaration itself. If the character encoding is an ASCII extension then the content up to and including the declaration itself should be pure ASCII and this will work correctly. For character encodings that are not ASCII extensions (i.e. not a superset of ASCII), such as UTF-16BE and UTF-16LE, a processor of HTML, such as a web browser, should be able to parse the declaration in some cases through the use of heuristics.",
"title": "Specifying the document's character encoding"
},
{
"paragraph_id": 9,
"text": "As of HTML5 the recommended charset is UTF-8. An \"encoding sniffing algorithm\" is defined in the specification to determine the character encoding of the document based on multiple sources of input, including:",
"title": "Specifying the document's character encoding"
},
{
"paragraph_id": 10,
"text": "Characters outside of the printable ASCII range (32 to 126) usually appear incorrectly. This presents few problems for English-speaking users, but other languages regularly—in some cases, always—require characters outside that range. In Chinese, Japanese, and Korean (CJK) language environments where there are several different multi-byte encodings in use, auto-detection is also often employed. Finally, browsers usually permit the user to override incorrect charset label manually as well.",
"title": "Specifying the document's character encoding"
},
{
"paragraph_id": 11,
"text": "It is increasingly common for multilingual websites and websites in non-Western languages to use UTF-8, which allows use of the same encoding for all languages. UTF-16 or UTF-32, which can be used for all languages as well, are less widely used because they can be harder to handle in programming languages that assume a byte-oriented ASCII superset encoding, and they are less efficient for text with a high frequency of ASCII characters, which is usually the case for HTML documents.",
"title": "Specifying the document's character encoding"
},
{
"paragraph_id": 12,
"text": "Successful viewing of a page is not necessarily an indication that its encoding is specified correctly. If the page's creator and reader are both assuming some platform-specific character encoding, and the server does not send any identifying information, then the reader will nonetheless see the page as the creator intended, but other readers on different platforms or with different native languages will not see the page as intended.",
"title": "Specifying the document's character encoding"
},
{
"paragraph_id": 13,
"text": "The WHATWG Encoding Standard, referenced by recent HTML standards (the current WHATWG HTML Living Standard, as well as the formerly competing W3C HTML 5.0 and 5.1) specifies a list of encodings which browsers must support. The HTML standards forbid support of other encodings. The Encoding Standard further stipulates that new formats, new protocols (even when existing formats are used) and authors of new documents are required to use UTF-8 exclusively.",
"title": "Permitted encodings"
},
{
"paragraph_id": 14,
"text": "Besides UTF-8, the following encodings are explicitly listed in the HTML standard itself, with reference to the Encoding Standard:",
"title": "Permitted encodings"
},
{
"paragraph_id": 15,
"text": "The following additional encodings are listed in the Encoding Standard, and support for them is therefore also required:",
"title": "Permitted encodings"
},
{
"paragraph_id": 16,
"text": "The following encodings are listed as explicit examples of forbidden encodings:",
"title": "Permitted encodings"
},
{
"paragraph_id": 17,
"text": "The standard also defines a \"replacement\" decoder, which maps all content labelled as certain encodings to the replacement character (�), refusing to process it at all. This is intended to prevent attacks (e.g. cross site scripting) which may exploit a difference between the client and server in what encodings are supported in order to mask malicious content. Although the same security concern applies to ISO-2022-JP and UTF-16, which also allow sequences of ASCII bytes to be interpreted differently, this approach was not seen as feasible for them since they are comparatively more frequently used in deployed content. The following encodings receive this treatment:",
"title": "Permitted encodings"
},
{
"paragraph_id": 18,
"text": "In addition to native character encodings, characters can also be encoded as character references, which can be numeric character references (decimal or hexadecimal) or character entity references. Character entity references are also sometimes referred to as named entities, or HTML entities for HTML. HTML's usage of character references derives from SGML.",
"title": "Character references"
},
{
"paragraph_id": 19,
"text": "A numeric character reference in HTML refers to a character by its Universal Character Set/Unicode code point, and uses the format",
"title": "Character references"
},
{
"paragraph_id": 20,
"text": "or",
"title": "Character references"
},
{
"paragraph_id": 21,
"text": "where nnnn is the code point in decimal form, and hhhh is the code point in hexadecimal form. The x must be lowercase in XML documents. The nnnn or hhhh may be any number of digits and may include leading zeros. The hhhh may mix uppercase and lowercase, though uppercase is the usual style.",
"title": "Character references"
},
{
"paragraph_id": 22,
"text": "Not all web browsers or email clients used by receivers of HTML documents, or text editors used by authors of HTML documents, will be able to render all HTML characters. Most modern software is able to display most or all of the characters for the user's language, and will draw a box or other clear indicator for characters they cannot render.",
"title": "Character references"
},
{
"paragraph_id": 23,
"text": "For codes from 0 to 127, the original 7-bit ASCII standard set, most of these characters can be used without a character reference. Codes from 160 to 255 can all be created using character entity names. Only a few higher-numbered codes can be created using entity names, but all can be created by decimal number character reference.",
"title": "Character references"
},
{
"paragraph_id": 24,
"text": "Character entity references can also have the format &name; where name is a case-sensitive alphanumeric string. For example, \"λ\" can also be encoded as λ in an HTML document. The character entity references <, >, " and & are predefined in HTML and SGML, because <, >, \" and & are already used to delimit markup. This notably did not include XML's ' (') entity prior to HTML5. For a list of all named HTML character entity references along with the versions in which they were introduced, see List of XML and HTML character entity references.",
"title": "Character references"
},
{
"paragraph_id": 25,
"text": "Unnecessary use of HTML character references may significantly reduce HTML readability. If the character encoding for a web page is chosen appropriately, then HTML character references are usually only required for markup delimiting characters as mentioned above, and for a few special characters (or none at all if a native Unicode encoding like UTF-8 is used). Incorrect HTML entity escaping may also open up security vulnerabilities for injection attacks such as cross-site scripting. If HTML attributes are left unquoted, certain characters, most importantly whitespace, such as space and tab, must be escaped using entities. Other languages related to HTML have their own methods of escaping characters.",
"title": "Character references"
},
{
"paragraph_id": 26,
"text": "Unlike traditional HTML with its large range of character entity references, in XML there are only five predefined character entity references. These are used to escape characters that are markup sensitive in certain contexts:",
"title": "Character references"
},
{
"paragraph_id": 27,
"text": "All other character entity references have to be defined before they can be used. For example, use of é (which gives é, Latin lower-case E with acute accent, U+00E9 in Unicode) in an XML document will generate an error unless the entity has already been defined. XML also requires that the x in hexadecimal numeric references be in lowercase: for example ਛ rather than ਛ. XHTML, which is an XML application, supports the HTML entity set, along with XML's predefined entities.",
"title": "Character references"
}
] | While Hypertext Markup Language (HTML) has been in use since 1991, HTML 4.0 from December 1997 was the first standardized version where international characters were given reasonably complete treatment. When an HTML document includes special characters outside the range of seven-bit ASCII, two goals are worth considering: the information's integrity, and universal browser display. | 2001-03-23T21:46:28Z | 2023-11-28T09:38:07Z | [
"Template:Reflist",
"Template:Citation",
"Template:Cite web",
"Template:For",
"Template:Hatnote",
"Template:Use dmy dates",
"Template:Html series",
"Template:Columns-list",
"Template:Short description",
"Template:Notelist",
"Template:Main"
] | https://en.wikipedia.org/wiki/Character_encodings_in_HTML |
5,320 | Carbon nanotube | A carbon nanotube (CNT) is a tube made of carbon with a diameter in the nanometer range (nanoscale). They are one of the allotropes of carbon.
Single-walled carbon nanotubes (SWCNTs) have diameters around 0.5–2.0 nanometers, about 100,000 times smaller than the width of a human hair. They can be idealized as cutouts from a two-dimensional graphene sheet rolled up to form a hollow cylinder.
Multi-walled carbon nanotubes (MWCNTs) consist of multiple single-wall carbon nanotubes in a nested, tube-in-tube structure. Double- and triple-walled carbon nanotubes are special cases of MWCNT.
Carbon nanotubes can exhibit remarkable properties, such as exceptional tensile strength and thermal conductivity because of their nanostructure and strength of the bonds between carbon atoms. Some SWCNT structures exhibit high electrical conductivity while others are semiconductors. In addition, carbon nanotubes can be chemically modified. These properties are expected to be valuable in many areas of technology, such as electronics, optics, composite materials (replacing or complementing carbon fibers), nanotechnology, and other applications of materials science.
The predicted properties for SWCNTs were tantalizing, but a path to synthesizing them was lacking until 1993, when Iijima and Ichihashi at NEC and Bethune et al. at IBM independently discovered that co-vaporizing carbon and transition metals such as iron and cobalt could specifically catalyze SWCNT formation. These discoveries triggered research that succeeded in greatly increasing the efficiency of the catalytic production technique, and led to an explosion of work to characterize and find applications for SWCNTs.
The structure of an ideal (infinitely long) single-walled carbon nanotube is that of a regular hexagonal lattice drawn on an infinite cylindrical surface, whose vertices are the positions of the carbon atoms. Since the length of the carbon-carbon bonds is fairly fixed, there are constraints on the diameter of the cylinder and the arrangement of the atoms on it.
In the study of nanotubes, one defines a zigzag path on a graphene-like lattice as a path that turns 60 degrees, alternating left and right, after stepping through each bond. It is also conventional to define an armchair path as one that makes two left turns of 60 degrees followed by two right turns every four steps. On some carbon nanotubes, there is a closed zigzag path that goes around the tube. One says that the tube is of the zigzag type or configuration, or simply is a zigzag nanotube. If the tube is instead encircled by a closed armchair path, it is said to be of the armchair type, or an armchair nanotube. An infinite nanotube that is of the zigzag (or armchair) type consists entirely of closed zigzag (or armchair) paths, connected to each other.
The zigzag and armchair configurations are not the only structures that a single-walled nanotube can have. To describe the structure of a general infinitely long tube, one should imagine it being sliced open by a cut parallel to its axis, that goes through some atom A, and then unrolled flat on the plane, so that its atoms and bonds coincide with those of an imaginary graphene sheet—more precisely, with an infinitely long strip of that sheet. The two halves of the atom A will end up on opposite edges of the strip, over two atoms A1 and A2 of the graphene. The line from A1 to A2 will correspond to the circumference of the cylinder that went through the atom A, and will be perpendicular to the edges of the strip. In the graphene lattice, the atoms can be split into two classes, depending on the directions of their three bonds. Half the atoms have their three bonds directed the same way, and half have their three bonds rotated 180 degrees relative to the first half. The atoms A1 and A2, which correspond to the same atom A on the cylinder, must be in the same class. It follows that the circumference of the tube and the angle of the strip are not arbitrary, because they are constrained to the lengths and directions of the lines that connect pairs of graphene atoms in the same class.
Let u and v be two linearly independent vectors that connect the graphene atom A1 to two of its nearest atoms with the same bond directions. That is, if one numbers consecutive carbons around a graphene cell with C1 to C6, then u can be the vector from C1 to C3, and v be the vector from C1 to C5. Then, for any other atom A2 with same class as A1, the vector from A1 to A2 can be written as a linear combination n u + m v, where n and m are integers. And, conversely, each pair of integers (n,m) defines a possible position for A2. Given n and m, one can reverse this theoretical operation by drawing the vector w on the graphene lattice, cutting a strip of the latter along lines perpendicular to w through its endpoints A1 and A2, and rolling the strip into a cylinder so as to bring those two points together. If this construction is applied to a pair (k,0), the result is a zigzag nanotube, with closed zigzag paths of 2k atoms. If it is applied to a pair (k,k), one obtains an armchair tube, with closed armchair paths of 4k atoms.
The structure of the nanotube is not changed if the strip is rotated by 60 degrees clockwise around A1 before applying the hypothetical reconstruction above. Such a rotation changes the corresponding pair (n,m) to the pair (−m,n+m). It follows that many possible positions of A2 relative to A1 — that is, many pairs (n,m) — correspond to the same arrangement of atoms on the nanotube. That is the case, for example, of the six pairs (1,2), (−2,3), (−3,1), (−1,−2), (2,−3), and (3,−1). In particular, the pairs (k,0) and (0,k) describe the same nanotube geometry. These redundancies can be avoided by considering only pairs (n,m) such that n > 0 and m ≥ 0; that is, where the direction of the vector w lies between those of u (inclusive) and v (exclusive). It can be verified that every nanotube has exactly one pair (n,m) that satisfies those conditions, which is called the tube's type. Conversely, for every type there is a hypothetical nanotube. In fact, two nanotubes have the same type if and only if one can be conceptually rotated and translated so as to match the other exactly. Instead of the type (n,m), the structure of a carbon nanotube can be specified by giving the length of the vector w (that is, the circumference of the nanotube) and the angle α between the directions of u and w, which may range from 0 (inclusive) to 60 degrees clockwise (exclusive). If the diagram is drawn with u horizontal, the latter is the tilt of the strip away from the vertical.
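As a small illustrative sketch (the helper names are hypothetical, not from any established library), the rotation rule can be applied repeatedly to enumerate the six equivalent index pairs and to pick the canonical pair with n > 0 and m ≥ 0; running it on (1,2) reproduces the six pairs listed above:

    def rotate60(n: int, m: int) -> tuple[int, int]:
        # One 60-degree lattice rotation of the chiral vector: (n, m) -> (-m, n + m).
        return (-m, n + m)

    def equivalent_pairs(n: int, m: int) -> list[tuple[int, int]]:
        # The orbit of (n, m) under repeated 60-degree rotations has six members.
        pairs = [(n, m)]
        for _ in range(5):
            pairs.append(rotate60(*pairs[-1]))
        return pairs

    def canonical_type(n: int, m: int) -> tuple[int, int]:
        # The unique equivalent pair with n > 0 and m >= 0 serves as the tube's type.
        return next(p for p in equivalent_pairs(n, m) if p[0] > 0 and p[1] >= 0)

    print(equivalent_pairs(1, 2))  # [(1, 2), (-2, 3), (-3, 1), (-1, -2), (2, -3), (3, -1)]
    print(canonical_type(-2, 3))   # (1, 2)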
A nanotube is chiral if it has type (n,m), with m > 0 and m ≠ n; then its enantiomer (mirror image) has type (m,n), which is different from (n,m). This operation corresponds to mirroring the unrolled strip about the line L through A1 that makes an angle of 30 degrees clockwise from the direction of the u vector (that is, with the direction of the vector u+v). The only types of nanotubes that are achiral are the (k,0) "zigzag" tubes and the (k,k) "armchair" tubes. If two enantiomers are to be considered the same structure, then one may consider only types (n,m) with 0 ≤ m ≤ n and n > 0. Then the angle α between u and w, which may range from 0 to 30 degrees (inclusive both), is called the "chiral angle" of the nanotube.
From n and m one can also compute the circumference c, which is the length of the vector w, which turns out to be:
in picometres. The diameter d of the tube is then c/π, that is
also in picometres. (These formulas are only approximate, especially for small n and m where the bonds are strained; and they do not take into account the thickness of the wall.)
The tilt angle α between u and w and the circumference c are related to the type indices n and m by:
where arg(x,y) is the clockwise angle between the X-axis and the vector (x,y); a function that is available in many programming languages as atan2(y,x). Conversely, given c and α, one can get the type (n,m) by the formulas:
which must evaluate to integers.
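As an illustrative sketch of these relations, assuming the standard zone-folding geometry with a graphene lattice constant a ≈ 246 pm (the function names are hypothetical, not from any library), the circumference, diameter and tilt angle can be computed from (n,m), and the type recovered from c and α:

    import math

    A_PM = 246.0  # graphene lattice constant |u| = |v| in picometres (about sqrt(3) times the 142 pm bond length)

    def circumference_pm(n: int, m: int) -> float:
        # Length of the chiral vector w = n*u + m*v for lattice vectors of length A_PM at 60 degrees.
        return A_PM * math.sqrt(n * n + n * m + m * m)

    def diameter_pm(n: int, m: int) -> float:
        # Tube diameter d = c / pi.
        return circumference_pm(n, m) / math.pi

    def tilt_angle_deg(n: int, m: int) -> float:
        # Angle between u and w: 0 for (k,0) zigzag tubes, 30 for (k,k) armchair tubes.
        return math.degrees(math.atan2(math.sqrt(3) * m, 2 * n + m))

    def type_from_geometry(c_pm: float, alpha_deg: float) -> tuple[int, int]:
        # Invert the relations above; for a physical tube the results come out (very nearly) integer.
        a = math.radians(alpha_deg)
        m = c_pm / A_PM * 2.0 * math.sin(a) / math.sqrt(3.0)
        n = c_pm / A_PM * math.cos(a) - m / 2.0
        return round(n), round(m)

    print(diameter_pm(6, 4), tilt_angle_deg(6, 4))                           # about 683 pm and 23.4 degrees
    print(type_from_geometry(circumference_pm(6, 4), tilt_angle_deg(6, 4)))  # (6, 4)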
If n and m are too small, the structure described by the pair (n,m) will describe a molecule that cannot be reasonably called a "tube", and may not even be stable. For example, the structure theoretically described by the pair (1,0) (the limiting "zigzag" type) would be just a chain of carbons. That is a real molecule, the carbyne; which has some characteristics of nanotubes (such as orbital hybridization, high tensile strength, etc.) — but has no hollow space, and may not be obtainable as a condensed phase. The pair (2,0) would theoretically yield a chain of fused 4-cycles; and (1,1), the limiting "armchair" structure, would yield a chain of bi-connected 4-rings. These structures may not be realizable.
The thinnest carbon nanotube proper is the armchair structure with type (2,2), which has a diameter of 0.3 nm. This nanotube was grown inside a multi-walled carbon nanotube. The assignment of the carbon nanotube type was done by a combination of high-resolution transmission electron microscopy (HRTEM), Raman spectroscopy, and density functional theory (DFT) calculations.
The thinnest freestanding single-walled carbon nanotube is about 0.43 nm in diameter. Researchers suggested that it can be either (5,1) or (4,2) SWCNT, but the exact type of the carbon nanotube remains questionable. (3,3), (4,3), and (5,1) carbon nanotubes (all about 0.4 nm in diameter) were unambiguously identified using aberration-corrected high-resolution transmission electron microscopy inside double-walled CNTs.
The observation of the longest carbon nanotubes grown so far, around 0.5 metre (550 mm) long, was reported in 2013. These nanotubes were grown on silicon substrates using an improved chemical vapor deposition (CVD) method and represent electrically uniform arrays of single-walled carbon nanotubes.
The shortest carbon nanotube can be considered to be the organic compound cycloparaphenylene, which was synthesized in 2008 by Ramesh Jasti. Other small molecule carbon nanotubes have been synthesized since.
The highest density of CNTs was achieved in 2013, grown on a conductive titanium-coated copper surface that was coated with co-catalysts cobalt and molybdenum at a lower-than-typical temperature of 450 °C. The tubes averaged a height of 380 nm and a mass density of 1.6 g cm⁻³. The material showed ohmic conductivity (lowest resistance ~22 kΩ).
There is no consensus on some terms describing carbon nanotubes in the scientific literature: both "-wall" and "-walled" are used in combination with "single", "double", "triple", or "multi", and the letter C is often omitted in the abbreviation, for example, multi-walled carbon nanotube (MWNT). The International Organization for Standardization (ISO) uses single-wall or multi-wall in its documents.
Multi-walled nanotubes (MWNTs) consist of multiple rolled layers (concentric tubes) of graphene. There are two models that can be used to describe the structures of multi-walled nanotubes. In the Russian Doll model, sheets of graphite are arranged in concentric cylinders, e.g., a (0,8) single-walled nanotube (SWNT) within a larger (0,17) single-walled nanotube. In the Parchment model, a single sheet of graphite is rolled in around itself, resembling a scroll of parchment or a rolled newspaper. The interlayer distance in multi-walled nanotubes is close to the distance between graphene layers in graphite, approximately 3.4 Å. The Russian Doll structure is observed more commonly. Its individual shells can be described as SWNTs, which can be metallic or semiconducting. Because of statistical probability and restrictions on the relative diameters of the individual tubes, one of the shells, and thus the whole MWNT, is usually a zero-gap metal.
Double-walled carbon nanotubes (DWNTs) form a special class of nanotubes because their morphology and properties are similar to those of SWNTs but they are more resistant to attacks by chemicals. This is especially important when it is necessary to graft chemical functions to the surface of the nanotubes (functionalization) to add properties to the CNT. Covalent functionalization of SWNTs will break some C=C double bonds, leaving "holes" in the structure on the nanotube and thus modifying both its mechanical and electrical properties. In the case of DWNTs, only the outer wall is modified. DWNT synthesis on the gram-scale by the CCVD technique was first proposed in 2003 from the selective reduction of oxide solutions in methane and hydrogen.
The ability of inner shells to slide telescopically and their unique mechanical properties may permit the use of multi-walled nanotubes as the main movable arms in upcoming nanomechanical devices. The retraction force that arises during telescopic motion is caused by the Lennard-Jones interaction between shells, and its value is about 1.5 nN.
Junctions between two or more nanotubes have been widely discussed theoretically. Such junctions are quite frequently observed in samples prepared by arc discharge as well as by chemical vapor deposition. The electronic properties of such junctions were first considered theoretically by Lambin et al., who pointed out that a connection between a metallic tube and a semiconducting one would represent a nanoscale heterojunction. Such a junction could therefore form a component of a nanotube-based electronic circuit. The adjacent image shows a junction between two multiwalled nanotubes.
Junctions between nanotubes and graphene have been considered theoretically and studied experimentally. Nanotube-graphene junctions form the basis of pillared graphene, in which parallel graphene sheets are separated by short nanotubes. Pillared graphene represents a class of three-dimensional carbon nanotube architectures.
Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>100 nm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical-initiated thermal crosslinking method to fabricate macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano-structured pores, and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices, implants, and sensors.
Carbon nanobuds are a newly created material combining two previously discovered allotropes of carbon: carbon nanotubes and fullerenes. In this new material, fullerene-like "buds" are covalently bonded to the outer sidewalls of the underlying carbon nanotube. This hybrid material has useful properties of both fullerenes and carbon nanotubes. In particular, they have been found to be exceptionally good field emitters. In composite materials, the attached fullerene molecules may function as molecular anchors preventing slipping of the nanotubes, thus improving the composite's mechanical properties.
A carbon peapod is a novel hybrid carbon material which traps fullerene inside a carbon nanotube. It can possess interesting magnetic properties with heating and irradiation. It can also be applied as an oscillator during theoretical investigations and predictions.
In theory, a nanotorus is a carbon nanotube bent into a torus (doughnut shape). Nanotori are predicted to have many unique properties, such as magnetic moments 1000 times larger than that previously expected for certain specific radii. Properties such as magnetic moment, thermal stability, etc. vary widely depending on the radius of the torus and the radius of the tube.
Graphenated carbon nanotubes are a relatively new hybrid that combines graphitic foliates grown along the sidewalls of multiwalled or bamboo style CNTs. The foliate density can vary as a function of deposition conditions (e.g., temperature and time) with their structure ranging from a few layers of graphene (< 10) to thicker, more graphite-like. The fundamental advantage of an integrated graphene-CNT structure is the high surface area three-dimensional framework of the CNTs coupled with the high edge density of graphene. Depositing a high density of graphene foliates along the length of aligned CNTs can significantly increase the total charge capacity per unit of nominal area as compared to other carbon nanostructures.
Cup-stacked carbon nanotubes (CSCNTs) differ from other quasi-1D carbon structures, which normally behave as quasi-metallic conductors of electrons. CSCNTs exhibit semiconducting behavior because of the stacking microstructure of graphene layers.
Many properties of single-walled carbon nanotubes depend significantly on the (n,m) type, and this dependence is non-monotonic (see Kataura plot). In particular, the band gap can vary from zero to about 2 eV and the electrical conductivity can show metallic or semiconducting behavior.
Carbon nanotubes are the strongest and stiffest materials yet discovered in terms of tensile strength and elastic modulus. This strength results from the covalent sp² bonds formed between the individual carbon atoms. In 2000, a multiwalled carbon nanotube was tested to have a tensile strength of 63 gigapascals (9,100,000 psi). (For illustration, this translates into the ability to endure tension of a weight equivalent to 6,422 kilograms-force (62,980 N; 14,160 lbf) on a cable with cross-section of 1 square millimetre (0.0016 sq in)). Further studies, such as one conducted in 2008, revealed that individual CNT shells have strengths of up to ≈100 gigapascals (15,000,000 psi), which is in agreement with quantum/atomistic models. Because carbon nanotubes have a low density for a solid of 1.3 to 1.4 g/cm³, their specific strength of up to 48,000 kN·m·kg⁻¹ is the best of known materials, compared to high-carbon steel's 154 kN·m·kg⁻¹.
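As a small sanity-check sketch of the specific-strength figures quoted above (the 63 GPa value and the 1.3 g/cm³ nanotube density are the ones from this paragraph; the roughly 1.2 GPa strength and 7.8 g/cm³ density used for high-carbon steel are nominal assumptions, not from the text):

    def specific_strength_kN_m_per_kg(tensile_strength_pa: float, density_kg_per_m3: float) -> float:
        # Specific strength = strength / density, expressed in kN*m/kg (1 kN*m/kg = 1000 (N*m)/kg).
        return tensile_strength_pa / density_kg_per_m3 / 1e3

    print(specific_strength_kN_m_per_kg(63e9, 1300))   # MWCNT at 63 GPa: about 48,000 kN*m/kg
    print(specific_strength_kN_m_per_kg(1.2e9, 7800))  # nominal high-carbon steel: about 150 kN*m/kg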
Although the strength of individual CNT shells is extremely high, weak shear interactions between adjacent shells and tubes lead to significant reduction in the effective strength of multiwalled carbon nanotubes and carbon nanotube bundles down to only a few GPa. This limitation has been recently addressed by applying high-energy electron irradiation, which crosslinks inner shells and tubes, and effectively increases the strength of these materials to ≈60 GPa for multiwalled carbon nanotubes and ≈17 GPa for double-walled carbon nanotube bundles. CNTs are not nearly as strong under compression. Because of their hollow structure and high aspect ratio, they tend to undergo buckling when placed under compressive, torsional, or bending stress.
On the other hand, there was evidence that in the radial direction they are rather soft. The first transmission electron microscope observation of radial elasticity suggested that even van der Waals forces can deform two adjacent nanotubes. Later, nanoindentations with an atomic force microscope were performed by several groups to quantitatively measure the radial elasticity of multiwalled carbon nanotubes, and tapping/contact mode atomic force microscopy was also performed on single-walled carbon nanotubes. A Young's modulus on the order of several GPa showed that CNTs are in fact very soft in the radial direction.
It was reported in 2020 that, for CNT-filled polymer nanocomposites, loadings of 4 wt% and 6 wt% are the optimal concentrations, as they provide a good balance between mechanical properties and the resilience of those properties against UV exposure for the offshore umbilical sheathing layer.
Unlike graphene, which is a two-dimensional semimetal, carbon nanotubes are either metallic or semiconducting along the tubular axis. For a given (n,m) nanotube, if n = m, the nanotube is metallic; if n − m is a multiple of 3 and n ≠ m, then the nanotube is quasi-metallic with a very small band gap, otherwise the nanotube is a moderate semiconductor. Thus, all armchair (n = m) nanotubes are metallic, and nanotubes (6,4), (9,1), etc. are semiconducting. Carbon nanotubes are not semimetallic because the degenerate point (the point where the π [bonding] band meets the π* [anti-bonding] band, at which the energy goes to zero) is slightly shifted away from the K point in the Brillouin zone because of the curvature of the tube surface, causing hybridization between the σ* and π* anti-bonding bands, modifying the band dispersion.
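A rough illustrative sketch of this counting rule (the helper name is hypothetical, and the small-diameter curvature exceptions discussed in the next paragraph are ignored):

    def electronic_character(n: int, m: int) -> str:
        # Zone-folding rule: armchair tubes (n = m) are metallic; other tubes with n - m
        # divisible by 3 are quasi-metallic with a tiny curvature-induced gap; the rest
        # are moderate-gap semiconductors.
        if n == m:
            return "metallic"
        if (n - m) % 3 == 0:
            return "quasi-metallic"
        return "semiconducting"

    for pair in [(6, 6), (9, 0), (6, 4), (9, 1)]:
        print(pair, electronic_character(*pair))  # metallic, quasi-metallic, semiconducting, semiconducting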
The rule regarding metallic versus semiconductor behavior has exceptions because curvature effects in small-diameter tubes can strongly influence electrical properties. Thus, a (5,0) SWCNT that should be semiconducting in fact is metallic according to the calculations. Likewise, zigzag and chiral SWCNTs with small diameters that should be metallic have a finite gap (armchair nanotubes remain metallic). In theory, metallic nanotubes can carry an electric current density of 4 × 10⁹ A/cm², which is more than 1,000 times greater than those of metals such as copper, where for copper interconnects, current densities are limited by electromigration. Carbon nanotubes are thus being explored as interconnects and conductivity-enhancing components in composite materials, and many groups are attempting to commercialize highly conducting electrical wire assembled from individual carbon nanotubes. There are significant challenges to be overcome, however, such as undesired current saturation under voltage, and the much more resistive nanotube-to-nanotube junctions and impurities, all of which lower the electrical conductivity of the macroscopic nanotube wires by orders of magnitude, as compared to the conductivity of the individual nanotubes.
Because of its nanoscale cross-section, electrons propagate only along the tube's axis. As a result, carbon nanotubes are frequently referred to as one-dimensional conductors. The maximum electrical conductance of a single-walled carbon nanotube is 2G0, where G0 = 2e²/h is the conductance of a single ballistic quantum channel.
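A minimal numerical sketch of that conductance limit, with the elementary charge and Planck constant hard-coded to their exact SI values:

    E = 1.602176634e-19  # elementary charge in coulombs (exact SI value)
    H = 6.62607015e-34   # Planck constant in joule-seconds (exact SI value)

    G0 = 2 * E**2 / H    # conductance quantum, about 77.5 microsiemens
    g_max = 2 * G0       # two conducting channels for a single-walled nanotube

    print(f"G0  = {G0 * 1e6:.1f} uS")
    print(f"2G0 = {g_max * 1e6:.1f} uS (~{1.0 / g_max / 1e3:.2f} kOhm minimum two-terminal resistance)")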
Because of the role of the π-electron system in determining the electronic properties of graphene, doping in carbon nanotubes differs from that of bulk crystalline semiconductors from the same group of the periodic table (e.g., silicon). Graphitic substitution of carbon atoms in the nanotube wall by boron or nitrogen dopants leads to p-type and n-type behavior, respectively, as would be expected in silicon. However, some non-substitutional (intercalated or adsorbed) dopants introduced into a carbon nanotube, such as alkali metals and electron-rich metallocenes, result in n-type conduction because they donate electrons to the π-electron system of the nanotube. By contrast, π-electron acceptors such as FeCl3 or electron-deficient metallocenes function as p-type dopants because they draw π-electrons away from the top of the valence band.
Intrinsic superconductivity has been reported, although other experiments found no evidence of this, leaving the claim a subject of debate.
In 2021, Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT, published department findings on the use of carbon nanotubes to create an electric current. By immersing the structures in an organic solvent, the liquid drew electrons out of the carbon particles. Strano was quoted as saying, "This allows you to do electrochemistry, but with no wires," and represents a significant breakthrough in the technology. Future applications include powering micro- or nanoscale robots, as well as driving alcohol oxidation reactions, which are important in the chemicals industry.
Crystallographic defects also affect the tube's electrical properties. A common result is lowered conductivity through the defective region of the tube. A defect in metallic armchair-type tubes (which can conduct electricity) can cause the surrounding region to become semiconducting, and single monatomic vacancies induce magnetic properties.
Carbon nanotubes have useful absorption, photoluminescence (fluorescence), and Raman spectroscopy properties. Spectroscopic methods offer the possibility of quick and non-destructive characterization of relatively large amounts of carbon nanotubes. There is a strong demand for such characterization from the industrial point of view: numerous parameters of nanotube synthesis can be changed, intentionally or unintentionally, to alter the nanotube quality, such as the non-tubular carbon content, structure (chirality) of the produced nanotubes, and structural defects. These features then determine nearly all other significant optical, mechanical, and electrical properties.
Carbon nanotube optical properties have been explored for use in applications such as light-emitting diodes (LEDs) and photo-detectors, and devices based on a single nanotube have been produced in the lab. Their unique feature is not the efficiency, which is still relatively low, but the narrow selectivity in the wavelength of emission and detection of light and the possibility of its fine tuning through the nanotube structure. In addition, bolometer and optoelectronic memory devices have been realised on ensembles of single-walled carbon nanotubes. Nanotube fluorescence has been investigated for the purposes of imaging and sensing in biomedical applications.
All nanotubes are expected to be very good thermal conductors along the tube, exhibiting a property known as "ballistic conduction", but good insulators lateral to the tube axis. Measurements show that an individual SWNT has a room-temperature thermal conductivity along its axis of about 3500 W·m⁻¹·K⁻¹; compare this to copper, a metal well known for its good thermal conductivity, which transmits 385 W·m⁻¹·K⁻¹. An individual SWNT has a room-temperature thermal conductivity lateral to its axis (in the radial direction) of about 1.52 W·m⁻¹·K⁻¹, which is about as thermally conductive as soil. Macroscopic assemblies of nanotubes such as films or fibres have reached up to 1500 W·m⁻¹·K⁻¹ so far. Networks composed of nanotubes demonstrate different values of thermal conductivity, from the level of thermal insulation with a thermal conductivity of 0.1 W·m⁻¹·K⁻¹ to such high values. The value depends on the contribution of impurities, misalignment, and other factors to the thermal resistance of the system. The temperature stability of carbon nanotubes is estimated to be up to 2800 °C in vacuum and about 750 °C in air.
Crystallographic defects strongly affect the tube's thermal properties. Such defects lead to phonon scattering, which in turn increases the relaxation rate of the phonons. This reduces the mean free path and reduces the thermal conductivity of nanotube structures. Phonon transport simulations indicate that substitutional defects such as nitrogen or boron will primarily lead to scattering of high-frequency optical phonons. However, larger-scale defects such as Stone–Wales defects cause phonon scattering over a wide range of frequencies, leading to a greater reduction in thermal conductivity.
Techniques have been developed to produce nanotubes in sizeable quantities, including arc discharge, laser ablation, chemical vapor deposition (CVD) and high-pressure carbon monoxide disproportionation (HiPCO). Among these, arc discharge and laser ablation are batch processes, CVD can be used for both batch and continuous processes, and HiPCO is a continuous gas-phase process. Most of these processes take place in a vacuum or with process gases. The CVD growth method is popular, as it yields high quantities and offers a degree of control over diameter, length and morphology. Using particulate catalysts, large quantities of nanotubes can be synthesized by these methods, and industrialisation is well on its way, with several CNT and CNT fiber factories in the world. One problem of CVD processes is the high variability in the nanotubes' characteristics. Advances in catalysis and continuous growth, such as the HiPCO process, are making CNTs more commercially viable. The HiPCO process helps in producing high-purity single-walled carbon nanotubes in higher quantities. The HiPCO reactor operates at high temperatures (900–1100 °C) and high pressures (~30–50 bar). It uses carbon monoxide as the carbon source and iron pentacarbonyl or nickel tetracarbonyl as a catalyst; these catalysts provide nucleation sites for the nanotubes to grow, while cheaper iron-based catalysts such as ferrocene can be used for the CVD process.
Vertically aligned carbon nanotube arrays are also grown by thermal chemical vapor deposition. A substrate (quartz, silicon, stainless steel, carbon fibers, etc.) is coated with a catalytic metal (Fe, Co, Ni) layer. Typically that layer is iron and is deposited via sputtering to a thickness of 1–5 nm. A 10–50 nm underlayer of alumina is often also put down on the substrate first. This imparts controllable wetting and good interfacial properties. When the substrate is heated to the growth temperature (~600 to 850 °C), the continuous iron film breaks up into small islands, with each island then nucleating a carbon nanotube. The sputtered thickness controls the island size and this in turn determines the nanotube diameter. Thinner iron layers drive down the diameter of the islands and drive down the diameter of the nanotubes grown. The amount of time the metal islands can sit at the growth temperature is limited, as they are mobile and can merge into larger (but fewer) islands. Annealing at the growth temperature reduces the site density (number of CNTs per mm²) while increasing the catalyst diameter.
The as-prepared carbon nanotubes always have impurities such as other forms of carbon (amorphous carbon, fullerene, etc.) and non-carbonaceous impurities (metal used for catalyst). These impurities need to be removed to make use of the carbon nanotubes in applications.
CNTs are known to have weak dispersibility in many solvents such as water as a consequence of strong intermolecular π–π interactions. This hinders the processability of CNTs in industrial applications. To tackle the issue, various techniques have been developed to modify the surface of CNTs to improve their stability and solubility in water. This enhances the processing and manipulation of insoluble CNTs, rendering them useful for synthesizing innovative CNT nanofluids with impressive properties that are tunable for a wide range of applications. Chemical routes such as covalent functionalization have been studied extensively, which involves the oxidation of CNTs via strong acids (e.g. sulfuric acid, nitric acid, or a mixture of both) to introduce carboxylic groups onto the surface of the CNTs as the final product or for further modification by esterification or amination. Free radical grafting is a promising technique among covalent functionalization methods, in which alkyl or aryl peroxides, substituted anilines, and diazonium salts are used as the starting agents.
Free radical grafting of macromolecules (as the functional group) onto the surface of CNTs can improve the solubility of CNTs compared to common acid treatments which involve the attachment of small molecules such as hydroxyl onto the surface of CNTs. The solubility of CNTs can be improved significantly by free-radical grafting because the large functional molecules facilitate the dispersion of CNTs in a variety of solvents even at a low degree of functionalization. Recently an innovative environmentally friendly approach has been developed for the covalent functionalization of multi-walled carbon nanotubes (MWCNTs) using clove buds. This approach is innovative and green because it does not use toxic and hazardous acids which are typically used in common carbon nanomaterial functionalization procedures. The MWCNTs are functionalized in one pot using a free radical grafting reaction. The clove-functionalized MWCNTs are then dispersed in water producing a highly stable multi-walled carbon nanotube aqueous suspension (nanofluids).
Carbon nanotubes are modelled in a similar manner to traditional composites, in which a reinforcement phase is surrounded by a matrix phase. Ideal models such as cylindrical, hexagonal and square models are common. The size of the micromechanics model depends strongly on the studied mechanical properties. The concept of the representative volume element (RVE) is used to determine the appropriate size and configuration of the computer model to replicate the actual behavior of the CNT-reinforced nanocomposite. Depending on the material property of interest (thermal, electrical, modulus, creep), one RVE might predict the property better than the alternatives. While the implementation of ideal models is computationally efficient, they do not represent the microstructural features observed in scanning electron microscopy of actual nanocomposites. For more realistic modeling, computer models are also generated to incorporate variability such as waviness, orientation and agglomeration of multiwall or single-wall carbon nanotubes.
There are many metrology standards and reference materials available for carbon nanotubes.
For single-wall carbon nanotubes, ISO/TS 10868 describes a measurement method for the diameter, purity, and fraction of metallic nanotubes through optical absorption spectroscopy, while ISO/TS 10797 and ISO/TS 10798 establish methods to characterize the morphology and elemental composition of single-wall carbon nanotubes, using transmission electron microscopy and scanning electron microscopy respectively, coupled with energy dispersive X-ray spectrometry analysis.
NIST SRM 2483 is a soot of single-wall carbon nanotubes used as a reference material for elemental analysis, and was characterized using thermogravimetric analysis, prompt gamma activation analysis, induced neutron activation analysis, inductively coupled plasma mass spectroscopy, resonant Raman scattering, UV-visible-near infrared fluorescence spectroscopy and absorption spectroscopy, scanning electron microscopy, and transmission electron microscopy. The Canadian National Research Council also offers a certified reference material SWCNT-1 for elemental analysis using neutron activation analysis and inductively coupled plasma mass spectroscopy. NIST RM 8281 is a mixture of three lengths of single-wall carbon nanotube.
For multiwall carbon nanotubes, ISO/TR 10929 identifies the basic properties and the content of impurities, while ISO/TS 11888 describes morphology using scanning electron microscopy, transmission electron microscopy, viscometry, and light scattering analysis. ISO/TS 10798 is also valid for multiwall carbon nanotubes.
Carbon nanotubes can be functionalized to attain desired properties that can be used in a wide variety of applications. The two main methods of carbon nanotube functionalization are covalent and non-covalent modifications. Because of their apparent hydrophobic nature, carbon nanotubes tend to agglomerate hindering their dispersion in solvents or viscous polymer melts. The resulting nanotube bundles or aggregates reduce the mechanical performance of the final composite. The surface of the carbon nanotubes can be modified to reduce the hydrophobicity and improve interfacial adhesion to a bulk polymer through chemical attachment.
The surface of carbon nanotubes can be chemically modified by coating it with spinel nanoparticles via hydrothermal synthesis, and the resulting material can be used for water oxidation purposes.
In addition, the surface of carbon nanotubes can be fluorinated or halofluorinated by heating while in contact with a fluoroorganic substance, thereby forming partially fluorinated carbons (so called Fluocar materials) with grafted (halo)fluoroalkyl functionality.
Carbon nanotubes are currently used in multiple industrial and consumer applications. These include battery components, polymer composites (to improve the mechanical, thermal and electrical properties of the bulk product), and highly absorptive black paint. Many other applications are under development, including field-effect transistors for electronics, high-strength fabrics, and biosensors for biomedical and agricultural applications.
Applications of nanotubes in development in academia and industry include:
Carbon nanotubes can serve as additives to various structural materials. For instance, nanotubes form a tiny portion of the material(s) in some (primarily carbon fiber) baseball bats, golf clubs, car parts, or Damascus steel.
IBM expected carbon nanotube transistors to be used in integrated circuits by 2020.
The strength and flexibility of carbon nanotubes make them of potential use in controlling other nanoscale structures, which suggests they will have an important role in nanotechnology engineering. The highest tensile strength of an individual multi-walled carbon nanotube has been tested to be 63 GPa. Carbon nanotubes were found in Damascus steel from the 17th century, possibly helping to account for the legendary strength of the swords made of it. Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>1 mm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical-initiated thermal crosslinking method to fabricate macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano-structured pores, and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices and implants.
CNTs are potential candidates for future via and wire material in nano-scale VLSI circuits. Eliminating electromigration reliability concerns that plague today's Cu interconnects, isolated (single and multi-wall) CNTs can carry current densities in excess of 1000 MA/cm² without electromigration damage.
Single-walled nanotubes are likely candidates for miniaturizing electronics. The most basic building block of these systems is an electric wire, and SWNTs with diameters on the order of a nanometre can be excellent conductors. One useful application of SWNTs is in the development of the first intermolecular field-effect transistors (FETs). The first intermolecular logic gate using SWCNT FETs was made in 2001. A logic gate requires both a p-FET and an n-FET. Because SWNTs are p-FETs when exposed to oxygen and n-FETs otherwise, it is possible to expose half of an SWNT to oxygen and protect the other half from it. The resulting SWNT acts as a NOT logic gate with both p- and n-type FETs in the same molecule.
Large quantities of pure CNTs can be made into a freestanding sheet or film by the surface-engineered tape-casting (SETC) fabrication technique, which is a scalable method to fabricate flexible and foldable sheets with superior properties. Another reported form factor is CNT fiber (a.k.a. filament) by wet spinning. The fiber is either directly spun from the synthesis pot or spun from pre-made dissolved CNTs. Individual fibers can be turned into a yarn. Apart from its strength and flexibility, the main advantage is making an electrically conducting yarn. The electronic properties of individual CNT fibers (i.e. bundles of individual CNTs) are governed by the two-dimensional structure of CNTs. The fibers were measured to have a resistivity only one order of magnitude higher than metallic conductors at 300 K. By further optimizing the CNTs and CNT fibers, CNT fibers with improved electrical properties could be developed.
CNT-based yarns are suitable for applications in energy and electrochemical water treatment when coated with an ion-exchange membrane. Also, CNT-based yarns could replace copper as a winding material. Pyrhönen et al. (2015) have built a motor using CNT winding.
The National Institute for Occupational Safety and Health (NIOSH) is the leading United States federal agency conducting research and providing guidance on the occupational safety and health implications and applications of nanomaterials. Early scientific studies have indicated that nanoscale particles may pose a greater health risk than bulk materials due to a relative increase in surface area per unit mass. Increases in the length and diameter of CNTs are correlated with increased toxicity and pathological alterations in the lung. The biological interactions of nanotubes are not well understood, and the field is open to continued toxicological studies. It is often difficult to separate confounding factors, and since carbon is relatively biologically inert, some of the toxicity attributed to carbon nanotubes may be instead due to residual metal catalyst contamination. In previous studies, only Mitsui-7 was reliably demonstrated to be carcinogenic, although for unclear/unknown reasons. Unlike many common mineral fibers (such as asbestos), most SWCNTs and MWCNTs do not fit the size and aspect-ratio criteria to be classified as respirable fibers. In 2013, given that the long-term health effects have not yet been measured, NIOSH published a Current Intelligence Bulletin detailing the potential hazards and recommended exposure limit for carbon nanotubes and fibers. The U.S. National Institute for Occupational Safety and Health has determined non-regulatory recommended exposure limits (RELs) of 1 μg/m³ for carbon nanotubes and carbon nanofibers as background-corrected elemental carbon as an 8-hour time-weighted average (TWA) respirable mass concentration. Although CNTs caused pulmonary inflammation and toxicity in mice, exposure to aerosols generated from sanding of composites containing polymer-coated MWCNTs, representative of the actual end-product, did not exert such toxicity.
As of October 2016, single wall carbon nanotubes have been registered through the European Union's Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) regulations, based on evaluation of the potentially hazardous properties of SWCNT. Based on this registration, SWCNT commercialization is allowed in the EU up to 10 metric tons. Currently, the type of SWCNT registered through REACH is limited to the specific type of single wall carbon nanotubes manufactured by OCSiAl, which submitted the application.
The true identity of the discoverers of carbon nanotubes is a subject of some controversy. A 2006 editorial written by Marc Monthioux and Vladimir Kuznetsov in the journal Carbon described the origin of the carbon nanotube. A large percentage of academic and popular literature attributes the discovery of hollow, nanometre-size tubes composed of graphitic carbon to Sumio Iijima of NEC in 1991. His paper initiated a flurry of excitement and could be credited with inspiring the many scientists now studying applications of carbon nanotubes. Though Iijima has been given much of the credit for discovering carbon nanotubes, it turns out that the timeline of carbon nanotubes goes back much further than 1991.
In 1952, L. V. Radushkevich and V. M. Lukyanovich published clear images of 50 nanometre diameter tubes made of carbon in the Journal of Physical Chemistry Of Russia. This discovery was largely unnoticed, as the article was published in Russian, and Western scientists' access to Soviet press was limited during the Cold War. Monthioux and Kuznetsov mentioned in their Carbon editorial:
The fact is, Radushkevich and Lukyanovich [...] should be credited for the discovery that carbon filaments could be hollow and have a nanometre-size diameter, that is to say for the discovery of carbon nanotubes.
In 1976, Morinobu Endo of CNRS observed hollow tubes of rolled up graphite sheets synthesised by a chemical vapour-growth technique. The first specimens observed would later come to be known as single-walled carbon nanotubes (SWNTs). Endo, in his early review of vapor-phase-grown carbon fibers (VPGCF), also reminded us that he had observed a hollow tube, linearly extended with parallel carbon layer faces near the fiber core. This appears to be the observation of multi-walled carbon nanotubes at the center of the fiber. The mass-produced MWCNTs today are strongly related to the VPGCF developed by Endo. In fact, they call it the "Endo-process", out of respect for his early work and patents. In 1979, John Abrahamson presented evidence of carbon nanotubes at the 14th Biennial Conference of Carbon at Pennsylvania State University. The conference paper described carbon nanotubes as carbon fibers that were produced on carbon anodes during arc discharge. A characterization of these fibers was given, as well as hypotheses for their growth in a nitrogen atmosphere at low pressures.
In 1981, a group of Soviet scientists published the results of chemical and structural characterization of carbon nanoparticles produced by a thermocatalytic disproportionation of carbon monoxide. Using TEM images and XRD patterns, the authors suggested that their "carbon multi-layer tubular crystals" were formed by rolling graphene layers into cylinders. They speculated that via this rolling, many different arrangements of graphene hexagonal nets are possible. They suggested two such possible arrangements: circular arrangement (armchair nanotube); and a spiral, helical arrangement (chiral tube).
In 1987, Howard G. Tennent of Hyperion Catalysis was issued a U.S. patent for the production of "cylindrical discrete carbon fibrils" with a "constant diameter between about 3.5 and about 70 nanometers..., length 10 times the diameter, and an outer region of multiple essentially continuous layers of ordered carbon atoms and a distinct inner core...."
Helping to create the initial excitement associated with carbon nanotubes were Iijima's 1991 discovery of multi-walled carbon nanotubes in the insoluble material of arc-burned graphite rods; and Mintmire, Dunlap, and White's independent prediction that if single-walled carbon nanotubes could be made, they would exhibit remarkable conducting properties. Nanotube research accelerated greatly following the independent discoveries by Iijima and Ichihashi at NEC and Bethune et al. at IBM of methods to specifically produce single-walled carbon nanotubes by adding transition-metal catalysts to the carbon in an arc discharge. Thess et al. refined this catalytic method by vaporizing the carbon/transition-metal combination in a high temperature furnace, which greatly improved the yield and purity of the SWNTs and made them widely available for characterization and application experiments. The arc discharge technique, well known to produce the famed Buckminsterfullerene on a preparative scale, thus played a role in the discoveries of both multi- and single-wall nanotubes, extending the run of serendipitous discoveries relating to fullerenes. The discovery of nanotubes remains a contentious issue. Many believe that Iijima's report in 1991 is of particular importance because it brought carbon nanotubes into the awareness of the scientific community as a whole.
In 2020, during archaeological excavation of Keezhadi in Tamil Nadu, India, ~2600-year-old pottery was discovered whose coatings appear to contain carbon nanotubes. The robust mechanical properties of the nanotubes are partially why the coatings have lasted for so many years, say the scientists.
This article incorporates public domain text from National Institute of Environmental Health Sciences (NIEHS) as quoted. | [
{
"paragraph_id": 0,
"text": "A carbon nanotube (CNT) is a tube made of carbon with a diameter in the nanometer range (nanoscale). They are one of the allotropes of carbon.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Single-walled carbon nanotubes (SWCNTs) have diameters around 0.5–2.0 nanometers, about 100,000 times smaller than the width of a human hair. They can be idealized as cutouts from a two-dimensional graphene sheet rolled up to form a hollow cylinder.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Multi-walled carbon nanotubes (MWCNTs) consist of nested single-wall carbon nanotubes in a nested, tube-in-tube structure. Double- and triple-walled carbon nanotubes are special cases of MWCNT.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Carbon nanotubes can exhibit remarkable properties, such as exceptional tensile strength and thermal conductivity because of their nanostructure and strength of the bonds between carbon atoms. Some SWCNT structures exhibit high electrical conductivity while others are semiconductors. In addition, carbon nanotubes can be chemically modified. These properties are expected to be valuable in many areas of technology, such as electronics, optics, composite materials (replacing or complementing carbon fibers), nanotechnology, and other applications of materials science.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The predicted properties for SWCNTs were tantalizing, but a path to synthesizing them was lacking until 1993, when Iijima and Ichihashi at NEC and Bethune et al. at IBM independently discovered that co-vaporizing carbon and transition metals such as iron and cobalt could specifically catalyze SWCNT formation. These discoveries triggered research that succeeded in greatly increasing the efficiency of the catalytic production technique, and led to an explosion of work to characterize and find applications for SWCNTs.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The structure of an ideal (infinitely long) single-walled carbon nanotube is that of a regular hexagonal lattice drawn on an infinite cylindrical surface, whose vertices are the positions of the carbon atoms. Since the length of the carbon-carbon bonds is fairly fixed, there are constraints on the diameter of the cylinder and the arrangement of the atoms on it.",
"title": "Structure of SWNTs"
},
{
"paragraph_id": 6,
"text": "In the study of nanotubes, one defines a zigzag path on a graphene-like lattice as a path that turns 60 degrees, alternating left and right, after stepping through each bond. It is also conventional to define an armchair path as one that makes two left turns of 60 degrees followed by two right turns every four steps. On some carbon nanotubes, there is a closed zigzag path that goes around the tube. One says that the tube is of the zigzag type or configuration, or simply is a zigzag nanotube. If the tube is instead encircled by a closed armchair path, it is said to be of the armchair type, or an armchair nanotube. An infinite nanotube that is of the zigzag (or armchair) type consists entirely of closed zigzag (or armchair) paths, connected to each other.",
"title": "Structure of SWNTs"
},
{
"paragraph_id": 7,
"text": "The zigzag and armchair configurations are not the only structures that a single-walled nanotube can have. To describe the structure of a general infinitely long tube, one should imagine it being sliced open by a cut parallel to its axis, that goes through some atom A, and then unrolled flat on the plane, so that its atoms and bonds coincide with those of an imaginary graphene sheet—more precisely, with an infinitely long strip of that sheet. The two halves of the atom A will end up on opposite edges of the strip, over two atoms A1 and A2 of the graphene. The line from A1 to A2 will correspond to the circumference of the cylinder that went through the atom A, and will be perpendicular to the edges of the strip. In the graphene lattice, the atoms can be split into two classes, depending on the directions of their three bonds. Half the atoms have their three bonds directed the same way, and half have their three bonds rotated 180 degrees relative to the first half. The atoms A1 and A2, which correspond to the same atom A on the cylinder, must be in the same class. It follows that the circumference of the tube and the angle of the strip are not arbitrary, because they are constrained to the lengths and directions of the lines that connect pairs of graphene atoms in the same class.",
"title": "Structure of SWNTs"
},
{
"paragraph_id": 8,
"text": "Let u and v be two linearly independent vectors that connect the graphene atom A1 to two of its nearest atoms with the same bond directions. That is, if one numbers consecutive carbons around a graphene cell with C1 to C6, then u can be the vector from C1 to C3, and v be the vector from C1 to C5. Then, for any other atom A2 with same class as A1, the vector from A1 to A2 can be written as a linear combination n u + m v, where n and m are integers. And, conversely, each pair of integers (n,m) defines a possible position for A2. Given n and m, one can reverse this theoretical operation by drawing the vector w on the graphene lattice, cutting a strip of the latter along lines perpendicular to w through its endpoints A1 and A2, and rolling the strip into a cylinder so as to bring those two points together. If this construction is applied to a pair (k,0), the result is a zigzag nanotube, with closed zigzag paths of 2k atoms. If it is applied to a pair (k,k), one obtains an armchair tube, with closed armchair paths of 4k atoms.",
"title": "Structure of SWNTs"
},
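The construction just described can be sketched in a few lines of code. This is an illustrative sketch only: the lattice constant of 246 pm (the common length of u and v) and the 60-degree angle between them are assumed values not stated in this paragraph, and the function names are made up for the example.

```python
import math

A_PM = 246.0  # assumed graphene lattice constant |u| = |v|, in picometres

# Basis vectors u and v connecting same-class graphene atoms, 60 degrees apart.
U = (A_PM, 0.0)
V = (A_PM * 0.5, A_PM * math.sqrt(3) / 2)

def chiral_vector(n: int, m: int) -> tuple[float, float]:
    """Return w = n*u + m*v in Cartesian picometre coordinates."""
    return (n * U[0] + m * V[0], n * U[1] + m * V[1])

def classify(n: int, m: int) -> str:
    """Name the special configurations described in the text."""
    if m == 0:
        return "zigzag"    # (k,0): closed zigzag path of 2k atoms
    if n == m:
        return "armchair"  # (k,k): closed armchair path of 4k atoms
    return "chiral"

if __name__ == "__main__":
    for n, m in [(10, 0), (6, 6), (6, 5)]:
        w = chiral_vector(n, m)
        print(n, m, classify(n, m), f"|w| = {math.hypot(*w):.1f} pm")
```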
{
"paragraph_id": 9,
"text": "The structure of the nanotube is not changed if the strip is rotated by 60 degrees clockwise around A1 before applying the hypothetical reconstruction above. Such a rotation changes the corresponding pair (n,m) to the pair (−2m,n+m). It follows that many possible positions of A2 relative to A1 — that is, many pairs (n,m) — correspond to the same arrangement of atoms on the nanotube. That is the case, for example, of the six pairs (1,2), (−2,3), (−3,1), (−1,−2), (2,−3), and (3,−1). In particular, the pairs (k,0) and (0,k) describe the same nanotube geometry. These redundancies can be avoided by considering only pairs (n,m) such that n > 0 and m ≥ 0; that is, where the direction of the vector w lies between those of u (inclusive) and v (exclusive). It can be verified that every nanotube has exactly one pair (n,m) that satisfies those conditions, which is called the tube's type. Conversely, for every type there is a hypothetical nanotube. In fact, two nanotubes have the same type if and only if one can be conceptually rotated and translated so as to match the other exactly. Instead of the type (n,m), the structure of a carbon nanotube can be specified by giving the length of the vector w (that is, the circumference of the nanotube), and the angle α between the directions of u and w, may range from 0 (inclusive) to 60 degrees clockwise (exclusive). If the diagram is drawn with u horizontal, the latter is the tilt of the strip away from the vertical.",
"title": "Types"
},
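The 60-degree equivalence described above can be checked mechanically. The helper below is a hypothetical sketch (not from any library); it applies the rotation map (n, m) → (−m, n + m) until the pair falls into the canonical range n > 0, m ≥ 0, and verifies that the six pairs listed in the text collapse to a single type.

```python
def rotate60(n: int, m: int) -> tuple[int, int]:
    """One 60-degree rotation of the unrolled strip: (n, m) -> (-m, n + m)."""
    return -m, n + m

def canonical_type(n: int, m: int) -> tuple[int, int]:
    """Return the unique equivalent pair with n > 0 and m >= 0."""
    if n == 0 and m == 0:
        raise ValueError("(0,0) does not describe a tube")
    for _ in range(6):                 # the rotation has order 6
        if n > 0 and m >= 0:
            return n, m
        n, m = rotate60(n, m)
    raise AssertionError("unreachable for nonzero (n, m)")

# The six equivalent pairs from the text all reduce to (1, 2):
assert {canonical_type(*p) for p in
        [(1, 2), (-2, 3), (-3, 1), (-1, -2), (2, -3), (3, -1)]} == {(1, 2)}
```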
{
"paragraph_id": 10,
"text": "A nanotube is chiral if it has type (n,m), with m > 0 and m ≠ n; then its enantiomer (mirror image) has type (m,n), which is different from (n,m). This operation corresponds to mirroring the unrolled strip about the line L through A1 that makes an angle of 30 degrees clockwise from the direction of the u vector (that is, with the direction of the vector u+v). The only types of nanotubes that are achiral are the (k,0) \"zigzag\" tubes and the (k,k) \"armchair\" tubes. If two enantiomers are to be considered the same structure, then one may consider only types (n,m) with 0 ≤ m ≤ n and n > 0. Then the angle α between u and w, which may range from 0 to 30 degrees (inclusive both), is called the \"chiral angle\" of the nanotube.",
"title": "Types"
},
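The chirality test stated above amounts to a one-line check; the following sketch (illustrative names, canonical (n, m) with n > 0 and m ≥ 0 assumed) makes it explicit.

```python
def is_chiral(n: int, m: int) -> bool:
    """A canonical type (n, m) is chiral unless it is zigzag (m == 0) or armchair (m == n)."""
    return m != 0 and m != n

def enantiomer(n: int, m: int) -> tuple[int, int]:
    """Mirror image of an (n, m) tube; it is a distinct structure only when the tube is chiral."""
    return m, n

print(is_chiral(6, 5), enantiomer(6, 5))   # True (5, 6)
print(is_chiral(7, 0), enantiomer(7, 0))   # False (0, 7) -- describes the same tube as (7, 0)
```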
{
"paragraph_id": 11,
"text": "From n and m one can also compute the circumference c, which is the length of the vector w, which turns out to be:",
"title": "Types"
},
{
"paragraph_id": 12,
"text": "in picometres. The diameter d {\\displaystyle d} of the tube is then c / π {\\displaystyle c/\\pi } , that is",
"title": "Types"
},
{
"paragraph_id": 13,
"text": "also in picometres. (These formulas are only approximate, especially for small n and m where the bonds are strained; and they do not take into account the thickness of the wall.)",
"title": "Types"
},
{
"paragraph_id": 14,
"text": "The tilt angle α between u and w and the circumference c are related to the type indices n and m by:",
"title": "Types"
},
{
"paragraph_id": 15,
"text": "where arg(x,y) is the clockwise angle between the X-axis and the vector (x,y); a function that is available in many programming languages as atan2(y,x). Conversely, given c and α, one can get the type (n,m) by the formulas:",
"title": "Types"
},
{
"paragraph_id": 16,
"text": "which must evaluate to integers.",
"title": "Types"
},
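Since the displayed formulas in the preceding paragraphs did not survive extraction, here is a hedged reconstruction of the stated relations, derived from the u, v geometry defined earlier with an assumed lattice constant of 246 pm; the constants are approximate and the clockwise-versus-counterclockwise sign convention is glossed over.

```python
import math

A_PM = 246.0  # assumed graphene lattice constant |u| = |v|, in picometres

def circumference_pm(n: int, m: int) -> float:
    """c = |n*u + m*v| = a * sqrt(n^2 + n*m + m^2), in picometres."""
    return A_PM * math.sqrt(n * n + n * m + m * m)

def diameter_pm(n: int, m: int) -> float:
    """d = c / pi (approximate; ignores wall thickness and bond strain)."""
    return circumference_pm(n, m) / math.pi

def chiral_angle_deg(n: int, m: int) -> float:
    """Angle between u and w, i.e. arg(2n + m, m*sqrt(3)); 0 for zigzag, 30 for armchair."""
    return math.degrees(math.atan2(m * math.sqrt(3), 2 * n + m))

def type_from_c_alpha(c_pm: float, alpha_deg: float) -> tuple[int, int]:
    """Invert the relations: recover (n, m) from circumference and angle (must round to integers)."""
    a = math.radians(alpha_deg)
    m = 2 * c_pm * math.sin(a) / (A_PM * math.sqrt(3))
    n = c_pm * math.cos(a) / A_PM - m / 2
    return round(n), round(m)

if __name__ == "__main__":
    n, m = 6, 5
    c, alpha = circumference_pm(n, m), chiral_angle_deg(n, m)
    print(f"(6,5): d ≈ {diameter_pm(n, m):.0f} pm, alpha ≈ {alpha:.1f}°")
    assert type_from_c_alpha(c, alpha) == (n, m)   # round trip back to the type
```

With these helpers, a (6,5) tube comes out at roughly 0.75 nm in diameter with a chiral angle of about 27 degrees, and zigzag and armchair tubes give 0 and 30 degrees respectively, consistent with the definitions above.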
{
"paragraph_id": 17,
"text": "If n and m are too small, the structure described by the pair (n,m) will describe a molecule that cannot be reasonably called a \"tube\", and may not even be stable. For example, the structure theoretically described by the pair (1,0) (the limiting \"zigzag\" type) would be just a chain of carbons. That is a real molecule, the carbyne; which has some characteristics of nanotubes (such as orbital hybridization, high tensile strength, etc.) — but has no hollow space, and may not be obtainable as a condensed phase. The pair (2,0) would theoretically yield a chain of fused 4-cycles; and (1,1), the limiting \"armchair\" structure, would yield a chain of bi-connected 4-rings. These structures may not be realizable.",
"title": "Physical limits"
},
{
"paragraph_id": 18,
"text": "The thinnest carbon nanotube proper is the armchair structure with type (2,2), which has a diameter of 0.3 nm. This nanotube was grown inside a multi-walled carbon nanotube. Assigning of the carbon nanotube type was done by a combination of high-resolution transmission electron microscopy (HRTEM), Raman spectroscopy, and density functional theory (DFT) calculations.",
"title": "Physical limits"
},
{
"paragraph_id": 19,
"text": "The thinnest freestanding single-walled carbon nanotube is about 0.43 nm in diameter. Researchers suggested that it can be either (5,1) or (4,2) SWCNT, but the exact type of the carbon nanotube remains questionable. (3,3), (4,3), and (5,1) carbon nanotubes (all about 0.4 nm in diameter) were unambiguously identified using aberration-corrected high-resolution transmission electron microscopy inside double-walled CNTs.",
"title": "Physical limits"
},
{
"paragraph_id": 20,
"text": "The observation of the longest carbon nanotubes grown so far, around 0.5 metre (550 mm) long, was reported in 2013. These nanotubes were grown on silicon substrates using an improved chemical vapor deposition (CVD) method and represent electrically uniform arrays of single-walled carbon nanotubes.",
"title": "Physical limits"
},
{
"paragraph_id": 21,
"text": "The shortest carbon nanotube can be considered to be the organic compound cycloparaphenylene, which was synthesized in 2008 by Ramesh Jasti. Other small molecule carbon nanotubes have been synthesized since.",
"title": "Physical limits"
},
{
"paragraph_id": 22,
"text": "The highest density of CNTs was achieved in 2013, grown on a conductive titanium-coated copper surface that was coated with co-catalysts cobalt and molybdenum at lower than typical temperatures of 450 °C. The tubes averaged a height of 380 nm and a mass density of 1.6 g cm. The material showed ohmic conductivity (lowest resistance ~22 kΩ).",
"title": "Physical limits"
},
{
"paragraph_id": 23,
"text": "There is no consensus on some terms describing carbon nanotubes in scientific literature: both \"-wall\" and \"-walled\" are being used in combination with \"single\", \"double\", \"triple\", or \"multi\", and the letter C is often omitted in the abbreviation, for example, multi-walled carbon nanotube (MWNT). The International Standards Organization uses single-wall or multi-wall in its documents.",
"title": "Variants"
},
{
"paragraph_id": 24,
"text": "Multi-walled nanotubes (MWNTs) consist of multiple rolled layers (concentric tubes) of graphene. There are two models that can be used to describe the structures of multi-walled nanotubes. In the Russian Doll model, sheets of graphite are arranged in concentric cylinders, e.g., a (0,8) single-walled nanotube (SWNT) within a larger (0,17) single-walled nanotube. In the Parchment model, a single sheet of graphite is rolled in around itself, resembling a scroll of parchment or a rolled newspaper. The interlayer distance in multi-walled nanotubes is close to the distance between graphene layers in graphite, approximately 3.4 Å. The Russian Doll structure is observed more commonly. Its individual shells can be described as SWNTs, which can be metallic or semiconducting. Because of statistical probability and restrictions on the relative diameters of the individual tubes, one of the shells, and thus the whole MWNT, is usually a zero-gap metal.",
"title": "Variants"
},
{
"paragraph_id": 25,
"text": "Double-walled carbon nanotubes (DWNTs) form a special class of nanotubes because their morphology and properties are similar to those of SWNTs but they are more resistant to attacks by chemicals. This is especially important when it is necessary to graft chemical functions to the surface of the nanotubes (functionalization) to add properties to the CNT. Covalent functionalization of SWNTs will break some C=C double bonds, leaving \"holes\" in the structure on the nanotube and thus modifying both its mechanical and electrical properties. In the case of DWNTs, only the outer wall is modified. DWNT synthesis on the gram-scale by the CCVD technique was first proposed in 2003 from the selective reduction of oxide solutions in methane and hydrogen.",
"title": "Variants"
},
{
"paragraph_id": 26,
"text": "The telescopic motion ability of inner shells and their unique mechanical properties will permit the use of multi-walled nanotubes as the main movable arms in upcoming nanomechanical devices. The retraction force that occurs to telescopic motion is caused by the Lennard-Jones interaction between shells, and its value is about 1.5 nN.",
"title": "Variants"
},
{
"paragraph_id": 27,
"text": "Junctions between two or more nanotubes have been widely discussed theoretically. Such junctions are quite frequently observed in samples prepared by arc discharge as well as by chemical vapor deposition. The electronic properties of such junctions were first considered theoretically by Lambin et al., who pointed out that a connection between a metallic tube and a semiconducting one would represent a nanoscale heterojunction. Such a junction could therefore form a component of a nanotube-based electronic circuit. The adjacent image shows a junction between two multiwalled nanotubes.",
"title": "Variants"
},
{
"paragraph_id": 28,
"text": "Junctions between nanotubes and graphene have been considered theoretically and studied experimentally. Nanotube-graphene junctions form the basis of pillared graphene, in which parallel graphene sheets are separated by short nanotubes. Pillared graphene represents a class of three-dimensional carbon nanotube architectures.",
"title": "Variants"
},
{
"paragraph_id": 29,
"text": "Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>100 nm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical-initiated thermal crosslinking method to fabricate macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano-structured pores, and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices, implants, and sensors.",
"title": "Variants"
},
{
"paragraph_id": 30,
"text": "Carbon nanobuds are a newly created material combining two previously discovered allotropes of carbon: carbon nanotubes and fullerenes. In this new material, fullerene-like \"buds\" are covalently bonded to the outer sidewalls of the underlying carbon nanotube. This hybrid material has useful properties of both fullerenes and carbon nanotubes. In particular, they have been found to be exceptionally good field emitters. In composite materials, the attached fullerene molecules may function as molecular anchors preventing slipping of the nanotubes, thus improving the composite's mechanical properties.",
"title": "Variants"
},
{
"paragraph_id": 31,
"text": "A carbon peapod is a novel hybrid carbon material which traps fullerene inside a carbon nanotube. It can possess interesting magnetic properties with heating and irradiation. It can also be applied as an oscillator during theoretical investigations and predictions.",
"title": "Variants"
},
{
"paragraph_id": 32,
"text": "In theory, a nanotorus is a carbon nanotube bent into a torus (doughnut shape). Nanotori are predicted to have many unique properties, such as magnetic moments 1000 times larger than that previously expected for certain specific radii. Properties such as magnetic moment, thermal stability, etc. vary widely depending on the radius of the torus and the radius of the tube.",
"title": "Variants"
},
{
"paragraph_id": 33,
"text": "Graphenated carbon nanotubes are a relatively new hybrid that combines graphitic foliates grown along the sidewalls of multiwalled or bamboo style CNTs. The foliate density can vary as a function of deposition conditions (e.g., temperature and time) with their structure ranging from a few layers of graphene (< 10) to thicker, more graphite-like. The fundamental advantage of an integrated graphene-CNT structure is the high surface area three-dimensional framework of the CNTs coupled with the high edge density of graphene. Depositing a high density of graphene foliates along the length of aligned CNTs can significantly increase the total charge capacity per unit of nominal area as compared to other carbon nanostructures.",
"title": "Variants"
},
{
"paragraph_id": 34,
"text": "Cup-stacked carbon nanotubes (CSCNTs) differ from other quasi-1D carbon structures, which normally behave as quasi-metallic conductors of electrons. CSCNTs exhibit semiconducting behavior because of the stacking microstructure of graphene layers.",
"title": "Variants"
},
{
"paragraph_id": 35,
"text": "Many properties of single-walled carbon nanotubes depend significantly on the (n,m) type, and this dependence is non-monotonic (see Kataura plot). In particular, the band gap can vary from zero to about 2 eV and the electrical conductivity can show metallic or semiconducting behavior.",
"title": "Properties"
},
{
"paragraph_id": 36,
"text": "Carbon nanotubes are the strongest and stiffest materials yet discovered in terms of tensile strength and elastic modulus. This strength results from the covalent sp bonds formed between the individual carbon atoms. In 2000, a multiwalled carbon nanotube was tested to have a tensile strength of 63 gigapascals (9,100,000 psi). (For illustration, this translates into the ability to endure tension of a weight equivalent to 6,422 kilograms-force (62,980 N; 14,160 lbf) on a cable with cross-section of 1 square millimetre (0.0016 sq in)). Further studies, such as one conducted in 2008, revealed that individual CNT shells have strengths of up to ≈100 gigapascals (15,000,000 psi), which is in agreement with quantum/atomistic models. Because carbon nanotubes have a low density for a solid of 1.3 to 1.4 g/cm, its specific strength of up to 48,000 kN·m·kg is the best of known materials, compared to high-carbon steel's 154 kN·m·kg.",
"title": "Properties"
},
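As a rough sanity check on the specific-strength figure quoted above (assuming the 63 GPa strength and 1.3 g/cm³ density from the same paragraph):

```python
# Specific strength = tensile strength / density.
strength_pa = 63e9          # 63 GPa, the multiwalled CNT measurement quoted above
density_kg_m3 = 1300        # 1.3 g/cm^3 expressed in kg/m^3

specific_strength = strength_pa / density_kg_m3    # N·m/kg
print(f"{specific_strength / 1e3:.0f} kN·m/kg")    # ≈ 48462 kN·m/kg, matching the ~48,000 figure
```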
{
"paragraph_id": 37,
"text": "Although the strength of individual CNT shells is extremely high, weak shear interactions between adjacent shells and tubes lead to significant reduction in the effective strength of multiwalled carbon nanotubes and carbon nanotube bundles down to only a few GPa. This limitation has been recently addressed by applying high-energy electron irradiation, which crosslinks inner shells and tubes, and effectively increases the strength of these materials to ≈60 GPa for multiwalled carbon nanotubes and ≈17 GPa for double-walled carbon nanotube bundles. CNTs are not nearly as strong under compression. Because of their hollow structure and high aspect ratio, they tend to undergo buckling when placed under compressive, torsional, or bending stress.",
"title": "Properties"
},
{
"paragraph_id": 38,
"text": "On the other hand, there was evidence that in the radial direction they are rather soft. The first transmission electron microscope observation of radial elasticity suggested that even van der Waals forces can deform two adjacent nanotubes. Later, nanoindentations with an atomic force microscope were performed by several groups to quantitatively measure radial elasticity of multiwalled carbon nanotubes and tapping/contact mode atomic force microscopy was also performed on single-walled carbon nanotubes. Young's modulus of on the order of several GPa showed that CNTs are in fact very soft in the radial direction.",
"title": "Properties"
},
{
"paragraph_id": 39,
"text": "It was reported in 2020, CNT-filled polymer nanocomposites with 4 wt% and 6 wt% loadings are the most optimal concentrations, as they provide a good balance between mechanical properties and resilience of mechanical properties against UV exposure for the offshore umbilical sheathing layer.",
"title": "Properties"
},
{
"paragraph_id": 40,
"text": "Unlike graphene, which is a two-dimensional semimetal, carbon nanotubes are either metallic or semiconducting along the tubular axis. For a given (n,m) nanotube, if n = m, the nanotube is metallic; if n − m is a multiple of 3 and n ≠ m, then the nanotube is quasi-metallic with a very small band gap, otherwise the nanotube is a moderate semiconductor. Thus, all armchair (n = m) nanotubes are metallic, and nanotubes (6,4), (9,1), etc. are semiconducting. Carbon nanotubes are not semimetallic because the degenerate point (the point where the π [bonding] band meets the π* [anti-bonding] band, at which the energy goes to zero) is slightly shifted away from the K point in the Brillouin zone because of the curvature of the tube surface, causing hybridization between the σ* and π* anti-bonding bands, modifying the band dispersion.",
"title": "Properties"
},
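The (n, m) band-structure rule in the preceding paragraph translates directly into a short classifier. This idealized sketch ignores the small-diameter curvature exceptions discussed in the next paragraph.

```python
def electronic_character(n: int, m: int) -> str:
    """Idealized (n, m) rule: armchair -> metallic; n - m divisible by 3 -> quasi-metallic; else semiconducting."""
    if n == m:
        return "metallic"
    if (n - m) % 3 == 0:
        return "quasi-metallic (very small band gap)"
    return "semiconducting"

for t in [(10, 10), (9, 0), (6, 4), (9, 1)]:
    print(t, electronic_character(*t))
# (10,10) metallic, (9,0) quasi-metallic, (6,4) and (9,1) semiconducting, as in the text
```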
{
"paragraph_id": 41,
"text": "The rule regarding metallic versus semiconductor behavior has exceptions because curvature effects in small-diameter tubes can strongly influence electrical properties. Thus, a (5,0) SWCNT that should be semiconducting in fact is metallic according to the calculations. Likewise, zigzag and chiral SWCNTs with small diameters that should be metallic have a finite gap (armchair nanotubes remain metallic). In theory, metallic nanotubes can carry an electric current density of 4 × 10 A/cm, which is more than 1,000 times greater than those of metals such as copper, where for copper interconnects, current densities are limited by electromigration. Carbon nanotubes are thus being explored as interconnects and conductivity-enhancing components in composite materials, and many groups are attempting to commercialize highly conducting electrical wire assembled from individual carbon nanotubes. There are significant challenges to be overcome however, such as undesired current saturation under voltage, and the much more resistive nanotube-to-nanotube junctions and impurities, all of which lower the electrical conductivity of the macroscopic nanotube wires by orders of magnitude, as compared to the conductivity of the individual nanotubes.",
"title": "Properties"
},
{
"paragraph_id": 42,
"text": "Because of its nanoscale cross-section, electrons propagate only along the tube's axis. As a result, carbon nanotubes are frequently referred to as one-dimensional conductors. The maximum electrical conductance of a single-walled carbon nanotube is 2G0, where G0 = 2e/h is the conductance of a single ballistic quantum channel.",
"title": "Properties"
},
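A back-of-the-envelope evaluation of the conductance limit quoted above, using standard values of the elementary charge and Planck constant (the 2e²/h form of G0 is assumed here, since the exponent was lost in extraction):

```python
E = 1.602176634e-19   # elementary charge, C
H = 6.62607015e-34    # Planck constant, J·s

G0 = 2 * E**2 / H     # conductance quantum, ≈ 7.75e-5 S
G_max = 2 * G0        # two ballistic channels for a single-walled tube
print(f"G0 ≈ {G0 * 1e6:.1f} µS, G_max ≈ {G_max * 1e6:.1f} µS "
      f"(≈ {1 / G_max / 1e3:.2f} kΩ minimum resistance)")
```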
{
"paragraph_id": 43,
"text": "Because of the role of the π-electron system in determining the electronic properties of graphene, doping in carbon nanotubes differs from that of bulk crystalline semiconductors from the same group of the periodic table (e.g., silicon). Graphitic substitution of carbon atoms in the nanotube wall by boron or nitrogen dopants leads to p-type and n-type behavior, respectively, as would be expected in silicon. However, some non-substitutional (intercalated or adsorbed) dopants introduced into a carbon nanotube, such as alkali metals and electron-rich metallocenes, result in n-type conduction because they donate electrons to the π-electron system of the nanotube. By contrast, π-electron acceptors such as FeCl3 or electron-deficient metallocenes function as p-type dopants because they draw π-electrons away from the top of the valence band.",
"title": "Properties"
},
{
"paragraph_id": 44,
"text": "Intrinsic superconductivity has been reported, although other experiments found no evidence of this, leaving the claim a subject of debate.",
"title": "Properties"
},
{
"paragraph_id": 45,
"text": "In 2021, Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT, published department findings on the use of carbon nanotubes to create an electric current. By immersing the structures in an organic solvent, the liquid drew electrons out of the carbon particles. Strano was quoted as saying, \"This allows you to do electrochemistry, but with no wires,\" and represents a significant breakthrough in the technology. Future applications include powering micro- or nanoscale robots, as well as driving alcohol oxidation reactions, which are important in the chemicals industry.",
"title": "Properties"
},
{
"paragraph_id": 46,
"text": "Crystallographic defects also affect the tube's electrical properties. A common result is lowered conductivity through the defective region of the tube. A defect in metallic armchair-type tubes (which can conduct electricity) can cause the surrounding region to become semiconducting, and single monatomic vacancies induce magnetic properties.",
"title": "Properties"
},
{
"paragraph_id": 47,
"text": "Carbon nanotubes have useful absorption, photoluminescence (fluorescence), and Raman spectroscopy properties. Spectroscopic methods offer the possibility of quick and non-destructive characterization of relatively large amounts of carbon nanotubes. There is a strong demand for such characterization from the industrial point of view: numerous parameters of nanotube synthesis can be changed, intentionally or unintentionally, to alter the nanotube quality, such as the non-tubular carbon content, structure (chirality) of the produced nanotubes, and structural defects. These features then determine nearly all other significant optical, mechanical, and electrical properties.",
"title": "Properties"
},
{
"paragraph_id": 48,
"text": "Carbon nanotube optical properties have been explored for use in applications such as for light-emitting diodes (LEDs) and photo-detectors based on a single nanotube have been produced in the lab. Their unique feature is not the efficiency, which is yet relatively low, but the narrow selectivity in the wavelength of emission and detection of light and the possibility of its fine tuning through the nanotube structure. In addition, bolometer and optoelectronic memory devices have been realised on ensembles of single-walled carbon nanotubes. Nanotube fluorescence has been investigated for the purposes of imaging and sensing in biomedical applications.",
"title": "Properties"
},
{
"paragraph_id": 49,
"text": "All nanotubes are expected to be very good thermal conductors along the tube, exhibiting a property known as \"ballistic conduction\", but good insulators lateral to the tube axis. Measurements show that an individual SWNT has a room-temperature thermal conductivity along its axis of about 3500 W·m·K; compare this to copper, a metal well known for its good thermal conductivity, which transmits 385 W·m·K. An individual SWNT has a room-temperature thermal conductivity lateral to its axis (in the radial direction) of about 1.52 W·m·K, which is about as thermally conductive as soil. Macroscopic assemblies of nanotubes such as films or fibres have reached up to 1500 W·m·K so far. Networks composed of nanotubes demonstrate different values of thermal conductivity, from the level of thermal insulation with the thermal conductivity of 0.1 W·m·K to such high values. That is dependent on the amount of contribution to the thermal resistance of the system caused by the presence of impurities, misalignments and other factors. The temperature stability of carbon nanotubes is estimated to be up to 2800 °C in vacuum and about 750 °C in air.",
"title": "Properties"
},
{
"paragraph_id": 50,
"text": "Crystallographic defects strongly affect the tube's thermal properties. Such defects lead to phonon scattering, which in turn increases the relaxation rate of the phonons. This reduces the mean free path and reduces the thermal conductivity of nanotube structures. Phonon transport simulations indicate that substitutional defects such as nitrogen or boron will primarily lead to scattering of high-frequency optical phonons. However, larger-scale defects such as Stone–Wales defects cause phonon scattering over a wide range of frequencies, leading to a greater reduction in thermal conductivity.",
"title": "Properties"
},
{
"paragraph_id": 51,
"text": "Techniques have been developed to produce nanotubes in sizeable quantities, including arc discharge, laser ablation, chemical vapor deposition (CVD) and high-pressure carbon monoxide disproportionation (HiPCO). Among these arc discharge, laser ablation are batch by batch process, Chemical Vapor Deposition can be used both for batch by batch or continuous processes, and HiPCO is gas phase continuous process. Most of these processes take place in a vacuum or with process gases. The CVD growth method is popular, as it yields high quantity and has a degree of control over diameter, length and morphology. Using particulate catalysts, large quantities of nanotubes can be synthesized by these methods, and industrialisation is well on its way, with several CNT and CNT fibers factory in the world. One problem of CVD processes is the high variability in the nanotube's characteristics The HiPCO process advances in catalysis and continuous growth are making CNTs more commercially viable. The HiPCO process helps in producing high purity single walled carbon nanotubes in higher quantity. The HiPCO reactor operates at high temperature 900-1100 °C and high pressure ~30-50 bar. It uses carbon monoxide as the carbon source and iron pentacarbonyl or nickel tetracarbonyl as a catalyst. These catalysts provide a nucleation site for the nanotubes to grow, while cheaper iron based catalysts like Ferrocene can be used for CVD process.",
"title": "Synthesis"
},
{
"paragraph_id": 52,
"text": "Vertically aligned carbon nanotube arrays are also grown by thermal chemical vapor deposition. A substrate (quartz, silicon, stainless steel, carbon fibers, etc.) is coated with a catalytic metal (Fe, Co, Ni) layer. Typically that layer is iron and is deposited via sputtering to a thickness of 1–5 nm. A 10–50 nm underlayer of alumina is often also put down on the substrate first. This imparts controllable wetting and good interfacial properties. When the substrate is heated to the growth temperature (~600 to 850 °C), the continuous iron film breaks up into small islands with each island then nucleating a carbon nanotube. The sputtered thickness controls the island size and this in turn determines the nanotube diameter. Thinner iron layers drive down the diameter of the islands and drive down the diameter of the nanotubes grown. The amount of time the metal island can sit at the growth temperature is limited as they are mobile and can merge into larger (but fewer) islands. Annealing at the growth temperature reduces the site density (number of CNT/mm) while increasing the catalyst diameter.",
"title": "Synthesis"
},
{
"paragraph_id": 53,
"text": "The as-prepared carbon nanotubes always have impurities such as other forms of carbon (amorphous carbon, fullerene, etc.) and non-carbonaceous impurities (metal used for catalyst). These impurities need to be removed to make use of the carbon nanotubes in applications.",
"title": "Synthesis"
},
{
"paragraph_id": 54,
"text": "CNTs are known to have weak dispersibility in many solvents such as water as a consequence of strong intermolecular p–p interactions. This hinders the processability of CNTs in industrial applications. In order to tackle the issue, various techniques have been developed to modify the surface of CNTs in order to improve their stability and solubility in water. This enhances the processing and manipulation of insoluble CNTs rendering them useful for synthesizing innovative CNT nanofluids with impressive properties that are tunable for a wide range of applications. Chemical routes such as covalent functionalization have been studied extensively, which involves the oxidation of CNTs via strong acids (e.g. sulfuric acid, nitric acid, or a mixture of both) in order to set the carboxylic groups onto the surface of the CNTs as the final product or for further modification by esterification or amination. Free radical grafting is a promising technique among covalent functionalization methods, in which alkyl or aryl peroxides, substituted anilines, and diazonium salts are used as the starting agents.",
"title": "Functionalization"
},
{
"paragraph_id": 55,
"text": "Free radical grafting of macromolecules (as the functional group) onto the surface of CNTs can improve the solubility of CNTs compared to common acid treatments which involve the attachment of small molecules such as hydroxyl onto the surface of CNTs. The solubility of CNTs can be improved significantly by free-radical grafting because the large functional molecules facilitate the dispersion of CNTs in a variety of solvents even at a low degree of functionalization. Recently an innovative environmentally friendly approach has been developed for the covalent functionalization of multi-walled carbon nanotubes (MWCNTs) using clove buds. This approach is innovative and green because it does not use toxic and hazardous acids which are typically used in common carbon nanomaterial functionalization procedures. The MWCNTs are functionalized in one pot using a free radical grafting reaction. The clove-functionalized MWCNTs are then dispersed in water producing a highly stable multi-walled carbon nanotube aqueous suspension (nanofluids).",
"title": "Functionalization"
},
{
"paragraph_id": 56,
"text": "Carbon nanotubes are modelled in a similar manner as traditional composites in which a reinforcement phase is surrounded by a matrix phase. Ideal models such as cylindrical, hexagonal and square models are common. The size of the micromechanics model is highly function of the studied mechanical properties. The concept of representative volume element (RVE) is used to determine the appropriate size and configuration of computer model to replicate the actual behavior of CNT reinforced nanocomposite. Depending on the material property of interest (thermal, electrical, modulus, creep), one RVE might predict the property better than the alternatives. While the implementation of ideal model is computationally efficient, they do not represent microstructural features observed in scanning electron microscopy of actual nanocomposites. To incorporate realistic modeling, computer models are also generated to incorporate variability such as waviness, orientation and agglomeration of multiwall or single wall carbon nanotubes.",
"title": "Modeling"
},
{
"paragraph_id": 57,
"text": "There are many metrology standards and reference materials available for carbon nanotubes.",
"title": "Metrology"
},
{
"paragraph_id": 58,
"text": "For single-wall carbon nanotubes, ISO/TS 10868 describes a measurement method for the diameter, purity, and fraction of metallic nanotubes through optical absorption spectroscopy, while ISO/TS 10797 and ISO/TS 10798 establish methods to characterize the morphology and elemental composition of single-wall carbon nanotubes, using transmission electron microscopy and scanning electron microscopy respectively, coupled with energy dispersive X-ray spectrometry analysis.",
"title": "Metrology"
},
{
"paragraph_id": 59,
"text": "NIST SRM 2483 is a soot of single-wall carbon nanotubes used as a reference material for elemental analysis, and was characterized using thermogravimetric analysis, prompt gamma activation analysis, induced neutron activation analysis, inductively coupled plasma mass spectroscopy, resonant Raman scattering, UV-visible-near infrared fluorescence spectroscopy and absorption spectroscopy, scanning electron microscopy, and transmission electron microscopy. The Canadian National Research Council also offers a certified reference material SWCNT-1 for elemental analysis using neutron activation analysis and inductively coupled plasma mass spectroscopy. NIST RM 8281 is a mixture of three lengths of single-wall carbon nanotube.",
"title": "Metrology"
},
{
"paragraph_id": 60,
"text": "For multiwall carbon nanotubes, ISO/TR 10929 identifies the basic properties and the content of impurities, while ISO/TS 11888 describes morphology using scanning electron microscopy, transmission electron microscopy, viscometry, and light scattering analysis. ISO/TS 10798 is also valid for multiwall carbon nanotubes.",
"title": "Metrology"
},
{
"paragraph_id": 61,
"text": "Carbon nanotubes can be functionalized to attain desired properties that can be used in a wide variety of applications. The two main methods of carbon nanotube functionalization are covalent and non-covalent modifications. Because of their apparent hydrophobic nature, carbon nanotubes tend to agglomerate hindering their dispersion in solvents or viscous polymer melts. The resulting nanotube bundles or aggregates reduce the mechanical performance of the final composite. The surface of the carbon nanotubes can be modified to reduce the hydrophobicity and improve interfacial adhesion to a bulk polymer through chemical attachment.",
"title": "Chemical modification"
},
{
"paragraph_id": 62,
"text": "The surface of carbon nanotubes can be chemically modified by coating spinel nanoparticles by hydrothermal synthesis and can be used for water oxidation purposes.",
"title": "Chemical modification"
},
{
"paragraph_id": 63,
"text": "In addition, the surface of carbon nanotubes can be fluorinated or halofluorinated by heating while in contact with a fluoroorganic substance, thereby forming partially fluorinated carbons (so called Fluocar materials) with grafted (halo)fluoroalkyl functionality.",
"title": "Chemical modification"
},
{
"paragraph_id": 64,
"text": "Carbon nanotubes are currently used in multiple industrial and consumer applications. These include battery components, polymer composites, to improve the mechanical, thermal and electrical properties of the bulk product, and as a highly absorptive black paint. Many other applications are under development, including field effect transistors for electronics, high-strength fabrics, biosensors for biomedical and agricultural applications, and many others.",
"title": "Applications"
},
{
"paragraph_id": 65,
"text": "Applications of nanotubes in development in academia and industry include:",
"title": "Current industrial applications"
},
{
"paragraph_id": 66,
"text": "Carbon nanotubes can serve as additives to various structural materials. For instance, nanotubes form a tiny portion of the material(s) in some (primarily carbon fiber) baseball bats, golf clubs, car parts, or damascus steel.",
"title": "Current industrial applications"
},
{
"paragraph_id": 67,
"text": "IBM expected carbon nanotube transistors to be used on Integrated Circuits by 2020.",
"title": "Current industrial applications"
},
{
"paragraph_id": 68,
"text": "The strength and flexibility of carbon nanotubes makes them of potential use in controlling other nanoscale structures, which suggests they will have an important role in nanotechnology engineering. The highest tensile strength of an individual multi-walled carbon nanotube has been tested to be 63 GPa. Carbon nanotubes were found in Damascus steel from the 17th century, possibly helping to account for the legendary strength of the swords made of it. Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>1mm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical initiated thermal crosslinking method to fabricated macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano- structured pores and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices and implants.",
"title": "Current industrial applications"
},
{
"paragraph_id": 69,
"text": "CNTs are potential candidates for future via and wire material in nano-scale VLSI circuits. Eliminating electromigration reliability concerns that plague today's Cu interconnects, isolated (single and multi-wall) CNTs can carry current densities in excess of 1000 MA/cm without electromigration damage.",
"title": "Current industrial applications"
},
{
"paragraph_id": 70,
"text": "Single-walled nanotubes are likely candidates for miniaturizing electronics. The most basic building block of these systems is an electric wire, and SWNTs with diameters of an order of a nanometre can be excellent conductors. One useful application of SWNTs is in the development of the first intermolecular field-effect transistors (FET). The first intermolecular logic gate using SWCNT FETs was made in 2001. A logic gate requires both a p-FET and an n-FET. Because SWNTs are p-FETs when exposed to oxygen and n-FETs otherwise, it is possible to expose half of an SWNT to oxygen and protect the other half from it. The resulting SWNT acts as a not logic gate with both p- and n-type FETs in the same molecule.",
"title": "Current industrial applications"
},
{
"paragraph_id": 71,
"text": "Large quantities of pure CNTs can be made into a freestanding sheet or film by surface-engineered tape-casting (SETC) fabrication technique which is a scalable method to fabricate flexible and foldable sheets with superior properties. Another reported form factor is CNT fiber (a.k.a. filament) by wet spinning. The fiber is either directly spun from the synthesis pot or spun from pre-made dissolved CNTs. Individual fibers can be turned into a yarn. Apart from its strength and flexibility, the main advantage is making an electrically conducting yarn. The electronic properties of individual CNT fibers (i.e. bundle of individual CNT) are governed by the two-dimensional structure of CNTs. The fibers were measured to have a resistivity only one order of magnitude higher than metallic conductors at 300K. By further optimizing the CNTs and CNT fibers, CNT fibers with improved electrical properties could be developed.",
"title": "Current industrial applications"
},
{
"paragraph_id": 72,
"text": "CNT-based yarns are suitable for applications in energy and electrochemical water treatment when coated with an ion-exchange membrane. Also, CNT-based yarns could replace copper as a winding material. Pyrhönen et al. (2015) have built a motor using CNT winding.",
"title": "Current industrial applications"
},
{
"paragraph_id": 73,
"text": "The National Institute for Occupational Safety and Health (NIOSH) is the leading United States federal agency conducting research and providing guidance on the occupational safety and health implications and applications of nanomaterials. Early scientific studies have indicated that nanoscale particles may pose a greater health risk than bulk materials due to a relative increase in surface area per unit mass. Increase in length and diameter of CNT is correlated to increased toxicity and pathological alterations in lung. The biological interactions of nanotubes are not well understood, and the field is open to continued toxicological studies. It is often difficult to separate confounding factors, and since carbon is relatively biologically inert, some of the toxicity attributed to carbon nanotubes may be instead due to residual metal catalyst contamination. In previous studies, only Mitsui-7 was reliably demonstrated to be carcinogenic, although for unclear/unknown reasons. Unlike many common mineral fibers (such as asbestos), most SWCNTs and MWCNTs do not fit the size and aspect-ratio criteria to be classified as respirable fibers. In 2013, given that the long-term health effects have not yet been measured, NIOSH published a Current Intelligence Bulletin detailing the potential hazards and recommended exposure limit for carbon nanotubes and fibers. The U.S. National Institute for Occupational Safety and Health has determined non-regulatory recommended exposure limits (RELs) of 1 μg/m for carbon nanotubes and carbon nanofibers as background-corrected elemental carbon as an 8-hour time-weighted average (TWA) respirable mass concentration. Although CNT caused pulmonary inflammation and toxicity in mice, exposure to aerosols generated from sanding of composites containing polymer-coated MWCNTs, representative of the actual end-product, did not exert such toxicity.",
"title": "Safety and health"
},
{
"paragraph_id": 74,
"text": "As of October 2016, single wall carbon nanotubes have been registered through the European Union's Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) regulations, based on evaluation of the potentially hazardous properties of SWCNT. Based on this registration, SWCNT commercialization is allowed in the EU up to 10 metric tons. Currently, the type of SWCNT registered through REACH is limited to the specific type of single wall carbon nanotubes manufactured by OCSiAl, which submitted the application.",
"title": "Safety and health"
},
{
"paragraph_id": 75,
"text": "The true identity of the discoverers of carbon nanotubes is a subject of some controversy. A 2006 editorial written by Marc Monthioux and Vladimir Kuznetsov in the journal Carbon described the origin of the carbon nanotube. A large percentage of academic and popular literature attributes the discovery of hollow, nanometre-size tubes composed of graphitic carbon to Sumio Iijima of NEC in 1991. His paper initiated a flurry of excitement and could be credited with inspiring the many scientists now studying applications of carbon nanotubes. Though Iijima has been given much of the credit for discovering carbon nanotubes, it turns out that the timeline of carbon nanotubes goes back much further than 1991.",
"title": "History"
},
{
"paragraph_id": 76,
"text": "In 1952, L. V. Radushkevich and V. M. Lukyanovich published clear images of 50 nanometre diameter tubes made of carbon in the Journal of Physical Chemistry Of Russia. This discovery was largely unnoticed, as the article was published in Russian, and Western scientists' access to Soviet press was limited during the Cold War. Monthioux and Kuznetsov mentioned in their Carbon editorial:",
"title": "History"
},
{
"paragraph_id": 77,
"text": "The fact is, Radushkevich and Lukyanovich [...] should be credited for the discovery that carbon filaments could be hollow and have a nanometre-size diameter, that is to say for the discovery of carbon nanotubes.",
"title": "History"
},
{
"paragraph_id": 78,
"text": "In 1976, Morinobu Endo of CNRS observed hollow tubes of rolled up graphite sheets synthesised by a chemical vapour-growth technique. The first specimens observed would later come to be known as single-walled carbon nanotubes (SWNTs). Endo, in his early review of vapor-phase-grown carbon fibers (VPCF), also reminded us that he had observed a hollow tube, linearly extended with parallel carbon layer faces near the fiber core. This appears to be the observation of multi-walled carbon nanotubes at the center of the fiber. The mass-produced MWCNTs today are strongly related to the VPGCF developed by Endo. In fact, they call it the \"Endo-process\", out of respect for his early work and patents. In 1979, John Abrahamson presented evidence of carbon nanotubes at the 14th Biennial Conference of Carbon at Pennsylvania State University. The conference paper described carbon nanotubes as carbon fibers that were produced on carbon anodes during arc discharge. A characterization of these fibers was given, as well as hypotheses for their growth in a nitrogen atmosphere at low pressures.",
"title": "History"
},
{
"paragraph_id": 79,
"text": "In 1981, a group of Soviet scientists published the results of chemical and structural characterization of carbon nanoparticles produced by a thermocatalytic disproportionation of carbon monoxide. Using TEM images and XRD patterns, the authors suggested that their \"carbon multi-layer tubular crystals\" were formed by rolling graphene layers into cylinders. They speculated that via this rolling, many different arrangements of graphene hexagonal nets are possible. They suggested two such possible arrangements: circular arrangement (armchair nanotube); and a spiral, helical arrangement (chiral tube).",
"title": "History"
},
{
"paragraph_id": 80,
"text": "In 1987, Howard G. Tennent of Hyperion Catalysis was issued a U.S. patent for the production of \"cylindrical discrete carbon fibrils\" with a \"constant diameter between about 3.5 and about 70 nanometers..., length 10 times the diameter, and an outer region of multiple essentially continuous layers of ordered carbon atoms and a distinct inner core....\"",
"title": "History"
},
{
"paragraph_id": 81,
"text": "Helping to create the initial excitement associated with carbon nanotubes were Iijima's 1991 discovery of multi-walled carbon nanotubes in the insoluble material of arc-burned graphite rods; and Mintmire, Dunlap, and White's independent prediction that if single-walled carbon nanotubes could be made, they would exhibit remarkable conducting properties. Nanotube research accelerated greatly following the independent discoveries by Iijima and Ichihashi at NEC and Bethune et al. at IBM of methods to specifically produce single-walled carbon nanotubes by adding transition-metal catalysts to the carbon in an arc discharge. Thess et al. refined this catalytic method by vaporizing the carbon/transition-metal combination in a high temperature furnace, which greatly improved the yield and purity of the SWNTs and made them widely available for characterization and application experiments. The arc discharge technique, well known to produce the famed Buckminsterfullerene on a preparative scale, thus played a role in the discoveries of both multi- and single-wall nanotubes, extending the run of serendipitous discoveries relating to fullerenes. The discovery of nanotubes remains a contentious issue. Many believe that Iijima's report in 1991 is of particular importance because it brought carbon nanotubes into the awareness of the scientific community as a whole.",
"title": "History"
},
{
"paragraph_id": 82,
"text": "In 2020, during archaeological excavation of Keezhadi in Tamil Nadu, India, ~2600-year-old pottery was discovered whose coatings appear to contain carbon nanotubes. The robust mechanical properties of the nanotubes are partially why the coatings have lasted for so many years, say the scientists.",
"title": "History"
},
{
"paragraph_id": 83,
"text": "This article incorporates public domain text from National Institute of Environmental Health Sciences (NIEHS) as quoted.",
"title": "References"
}
] | A carbon nanotube (CNT) is a tube made of carbon with a diameter in the nanometer range (nanoscale). They are one of the allotropes of carbon. Single-walled carbon nanotubes (SWCNTs) have diameters around 0.5–2.0 nanometers, about 100,000 times smaller than the width of a human hair. They can be idealized as cutouts from a two-dimensional graphene sheet rolled up to form a hollow cylinder. Multi-walled carbon nanotubes (MWCNTs) consist of nested single-wall carbon nanotubes in a nested, tube-in-tube structure. Double- and triple-walled carbon nanotubes are special cases of MWCNT. Carbon nanotubes can exhibit remarkable properties, such as exceptional tensile strength and thermal conductivity because of their nanostructure and strength of the bonds between carbon atoms. Some SWCNT structures exhibit high electrical conductivity while others are semiconductors. In addition, carbon nanotubes can be chemically modified. These properties are expected to be valuable in many areas of technology, such as electronics, optics, composite materials, nanotechnology, and other applications of materials science. The predicted properties for SWCNTs were tantalizing, but a path to synthesizing them was lacking until 1993, when Iijima and Ichihashi at NEC and Bethune et al. at IBM independently discovered that co-vaporizing carbon and transition metals such as iron and cobalt could specifically catalyze SWCNT formation. These discoveries triggered research that succeeded in greatly increasing the efficiency of the catalytic production technique, and led to an explosion of work to characterize and find applications for SWCNTs. | 2001-04-10T09:03:35Z | 2023-12-26T23:14:24Z | [
"Template:Scholia",
"Template:Emerging technologies",
"Template:Main",
"Template:Reflist",
"Template:Cite book",
"Template:Cite news",
"Template:Cite web",
"Template:Cite patent",
"Template:Full citation needed",
"Template:Space elevator",
"Template:Use dmy dates",
"Template:Clarify span",
"Template:Cite periodical",
"Template:Cite report",
"Template:Ref patent",
"Template:Short description",
"Template:Speculation inline",
"Template:Convert",
"Template:Blockquote",
"Template:Allotropes of carbon",
"Template:Nanomaterials",
"Template:Multiple image",
"Template:Citation needed",
"Template:See also",
"Template:Authority control",
"Template:Clear",
"Template:Cite journal",
"Template:Webarchive",
"Template:Commons"
] | https://en.wikipedia.org/wiki/Carbon_nanotube |
5,321 | Czech Republic | The Czech Republic, also known as Czechia, is a landlocked country in Central Europe. Historically known as Bohemia, it is bordered by Austria to the south, Germany to the west, Poland to the northeast, and Slovakia to the southeast. The Czech Republic has a hilly landscape that covers an area of 78,871 square kilometers (30,452 sq mi) with a mostly temperate continental and oceanic climate. The capital and largest city is Prague; other major cities and urban areas include Brno, Ostrava, Plzeň and Liberec.
The Duchy of Bohemia was founded in the late 9th century under Great Moravia. It was formally recognized as an Imperial State of the Holy Roman Empire in 1002 and became a kingdom in 1198. Following the Battle of Mohács in 1526, all of the Crown lands of Bohemia were gradually integrated into the Habsburg monarchy. Nearly a hundred years later, the Protestant Bohemian Revolt led to the Thirty Years' War. After the Battle of White Mountain, the Habsburgs consolidated their rule. With the dissolution of the Holy Roman Empire in 1806, the Crown lands became part of the Austrian Empire.
In the 19th century, the Czech lands became more industrialized, and in 1918 most of it became part of the First Czechoslovak Republic following the collapse of Austria-Hungary after World War I. Czechoslovakia was the only country in Central and Eastern Europe to remain a parliamentary democracy during the entirety of the interwar period. After the Munich Agreement in 1938, Nazi Germany systematically took control over the Czech lands.
Czechoslovakia was restored in 1945 and three years later became an Eastern Bloc communist state following a coup d'état in 1948. Attempts to liberalize the government and economy were suppressed by a Soviet-led invasion of the country during the Prague Spring in 1968. In November 1989, the Velvet Revolution ended communist rule in the country and restored democracy. On 31 December 1992, Czechoslovakia was peacefully dissolved, with its constituent states becoming the independent states of the Czech Republic and Slovakia.
The Czech Republic is a unitary parliamentary republic and developed country with an advanced, high-income social market economy. It is a welfare state with a European social model, universal health care and free-tuition university education. It ranks 32nd in the Human Development Index. The Czech Republic is a member of the United Nations, NATO, the European Union, the OECD, the OSCE, the Council of Europe and the Visegrád Group.
The traditional English name "Bohemia" derives from Latin: Boiohaemum, which means "home of the Boii" (a Gallic tribe). The current English name ultimately comes from the Czech word Čech. The name comes from the Slavic tribe (Czech: Češi, Čechové) and, according to legend, their leader Čech, who brought them to Bohemia, to settle on Říp Mountain. The etymology of the word Čech can be traced back to the Proto-Slavic root *čel-, meaning "member of the people; kinsman", thus making it cognate to the Czech word člověk (a person).
The country has been traditionally divided into three lands, namely Bohemia (Čechy) in the west, Moravia (Morava) in the east, and Czech Silesia (Slezsko; the smaller, south-eastern part of historical Silesia, most of which is located within modern Poland) in the northeast. Known as the lands of the Bohemian Crown since the 14th century, a number of other names for the country have been used, including Czech/Bohemian lands, Bohemian Crown, Czechia, and the lands of the Crown of Saint Wenceslaus. When the country regained its independence after the dissolution of the Austro-Hungarian empire in 1918, the new name of Czechoslovakia was coined to reflect the union of the Czech and Slovak nations within one country.
After Czechoslovakia dissolved on the last day of 1992, Česko was adopted as the Czech short name for the new state and the Ministry of Foreign Affairs of the Czech Republic recommended Czechia for the English-language equivalent. This form was not widely adopted at the time, leading to the long name Czech Republic being used in English in nearly all circumstances. The Czech government directed use of Czechia as the official English short name in 2016. The short name has been listed by the United Nations and is used by other organizations such as the European Union, NATO, the CIA, Google Maps, and the European Broadcasting Union. In 2022, the American AP Stylebook stated in its entry on the country that "Czechia, the Czech Republic. Both are acceptable. The shorter name Czechia is preferred by the Czech government. If using Czechia, clarify in the story that the country is more widely known in English as the Czech Republic."
Archaeologists have found evidence of prehistoric human settlements in the area, dating back to the Paleolithic era.
In the classical era, as a result of the 3rd century BC Celtic migrations, Bohemia became associated with the Boii. The Boii founded an oppidum near the site of modern Prague. Later in the 1st century, the Germanic tribes of the Marcomanni and Quadi settled there.
Slavs from the Black Sea–Carpathian region settled in the area (their migration was pushed by the invasions of peoples from Siberia and Eastern Europe into their homeland: Huns, Avars, Bulgars and Magyars). By the sixth century, these Slavs had moved westwards into Bohemia, Moravia, and parts of present-day Austria and Germany.
During the 7th century, the Frankish merchant Samo, supporting the Slavs fighting against nearby settled Avars, became the ruler of the first documented Slavic state in Central Europe, Samo's Empire. The principality of Great Moravia, controlled by the Moymir dynasty, arose in the 8th century. It reached its zenith in the 9th century (during the reign of Svatopluk I of Moravia), holding off the influence of the Franks. Great Moravia was Christianized, with a key role played by the Byzantine mission of Cyril and Methodius. They codified the Old Church Slavonic language, the first literary and liturgical language of the Slavs, and the Glagolitic script.
The Duchy of Bohemia emerged in the late 9th century when it was unified by the Přemyslid dynasty. Bohemia was from 1002 until 1806 an Imperial Estate of the Holy Roman Empire.
In 1212, Přemysl Ottokar I extracted the Golden Bull of Sicily from the emperor, confirming Ottokar's and his descendants' royal status; the Duchy of Bohemia was raised to a kingdom. German immigrants settled in the Bohemian periphery in the 13th century. During their invasion of Europe, the Mongols carried their raids into Moravia but were repelled at Olomouc.
After a series of dynastic wars, the House of Luxembourg gained the Bohemian throne.
Efforts to reform the church in Bohemia had already started in the late 14th century. Jan Hus's followers broke with some practices of the Roman Church and in the Hussite Wars (1419–1434) defeated five crusades organized against them by Sigismund. During the next two centuries, 90% of the population of Bohemia and Moravia were considered Hussites. The pacifist thinker Petr Chelčický inspired the movement of the Moravian Brethren (by the middle of the 15th century), which completely separated from the Roman Catholic Church.
On 21 December 1421, Jan Žižka, a successful military commander and mercenary, led his group of forces in the Battle of Kutná Hora, resulting in a victory for the Hussites. He is honoured to this day as a national hero.
After 1526 Bohemia came increasingly under Habsburg control as the Habsburgs became first the elected and then in 1627 the hereditary rulers of Bohemia. Between 1583 and 1611 Prague was the official seat of the Holy Roman Emperor Rudolf II and his court.
The Defenestration of Prague and the subsequent revolt against the Habsburgs in 1618 marked the start of the Thirty Years' War. In 1620, the rebellion in Bohemia was crushed at the Battle of White Mountain and the ties between Bohemia and the Habsburgs' hereditary lands in Austria were strengthened. The leaders of the Bohemian Revolt were executed in 1621. Protestant nobles and members of the middle class had to either convert to Catholicism or leave the country.
The era from 1620 to the late 18th century became known as the "Dark Age". During the Thirty Years' War, the population of the Czech lands declined by a third through the expulsion of Czech Protestants as well as war, disease and famine. The Habsburgs prohibited all Christian confessions other than Catholicism, yet the flowering of Baroque culture shows the ambiguity of this historical period. Ottoman Turks and Tatars invaded Moravia in 1663. In 1679–1680 the Czech lands faced the Great Plague of Vienna and an uprising of serfs.
There were peasant uprisings influenced by famine. Serfdom was abolished between 1781 and 1848. Several battles of the Napoleonic Wars took place on the current territory of the Czech Republic.
The end of the Holy Roman Empire in 1806 degraded the political status of Bohemia, which lost its position as an electorate of the Holy Roman Empire as well as its own political representation in the Imperial Diet. The Bohemian lands became part of the Austrian Empire. During the 18th and 19th centuries the Czech National Revival sought to revive the Czech language, culture, and national identity. The Revolution of 1848 in Prague, which strove for liberal reforms and autonomy for the Bohemian Crown within the Austrian Empire, was suppressed.
It seemed that some concessions would also be made to Bohemia, but in the end Emperor Franz Joseph I effected a compromise with Hungary only. The Austro-Hungarian Compromise of 1867 and the never-realized coronation of Franz Joseph as King of Bohemia disappointed some Czech politicians. The Bohemian Crown lands became part of so-called Cisleithania.
Czech Social Democratic and progressive politicians began the fight for universal suffrage. The first elections under universal male suffrage were held in 1907.
In 1918, during the collapse of the Habsburg monarchy at the end of World War I, the independent republic of Czechoslovakia, which joined the winning Allied powers, was created under the leadership of Tomáš Garrigue Masaryk. The new country incorporated the lands of the Bohemian Crown.
The First Czechoslovak Republic comprised only 27% of the population of the former Austria-Hungary, but nearly 80% of its industry, which enabled it to compete with Western industrial states. In 1929, compared to 1913, gross domestic product had increased by 52% and industrial production by 41%. By 1938 Czechoslovakia held 10th place in world industrial production. Czechoslovakia was the only country in Central and Eastern Europe to remain a liberal democracy throughout the entire interwar period. Although the First Czechoslovak Republic was a unitary state, it provided certain rights to its minorities, the largest being Germans (23.6% in 1921), Hungarians (5.6%) and Ukrainians (3.5%).
Western Czechoslovakia was occupied by Nazi Germany, which placed most of the region into the Protectorate of Bohemia and Moravia. The Protectorate was proclaimed part of the Third Reich, and the president and prime minister were subordinated to Nazi Germany's Reichsprotektor. One Nazi concentration camp was located within the Czech territory at Terezín, north of Prague. The vast majority of the Protectorate's Jews were murdered in Nazi-run concentration camps. The Nazi Generalplan Ost called for the extermination, expulsion, Germanization or enslavement of most or all Czechs for the purpose of providing more living space for the German people. There was Czechoslovak resistance to Nazi occupation as well as reprisals against the Czechoslovaks for their anti-Nazi resistance. The German occupation ended on 9 May 1945, with the arrival of the Soviet and American armies and the Prague uprising. Most of Czechoslovakia's German-speakers were forcibly expelled from the country, first as a result of local acts of violence and then under the aegis of an "organized transfer" confirmed by the Soviet Union, the United States, and Great Britain at the Potsdam Conference.
In the 1946 elections, the Communist Party gained 38% of the votes and became the largest party in the Czechoslovak parliament, formed a coalition with other parties, and consolidated power. A coup d'état came in 1948 and a single-party government was formed. For the next 41 years, the Czechoslovak Communist state conformed to Eastern Bloc economic and political features. The Prague Spring political liberalization was stopped by the 1968 Warsaw Pact invasion of Czechoslovakia. Analysts believe that the invasion caused the communist movement to fracture, ultimately leading to the Revolutions of 1989.
In November 1989, Czechoslovakia again became a liberal democracy through the Velvet Revolution. However, Slovak national aspirations strengthened (the Hyphen War), and on 31 December 1992 the country peacefully split into the independent countries of the Czech Republic and Slovakia. Both countries went through economic reforms and privatizations intended to create a market economy, a process begun in 1990 while Czechs and Slovaks still shared a common state. This process was largely successful; in 2006 the Czech Republic was recognized by the World Bank as a "developed country", and in 2009 the Human Development Index ranked it as a nation of "Very High Human Development".
Since 1991 the Czech Republic, originally as part of Czechoslovakia and since 1993 in its own right, has been a member of the Visegrád Group, and since 1995 of the OECD. It joined NATO on 12 March 1999 and the European Union on 1 May 2004, and on 21 December 2007 it joined the Schengen Area.
Until 2017, either the centre-left Czech Social Democratic Party or the centre-right Civic Democratic Party led the governments of the Czech Republic. In October 2017, the populist movement ANO 2011, led by the country's second-richest man, Andrej Babiš, won the elections with three times more votes than its closest rival, the Civic Democrats. In December 2017, Czech president Miloš Zeman appointed Andrej Babiš as the new prime minister.
In the 2021 elections, ANO 2011 was narrowly defeated and Petr Fiala became the new prime minister. He formed a government coalition of the SPOLU alliance (Civic Democratic Party, KDU-ČSL and TOP 09) and the alliance of the Pirates and Mayors. In January 2023, retired general Petr Pavel won the presidential election, succeeding Miloš Zeman as president. Following the 2022 Russian invasion of Ukraine, the country took in half a million Ukrainian refugees, the largest number per capita in the world.
On 21 December 2023, the worst mass shooting in Czech history took place at Charles University in central Prague. In total, 15 people were killed, including the perpetrator.
The Czech Republic lies mostly between latitudes 48° and 51° N and longitudes 12° and 19° E.
Bohemia, to the west, consists of a basin drained by the Elbe (Czech: Labe) and the Vltava rivers, surrounded by mostly low mountains, such as the Krkonoše range of the Sudetes. The highest point in the country, Sněžka at 1,603 m (5,259 ft), is located here. Moravia, the eastern part of the country, is also hilly. It is drained mainly by the Morava River, but it also contains the source of the Oder River (Czech: Odra).
Water from the Czech Republic flows to three different seas: the North Sea, Baltic Sea, and Black Sea. The Czech Republic also leases the Moldauhafen, a 30,000-square-meter (7.4-acre) lot in the middle of the Hamburg Docks, which was awarded to Czechoslovakia by Article 363 of the Treaty of Versailles, to allow the landlocked country a place where goods transported down river could be transferred to seagoing ships. The territory reverts to Germany in 2028.
Phytogeographically, the Czech Republic belongs to the Central European province of the Circumboreal Region, within the Boreal Kingdom. According to the World Wide Fund for Nature, the territory of the Czech Republic can be subdivided into four ecoregions: the Western European broadleaf forests, Central European mixed forests, Pannonian mixed forests, and Carpathian montane conifer forests.
There are four national parks in the Czech Republic. The oldest is Krkonoše National Park (Biosphere Reserve), and the others are Šumava National Park (Biosphere Reserve), Podyjí National Park, and Bohemian Switzerland.
The three historical lands of the Czech Republic (formerly lands of the Bohemian Crown) broadly correspond to river basins: the Elbe and Vltava basins for Bohemia, the Morava basin for Moravia, and the Oder basin (within Czech territory) for Czech Silesia.
The Czech Republic has a temperate climate, situated in the transition zone between the oceanic and continental climate types, with warm summers and cold, cloudy and snowy winters. The temperature difference between summer and winter is due to the landlocked geographical position.
Temperatures vary depending on the elevation: in general, temperatures decrease and precipitation increases at higher altitudes. The wettest area in the Czech Republic is found around Bílý Potok in the Jizera Mountains, and the driest region is the Louny District to the northwest of Prague. The distribution of the mountains is another influencing factor.
At the highest peak of Sněžka (1,603 m or 5,259 ft), the average temperature is −0.4 °C (31 °F), whereas in the lowlands of the South Moravian Region, the average temperature is as high as 10 °C (50 °F). The country's capital, Prague, has a similar average temperature, although this is influenced by urban factors.
The coldest month is usually January, followed by February and December. During these months, there is snow in the mountains and sometimes in the cities and lowlands. During March, April, and May, the temperature usually increases, especially during April, when temperature and weather tend to vary during the day. Spring is also characterized by higher water levels in the rivers due to melting snow, with occasional flooding.
The warmest month of the year is July, followed by August and June. On average, summer temperatures are about 20–30 °C (36–54 °F) higher than during winter. Summer is also characterized by rain and storms.
Autumn generally begins in September, which is still warm and dry. During October, temperatures usually fall below 15 °C (59 °F) or 10 °C (50 °F) and deciduous trees begin to shed their leaves. By the end of November, temperatures usually range around the freezing point.
The coldest temperature ever measured was −42.2 °C (−44.0 °F) in Litvínovice near České Budějovice in 1929, and the hottest was 40.4 °C (104.7 °F) in Dobřichovice in 2012.
Most rain falls during the summer. Rainfall occurs sporadically throughout the year (in Prague, the average number of days per month with at least 0.1 mm (0.0039 in) of rain varies from 12 in September and October to 16 in November), but concentrated rainfall (days with more than 10 mm (0.39 in) per day) is more frequent from May to August (on average around two such days per month). Severe thunderstorms, producing damaging straight-line winds, hail, and occasional tornadoes, occur especially during the summer.
As of 2020, the Czech Republic ranks as the 21st most environmentally conscious country in the world in the Environmental Performance Index. It had a 2018 Forest Landscape Integrity Index mean score of 1.71/10, ranking it 160th globally out of 172 countries. The Czech Republic has four national parks (Šumava National Park, Krkonoše National Park, České Švýcarsko National Park, Podyjí National Park) and 25 Protected Landscape Areas.
The Czech Republic is a pluralist multi-party parliamentary representative democracy. The Parliament (Parlament České republiky) is bicameral, with the Chamber of Deputies (Czech: Poslanecká sněmovna, 200 members) and the Senate (Czech: Senát, 81 members). The members of the Chamber of Deputies are elected for a four-year term by proportional representation, with a 5% election threshold. There are 14 voting districts, identical to the country's administrative regions. The Chamber of Deputies, the successor to the Czech National Council, has the powers and responsibilities of the now defunct federal parliament of the former Czechoslovakia. The members of the Senate are elected in single-seat constituencies by two-round runoff voting for a six-year term, with one-third elected every even year in the autumn. This arrangement is modeled on the U.S. Senate, but each constituency is roughly the same size and the voting system used is a two-round runoff.
The president is a formal head of state with limited and specific powers, who appoints the prime minister as well as the other members of the cabinet on a proposal by the prime minister. From 1993 until 2012, the President of the Czech Republic was selected by a joint session of the parliament for a five-year term, with no more than two consecutive terms (twice Václav Havel, twice Václav Klaus). Since 2013, the president has been elected directly. Some commentators have argued that, with the introduction of direct election of the President, the Czech Republic has moved away from the parliamentary system and towards a semi-presidential one. The Government's exercise of executive power derives from the Constitution. The members of the Government are the Prime Minister, deputy prime ministers and other ministers. The Government is responsible to the Chamber of Deputies. The Prime Minister is the head of government and wields powers such as the right to set the agenda for most foreign and domestic policy and to choose government ministers.
The Czech Republic is a unitary state, with a civil law system based on the continental type, rooted in Germanic legal culture. The basis of the legal system is the Constitution of the Czech Republic adopted in 1993. The Penal Code is effective from 2010. A new Civil code became effective in 2014. The court system includes district, county, and supreme courts and is divided into civil, criminal, and administrative branches. The Czech judiciary has a triumvirate of supreme courts. The Constitutional Court consists of 15 constitutional judges and oversees violations of the Constitution by either the legislature or by the government. The Supreme Court is formed of 67 judges and is the court of highest appeal for most legal cases heard in the Czech Republic. The Supreme Administrative Court decides on issues of procedural and administrative propriety. It also has jurisdiction over certain political matters, such as the formation and closure of political parties, jurisdictional boundaries between government entities, and the eligibility of persons to stand for public office. The Supreme Court and the Supreme Administrative Court are both based in Brno, as is the Supreme Public Prosecutor's Office.
The Czech Republic has ranked as one of the safest or most peaceful countries for the past few decades. It is a member of the United Nations, the European Union, NATO, OECD, Council of Europe and is an observer to the Organization of American States. The embassies of most countries with diplomatic relations with the Czech Republic are located in Prague, while consulates are located across the country.
The Czech passport offers broad visa-free travel. According to the 2018 Henley & Partners Visa Restrictions Index, Czech citizens have visa-free access to 173 countries, which ranks them 7th along with Malta and New Zealand. The World Tourism Organization ranks the Czech passport 24th. The US Visa Waiver Program applies to Czech nationals.
The Prime Minister and Minister of Foreign Affairs have primary roles in setting foreign policy, although the President also has influence and represents the country abroad. Membership in the European Union and NATO is central to the Czech Republic's foreign policy. The Office for Foreign Relations and Information (ÚZSI) serves as the foreign intelligence agency responsible for espionage and foreign policy briefings, as well as protection of Czech Republic's embassies abroad.
The Czech Republic has ties with Slovakia, Poland and Hungary as a member of the Visegrád Group, as well as with Germany, Israel, the United States and the European Union and its members. After 2020, relations with Asian democratic states such as Taiwan have been strengthened. By contrast, the Czech Republic has long had poor relations with Russia, and since 2021 it has appeared on Russia's official list of unfriendly countries. The Czech Republic also has problematic relations with China.
Czech officials have supported dissenters in Belarus, Moldova, Myanmar and Cuba.
Famous Czech diplomats of the past included Jaroslav Lev of Rožmitál, Humprecht Jan Czernin, Count Philip Kinsky of Wchinitz and Tettau, Wenzel Anton, Prince of Kaunitz-Rietberg, Prince Karl Philipp Schwarzenberg, Alois Lexa von Aehrenthal, Ottokar Czernin, Edvard Beneš, Jan Masaryk, Jiří Hájek, Jiří Dienstbier, Michael Žantovský, Petr Kolář, Alexandr Vondra, Prince Karel Schwarzenberg and Petr Pavel.
The Czech armed forces consist of the Czech Land Forces, the Czech Air Force and of specialized support units. The armed forces are managed by the Ministry of Defence. The President of the Czech Republic is Commander-in-chief of the armed forces. In 2004 the army transformed itself into a fully professional organization and compulsory military service was abolished. The country has been a member of NATO since 12 March 1999. Defence spending is approximately 1.28% of the GDP (2021). The armed forces are charged with protecting the Czech Republic and its allies, promoting global security interests, and contributing to NATO.
As a member of NATO, the Czech military participates in the Resolute Support and KFOR operations and has soldiers in Afghanistan, Mali, Bosnia and Herzegovina, Kosovo, Egypt, Israel and Somalia. The Czech Air Force has also served in the Baltic states and Iceland. The main equipment of the Czech military includes JAS 39 Gripen multi-role fighters, Aero L-159 Alca combat aircraft, Mi-35 attack helicopters, armored vehicles (Pandur II, OT-64, OT-90, BVP-2) and tanks (T-72 and T-72M4CZ).
The most famous Czech, and therefore Czechoslovak, soldiers and military leaders of the past were Ottokar II of Bohemia, John of Bohemia, Jan Žižka, Albrecht von Wallenstein, Karl Philipp, Prince of Schwarzenberg, Joseph Radetzky von Radetz, Josef Šnejdárek, Heliodor Píka, Ludvík Svoboda, Jan Kubiš, Jozef Gabčík, František Fajtl and Petr Pavel.
Human rights in the Czech Republic are guaranteed by the Charter of Fundamental Rights and Freedoms and international treaties on human rights. Nevertheless, there were cases of human rights violations such as discrimination against Roma children, for which the European Commission asked the Czech Republic to provide an explanation, or the illegal sterilization of Roma women, for which the government apologized.
Prague is the seat of Radio Free Europe/Radio Liberty, today based in the Hagibor area. Václav Havel personally invited the broadcaster to Czechoslovakia at the beginning of the 1990s.
Same-sex couples can enter into a "registered partnership" in the Czech Republic, but same-sex marriage is not legal under current Czech law.
The best-known Czech activists and supporters of human rights include Berta von Suttner, born in Prague, who won the Nobel Peace Prize for her pacifist struggle, philosopher and the first Czechoslovak president Tomáš Garrigue Masaryk, student Jan Palach, who set himself on fire in 1969 in protest against the Soviet occupation, Karel Schwarzenberg, who was chairman of the International Helsinki Committee for Human Rights between 1984 and 1990, Václav Havel, long-time dissident and later president, sociologist and dissident Jiřina Šiklová and Šimon Pánek, founder and director of the People in Need organization.
Since 2000, the Czech Republic has been divided into thirteen regions (Czech: kraje, singular kraj) and the capital city of Prague. Every region has its own elected regional assembly and a regional governor. In Prague, these powers are exercised by the city council and the mayor.
The older seventy-six districts (okresy, singular okres) including three "statutory cities" (without Prague, which had special status) lost most of their importance in 1999 in an administrative reform; they remain as territorial divisions and seats of various branches of state administration.
The smallest administrative units are obce (municipalities). As of 2021, the Czech Republic is divided into 6,254 municipalities. Cities and towns are also municipalities. The capital city of Prague is a region and municipality at the same time.
The Czech Republic has a developed, high-income export-oriented social market economy based in services, manufacturing and innovation, that maintains a welfare state and the European social model. The Czech Republic participates in the European Single Market as a member of the European Union and is therefore a part of the economy of the European Union, but uses its own currency, the Czech koruna, instead of the euro. It has a per capita GDP rate that is 91% of the EU average and is a member of the OECD. Monetary policy is conducted by the Czech National Bank, whose independence is guaranteed by the Constitution. The Czech Republic ranks 12th in the UN inequality-adjusted human development and 24th in World Bank Human Capital Index. It was described by The Guardian as "one of Europe's most flourishing economies".
As of 2023, the country's GDP per capita is $51,329 at purchasing power parity and $29,856 at nominal value. According to Allianz A.G., in 2018 the country was an MWC (mean wealth country), ranking 26th in net financial assets. The country experienced 4.5% GDP growth in 2017. The 2016 unemployment rate was the lowest in the EU at 2.4%, and the 2016 poverty rate was the second lowest among OECD members. The Czech Republic ranks 27th in the 2021 Index of Economic Freedom, 31st in the 2023 Global Innovation Index (down from 24th in 2016), 29th in the Global Competitiveness Report, and 25th in the Global Enabling Trade Report. The Czech Republic has a diverse economy that ranks 7th in the 2016 Economic Complexity Index. The industrial sector accounts for 37.5% of the economy, services for 60% and agriculture for 2.5%. The largest trading partner for both exports and imports is Germany, and the EU in general. Dividends worth CZK 270 billion were paid to the foreign owners of Czech companies in 2017, which has become a political issue. The country has been a member of the Schengen Area since 1 May 2004, and it abolished border controls, completely opening its borders with all of its neighbors, on 21 December 2007.
In 2018 the largest companies by revenue in the Czech Republic were: automobile manufacturer Škoda Auto, utility company ČEZ Group, conglomerate Agrofert, energy trading company EPH, oil processing company Unipetrol, electronics manufacturer Foxconn CZ and steel producer Moravia Steel. Other Czech transportation companies include: Škoda Transportation (tramways, trolleybuses, metro), Tatra (heavy trucks, the second oldest car maker in the world), Avia (medium trucks), Karosa and SOR Libchavy (buses), Aero Vodochody (military aircraft), Let Kunovice (civil aircraft), Zetor (tractors), Jawa Moto (motorcycles) and Čezeta (electric scooters).
Škoda Transportation is the fourth largest tram producer in the world; nearly one third of all trams in the world come from Czech factories. The Czech Republic is also the world's largest vinyl record manufacturer, with GZ Media producing about 6 million records annually in Loděnice. Česká zbrojovka is among the ten largest firearms producers in the world and one of five that produce automatic weapons.
In the food industry, Czech companies include Agrofert, Kofola and Hamé.
Czech electricity production exceeds consumption by about 10 TWh per year, with the excess exported. Nuclear power presently provides about 30 percent of total power needs, and its share is projected to increase to 40 percent. In 2005, 65.4 percent of electricity was produced by steam and combustion power plants (mostly coal), 30 percent by nuclear plants, and 4.6 percent from renewable sources, including hydropower. The largest Czech power resource is the Temelín Nuclear Power Station, with another nuclear power plant in Dukovany.
The Czech Republic is reducing its dependence on highly polluting low-grade brown coal as a source of energy. Natural gas is purchased from Norwegian companies and as liquefied natural gas (LNG) via the Netherlands and Belgium. In the past, three-quarters of gas supplies came from Russia, but after the outbreak of the war in Ukraine the government gradually stopped these supplies. Gas consumption (approx. 100 TWh in 2003–2005) is almost double electricity consumption. South Moravia has small oil and gas deposits.
As of 2020, the road network in the Czech Republic is 55,768.3 kilometers (34,652.82 mi) long, out of which 1,276.4 km (793.1 mi) are motorways. The speed limit is 50 km/h (31 mph) within towns, 90 km/h (56 mph) outside of towns and 130 km/h (81 mph) on motorways.
The Czech Republic has one of the densest rail networks in the world. As of 2020, the country has 9,542 kilometers (5,929 mi) of lines. Of that number, 3,236 km (2,011 mi) is electrified, 7,503 km (4,662 mi) are single-line tracks and 2,040 km (1,270 mi) are double and multiple-line tracks. The length of tracks is 15,360 km (9,540 mi), out of which 6,917 km (4,298 mi) is electrified.
České dráhy (the Czech Railways) is the main railway operator in the country, with about 180 million passengers carried yearly. Maximum speed is limited to 160 km/h (99 mph).
Václav Havel Airport in Prague is the main international airport in the country. In 2019, it handled 17.8 million passengers. In total, the Czech Republic has 91 airports, six of which provide international air services. The public international airports are in Brno, Karlovy Vary, Mnichovo Hradiště, Mošnov (near Ostrava), Pardubice and Prague. The non-public international airports capable of handling airliners are in Kunovice and Vodochody.
Russia (via pipelines through Ukraine) and, to a lesser extent, Norway (via pipelines through Germany) have supplied the Czech Republic with liquid fuels and natural gas.
The Czech Republic ranks in the top 10 countries worldwide with the fastest average internet speed. By the beginning of 2008, there were over 800 mostly local WISPs, with about 350,000 subscribers in 2007. Plans based on either GPRS, EDGE, UMTS or CDMA2000 are being offered by all three mobile phone operators (T-Mobile, O2, Vodafone) and internet provider U:fon. Government-owned Český Telecom slowed down broadband penetration. At the beginning of 2004, local-loop unbundling began and alternative operators started to offer ADSL and also SDSL. This and later privatization of Český Telecom helped drive down prices.
On 1 July 2006, Český Telecom was acquired by the Spanish Telefónica group and adopted the new name Telefónica O2 Czech Republic. As of 2017, VDSL and ADSL2+ are offered in variants with download speeds of up to 50 Mbit/s and upload speeds of up to 5 Mbit/s. Cable internet is gaining popularity, with higher download speeds ranging from 50 Mbit/s to 1 Gbit/s.
Two computer security companies, Avast and AVG, were founded in the Czech Republic. In 2016, Avast, led by Pavel Baudiš, bought rival AVG for US$1.3 billion; at the time, the two companies together had a user base of about 400 million people and 40% of the consumer market outside of China. Avast is the leading provider of antivirus software, with a 20.5% market share.
Prague is the fifth most visited city in Europe after London, Paris, Istanbul and Rome. In 2001, total earnings from tourism reached 118 billion CZK, making up 5.5% of GNP and 9% of overall export earnings. The industry employs more than 110,000 people, over 1% of the population. Guidebooks and tourists have reported overcharging by taxi drivers and pickpocketing, mainly in Prague, though the situation has improved recently. Since 2005, Prague's mayor, Pavel Bém, has worked to improve this reputation by cracking down on petty crime, and aside from these problems Prague is a "safe" city. The Czech Republic's crime rate is described by the United States State Department as "low".
One of the tourist attractions in the Czech Republic is the Lower Vítkovice area in Ostrava.
The Czech Republic has 16 UNESCO World Heritage Sites, 3 of which are transnational. As of 2021, a further 14 sites are on the tentative list.
Architectural heritage is an object of interest to visitors; it includes castles and châteaux from different historical epochs, notably Karlštejn Castle, Český Krumlov and the Lednice–Valtice Cultural Landscape. There are 12 cathedrals and 15 churches elevated to the rank of basilica by the Pope, as well as many tranquil monasteries.
Away from the towns, areas such as Bohemian Paradise, the Bohemian Forest and the Giant Mountains attract visitors seeking outdoor pursuits. There are also a number of beer festivals.
The country is also known for its various museums. Puppetry and marionette exhibitions are popular, with a number of puppet festivals throughout the country. Aquapalace Prague in Čestlice is the largest water park in the country.
The Czech lands have a long and well-documented history of scientific innovation. Today, the Czech Republic has a developed, innovation-oriented scientific community supported by the government, industry, and leading universities. Czech scientists are embedded members of the global scientific community, contributing annually to multiple international academic journals and collaborating with colleagues across boundaries and fields. The Czech Republic was ranked 24th in the Global Innovation Index in 2020 and 2021, up from 26th in 2019.
Historically, the Czech lands, especially Prague, have been a seat of scientific discovery going back to early modern times, when Tycho Brahe and Johannes Kepler worked at the Prague court of Rudolf II. In 1784 the scientific community was first formally organized under the charter of the Royal Czech Society of Sciences; currently, this organization is known as the Czech Academy of Sciences. The Czech lands also have a well-established history of scientists, including Nobel laureate biochemists Gerty and Carl Ferdinand Cori, chemists Jaroslav Heyrovský and Otto Wichterle, physicists Ernst Mach and Peter Grünberg, physiologist Jan Evangelista Purkyně and chemist Antonín Holý. Sigmund Freud, the founder of psychoanalysis, was born in Příbor; Gregor Mendel, the founder of genetics, was born in Hynčice and spent most of his life in Brno; and the logician and mathematician Kurt Gödel was born in Brno.
Historically, most scientific research was recorded in Latin, but from the 18th century onwards increasingly in German and later in Czech, and was archived in libraries supported and managed by religious groups and other denominations, as evidenced by historic institutions of international renown such as the Strahov Monastery and the Clementinum in Prague. Increasingly, Czech scientists publish their work, and the history of that work, in English.
Important current scientific institutions include the aforementioned Czech Academy of Sciences, the CEITEC institute in Brno, and the HiLASE and ELI Beamlines centers in Dolní Břežany, with the most powerful laser in the world. Prague is the seat of the administrative center of the GSA (now the European Union Agency for the Space Programme), which operates the European navigation system Galileo.
The total fertility rate (TFR) in 2020 was estimated at 1.71 children per woman, which is below the replacement rate of 2.1. The Czech Republic's population has an average age of 43.3 years. The life expectancy in 2021 was estimated to be 79.5 years (76.55 years male, 82.61 years female). About 77,000 people immigrate to the Czech Republic annually. Vietnamese immigrants began settling in the country during the Communist period, when they were invited as guest workers by the Czechoslovak government. In 2009, there were about 70,000 Vietnamese in the Czech Republic. Most decide to stay in the country permanently.
According to the results of the 2021 census, the majority of the inhabitants of the Czech Republic are Czechs (57.3%), followed by Moravians (3.4%), Slovaks (0.9%), Ukrainians (0.7%), Vietnamese (0.3%), Poles (0.3%), Russians (0.2%), Silesians (0.1%) and Germans (0.1%). Another 4.0% declared a combination of two nationalities (3.6% a combination of Czech and another nationality). As nationality was an optional item, a number of people left this field blank (31.6%). According to some estimates, there are about 250,000 Romani people in the Czech Republic. The Polish minority resides mainly in the Trans-Olza region.
There were 658,564 foreigners residing in the country in 2021, according to the Czech Statistical Office, with the largest groups being Ukrainian (22%), Slovak (22%), Vietnamese (12%), Russian (7%) and German (4%). Most of the foreign population lives in Prague (37.3%) and Central Bohemia Region (13.2%).
The Jewish population of Bohemia and Moravia, 118,000 according to the 1930 census, was nearly annihilated by Nazi Germany during the Holocaust. There were approximately 3,900 Jews in the Czech Republic in 2021. The former Czech prime minister Jan Fischer is of Jewish faith.
About 75% to 79% of residents of the Czech Republic do not declare having any religion or faith in surveys, and the proportion of convinced atheists (30%) is the third highest in the world behind those of China (47%) and Japan (31%). The Czech people have been historically characterized as "tolerant and even indifferent towards religion". The religious identity of the country has changed drastically since the first half of the 20th century, when more than 90% of Czechs were Christians.
Christianization in the 9th and 10th centuries introduced Catholicism. After the Bohemian Reformation, most Czechs became followers of Jan Hus, Petr Chelčický and other regional Protestant reformers. Taborites and Utraquists were major Hussite groups. Towards the end of the Hussite Wars, the Utraquists changed sides and allied with the Catholic Church. Following the joint Utraquist-Catholic victory, Utraquism was accepted by the Catholic Church as a distinct form of Christianity to be practiced in Bohemia, while all remaining Hussite groups were prohibited. After the Reformation, some Bohemians, especially Sudeten Germans, adopted the teachings of Martin Luther. In the wake of the Reformation, Utraquist Hussites took a renewed, increasingly anti-Catholic stance, while some of the defeated Hussite factions were revived. After the Habsburgs regained control of Bohemia, the whole population was forcibly converted to Catholicism, even the Utraquist Hussites. Czechs subsequently became more wary of and pessimistic about religion as such, and a history of resistance to the Catholic Church followed. The Catholic Church suffered a schism with the neo-Hussite Czechoslovak Hussite Church in 1920, lost the bulk of its adherents during the Communist era, and continues to lose adherents amid ongoing secularization. Protestantism never recovered after the Counter-Reformation was introduced by the Austrian Habsburgs in 1620. Prior to the Holocaust, the Czech Republic had a sizable Jewish community of around 100,000. There are many historically important and culturally relevant synagogues in the Czech Republic, such as Europe's oldest active synagogue, the Old New Synagogue, and the second largest synagogue in Europe, the Great Synagogue in Plzeň. The Holocaust decimated Czech Jewry, and the Jewish population as of 2021 is 3,900.
According to the 2011 census, 34% of the population stated they had no religion, 10.3% were Catholic, 0.8% were Protestant (0.5% Czech Brethren and 0.4% Hussite), and 9% followed other forms of religion, denominational or not (of which 863 people answered that they are Pagan). 45% of the population did not answer the question about religion. From 1991 to 2001 and further to 2011, adherence to Catholicism decreased from 39% to 27% and then to 10%; Protestantism similarly declined from 3.7% to 2% and then to 0.8%. The Muslim population is estimated at 20,000, representing 0.2% of the population.
The proportion of religious believers varies significantly across the country, from 55% in Zlín Region to 16% in Ústí nad Labem Region.
Education in the Czech Republic is compulsory for nine years and citizens have access to a free-tuition university education, while the average number of years of education is 13.1. Additionally, the Czech Republic has a "relatively equal" educational system in comparison with other countries in Europe. Founded in 1348, Charles University was the first university in Central Europe. Other major universities in the country are Masaryk University, Czech Technical University, Palacký University, Academy of Performing Arts and University of Economics.
The Programme for International Student Assessment, coordinated by the OECD, currently ranks the Czech education system as the 15th most successful in the world, higher than the OECD average. The UN Education Index ranks the Czech Republic 10th as of 2013 (positioned behind Denmark and ahead of South Korea).
Health care in the Czech Republic is similar in quality to that of other developed nations. The Czech universal health care system is based on a compulsory insurance model, with fee-for-service care funded by mandatory employment-related insurance plans. According to the 2016 Euro Health Consumer Index, a comparison of healthcare in Europe, Czech healthcare ranked 13th, behind Sweden and two positions ahead of the United Kingdom.
The Venus of Dolní Věstonice is a treasure of prehistoric art. Theodoric of Prague was a painter in the Gothic era who decorated Karlštejn Castle. The Baroque era produced Wenceslaus Hollar, Jan Kupecký, Karel Škréta, Anton Raphael Mengs and Petr Brandl, and the sculptors Matthias Braun and Ferdinand Brokoff. In the first half of the 19th century, Josef Mánes joined the romantic movement. In the second half of the 19th century, the so-called "National Theatre generation" was dominant: the sculptor Josef Václav Myslbek and the painters Mikoláš Aleš, Václav Brožík, Vojtěch Hynais and Julius Mařák. At the end of the century came a wave of Art Nouveau, with Alfons Mucha as its main representative. He is known for his Art Nouveau posters and his cycle of 20 large canvases named the Slav Epic, which depicts the history of Czechs and other Slavs. As of 2012, the Slav Epic can be seen in the Veletržní Palace of the National Gallery in Prague, which manages the largest collection of art in the Czech Republic. Max Švabinský was another Art Nouveau painter. The 20th century brought an avant-garde revolution, in the Czech lands mainly expressionist and cubist: Josef Čapek, Emil Filla, Bohumil Kubišta and Jan Zrzavý. Surrealism emerged particularly in the work of Toyen, Josef Šíma and Karel Teige. Internationally, however, it was mainly František Kupka, a pioneer of abstract painting, who made his mark. Josef Lada, Zdeněk Burian and Emil Orlík gained fame as illustrators and cartoonists in the first half of the 20th century. Art photography became a new field (František Drtikol, Josef Sudek, later Jan Saudek and Josef Koudelka).
The Czech Republic is known for its individually made, mouth-blown, and decorated Bohemian glass.
The earliest preserved stone buildings in Bohemia and Moravia date back to the time of the Christianization in the 9th and 10th centuries. Since the Middle Ages, the Czech lands have been using the same architectural styles as most of Western and Central Europe. The oldest still standing churches were built in the Romanesque style. During the 13th century, it was replaced by the Gothic style. In the 14th century, Emperor Charles IV invited architects from France and Germany, Matthias of Arras and Peter Parler, to his court in Prague. During the Middle Ages, some fortified castles were built by the king and aristocracy, as well as some monasteries.
The Renaissance style penetrated the Bohemian Crown in the late 15th century, when the older Gothic style started to be mixed with Renaissance elements. An example of pure Renaissance architecture in Bohemia is the Queen Anne's Summer Palace, situated in the garden of Prague Castle. Evidence of the general reception of the Renaissance in Bohemia, involving an influx of Italian architects, can be found in spacious chateaus with arcade courtyards and geometrically arranged gardens. Emphasis was placed on comfort, and buildings built for entertainment purposes also appeared.
In the 17th century, the Baroque style spread throughout the Crown of Bohemia.
In the 18th century, Bohemia produced an architectural peculiarity – the Baroque Gothic style, a synthesis of the Gothic and Baroque styles.
The 19th century was marked by revival architectural styles. Some churches were restored to their presumed medieval appearance, and buildings were constructed in the Neo-Romanesque, Neo-Gothic and Neo-Renaissance styles. At the turn of the 19th and 20th centuries, a new art style appeared in the Czech lands: Art Nouveau.
Bohemia contributed an unusual style to the world's architectural heritage when Czech architects attempted to transpose the Cubism of painting and sculpture into architecture.
Between World Wars I and II, Functionalism, with its sober, progressive forms, took over as the main architectural style.
After World War II and the Communist coup in 1948, art in Czechoslovakia became Soviet-influenced. The Czechoslovak avant-garde artistic movement known as the Brussels style emerged during the political liberalization of Czechoslovakia in the 1960s. Brutalism dominated in the 1970s and 1980s.
The Czech Republic does not shy away from more modern trends in international architecture; examples include the Dancing House (Tančící dům) and the Golden Angel in Prague, and the Congress Centre in Zlín.
Influential Czech architects include Peter Parler, Benedikt Rejt, Jan Santini Aichel, Kilian Ignaz Dientzenhofer, Josef Fanta, Josef Hlávka, Josef Gočár, Pavel Janák, Jan Kotěra, Věra Machoninová, Karel Prager, Karel Hubáček, Jan Kaplický, Eva Jiřičná or Josef Pleskot.
The literature from the area of today's Czech Republic was mostly written in Czech, but also in Latin, German and even Old Church Slavonic. Franz Kafka, although a competent user of Czech, wrote in his mother tongue, German; his works include The Trial and The Castle.
In the second half of the 13th century, the royal court in Prague became one of the centers of German Minnesang and courtly literature. Czech German-language literature flourished again in the first half of the 20th century.
Bible translations played a role in the development of Czech literature. The oldest Czech translation of the Psalms originated in the late 13th century and the first complete Czech translation of the Bible was finished around 1360. The first complete printed Czech Bible was published in 1488. The first complete Czech Bible translation from the original languages was published between 1579 and 1593. The Codex Gigas from the 12th century is the largest extant medieval manuscript in the world.
Czech-language literature can be divided into several periods: the Middle Ages; the Hussite period; the Renaissance humanism; the Baroque period; the Enlightenment and Czech reawakening in the first half of the 19th century, modern literature in the second half of the 19th century; the avant-garde of the interwar period; the years under Communism; and the Czech Republic.
The antiwar comedy novel The Good Soldier Švejk is the most translated Czech book in history.
The international literary award the Franz Kafka Prize is awarded in the Czech Republic.
The Czech Republic has the densest network of libraries in Europe.
Czech literature and culture played a role on at least two occasions when Czechs lived under oppression and political activity was suppressed. On both of these occasions, in the early 19th century and then again in the 1960s, the Czechs used their cultural and literary effort to strive for political freedom, establishing a confident, politically aware nation.
The musical tradition of the Czech lands arose from the first church hymns, the earliest evidence of which dates to the turn of the 10th and 11th centuries. Early pieces of Czech music include two chorales that in their time served as anthems: "Lord, Have Mercy on Us" and the hymn "Saint Wenceslaus", or "Saint Wenceslaus Chorale". The authorship of the anthem "Lord, Have Mercy on Us" is ascribed by some historians to Saint Adalbert of Prague (sv. Vojtěch), bishop of Prague, who lived between 956 and 997.
The wealth of Czech musical culture lies in the classical music tradition of all historical periods, especially the Baroque, Classical, Romantic and modern classical eras, and in the traditional folk music of Bohemia, Moravia and Silesia. Since the early era of art music, Czech musicians and composers have been influenced by the folk music and dances of the region.
Czech music has been influential in both the European and worldwide context, several times helping to shape or even define a newly arriving era in musical art, above all the Classical era, and contributing original approaches in Baroque, Romantic and modern classical music. Notable Czech musical works include The Bartered Bride, the New World Symphony, Sinfonietta and Jenůfa.
A notable music festival in the country is the Prague Spring International Music Festival of classical music, a permanent showcase for performing artists, symphony orchestras and chamber music ensembles from around the world.
The roots of Czech theatre can be found in the Middle Ages, especially in the cultural life of the Gothic period. In the 19th century, the theatre played a role in the national awakening movement, and in the 20th century it became part of modern European theatre art. An original Czech cultural phenomenon came into being at the end of the 1950s: the project Laterna magika, whose productions combined theater, dance, and film in a poetic manner and which is considered the first multimedia art project in an international context.
A notable drama is Karel Čapek's play R.U.R., which introduced the word "robot".
The country has a tradition of puppet theater. In 2016, Czech and Slovak Puppetry was included on the UNESCO Intangible Cultural Heritage Lists.
The tradition of Czech cinematography started in the second half of the 1890s. Peaks of production in the silent film era include the historical drama The Builder of the Temple and the social and erotic drama Erotikon, directed by Gustav Machatý. The early Czech sound film era was productive, above all in mainstream genres, with the comedies of Martin Frič and Karel Lamač, and Czech dramatic films were also sought after internationally.
Hermína Týrlová was a prominent Czech animator, screenwriter, and film director. She was often called the mother of Czech animation. Over the course of her career, she produced over 60 animated children's short films using puppets and the technique of stop motion animation.
Before the German occupation, in 1933, the filmmaker and animator Irena Dodalová established the first Czech animation studio, "IRE Film", with her husband Karel Dodal.
After the period of Nazi occupation and the early communist era of official socialist-realist dramaturgy at the turn of the 1940s and 1950s, with few exceptions such as Krakatit or Men Without Wings (awarded the Palme d'Or in 1946), a new era of Czech film began with films that combined acted drama with animation, shown in anglophone countries under the title The Fabulous World of Jules Verne from 1958, and with Jiří Trnka, the founder of the modern puppet film. This began a tradition of animated films (the Mole series, among others).
In the 1960s, the hallmarks of the Czechoslovak New Wave were improvised dialogue, black and absurd humor and the casting of non-actors. Directors tried to preserve a natural atmosphere without refinement or artificial arrangement of scenes. A distinctive personality of the 1960s and early 1970s, noted for his original style and psychological depth, was František Vláčil. Another internationally recognized figure is Jan Švankmajer, a filmmaker and artist whose work spans several media; he is a self-labeled surrealist known for his animations and features.
The Barrandov Studios in Prague are the largest film studios and film locations in the country. Filmmakers have come to Prague to shoot scenery no longer found in Berlin, Paris and Vienna. The city of Karlovy Vary was used as a location for the 2006 James Bond film Casino Royale.
The Czech Lion is the highest Czech award for film achievement. Karlovy Vary International Film Festival is one of the film festivals that have been given competitive status by the FIAPF. Other film festivals held in the country include Febiofest, Jihlava International Documentary Film Festival, One World Film Festival, Zlín Film Festival and Fresh Film Festival.
Czech journalists and media enjoy a degree of freedom. There are restrictions against writing in support of Nazism, racism or violating Czech law. The Czech press was ranked as the 40th most free press in the World Freedom Index by Reporters Without Borders in 2021. Radio Free Europe/Radio Liberty has its headquarters in Prague.
The national public television service is Czech Television that operates the 24-hour news channel ČT24 and the news website ct24.cz. As of 2020, Czech Television is the most watched television, followed by private televisions TV Nova and Prima TV. However, TV Nova has the most watched main news program and prime time program. Other public services include the Czech Radio and the Czech News Agency.
The best-selling daily national newspapers in 2020/21 are Blesk (average 703,000 daily readers), Mladá fronta DNES (average 461,000 daily readers), Právo (average 182,000 daily readers), Lidové noviny (average 163,000 daily readers) and Hospodářské noviny (average 162,000 daily readers).
Most Czechs (87%) read their news online, with Seznam.cz, iDNES.cz, Novinky.cz, iPrima.cz and Seznam Zprávy.cz being the most visited as of 2021.
Czech cuisine is marked by an emphasis on meat dishes with pork, beef, and chicken. Goose, duck, rabbit, and venison are served. Fish is less common, with the occasional exception of fresh trout and carp, which is served at Christmas.
There is a variety of local sausages, wurst, pâtés, and smoked and cured meats. Czech desserts include a variety of whipped cream, chocolate, and fruit pastries and tarts, crêpes, creme desserts and cheese, poppy-seed-filled and other types of traditional cakes such as buchty, koláče and štrúdl.
Czech beer has a history extending more than a millennium; the earliest known brewery existed in 993. Today the Czech Republic has the highest beer consumption per capita in the world. The pilsner style beer (pils) originated in Plzeň, where the world's first blond lager Pilsner Urquell is still produced. It has served as the inspiration for more than two-thirds of the beer produced in the world today. The city of České Budějovice has similarly lent its name to its beer, known as Budweiser Budvar.
The South Moravian region has been producing wine since the Middle Ages; about 94% of vineyards in the Czech Republic are Moravian. Aside from beer, slivovitz and wine, the Czech Republic also produces two liquors, Fernet Stock and Becherovka. Kofola is a non-alcoholic domestic cola soft drink which competes with Coca-Cola and Pepsi.
The two leading sports in the Czech Republic are football and ice hockey. The most watched sporting events are the Olympic tournament and the World Championships in ice hockey. Other popular sports include tennis, volleyball, floorball, golf, ball hockey, athletics, basketball and skiing.
The country has won 15 gold medals in the Summer Olympics and nine in the Winter Games. (See Olympic history.) The Czech ice hockey team won the gold medal at the 1998 Winter Olympics and has won twelve gold medals at the World Championships, including three straight from 1999 to 2001.
Škoda Motorsport has been engaged in competition racing since 1901 and has gained a number of titles with various vehicles around the world. The MTX automobile company formerly manufactured racing and formula cars, beginning in 1969.
Hiking is a popular sport. The word for 'tourist' in Czech, turista, also means 'trekker' or 'hiker'. Thanks to a more than 120-year-old tradition, hikers benefit from the Czech Hiking Markers System of trail blazing, which has been adopted by countries worldwide. A network of around 40,000 km of marked short- and long-distance trails crosses the whole country and all the Czech mountains.
49°45′N 15°30′E
{
"paragraph_id": 0,
"text": "The Czech Republic, also known as Czechia, is a landlocked country in Central Europe. Historically known as Bohemia, it is bordered by Austria to the south, Germany to the west, Poland to the northeast, and Slovakia to the southeast. The Czech Republic has a hilly landscape that covers an area of 78,871 square kilometers (30,452 sq mi) with a mostly temperate continental and oceanic climate. The capital and largest city is Prague; other major cities and urban areas include Brno, Ostrava, Plzeň and Liberec.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Duchy of Bohemia was founded in the late 9th century under Great Moravia. It was formally recognized as an Imperial State of the Holy Roman Empire in 1002 and became a kingdom in 1198. Following the Battle of Mohács in 1526, all of the Crown lands of Bohemia were gradually integrated into the Habsburg monarchy. Nearly a hundred years later, the Protestant Bohemian Revolt led to the Thirty Years' War. After the Battle of White Mountain, the Habsburgs consolidated their rule. With the dissolution of the Holy Roman Empire in 1806, the Crown lands became part of the Austrian Empire.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In the 19th century, the Czech lands became more industrialized, and in 1918 most of it became part of the First Czechoslovak Republic following the collapse of Austria-Hungary after World War I. Czechoslovakia was the only country in Central and Eastern Europe to remain a parliamentary democracy during the entirety of the interwar period. After the Munich Agreement in 1938, Nazi Germany systematically took control over the Czech lands.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Czechoslovakia was restored in 1945 and three years later became an Eastern Bloc communist state following a coup d'état in 1948. Attempts to liberalize the government and economy were suppressed by a Soviet-led invasion of the country during the Prague Spring in 1968. In November 1989, the Velvet Revolution ended communist rule in the country and restored democracy. On 31 December 1992, Czechoslovakia was peacefully dissolved, with its constituent states becoming the independent states of the Czech Republic and Slovakia.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The Czech Republic is a unitary parliamentary republic and developed country with an advanced, high-income social market economy. It is a welfare state with a European social model, universal health care and free-tuition university education. It ranks 32nd in the Human Development Index. The Czech Republic is a member of the United Nations, NATO, the European Union, the OECD, the OSCE, the Council of Europe and the Visegrád Group.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The traditional English name \"Bohemia\" derives from Latin: Boiohaemum, which means \"home of the Boii\" (a Gallic tribe). The current English name ultimately comes from the Czech word Čech. The name comes from the Slavic tribe (Czech: Češi, Čechové) and, according to legend, their leader Čech, who brought them to Bohemia, to settle on Říp Mountain. The etymology of the word Čech can be traced back to the Proto-Slavic root *čel-, meaning \"member of the people; kinsman\", thus making it cognate to the Czech word člověk (a person).",
"title": "Name"
},
{
"paragraph_id": 6,
"text": "The country has been traditionally divided into three lands, namely Bohemia (Čechy) in the west, Moravia (Morava) in the east, and Czech Silesia (Slezsko; the smaller, south-eastern part of historical Silesia, most of which is located within modern Poland) in the northeast. Known as the lands of the Bohemian Crown since the 14th century, a number of other names for the country have been used, including Czech/Bohemian lands, Bohemian Crown, Czechia, and the lands of the Crown of Saint Wenceslaus. When the country regained its independence after the dissolution of the Austro-Hungarian empire in 1918, the new name of Czechoslovakia was coined to reflect the union of the Czech and Slovak nations within one country.",
"title": "Name"
},
{
"paragraph_id": 7,
"text": "After Czechoslovakia dissolved on the last day of 1992, Česko was adopted as the Czech short name for the new state and the Ministry of Foreign Affairs of the Czech Republic recommended Czechia for the English-language equivalent. This form was not widely adopted at the time, leading to the long name Czech Republic being used in English in nearly all circumstances. The Czech government directed use of Czechia as the official English short name in 2016. The short name has been listed by the United Nations and is used by other organizations such as the European Union, NATO, the CIA, Google Maps, and the European Broadcasting Union. In 2022, the American AP Stylebook stated in its entry on the country that \"Czechia, the Czech Republic. Both are acceptable. The shorter name Czechia is preferred by the Czech government. If using Czechia, clarify in the story that the country is more widely known in English as the Czech Republic.\"",
"title": "Name"
},
{
"paragraph_id": 8,
"text": "Archaeologists have found evidence of prehistoric human settlements in the area, dating back to the Paleolithic era.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In the classical era, as a result of the 3rd century BC Celtic migrations, Bohemia became associated with the Boii. The Boii founded an oppidum near the site of modern Prague. Later in the 1st century, the Germanic tribes of the Marcomanni and Quadi settled there.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Slavs from the Black Sea–Carpathian region settled in the area (their migration was pushed by an invasion of peoples from Siberia and Eastern Europe into their area: Huns, Avars, Bulgars and Magyars). In the sixth century, the Huns had moved westwards into Bohemia, Moravia, and some of present-day Austria and Germany.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "During the 7th century, the Frankish merchant Samo, supporting the Slavs fighting against nearby settled Avars, became the ruler of the first documented Slavic state in Central Europe, Samo's Empire. The principality of Great Moravia, controlled by Moymir dynasty, arose in the 8th century. It reached its zenith in the 9th (during the reign of Svatopluk I of Moravia), holding off the influence of the Franks. Great Moravia was Christianized, with a role being played by the Byzantine mission of Cyril and Methodius. They codified the Old Church Slavonic language, the first literary and liturgical language of the Slavs, and the Glagolitic script.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The Duchy of Bohemia emerged in the late 9th century when it was unified by the Přemyslid dynasty. Bohemia was from 1002 until 1806 an Imperial Estate of the Holy Roman Empire.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In 1212, Přemysl Ottokar I extracted the Golden Bull of Sicily from the emperor, confirming Ottokar and his descendants' royal status; the Duchy of Bohemia was raised to a Kingdom. German immigrants settled in the Bohemian periphery in the 13th century. The Mongols in the invasion of Europe carried their raids into Moravia but were defensively defeated at Olomouc.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "After a series of dynastic wars, the House of Luxembourg gained the Bohemian throne.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Efforts for a reform of the church in Bohemia started already in the late 14th century. Jan Hus' followers seceded from some practices of the Roman Church and in the Hussite Wars (1419–1434) defeated five crusades organized against them by Sigismund. During the next two centuries, 90% of the population in Bohemia and Moravia were considered Hussites. The pacifist thinker Petr Chelčický inspired the movement of the Moravian Brethren (by the middle of the 15th century) that completely separated from the Roman Catholic Church.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "On 21 December 1421, Jan Žižka, a successful military commander and mercenary, led his group of forces in the Battle of Kutná Hora, resulting in a victory for the Hussites. He is honoured to this day as a national hero.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "After 1526 Bohemia came increasingly under Habsburg control as the Habsburgs became first the elected and then in 1627 the hereditary rulers of Bohemia. Between 1583 and 1611 Prague was the official seat of the Holy Roman Emperor Rudolf II and his court.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The Defenestration of Prague and subsequent revolt against the Habsburgs in 1618 marked the start of the Thirty Years' War. In 1620, the rebellion in Bohemia was crushed at the Battle of White Mountain and the ties between Bohemia and the Habsburgs' hereditary lands in Austria were strengthened. The leaders of the Bohemian Revolt were executed in 1621. The nobility and the middle class Protestants had to either convert to Catholicism or leave the country.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The following era of 1620 to the late 18th century became known as the \"Dark Age\". During the Thirty Years' War, the population of the Czech lands declined by a third through the expulsion of Czech Protestants as well as due to the war, disease and famine. The Habsburgs prohibited all Christian confessions other than Catholicism. The flowering of Baroque culture shows the ambiguity of this historical period. Ottoman Turks and Tatars invaded Moravia in 1663. In 1679–1680 the Czech lands faced the Great Plague of Vienna and an uprising of serfs.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "There were peasant uprisings influenced by famine. Serfdom was abolished between 1781 and 1848. Several battles of the Napoleonic Wars took place on the current territory of the Czech Republic.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "The end of the Holy Roman Empire in 1806 led to degradation of the political status of Bohemia which lost its position of an electorate of the Holy Roman Empire as well as its own political representation in the Imperial Diet. Bohemian lands became part of the Austrian Empire. During the 18th and 19th century the Czech National Revival began its rise, with the purpose to revive Czech language, culture, and national identity. The Revolution of 1848 in Prague, striving for liberal reforms and autonomy of the Bohemian Crown within the Austrian Empire, was suppressed.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "It seemed that some concessions would be made also to Bohemia, but in the end, the Emperor Franz Joseph I affected a compromise with Hungary only. The Austro-Hungarian Compromise of 1867 and the never realized coronation of Franz Joseph as King of Bohemia led to a disappointment of some Czech politicians. The Bohemian Crown lands became part of the so-called Cisleithania.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "The Czech Social Democratic and progressive politicians started the fight for universal suffrage. The first elections under universal male suffrage were held in 1907.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "In 1918, during the collapse of the Habsburg monarchy at the end of World War I, the independent republic of Czechoslovakia, which joined the winning Allied powers, was created, with Tomáš Garrigue Masaryk in the lead. This new country incorporated the Bohemian Crown.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "The First Czechoslovak Republic comprised only 27% of the population of the former Austria-Hungary, but nearly 80% of the industry, which enabled it to compete with Western industrial states. In 1929 compared to 1913, the gross domestic product increased by 52% and industrial production by 41%. In 1938 Czechoslovakia held 10th place in the world industrial production. Czechoslovakia was the only country in Central and Eastern Europe to remain a liberal democracy throughout the entire interwar period. Although the First Czechoslovak Republic was a unitary state, it provided certain rights to its minorities, the largest being Germans (23.6% in 1921), Hungarians (5.6%) and Ukrainians (3.5%).",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Western Czechoslovakia was occupied by Nazi Germany, which placed most of the region into the Protectorate of Bohemia and Moravia. The Protectorate was proclaimed part of the Third Reich, and the president and prime minister were subordinated to Nazi Germany's Reichsprotektor. One Nazi concentration camp was located within the Czech territory at Terezín, north of Prague. The vast majority of the Protectorate's Jews were murdered in Nazi-run concentration camps. The Nazi Generalplan Ost called for the extermination, expulsion, Germanization or enslavement of most or all Czechs for the purpose of providing more living space for the German people. There was Czechoslovak resistance to Nazi occupation as well as reprisals against the Czechoslovaks for their anti-Nazi resistance. The German occupation ended on 9 May 1945, with the arrival of the Soviet and American armies and the Prague uprising. Most of Czechoslovakia's German-speakers were forcibly expelled from the country, first as a result of local acts of violence and then under the aegis of an \"organized transfer\" confirmed by the Soviet Union, the United States, and Great Britain at the Potsdam Conference.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "In the 1946 elections, the Communist Party gained 38% of the votes and became the largest party in the Czechoslovak parliament, formed a coalition with other parties, and consolidated power. A coup d'état came in 1948 and a single-party government was formed. For the next 41 years, the Czechoslovak Communist state conformed to Eastern Bloc economic and political features. The Prague Spring political liberalization was stopped by the 1968 Warsaw Pact invasion of Czechoslovakia. Analysts believe that the invasion caused the communist movement to fracture, ultimately leading to the Revolutions of 1989.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "In November 1989, Czechoslovakia again became a liberal democracy through the Velvet Revolution. However, Slovak national aspirations strengthened (Hyphen War) and on 31 December 1992, the country peacefully split into the independent countries of the Czech Republic and Slovakia. Both countries went through economic reforms and privatizations, with the intention of creating a market economy, as they have been trying to do since 1990, when Czechs and Slovaks still shared the common state. This process was largely successful; in 2006 the Czech Republic was recognized by the World Bank as a \"developed country\", and in 2009 the Human Development Index ranked it as a nation of \"Very High Human Development\".",
"title": "History"
},
{
"paragraph_id": 29,
"text": "From 1991, the Czech Republic, originally as part of Czechoslovakia and since 1993 in its own right, has been a member of the Visegrád Group and from 1995, the OECD. The Czech Republic joined NATO on 12 March 1999 and the European Union on 1 May 2004. On 21 December 2007 the Czech Republic joined the Schengen Area.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "Until 2017, either the centre-left Czech Social Democratic Party or the centre-right Civic Democratic Party led the governments of the Czech Republic. In October 2017, the populist movement ANO 2011, led by the country's second-richest man, Andrej Babiš, won the elections with three times more votes than its closest rival, the Civic Democrats. In December 2017, Czech president Miloš Zeman appointed Andrej Babiš as the new prime minister.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "In the 2021 elections, ANO 2011 was narrowly defeated and Petr Fiala became the new prime minister. He formed a government coalition of the alliance SPOLU (Civic Democratic Party, KDU-ČSL and TOP 09) and the alliance of Pirates and Mayors. In January 2023, retired general Petr Pavel won the presidential election, becoming new Czech president to succeed Miloš Zeman. Following the 2022 Russian invasion of Ukraine, the country took in half a million Ukrainian refugees, the largest number per capita in the world.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "On 21 December 2023, the worst mass shooting in Czech history took place at Charles University in central Prague. In total, 15 people were killed, including the perpetrator.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "The Czech Republic lies mostly between latitudes 48° and 51° N and longitudes 12° and 19° E.",
"title": "Geography"
},
{
"paragraph_id": 34,
"text": "Bohemia, to the west, consists of a basin drained by the Elbe (Czech: Labe) and the Vltava rivers, surrounded by mostly low mountains, such as the Krkonoše range of the Sudetes. The highest point in the country, Sněžka at 1,603 m (5,259 ft), is located here. Moravia, the eastern part of the country, is also hilly. It is drained mainly by the Morava River, but it also contains the source of the Oder River (Czech: Odra).",
"title": "Geography"
},
{
"paragraph_id": 35,
"text": "Water from the Czech Republic flows to three different seas: the North Sea, Baltic Sea, and Black Sea. The Czech Republic also leases the Moldauhafen, a 30,000-square-meter (7.4-acre) lot in the middle of the Hamburg Docks, which was awarded to Czechoslovakia by Article 363 of the Treaty of Versailles, to allow the landlocked country a place where goods transported down river could be transferred to seagoing ships. The territory reverts to Germany in 2028.",
"title": "Geography"
},
{
"paragraph_id": 36,
"text": "Phytogeographically, the Czech Republic belongs to the Central European province of the Circumboreal Region, within the Boreal Kingdom. According to the World Wide Fund for Nature, the territory of the Czech Republic can be subdivided into four ecoregions: the Western European broadleaf forests, Central European mixed forests, Pannonian mixed forests, and Carpathian montane conifer forests.",
"title": "Geography"
},
{
"paragraph_id": 37,
"text": "There are four national parks in the Czech Republic. The oldest is Krkonoše National Park (Biosphere Reserve), and the others are Šumava National Park (Biosphere Reserve), Podyjí National Park, and Bohemian Switzerland.",
"title": "Geography"
},
{
"paragraph_id": 38,
"text": "The three historical lands of the Czech Republic (formerly some countries of the Bohemian Crown) correspond with the river basins of the Elbe and the Vltava basin for Bohemia, the Morava one for Moravia, and the Oder river basin for Czech Silesia (in terms of the Czech territory).",
"title": "Geography"
},
{
"paragraph_id": 39,
"text": "The Czech Republic has a temperate climate, situated in the transition zone between the oceanic and continental climate types, with warm summers and cold, cloudy and snowy winters. The temperature difference between summer and winter is due to the landlocked geographical position.",
"title": "Geography"
},
{
"paragraph_id": 40,
"text": "Temperatures vary depending on the elevation. In general, at higher altitudes, the temperatures decrease and precipitation increases. The wettest area in the Czech Republic is found around Bílý Potok in Jizera Mountains and the driest region is the Louny District to the northwest of Prague. Another factor is the distribution of the mountains.",
"title": "Geography"
},
{
"paragraph_id": 41,
"text": "At the highest peak of Sněžka (1,603 m or 5,259 ft), the average temperature is −0.4 °C (31 °F), whereas in the lowlands of the South Moravian Region, the average temperature is as high as 10 °C (50 °F). The country's capital, Prague, has a similar average temperature, although this is influenced by urban factors.",
"title": "Geography"
},
{
"paragraph_id": 42,
"text": "The coldest month is usually January, followed by February and December. During these months, there is snow in the mountains and sometimes in the cities and lowlands. During March, April, and May, the temperature usually increases, especially during April, when the temperature and weather tends to vary during the day. Spring is also characterized by higher water levels in the rivers, due to melting snow with occasional flooding.",
"title": "Geography"
},
{
"paragraph_id": 43,
"text": "The warmest month of the year is July, followed by August and June. On average, summer temperatures are about 20–30 °C (36–54 °F) higher than during winter. Summer is also characterized by rain and storms.",
"title": "Geography"
},
{
"paragraph_id": 44,
"text": "Autumn generally begins in September, which is still warm and dry. During October, temperatures usually fall below 15 °C (59 °F) or 10 °C (50 °F) and deciduous trees begin to shed their leaves. By the end of November, temperatures usually range around the freezing point.",
"title": "Geography"
},
{
"paragraph_id": 45,
"text": "The coldest temperature ever measured was in Litvínovice near České Budějovice in 1929, at −42.2 °C (−44.0 °F) and the hottest measured, was at 40.4 °C (104.7 °F) in Dobřichovice in 2012.",
"title": "Geography"
},
{
"paragraph_id": 46,
"text": "Most rain falls during the summer. Sporadic rainfall is throughout the year (in Prague, the average number of days per month experiencing at least 0.1 mm (0.0039 in) of rain varies from 12 in September and October to 16 in November) but concentrated rainfall (days with more than 10 mm (0.39 in) per day) are more frequent in the months of May to August (average around two such days per month). Severe thunderstorms, producing damaging straight-line winds, hail, and occasional tornadoes occur, especially during the summer period.",
"title": "Geography"
},
{
"paragraph_id": 47,
"text": "As of 2020, the Czech Republic ranks as the 21st most environmentally conscious country in the world in Environmental Performance Index. It had a 2018 Forest Landscape Integrity Index mean score of 1.71/10, ranking it 160th globally out of 172 countries. The Czech Republic has four National Parks (Šumava National Park, Krkonoše National Park, České Švýcarsko National Park, Podyjí National Park) and 25 Protected Landscape Areas.",
"title": "Geography"
},
{
"paragraph_id": 48,
"text": "The Czech Republic is a pluralist multi-party parliamentary representative democracy. The Parliament (Parlament České republiky) is bicameral, with the Chamber of Deputies (Czech: Poslanecká sněmovna, 200 members) and the Senate (Czech: Senát, 81 members). The members of the Chamber of Deputies are elected for a four-year term by proportional representation, with a 5% election threshold. There are 14 voting districts, identical to the country's administrative regions. The Chamber of Deputies, the successor to the Czech National Council, has the powers and responsibilities of the now defunct federal parliament of the former Czechoslovakia. The members of the Senate are elected in single-seat constituencies by two-round runoff voting for a six-year term, with one-third elected every even year in the autumn. This arrangement is modeled on the U.S. Senate, but each constituency is roughly the same size and the voting system used is a two-round runoff.",
"title": "Government"
},
{
"paragraph_id": 49,
"text": "The president is a formal head of state with limited and specific powers, who appoints the prime minister, as well the other members of the cabinet on a proposal by the prime minister. From 1993 until 2012, the President of the Czech Republic was selected by a joint session of the parliament for a five-year term, with no more than two consecutive terms (2x Václav Havel, 2x Václav Klaus). Since 2013, the president has been elected directly. Some commentators have argued that, with the introduction of direct election of the President, the Czech Republic has moved away from the parliamentary system and towards a semi-presidential one. The Government's exercise of executive power derives from the Constitution. The members of the government are the Prime Minister, Deputy prime ministers and other ministers. The Government is responsible to the Chamber of Deputies. The Prime Minister is the head of government and wields powers such as the right to set the agenda for most foreign and domestic policy and choose government ministers.",
"title": "Government"
},
{
"paragraph_id": 50,
"text": "The Czech Republic is a unitary state, with a civil law system based on the continental type, rooted in Germanic legal culture. The basis of the legal system is the Constitution of the Czech Republic adopted in 1993. The Penal Code is effective from 2010. A new Civil code became effective in 2014. The court system includes district, county, and supreme courts and is divided into civil, criminal, and administrative branches. The Czech judiciary has a triumvirate of supreme courts. The Constitutional Court consists of 15 constitutional judges and oversees violations of the Constitution by either the legislature or by the government. The Supreme Court is formed of 67 judges and is the court of highest appeal for most legal cases heard in the Czech Republic. The Supreme Administrative Court decides on issues of procedural and administrative propriety. It also has jurisdiction over certain political matters, such as the formation and closure of political parties, jurisdictional boundaries between government entities, and the eligibility of persons to stand for public office. The Supreme Court and the Supreme Administrative Court are both based in Brno, as is the Supreme Public Prosecutor's Office.",
"title": "Government"
},
{
"paragraph_id": 51,
"text": "The Czech Republic has ranked as one of the safest or most peaceful countries for the past few decades. It is a member of the United Nations, the European Union, NATO, OECD, Council of Europe and is an observer to the Organization of American States. The embassies of most countries with diplomatic relations with the Czech Republic are located in Prague, while consulates are located across the country.",
"title": "Government"
},
{
"paragraph_id": 52,
"text": "The Czech passport is restricted by visas. According to the 2018 Henley & Partners Visa Restrictions Index, Czech citizens have visa-free access to 173 countries, which ranks them 7th along with Malta and New Zealand. The World Tourism Organization ranks the Czech passport 24th. The US Visa Waiver Program applies to Czech nationals.",
"title": "Government"
},
{
"paragraph_id": 53,
"text": "The Prime Minister and Minister of Foreign Affairs have primary roles in setting foreign policy, although the President also has influence and represents the country abroad. Membership in the European Union and NATO is central to the Czech Republic's foreign policy. The Office for Foreign Relations and Information (ÚZSI) serves as the foreign intelligence agency responsible for espionage and foreign policy briefings, as well as protection of Czech Republic's embassies abroad.",
"title": "Government"
},
{
"paragraph_id": 54,
"text": "The Czech Republic has ties with Slovakia, Poland and Hungary as a member of the Visegrád Group, as well as with Germany, Israel, the United States and the European Union and its members. After 2020, relations with Asian democratic states, such as Taiwan, are being strengthened. On the contrary, the Czech Republic has long had bad relations with Russia, and from 2021 the Czech Republic appears on Russia's official list of enemy countries. The Czech Republic also has problematic relations with China.",
"title": "Government"
},
{
"paragraph_id": 55,
"text": "Czech officials have supported dissenters in Belarus, Moldova, Myanmar and Cuba.",
"title": "Government"
},
{
"paragraph_id": 56,
"text": "Famous Czech diplomats of the past included Jaroslav Lev of Rožmitál, Humprecht Jan Czernin, Count Philip Kinsky of Wchinitz and Tettau, Wenzel Anton, Prince of Kaunitz-Rietberg, Prince Karl Philipp Schwarzenberg, Alois Lexa von Aehrenthal, Ottokar Czernin, Edvard Beneš, Jan Masaryk, Jiří Hájek, Jiří Dienstbier, Michael Žantovský, Petr Kolář, Alexandr Vondra, Prince Karel Schwarzenberg and Petr Pavel.",
"title": "Government"
},
{
"paragraph_id": 57,
"text": "The Czech armed forces consist of the Czech Land Forces, the Czech Air Force and of specialized support units. The armed forces are managed by the Ministry of Defence. The President of the Czech Republic is Commander-in-chief of the armed forces. In 2004 the army transformed itself into a fully professional organization and compulsory military service was abolished. The country has been a member of NATO since 12 March 1999. Defence spending is approximately 1.28% of the GDP (2021). The armed forces are charged with protecting the Czech Republic and its allies, promoting global security interests, and contributing to NATO.",
"title": "Government"
},
{
"paragraph_id": 58,
"text": "Currently, as a member of NATO, the Czech military are participating in the Resolute Support and KFOR operations and have soldiers in Afghanistan, Mali, Bosnia and Herzegovina, Kosovo, Egypt, Israel and Somalia. The Czech Air Force also served in the Baltic states and Iceland. The main equipment of the Czech military includes JAS 39 Gripen multi-role fighters, Aero L-159 Alca combat aircraft, Mi-35 attack helicopters, armored vehicles (Pandur II, OT-64, OT-90, BVP-2) and tanks (T-72 and T-72M4CZ).",
"title": "Government"
},
{
"paragraph_id": 59,
"text": "The most famous Czech, and therefore Czechoslovak, soldiers and military leaders of the past were Ottokar II of Bohemia, John of Bohemia, Jan Žižka, Albrecht von Wallenstein, Karl Philipp, Prince of Schwarzenberg, Joseph Radetzky von Radetz, Josef Šnejdárek, Heliodor Píka, Ludvík Svoboda, Jan Kubiš, Jozef Gabčík, František Fajtl and Petr Pavel.",
"title": "Government"
},
{
"paragraph_id": 60,
"text": "Human rights in the Czech Republic are guaranteed by the Charter of Fundamental Rights and Freedoms and international treaties on human rights. Nevertheless, there were cases of human rights violations such as discrimination against Roma children, for which the European Commission asked the Czech Republic to provide an explanation, or the illegal sterilization of Roma women, for which the government apologized.",
"title": "Government"
},
{
"paragraph_id": 61,
"text": "Prague is the seat of Radio Free Europe/Radio Liberty. Today, the station is based in Hagibor. At the beginning of the 1990s, Václav Havel personally invited her to Czechoslovakia.",
"title": "Government"
},
{
"paragraph_id": 62,
"text": "People of the same sex can enter into a \"registered partnership\" in the Czech Republic. Conducting same-sex marriage is not legal under current Czech law.",
"title": "Government"
},
{
"paragraph_id": 63,
"text": "The best-known Czech activists and supporters of human rights include Berta von Suttner, born in Prague, who won the Nobel Peace Prize for her pacifist struggle, philosopher and the first Czechoslovak president Tomáš Garrigue Masaryk, student Jan Palach, who set himself on fire in 1969 in protest against the Soviet occupation, Karel Schwarzenberg, who was chairman of the International Helsinki Committee for Human Rights between 1984 and 1990, Václav Havel, long-time dissident and later president, sociologist and dissident Jiřina Šiklová and Šimon Pánek, founder and director of the People in Need organization.",
"title": "Government"
},
{
"paragraph_id": 64,
"text": "Since 2000, the Czech Republic has been divided into thirteen regions (Czech: kraje, singular kraj) and the capital city of Prague. Every region has its own elected regional assembly and a regional governor. In Prague, the assembly and presidential powers are executed by the city council and the mayor.",
"title": "Government"
},
{
"paragraph_id": 65,
"text": "The older seventy-six districts (okresy, singular okres) including three \"statutory cities\" (without Prague, which had special status) lost most of their importance in 1999 in an administrative reform; they remain as territorial divisions and seats of various branches of state administration.",
"title": "Government"
},
{
"paragraph_id": 66,
"text": "The smallest administrative units are obce (municipalities). As of 2021, the Czech Republic is divided into 6,254 municipalities. Cities and towns are also municipalities. The capital city of Prague is a region and municipality at the same time.",
"title": "Government"
},
{
"paragraph_id": 67,
"text": "The Czech Republic has a developed, high-income export-oriented social market economy based in services, manufacturing and innovation, that maintains a welfare state and the European social model. The Czech Republic participates in the European Single Market as a member of the European Union and is therefore a part of the economy of the European Union, but uses its own currency, the Czech koruna, instead of the euro. It has a per capita GDP rate that is 91% of the EU average and is a member of the OECD. Monetary policy is conducted by the Czech National Bank, whose independence is guaranteed by the Constitution. The Czech Republic ranks 12th in the UN inequality-adjusted human development and 24th in World Bank Human Capital Index. It was described by The Guardian as \"one of Europe's most flourishing economies\".",
"title": "Economy"
},
{
"paragraph_id": 68,
"text": "As of 2023, the country's GDP per capita at purchasing power parity is $51,329 and $29,856 at nominal value. According to Allianz A.G., in 2018 the country was an MWC (mean wealth country), ranking 26th in net financial assets. The country experienced a 4.5% GDP growth in 2017. The 2016 unemployment rate was the lowest in the EU at 2.4%, and the 2016 poverty rate was the second lowest of OECD members. Czech Republic ranks 27th in the 2021 Index of Economic Freedom, 31st in the 2023 Global Innovation Index, down from 24th in the 2016, 29th in the Global Competitiveness Report, and 25th in the Global Enabling Trade Report. The Czech Republic has a diverse economy that ranks 7th in the 2016 Economic Complexity Index. The industrial sector accounts for 37.5% of the economy, while services account for 60% and agriculture for 2.5%. The largest trading partner for both export and import is Germany and the EU in general. Dividends worth CZK 270 billion were paid to the foreign owners of Czech companies in 2017, which has become a political issue. The country has been a member of the Schengen Area since 1 May 2004, having abolished border controls, completely opening its borders with all of its neighbors on 21 December 2007.",
"title": "Economy"
},
{
"paragraph_id": 69,
"text": "In 2018 the largest companies by revenue in the Czech Republic were: automobile manufacturer Škoda Auto, utility company ČEZ Group, conglomerate Agrofert, energy trading company EPH, oil processing company Unipetrol, electronics manufacturer Foxconn CZ and steel producer Moravia Steel. Other Czech transportation companies include: Škoda Transportation (tramways, trolleybuses, metro), Tatra (heavy trucks, the second oldest car maker in the world), Avia (medium trucks), Karosa and SOR Libchavy (buses), Aero Vodochody (military aircraft), Let Kunovice (civil aircraft), Zetor (tractors), Jawa Moto (motorcycles) and Čezeta (electric scooters).",
"title": "Economy"
},
{
"paragraph_id": 70,
"text": "Škoda Transportation is the fourth largest tram producer in the world; nearly one third of all trams in the world come from Czech factories. The Czech Republic is also the world's largest vinyl records manufacturer, with GZ Media producing about 6 million pieces annually in Loděnice. Česká zbrojovka is among the ten largest firearms producers in the world and five who produce automatic weapons.",
"title": "Economy"
},
{
"paragraph_id": 71,
"text": "In the food industry, Czech companies include Agrofert, Kofola and Hamé.",
"title": "Economy"
},
{
"paragraph_id": 72,
"text": "Production of Czech electricity exceeds consumption by about 10 TWh per year, the excess being exported. Nuclear power presently provides about 30 percent of the total power needs, its share is projected to increase to 40 percent. In 2005, 65.4 percent of electricity was produced by steam and combustion power plants (mostly coal); 30 percent by nuclear plants; and 4.6 percent came from renewable sources, including hydropower. The largest Czech power resource is Temelín Nuclear Power Station, with another nuclear power plant in Dukovany.",
"title": "Economy"
},
{
"paragraph_id": 73,
"text": "The Czech Republic is reducing its dependence on highly polluting low-grade brown coal as a source of energy. Natural gas is purchased from Norwegian companies and as liquefied gas LNG from the Netherlands and Belgium. In the past, three-quarters of gas supplies came from Russia, but after the outbreak of the war in Ukraine, the government gradually stopped these supplies. Gas consumption (approx. 100 TWh in 2003–2005) is almost double electricity consumption. South Moravia has small oil and gas deposits.",
"title": "Economy"
},
{
"paragraph_id": 74,
"text": "As of 2020, the road network in the Czech Republic is 55,768.3 kilometers (34,652.82 mi) long, out of which 1,276.4 km (793.1 mi) are motorways. The speed limit is 50 km/h (31 mph) within towns, 90 km/h (56 mph) outside of towns and 130 km/h (81 mph) on motorways.",
"title": "Economy"
},
{
"paragraph_id": 75,
"text": "The Czech Republic has one of the densest rail networks in the world. As of 2020, the country has 9,542 kilometers (5,929 mi) of lines. Of that number, 3,236 km (2,011 mi) is electrified, 7,503 km (4,662 mi) are single-line tracks and 2,040 km (1,270 mi) are double and multiple-line tracks. The length of tracks is 15,360 km (9,540 mi), out of which 6,917 km (4,298 mi) is electrified.",
"title": "Economy"
},
{
"paragraph_id": 76,
"text": "České dráhy (the Czech Railways) is the main railway operator in the country, with about 180 million passengers carried yearly. Maximum speed is limited to 160 km/h (99 mph).",
"title": "Economy"
},
{
"paragraph_id": 77,
"text": "Václav Havel Airport in Prague is the main international airport in the country. In 2019, it handled 17.8 million passengers. In total, the Czech Republic has 91 airports, six of which provide international air services. The public international airports are in Brno, Karlovy Vary, Mnichovo Hradiště, Mošnov (near Ostrava), Pardubice and Prague. The non-public international airports capable of handling airliners are in Kunovice and Vodochody.",
"title": "Economy"
},
{
"paragraph_id": 78,
"text": "Russia, via pipelines through Ukraine and to a lesser extent, Norway, via pipelines through Germany, supply the Czech Republic with liquid and natural gas.",
"title": "Economy"
},
{
"paragraph_id": 79,
"text": "The Czech Republic ranks in the top 10 countries worldwide with the fastest average internet speed. By the beginning of 2008, there were over 800 mostly local WISPs, with about 350,000 subscribers in 2007. Plans based on either GPRS, EDGE, UMTS or CDMA2000 are being offered by all three mobile phone operators (T-Mobile, O2, Vodafone) and internet provider U:fon. Government-owned Český Telecom slowed down broadband penetration. At the beginning of 2004, local-loop unbundling began and alternative operators started to offer ADSL and also SDSL. This and later privatization of Český Telecom helped drive down prices.",
"title": "Economy"
},
{
"paragraph_id": 80,
"text": "On 1 July 2006, Český Telecom was acquired by globalized company (Spain-owned) Telefónica group and adopted the new name Telefónica O2 Czech Republic. As of 2017, VDSL and ADSL2+ are offered in variants, with download speeds of up to 50 Mbit/s and upload speeds of up to 5 Mbit/s. Cable internet is gaining more popularity with its higher download speeds ranging from 50 Mbit/s to 1 Gbit/s.",
"title": "Economy"
},
{
"paragraph_id": 81,
"text": "Two computer security companies, Avast and AVG, were founded in the Czech Republic. In 2016, Avast led by Pavel Baudiš bought rival AVG for US$1.3 billion, together at the time, these companies had a user base of about 400 million people and 40% of the consumer market outside of China. Avast is the leading provider of antivirus software, with a 20.5% market share.",
"title": "Economy"
},
{
"paragraph_id": 82,
"text": "Prague is the fifth most visited city in Europe after London, Paris, Istanbul and Rome. In 2001, the total earnings from tourism reached 118 billion CZK, making up 5.5% of GNP and 9% of overall export earnings. The industry employs more than 110,000 people – over 1% of the population. Guidebooks and tourists reporting overcharging by taxi drivers and pickpocketing problems are mainly in Prague, though the situation has improved recently. Since 2005, Prague's mayor, Pavel Bém, has worked to improve this reputation by cracking down on petty crime and, aside from these problems, Prague is a \"safe\" city. The Czech Republic's crime rate is described by the United States State department as \"low\".",
"title": "Economy"
},
{
"paragraph_id": 83,
"text": "One of the tourist attractions in the Czech Republic is the Nether district Vítkovice in Ostrava.",
"title": "Economy"
},
{
"paragraph_id": 84,
"text": "The Czech Republic boasts 16 UNESCO World Heritage Sites, 3 of them are transnational. As of 2021, further 14 sites are on the tentative list.",
"title": "Economy"
},
{
"paragraph_id": 85,
"text": "Architectural heritage is an object of interest to visitors – it includes castles and châteaux from different historical epoques, namely Karlštejn Castle, Český Krumlov and the Lednice–Valtice Cultural Landscape. There are 12 cathedrals and 15 churches elevated to the rank of basilica by the Pope, calm monasteries.",
"title": "Economy"
},
{
"paragraph_id": 86,
"text": "Away from the towns, areas such as Bohemian Paradise, Bohemian Forest and the Giant Mountains attract visitors seeking outdoor pursuits. There is a number of beer festivals.",
"title": "Economy"
},
{
"paragraph_id": 87,
"text": "The country is also known for its various museums. Puppetry and marionette exhibitions are with a number of puppet festivals throughout the country. Aquapalace Prague in Čestlice is the largest water park in the country.",
"title": "Economy"
},
{
"paragraph_id": 88,
"text": "The Czech lands have a long and well-documented history of scientific innovation. Today, the Czech Republic has a highly sophisticated, developed, high-performing, innovation-oriented scientific community supported by the government, industry, and leading universities. Czech scientists are embedded members of the global scientific community. They contribute annually to multiple international academic journals and collaborate with their colleagues across boundaries and fields. The Czech Republic was ranked 24th in the Global Innovation Index in 2020 and 2021, up from 26th in 2019.",
"title": "Economy"
},
{
"paragraph_id": 89,
"text": "Historically, the Czech lands, especially Prague, have been the seat of scientific discovery going back to early modern times, including Tycho Brahe, Nicolaus Copernicus, and Johannes Kepler. In 1784 the scientific community was first formally organized under the charter of the Royal Czech Society of Sciences. Currently, this organization is known as the Czech Academy of Sciences. Similarly, the Czech lands have a well-established history of scientists, including Nobel laureates biochemists Gerty and Carl Ferdinand Cori, chemists Jaroslav Heyrovský and Otto Wichterle, physicists Ernst Mach and Peter Grünberg, physiologist Jan Evangelista Purkyně and chemist Antonín Holý. Sigmund Freud, the founder of psychoanalysis, was born in Příbor, Gregor Mendel, the founder of genetics, was born in Hynčice and spent most of his life in Brno, logician and mathematician Kurt Gödel was born in Brno.",
"title": "Economy"
},
{
"paragraph_id": 90,
"text": "Historically, most scientific research was recorded in Latin, but from the 18th century onwards increasingly in German and later in Czech, archived in libraries supported and managed by religious groups and other denominations as evidenced by historical locations of international renown and heritage such as the Strahov Monastery and the Clementinum in Prague. Increasingly, Czech scientists publish their work and that of their history in English.",
"title": "Economy"
},
{
"paragraph_id": 91,
"text": "The current important scientific institution is the already mentioned Academy of Sciences of the Czech Republic, the CEITEC Institute in Brno or the HiLASE and Eli Beamlines centers with the most powerful laser in the world in Dolní Břežany. Prague is the seat of the administrative center of the GSA Agency operating the European navigation system Galileo and the European Union Agency for the Space Programme.",
"title": "Economy"
},
{
"paragraph_id": 92,
"text": "The total fertility rate (TFR) in 2020 was estimated at 1.71 children per woman, which is below the replacement rate of 2.1. The Czech Republic's population has an average age of 43.3 years. The life expectancy in 2021 was estimated to be 79.5 years (76.55 years male, 82.61 years female). About 77,000 people immigrate to the Czech Republic annually. Vietnamese immigrants began settling in the country during the Communist period, when they were invited as guest workers by the Czechoslovak government. In 2009, there were about 70,000 Vietnamese in the Czech Republic. Most decide to stay in the country permanently.",
"title": "Demographics"
},
{
"paragraph_id": 93,
"text": "According to results of the 2021 census, the majority of the inhabitants of the Czech Republic are Czechs (57.3%), followed by Moravians (3.4%), Slovaks (0.9%), Ukrainians (0.7%), Viets (0.3%), Poles (0.3%), Russians (0.2%), Silesians (0.1%) and Germans (0.1%). Another 4.0% declared combination of two nationalities (3.6% combination of Czech and other nationality). As the 'nationality' was an optional item, a number of people left this field blank (31.6%). According to some estimates, there are about 250,000 Romani people in the Czech Republic. The Polish minority resides mainly in the Trans-Olza region.",
"title": "Demographics"
},
{
"paragraph_id": 94,
"text": "There were 658,564 foreigners residing in the country in 2021, according to the Czech Statistical Office, with the largest groups being Ukrainian (22%), Slovak (22%), Vietnamese (12%), Russian (7%) and German (4%). Most of the foreign population lives in Prague (37.3%) and Central Bohemia Region (13.2%).",
"title": "Demographics"
},
{
"paragraph_id": 95,
"text": "The Jewish population of Bohemia and Moravia, 118,000 according to the 1930 census, was nearly annihilated by the Nazi Germans during the Holocaust. There were approximately 3,900 Jews in the Czech Republic in 2021. The former Czech prime minister, Jan Fischer, is of Jewish faith.",
"title": "Demographics"
},
{
"paragraph_id": 96,
"text": "Nationality of residents, who answered the question in the Census 2021:",
"title": "Demographics"
},
{
"paragraph_id": 97,
"text": "About 75% to 79% of residents of the Czech Republic do not declare having any religion or faith in surveys, and the proportion of convinced atheists (30%) is the third highest in the world behind those of China (47%) and Japan (31%). The Czech people have been historically characterized as \"tolerant and even indifferent towards religion\". The religious identity of the country has changed drastically since the first half of the 20th century, when more than 90% of Czechs were Christians.",
"title": "Demographics"
},
{
"paragraph_id": 98,
"text": "Christianization in the 9th and 10th centuries introduced Catholicism. After the Bohemian Reformation, most Czechs became followers of Jan Hus, Petr Chelčický and other regional Protestant Reformers. Taborites and Utraquists were Hussite groups. Towards the end of the Hussite Wars, the Utraquists changed sides and allied with the Catholic Church. Following the joint Utraquist—Catholic victory, Utraquism was accepted as a distinct form of Christianity to be practiced in Bohemia by the Catholic Church while all remaining Hussite groups were prohibited. After the Reformation, some Bohemians went with the teachings of Martin Luther, especially Sudeten Germans. In the wake of the Reformation, Utraquist Hussites took a renewed increasingly anti-Catholic stance, while some of the defeated Hussite factions were revived. After the Habsburgs regained control of Bohemia, the whole population was forcibly converted to Catholicism—even the Utraquist Hussites. Going forward, Czechs have become more wary and pessimistic of religion as such. A history of resistance to the Catholic Church followed. It suffered a schism with the neo-Hussite Czechoslovak Hussite Church in 1920, lost the bulk of its adherents during the Communist era and continues to lose in the modern, ongoing secularization. Protestantism never recovered after the Counter-Reformation was introduced by the Austrian Habsburgs in 1620. Prior to the Holocaust, the Czech Republic had a sizable Jewish community of around 100,000. There are many historically important and culturally relevant Synagogues in the Czech Republic such as Europe's oldest active Synagogue, The Old New Synagogue and the second largest Synagogue in Europe, the Great Synagogue (Plzeň). The Holocaust decimated Czech Jewry and the Jewish population as of 2021 is 3,900.",
"title": "Demographics"
},
{
"paragraph_id": 99,
"text": "According to the 2011 census, 34% of the population stated they had no religion, 10.3% was Catholic, 0.8% was Protestant (0.5% Czech Brethren and 0.4% Hussite), and 9% followed other forms of religion both denominational or not (of which 863 people answered they are Pagan). 45% of the population did not answer the question about religion. From 1991 to 2001 and further to 2011 the adherence to Catholicism decreased from 39% to 27% and then to 10%; Protestantism similarly declined from 3.7% to 2% and then to 0.8%. The Muslim population is estimated to be 20,000 representing 0.2% of the population.",
"title": "Demographics"
},
{
"paragraph_id": 100,
"text": "The proportion of religious believers varies significantly across the country, from 55% in Zlín Region to 16% in Ústí nad Labem Region.",
"title": "Demographics"
},
{
"paragraph_id": 101,
"text": "Education in the Czech Republic is compulsory for nine years and citizens have access to a free-tuition university education, while the average number of years of education is 13.1. Additionally, the Czech Republic has a \"relatively equal\" educational system in comparison with other countries in Europe. Founded in 1348, Charles University was the first university in Central Europe. Other major universities in the country are Masaryk University, Czech Technical University, Palacký University, Academy of Performing Arts and University of Economics.",
"title": "Demographics"
},
{
"paragraph_id": 102,
"text": "The Programme for International Student Assessment, coordinated by the OECD, currently ranks the Czech education system as the 15th most successful in the world, higher than the OECD average. The UN Education Index ranks the Czech Republic 10th as of 2013 (positioned behind Denmark and ahead of South Korea).",
"title": "Demographics"
},
{
"paragraph_id": 103,
"text": "Health care in the Czech Republic is similar in quality to that of other developed nations. The Czech universal health care system is based on a compulsory insurance model, with fee-for-service care funded by mandatory employment-related insurance plans. According to the 2016 Euro health consumer index, a comparison of healthcare in Europe, the Czech healthcare is 13th, ranked behind Sweden and two positions ahead of the United Kingdom.",
"title": "Demographics"
},
{
"paragraph_id": 104,
"text": "Venus of Dolní Věstonice is the treasure of prehistoric art. Theodoric of Prague was a painter in the Gothic era who decorated the castle Karlstejn. In the Baroque era, there were Wenceslaus Hollar, Jan Kupecký, Karel Škréta, Anton Raphael Mengs or Petr Brandl, sculptors Matthias Braun and Ferdinand Brokoff. In the first half of the 19th century, Josef Mánes joined the romantic movement. In the second half of the 19th century had the main say the so-called \"National Theatre generation\": sculptor Josef Václav Myslbek and painters Mikoláš Aleš, Václav Brožík, Vojtěch Hynais or Julius Mařák. At the end of the century came a wave of Art Nouveau. Alfons Mucha became the main representative. He is known for Art Nouveau posters and his cycle of 20 large canvases named the Slav Epic, which depicts the history of Czechs and other Slavs. As of 2012, the Slav Epic can be seen in the Veletržní Palace of the National Gallery in Prague, which manages the largest collection of art in the Czech Republic. Max Švabinský was another Art nouveau painter. The 20th century brought an avant-garde revolution. In the Czech lands mainly expressionist and cubist: Josef Čapek, Emil Filla, Bohumil Kubišta, Jan Zrzavý. Surrealism emerged particularly in the work of Toyen, Josef Šíma and Karel Teige. In the world, however, he pushed mainly František Kupka, a pioneer of abstract painting. As illustrators and cartoonists in the first half of the 20th century gained fame Josef Lada, Zdeněk Burian or Emil Orlík. Art photography has become a new field (František Drtikol, Josef Sudek, later Jan Saudek or Josef Koudelka).",
"title": "Culture"
},
{
"paragraph_id": 105,
"text": "The Czech Republic is known for its individually made, mouth-blown, and decorated Bohemian glass.",
"title": "Culture"
},
{
"paragraph_id": 106,
"text": "The earliest preserved stone buildings in Bohemia and Moravia date back to the time of the Christianization in the 9th and 10th centuries. Since the Middle Ages, the Czech lands have been using the same architectural styles as most of Western and Central Europe. The oldest still standing churches were built in the Romanesque style. During the 13th century, it was replaced by the Gothic style. In the 14th century, Emperor Charles IV invited architects from France and Germany, Matthias of Arras and Peter Parler, to his court in Prague. During the Middle Ages, some fortified castles were built by the king and aristocracy, as well as some monasteries.",
"title": "Culture"
},
{
"paragraph_id": 107,
"text": "The Renaissance style penetrated the Bohemian Crown in the late 15th century when the older Gothic style started to be mixed with Renaissance elements. An example of pure Renaissance architecture in Bohemia is the Queen Anne's Summer Palace, which was situated in the garden of Prague Castle. Evidence of the general reception of the Renaissance in Bohemia, involving an influx of Italian architects, can be found in spacious chateaus with arcade courtyards and geometrically arranged gardens. Emphasis was placed on comfort, and buildings that were built for entertainment purposes also appeared.",
"title": "Culture"
},
{
"paragraph_id": 108,
"text": "In the 17th century, the Baroque style spread throughout the Crown of Bohemia.",
"title": "Culture"
},
{
"paragraph_id": 109,
"text": "In the 18th century, Bohemia produced an architectural peculiarity – the Baroque Gothic style, a synthesis of the Gothic and Baroque styles.",
"title": "Culture"
},
{
"paragraph_id": 110,
"text": "During the 19th century stands the revival architectural styles. Some churches were restored to their presumed medieval appearance and there were constructed buildings in the Neo-Romanesque, Neo-Gothic and Neo-Renaissance styles. At the turn of the 19th and 20th centuries, the new art style appeared in the Czech lands – Art Nouveau.",
"title": "Culture"
},
{
"paragraph_id": 111,
"text": "Bohemia contributed an unusual style to the world's architectural heritage when Czech architects attempted to transpose the Cubism of painting and sculpture into architecture.",
"title": "Culture"
},
{
"paragraph_id": 112,
"text": "Between World Wars I and II, Functionalism, with its sober, progressive forms, took over as the main architectural style.",
"title": "Culture"
},
{
"paragraph_id": 113,
"text": "After World War II and the Communist coup in 1948, art in Czechoslovakia became Soviet-influenced. The Czechoslovak avant-garde artistic movement is known as the Brussels style came up in the time of political liberalization of Czechoslovakia in the 1960s. Brutalism dominated in the 1970s and 1980s.",
"title": "Culture"
},
{
"paragraph_id": 114,
"text": "The Czech Republic is not shying away from the more modern trends of international architecture, an example is the Dancing House (Tančící dům) in Prague, Golden Angel in Prague or Congress Centre in Zlín.",
"title": "Culture"
},
{
"paragraph_id": 115,
"text": "Influential Czech architects include Peter Parler, Benedikt Rejt, Jan Santini Aichel, Kilian Ignaz Dientzenhofer, Josef Fanta, Josef Hlávka, Josef Gočár, Pavel Janák, Jan Kotěra, Věra Machoninová, Karel Prager, Karel Hubáček, Jan Kaplický, Eva Jiřičná or Josef Pleskot.",
"title": "Culture"
},
{
"paragraph_id": 116,
"text": "The literature from the area of today's Czech Republic was mostly written in Czech, but also in Latin and German or even Old Church Slavonic. Franz Kafka, although a competent user of Czech, wrote in his mother tongue, German. His included: (The Trial and The Castle).",
"title": "Culture"
},
{
"paragraph_id": 117,
"text": "In the second half of the 13th century, the royal court in Prague became one of the centers of German Minnesang and courtly literature. The Czech German-language literature can be seen in the first half of the 20th century.",
"title": "Culture"
},
{
"paragraph_id": 118,
"text": "Bible translations played a role in the development of Czech literature. The oldest Czech translation of the Psalms originated in the late 13th century and the first complete Czech translation of the Bible was finished around 1360. The first complete printed Czech Bible was published in 1488. The first complete Czech Bible translation from the original languages was published between 1579 and 1593. The Codex Gigas from the 12th century is the largest extant medieval manuscript in the world.",
"title": "Culture"
},
{
"paragraph_id": 119,
"text": "Czech-language literature can be divided into several periods: the Middle Ages; the Hussite period; the Renaissance humanism; the Baroque period; the Enlightenment and Czech reawakening in the first half of the 19th century, modern literature in the second half of the 19th century; the avant-garde of the interwar period; the years under Communism; and the Czech Republic.",
"title": "Culture"
},
{
"paragraph_id": 120,
"text": "The antiwar comedy novel The Good Soldier Švejk is the most translated Czech book in history.",
"title": "Culture"
},
{
"paragraph_id": 121,
"text": "The international literary award the Franz Kafka Prize is awarded in the Czech Republic.",
"title": "Culture"
},
{
"paragraph_id": 122,
"text": "The Czech Republic has the densest network of libraries in Europe.",
"title": "Culture"
},
{
"paragraph_id": 123,
"text": "Czech literature and culture played a role on at least two occasions when Czechs lived under oppression and political activity was suppressed. On both of these occasions, in the early 19th century and then again in the 1960s, the Czechs used their cultural and literary effort to strive for political freedom, establishing a confident, politically aware nation.",
"title": "Culture"
},
{
"paragraph_id": 124,
"text": "The musical tradition of the Czech lands arose from the first church hymns, whose first evidence is suggested at the break of the 10th and 11th centuries. Some pieces of Czech music include two chorales, which in their time performed the function of anthems: \"Lord, Have Mercy on Us\" and the hymn \"Saint Wenceslaus\" or \"Saint Wenceslaus Chorale\". The authorship of the anthem \"Lord, Have Mercy on Us\" is ascribed by some historians to Saint Adalbert of Prague (sv.Vojtěch), bishop of Prague, living between 956 and 997.",
"title": "Culture"
},
{
"paragraph_id": 125,
"text": "The wealth of musical culture lies in the classical music tradition during all historical periods, especially in the Baroque, Classicism, Romantic, modern classical music and in the traditional folk music of Bohemia, Moravia and Silesia. Since the early era of artificial music, Czech musicians and composers have been influenced the folk music of the region and dance.",
"title": "Culture"
},
{
"paragraph_id": 126,
"text": "Czech music can be considered to have been \"beneficial\" in both the European and worldwide context, several times co-determined or even determined a newly arriving era in musical art, above all of Classical era, as well as by original attitudes in Baroque, Romantic and modern classical music. Some Czech musical works are The Bartered Bride, New World Symphony, Sinfonietta and Jenůfa.",
"title": "Culture"
},
{
"paragraph_id": 127,
"text": "A music festival in the country is Prague Spring International Music Festival of classical music, a permanent showcase for performing artists, symphony orchestras and chamber music ensembles of the world.",
"title": "Culture"
},
{
"paragraph_id": 128,
"text": "The roots of Czech theatre can be found in the Middle Ages, especially in the cultural life of the Gothic period. In the 19th century, the theatre played a role in the national awakening movement and later, in the 20th century, it became a part of modern European theatre art. The original Czech cultural phenomenon came into being at the end of the 1950s. This project called Laterna magika, resulting in productions that combined theater, dance, and film in a poetic manner, considered the first multimedia art project in an international context.",
"title": "Culture"
},
{
"paragraph_id": 129,
"text": "A drama is Karel Čapek's play R.U.R., which introduced the word \"robot\".",
"title": "Culture"
},
{
"paragraph_id": 130,
"text": "The country has a tradition of puppet theater. In 2016, Czech and Slovak Puppetry was included on the UNESCO Intangible Cultural Heritage Lists.",
"title": "Culture"
},
{
"paragraph_id": 131,
"text": "The tradition of Czech cinematography started in the second half of the 1890s. Peaks of the production in the era of silent movies include the historical drama The Builder of the Temple and the social and erotic drama Erotikon directed by Gustav Machatý. The early Czech sound film era was productive, above all in mainstream genres, with the comedies of Martin Frič or Karel Lamač. There were dramatic movies sought internationally.",
"title": "Culture"
},
{
"paragraph_id": 132,
"text": "Hermína Týrlová was a prominent Czech animator, screenwriter, and film director. She was often called the mother of Czech animation. Over the course of her career, she produced over 60 animated children's short films using puppets and the technique of stop motion animation.",
"title": "Culture"
},
{
"paragraph_id": 133,
"text": "Before the German occupation, in 1933, filmmaker and animator Irena Dodalová [cs] established the first Czech animation studio \"IRE Film\" with her husband Karel Dodal.",
"title": "Culture"
},
{
"paragraph_id": 134,
"text": "After the period of Nazi occupation and early communist official dramaturgy of socialist realism in movies at the turn of the 1940s and 1950s with fewer exceptions such as Krakatit or Men without wings (awarded by Palme d'Or in 1946), an era of the Czech film began with animated films, performed in anglophone countries under the name \"The Fabulous World of Jules Verne\" from 1958, which combined acted drama with animation, and Jiří Trnka, the founder of the modern puppet film. This began a tradition of animated films (Mole etc.).",
"title": "Culture"
},
{
"paragraph_id": 135,
"text": "In the 1960s, the hallmark of Czechoslovak New Wave's films were improvised dialogues, black and absurd humor and the occupation of non-actors. Directors are trying to preserve natural atmosphere without refinement and artificial arrangement of scenes. A personality of the 1960s and the beginning of the 1970s with original manuscript and psychological impact is František Vláčil. Another international author is Jan Švankmajer, a filmmaker and artist whose work spans several media. He is a self-labeled surrealist known for animations and features.",
"title": "Culture"
},
{
"paragraph_id": 136,
"text": "The Barrandov Studios in Prague are the largest film studios with film locations in the country. Filmmakers have come to Prague to shoot scenery no longer found in Berlin, Paris and Vienna. The city of Karlovy Vary was used as a location for the 2006 James Bond film Casino Royale.",
"title": "Culture"
},
{
"paragraph_id": 137,
"text": "The Czech Lion is the highest Czech award for film achievement. Karlovy Vary International Film Festival is one of the film festivals that have been given competitive status by the FIAPF. Other film festivals held in the country include Febiofest, Jihlava International Documentary Film Festival, One World Film Festival, Zlín Film Festival and Fresh Film Festival.",
"title": "Culture"
},
{
"paragraph_id": 138,
"text": "Czech journalists and media enjoy a degree of freedom. There are restrictions against writing in support of Nazism, racism or violating Czech law. The Czech press was ranked as the 40th most free press in the World Freedom Index by Reporters Without Borders in 2021. Radio Free Europe/Radio Liberty has its headquarters in Prague.",
"title": "Culture"
},
{
"paragraph_id": 139,
"text": "The national public television service is Czech Television that operates the 24-hour news channel ČT24 and the news website ct24.cz. As of 2020, Czech Television is the most watched television, followed by private televisions TV Nova and Prima TV. However, TV Nova has the most watched main news program and prime time program. Other public services include the Czech Radio and the Czech News Agency.",
"title": "Culture"
},
{
"paragraph_id": 140,
"text": "The best-selling daily national newspapers in 2020/21 are Blesk (average 703,000 daily readers), Mladá fronta DNES (average 461,000 daily readers), Právo (average 182,000 daily readers), Lidové noviny (average 163,000 daily readers) and Hospodářské noviny (average 162,000 daily readers).",
"title": "Culture"
},
{
"paragraph_id": 141,
"text": "Most Czechs (87%) read their news online, with Seznam.cz, iDNES.cz, Novinky.cz, iPrima.cz and Seznam Zprávy.cz being the most visited as of 2021.",
"title": "Culture"
},
{
"paragraph_id": 142,
"text": "Czech cuisine is marked by an emphasis on meat dishes with pork, beef, and chicken. Goose, duck, rabbit, and venison are served. Fish is less common, with the occasional exception of fresh trout and carp, which is served at Christmas.",
"title": "Culture"
},
{
"paragraph_id": 143,
"text": "There is a variety of local sausages, wurst, pâtés, and smoked and cured meats. Czech desserts include a variety of whipped cream, chocolate, and fruit pastries and tarts, crêpes, creme desserts and cheese, poppy-seed-filled and other types of traditional cakes such as buchty, koláče and štrúdl.",
"title": "Culture"
},
{
"paragraph_id": 144,
"text": "Czech beer has a history extending more than a millennium; the earliest known brewery existed in 993. Today the Czech Republic has the highest beer consumption per capita in the world. The pilsner style beer (pils) originated in Plzeň, where the world's first blond lager Pilsner Urquell is still produced. It has served as the inspiration for more than two-thirds of the beer produced in the world today. The city of České Budějovice has similarly lent its name to its beer, known as Budweiser Budvar.",
"title": "Culture"
},
{
"paragraph_id": 145,
"text": "The South Moravian region has been producing wine since the Middle Ages; about 94% of vineyards in the Czech Republic are Moravian. Aside from beer, slivovitz and wine, the Czech Republic also produces two liquors, Fernet Stock and Becherovka. Kofola is a non-alcoholic domestic cola soft drink which competes with Coca-Cola and Pepsi.",
"title": "Culture"
},
{
"paragraph_id": 146,
"text": "The two leading sports in the Czech Republic are football and ice hockey. The most watched sporting events are the Olympic tournament and World Championships of ice hockey. Other most popular sports include tennis, volleyball, floorball, golf, ball hockey, athletics, basketball and skiing.",
"title": "Culture"
},
{
"paragraph_id": 147,
"text": "The country has won 15 gold medals in the Summer Olympics and nine in the Winter Games. (See Olympic history.) The Czech ice hockey team won the gold medal at the 1998 Winter Olympics and has won twelve gold medals at the World Championships, including three straight from 1999 to 2001.",
"title": "Culture"
},
{
"paragraph_id": 148,
"text": "The Škoda Motorsport is engaged in competition racing since 1901 and has gained a number of titles with various vehicles around the world. MTX automobile company was formerly engaged in the manufacture of racing and formula cars since 1969.",
"title": "Culture"
},
{
"paragraph_id": 149,
"text": "Hiking is a popular sport. The word for 'tourist' in Czech, turista, also means 'trekker' or 'hiker'. For hikers, thanks to the more than 120-year-old tradition, there is the Czech Hiking Markers System of trail blazing, that has been adopted by countries worldwide. There is a network of around 40,000 km of marked short- and long-distance trails crossing the whole country and all the Czech mountains.",
"title": "Culture"
},
{
"paragraph_id": 150,
"text": "49°45′N 15°30′E / 49.750°N 15.500°E / 49.750; 15.500",
"title": "External links"
}
] | The Czech Republic, also known as Czechia, is a landlocked country in Central Europe. Historically known as Bohemia, it is bordered by Austria to the south, Germany to the west, Poland to the northeast, and Slovakia to the southeast. The Czech Republic has a hilly landscape that covers an area of 78,871 square kilometers (30,452 sq mi) with a mostly temperate continental and oceanic climate. The capital and largest city is Prague; other major cities and urban areas include Brno, Ostrava, Plzeň and Liberec. The Duchy of Bohemia was founded in the late 9th century under Great Moravia. It was formally recognized as an Imperial State of the Holy Roman Empire in 1002 and became a kingdom in 1198. Following the Battle of Mohács in 1526, all of the Crown lands of Bohemia were gradually integrated into the Habsburg monarchy. Nearly a hundred years later, the Protestant Bohemian Revolt led to the Thirty Years' War. After the Battle of White Mountain, the Habsburgs consolidated their rule. With the dissolution of the Holy Roman Empire in 1806, the Crown lands became part of the Austrian Empire. In the 19th century, the Czech lands became more industrialized, and in 1918 most of it became part of the First Czechoslovak Republic following the collapse of Austria-Hungary after World War I. Czechoslovakia was the only country in Central and Eastern Europe to remain a parliamentary democracy during the entirety of the interwar period. After the Munich Agreement in 1938, Nazi Germany systematically took control over the Czech lands. Czechoslovakia was restored in 1945 and three years later became an Eastern Bloc communist state following a coup d'état in 1948. Attempts to liberalize the government and economy were suppressed by a Soviet-led invasion of the country during the Prague Spring in 1968. In November 1989, the Velvet Revolution ended communist rule in the country and restored democracy. On 31 December 1992, Czechoslovakia was peacefully dissolved, with its constituent states becoming the independent states of the Czech Republic and Slovakia. The Czech Republic is a unitary parliamentary republic and developed country with an advanced, high-income social market economy. It is a welfare state with a European social model, universal health care and free-tuition university education. It ranks 32nd in the Human Development Index. The Czech Republic is a member of the United Nations, NATO, the European Union, the OECD, the OSCE, the Council of Europe and the Visegrád Group. | 2001-07-20T19:25:58Z | 2023-12-31T07:48:08Z | [
"Template:Cite web",
"Template:Lang-cs",
"Template:As of",
"Template:Cvt",
"Template:ISBN",
"Template:Osmrelation-inline",
"Template:Coord",
"Template:Short description",
"Template:Wikt-lang",
"Template:Efn",
"Template:Bar box",
"Template:Portal",
"Template:In lang",
"Template:Use American English",
"Template:Use dmy dates",
"Template:Multiple image",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite news",
"Template:Refend",
"Template:Hatnote group",
"Template:Convert",
"Template:Citation",
"Template:Sister bar",
"Template:Navboxes",
"Template:Authority control",
"Template:Notelist",
"Template:Cite tweet",
"Template:Office-table",
"Template:Clear",
"Template:Webarchive",
"Template:Lang",
"Template:Legend",
"Template:Cn",
"Template:Cite book",
"Template:Cite CIA World Factbook",
"Template:Wikiatlas",
"Template:Infobox country",
"Template:Refimprove section",
"Template:See also",
"Template:Largest cities of the Czech Republic",
"Template:Sfn",
"Template:Ill",
"Template:Refbegin",
"Template:Czech Republic topics",
"Template:Pp-protected",
"Template:Main"
] | https://en.wikipedia.org/wiki/Czech_Republic |
5,322 | Czechoslovakia | Czechoslovakia (/ˌtʃɛkoʊsloʊˈvækiə, -kə-, -slə-, -ˈvɑː-/ ; Czech and Slovak: Československo, Česko-Slovensko) was a landlocked state in Central Europe, created in 1918, when it declared its independence from Austria-Hungary. In 1938, after the Munich Agreement, the Sudetenland became part of Nazi Germany, while the country lost further territories to Hungary and Poland (Carpathian Ruthenia to Hungary and Zaolzie to Poland). Between 1939 and 1945, the state ceased to exist, as Slovakia proclaimed its independence and the remaining territories in the east became part of Hungary, while in the remainder of the Czech Lands, the German Protectorate of Bohemia and Moravia was proclaimed. In 1939, after the outbreak of World War II, former Czechoslovak President Edvard Beneš formed a government-in-exile and sought recognition from the Allies.
After World War II, Czechoslovakia was reestablished under its pre-1938 borders, with the exception of Carpathian Ruthenia, which became part of the Ukrainian SSR (a republic of the Soviet Union). The Communist Party seized power in a coup in 1948. From 1948 to 1989, Czechoslovakia was part of the Eastern Bloc with a planned economy. Its economic status was formalized in membership of Comecon from 1949 and its defense status in the Warsaw Pact of 1955. A period of political liberalization in 1968, the Prague Spring, ended violently when the Soviet Union, assisted by other Warsaw Pact countries, invaded Czechoslovakia. In 1989, as Marxist–Leninist governments and communism were ending all over Central and Eastern Europe, Czechoslovaks peacefully deposed their communist government during the Velvet Revolution, which began on 17 November 1989 and ended 11 days later on 28 November when all of the top Communist leaders and Communist party itself resigned. On 31 December 1992, Czechoslovakia peacefully split into the two sovereign states of the Czech Republic and Slovakia.
The country was of generally irregular terrain. The western area was part of the north-central European uplands. The eastern region was composed of the northern reaches of the Carpathian Mountains and lands of the Danube River basin.
The climate featured mild winters and mild summers, influenced by the Atlantic Ocean from the west, the Baltic Sea from the north, and the Mediterranean Sea from the south; there was no markedly continental weather.
The area was part of the Austro-Hungarian Empire until it collapsed at the end of World War I. The new state was founded by Tomáš Garrigue Masaryk, who served as its first president from 14 November 1918 to 14 December 1935. He was succeeded by his close ally Edvard Beneš (1884–1948).
The roots of Czech nationalism go back to the 19th century, when philologists and educators, influenced by Romanticism, promoted the Czech language and pride in the Czech people. Nationalism became a mass movement in the second half of the 19th century. Taking advantage of the limited opportunities for participation in political life under Austrian rule, Czech leaders such as historian František Palacký (1798–1876) founded various patriotic, self-help organizations which provided a chance for many of their compatriots to participate in communal life before independence. Palacký supported Austro-Slavism and worked for a reorganized federal Austrian Empire, which would protect the Slavic speaking peoples of Central Europe against Russian and German threats.
An advocate of democratic reform and Czech autonomy within Austria-Hungary, Masaryk was elected twice to the Reichsrat (Austrian Parliament), from 1891 to 1893 for the Young Czech Party, and from 1907 to 1914 for the Czech Realist Party, which he had founded in 1889 with Karel Kramář and Josef Kaizl.
During World War I a number of Czechs and Slovaks, the Czechoslovak Legions, fought with the Allies in France and Italy, while large numbers deserted to Russia in exchange for its support for the independence of Czechoslovakia from the Austrian Empire. With the outbreak of World War I, Masaryk began working for Czech independence in a union with Slovakia. With Edvard Beneš and Milan Rastislav Štefánik, Masaryk visited several Western countries and won support from influential publicists. The Czechoslovak National Council was the main organization that advanced the claims for a Czechoslovak state.
The Bohemian Kingdom ceased to exist in 1918 when it was incorporated into Czechoslovakia. Czechoslovakia was founded in October 1918, as one of the successor states of the Austro-Hungarian Empire at the end of World War I and as part of the Treaty of Saint-Germain-en-Laye. It consisted of the present day territories of Bohemia, Moravia, Slovakia and Carpathian Ruthenia. Its territory included some of the most industrialized regions of the former Austria-Hungary. The land consisted of modern-day Czechia, Slovakia, and a region of Ukraine called Carpathian Ruthenia.
The new country was a multi-ethnic state, with Czechs and Slovaks as constituent peoples. The population consisted of Czechs (51%), Slovaks (16%), Germans (22%), Hungarians (5%) and Rusyns (4%). Many of the Germans, Hungarians, Ruthenians and Poles and some Slovaks, felt oppressed because the political elite did not generally allow political autonomy for minority ethnic groups. This policy led to unrest among the non-Czech population, particularly in German-speaking Sudetenland, which initially had proclaimed itself part of the Republic of German-Austria in accordance with the self-determination principle.
The state proclaimed the official ideology that there were no separate Czech and Slovak nations, but only one nation of Czechoslovaks (see Czechoslovakism), to the disagreement of Slovaks and other ethnic groups. Once a unified Czechoslovakia was restored after World War II (after the country had been divided during the war), the conflict between the Czechs and the Slovaks surfaced again. The governments of Czechoslovakia and other Central European nations deported ethnic Germans, reducing the presence of minorities in the nation. Most of the Jews had been killed during the war by the Nazis.
*Jews identified themselves as Germans or Hungarians (and Jews only by religion not ethnicity), the sum is, therefore, more than 100%.
During the period between the two world wars Czechoslovakia was a democratic state. The population was generally literate, and contained fewer alienated groups. The influence of these conditions was augmented by the political values of Czechoslovakia's leaders and the policies they adopted. Under Tomas Masaryk, Czech and Slovak politicians promoted progressive social and economic conditions that served to defuse discontent.
Foreign minister Beneš became the prime architect of the Czechoslovak-Romanian-Yugoslav alliance (the "Little Entente", 1921–38) directed against Hungarian attempts to reclaim lost areas. Beneš worked closely with France. Far more dangerous was the German element, which after 1933 became allied with the Nazis in Germany.
Czech-Slovak relations came to be a central issue in Czechoslovak politics during the 1930s. The increasing feeling of inferiority among the Slovaks, who were hostile to the more numerous Czechs, weakened the country in the late 1930s. Slovakia became autonomous in the fall of 1938, and by mid-1939, Slovakia had become independent, with the First Slovak Republic set up as a satellite state of Nazi Germany and the far-right Slovak People's Party in power.
After 1933, Czechoslovakia remained the only democracy in central and eastern Europe.
In September 1938, Adolf Hitler demanded control of the Sudetenland. On 29 September 1938, Britain and France ceded control in the Appeasement at the Munich Conference; France ignored the military alliance it had with Czechoslovakia. During October 1938, Nazi Germany occupied the Sudetenland border region, effectively crippling Czechoslovak defences.
The First Vienna Award assigned a strip of southern Slovakia and Carpathian Ruthenia to Hungary. Poland occupied Zaolzie, an area whose population was majority Polish, in October 1938.
On 14 March 1939, the remainder ("rump") of Czechoslovakia was dismembered by the proclamation of the Slovak State, the next day the rest of Carpathian Ruthenia was occupied and annexed by Hungary, while the following day the German Protectorate of Bohemia and Moravia was proclaimed.
The eventual goal of the German state under Nazi leadership was to eradicate Czech nationality through assimilation, deportation, and extermination of the Czech intelligentsia; the intellectual elites and middle class made up a considerable number of the 200,000 people who passed through concentration camps and the 250,000 who died during German occupation. Under Generalplan Ost, it was assumed that around 50% of Czechs would be fit for Germanization. The Czech intellectual elites were to be removed not only from Czech territories but from Europe completely. The authors of Generalplan Ost believed it would be best if they emigrated overseas, as even in Siberia they were considered a threat to German rule. Just like Jews, Poles, Serbs, and several other nations, Czechs were considered to be untermenschen by the Nazi state. In 1940, in a secret Nazi plan for the Germanization of the Protectorate of Bohemia and Moravia it was declared that those considered to be of racially Mongoloid origin and the Czech intelligentsia were not to be Germanized.
The deportation of Jews to concentration camps was organized under the direction of Reinhard Heydrich, and the fortress town of Terezín was made into a ghetto way station for Jewish families. On 4 June 1942 Heydrich died after being wounded by an assassin in Operation Anthropoid. Heydrich's successor, Colonel General Kurt Daluege, ordered mass arrests and executions and the destruction of the villages of Lidice and Ležáky. In 1943 the German war effort was accelerated. Under the authority of Karl Hermann Frank, German minister of state for Bohemia and Moravia, some 350,000 Czech laborers were dispatched to the Reich. Within the protectorate, all non-war-related industry was prohibited. Most of the Czech population obeyed quiescently up until the final months preceding the end of the war, while thousands were involved in the resistance movement.
For the Czechs of the Protectorate Bohemia and Moravia, German occupation was a period of brutal oppression. Czech losses resulting from political persecution and deaths in concentration camps totaled between 36,000 and 55,000. The Jewish populations of Bohemia and Moravia (118,000 according to the 1930 census) were virtually annihilated. Many Jews emigrated after 1939; more than 70,000 were killed; 8,000 survived at Terezín. Several thousand Jews managed to live in freedom or in hiding throughout the occupation.
Despite the estimated 136,000 deaths at the hands of the Nazi regime, the population in the Reichsprotektorate saw a net increase during the war years of approximately 250,000 in line with an increased birth rate.
On 6 May 1945, the third US Army of General Patton entered Pilsen from the south west. On 9 May 1945, Soviet Red Army troops entered Prague.
After World War II, prewar Czechoslovakia was reestablished, with the exception of Subcarpathian Ruthenia, which was annexed by the Soviet Union and incorporated into the Ukrainian Soviet Socialist Republic. The Beneš decrees were promulgated concerning ethnic Germans (see Potsdam Agreement) and ethnic Hungarians. Under the decrees, citizenship was abrogated for people of German and Hungarian ethnic origin who had accepted German or Hungarian citizenship during the occupations. In 1948, this provision was cancelled for the Hungarians, but only partially for the Germans. The government then confiscated the property of the Germans and expelled about 90% of the ethnic German population, over 2 million people. Those who remained were collectively accused of supporting the Nazis after the Munich Agreement, as 97.32% of Sudeten Germans had voted for the NSDAP in the December 1938 elections. Almost every decree explicitly stated that the sanctions did not apply to antifascists. Some 250,000 Germans, many married to Czechs, some antifascists, and also those required for the post-war reconstruction of the country, remained in Czechoslovakia. The Beneš Decrees still cause controversy among nationalist groups in the Czech Republic, Germany, Austria and Hungary.
Following the expulsion of the ethnic German population from Czechoslovakia, parts of the former Sudetenland, especially around Krnov and the surrounding villages of the Jesenik mountain region in northeastern Czechoslovakia, were settled in 1949 by Communist refugees from Northern Greece who had left their homeland as a result of the Greek Civil War. These Greeks made up a large proportion of the town and region's population until the late 1980s/early 1990s. Although defined as "Greeks", the Greek Communist community of Krnov and the Jeseniky region actually consisted of an ethnically diverse population, including Greek Macedonians, Macedonians, Vlachs, Pontic Greeks and Turkish speaking Urums or Caucasus Greeks.
Carpathian Ruthenia (Podkarpatská Rus) was occupied by (and in June 1945 formally ceded to) the Soviet Union. In the 1946 parliamentary election, the Communist Party of Czechoslovakia was the winner in the Czech lands, and the Democratic Party won in Slovakia. In February 1948 the Communists seized power. Although they would maintain the fiction of political pluralism through the existence of the National Front, except for a short period in the late 1960s (the Prague Spring) the country had no liberal democracy. Since citizens lacked significant electoral methods of registering protest against government policies, periodically there were street protests that became violent. For example, there were riots in the town of Plzeň in 1953, reflecting economic discontent. Police and army units put down the rebellion, and hundreds were injured but no one was killed. While its economy remained more advanced than those of its neighbors in Eastern Europe, Czechoslovakia grew increasingly economically weak relative to Western Europe.
The currency reform of 1953 caused dissatisfaction among Czechoslovak laborers. To equalize the wage rate, Czechoslovaks had to turn in their old money for new at a decreased value. The banks also confiscated savings and bank deposits to control the amount of money in circulation. In the 1950s, Czechoslovakia experienced high economic growth (averaging 7% per year), which allowed for a substantial increase in wages and living standards, thus promoting the stability of the regime.
In 1968, when the reformer Alexander Dubček was appointed to the key post of First Secretary of the Czechoslovak Communist Party, there was a brief period of liberalization known as the Prague Spring. In response, after failing to persuade the Czechoslovak leaders to change course, five other members of the Warsaw Pact invaded. Soviet tanks rolled into Czechoslovakia on the night of 20–21 August 1968. Soviet Communist Party General Secretary Leonid Brezhnev viewed this intervention as vital for the preservation of the Soviet, socialist system and vowed to intervene in any state that sought to replace Marxism-Leninism with capitalism.
In the week after the invasion there was a spontaneous campaign of civil resistance against the occupation. This resistance involved a wide range of acts of non-cooperation and defiance: this was followed by a period in which the Czechoslovak Communist Party leadership, having been forced in Moscow to make concessions to the Soviet Union, gradually put the brakes on their earlier liberal policies.
Meanwhile, one plank of the reform program had been carried out: in 1968–69, Czechoslovakia was turned into a federation of the Czech Socialist Republic and Slovak Socialist Republic. The theory was that under the federation, social and economic inequities between the Czech and Slovak halves of the state would be largely eliminated. A number of ministries, such as education, now became two formally equal bodies in the two formally equal republics. However, the centralized political control by the Czechoslovak Communist Party severely limited the effects of federalization.
The 1970s saw the rise of the dissident movement in Czechoslovakia, represented among others by Václav Havel. The movement sought greater political participation and expression in the face of official disapproval, manifested in limitations on work activities, which went as far as a ban on professional employment, the refusal of higher education for the dissidents' children, police harassment and prison.
During the 1980s, Czechoslovakia became one of the most tightly controlled Communist regimes in the Warsaw Pact, resisting the relaxation of controls introduced under Soviet leader Mikhail Gorbachev.
In 1989, the Velvet Revolution restored democracy. This occurred around the same time as the fall of communism in Romania, Bulgaria, Hungary, East Germany and Poland.
The word "socialist" was removed from the country's full name on 29 March 1990 and replaced by "federal".
Pope John Paul II made a papal visit to Czechoslovakia on 21 April 1990, hailing it as a symbolic step of reviving Christianity in the newly-formed post-communist state.
Czechoslovakia participated in the Gulf War with a small force of 200 troops under the command of the U.S.-led coalition.
In 1992, because of growing nationalist tensions in the government, Czechoslovakia was peacefully dissolved by parliament. On 31 December 1992 it formally separated into two independent countries, the Czech Republic and the Slovak Republic.
After World War II, a political monopoly was held by the Communist Party of Czechoslovakia (KSČ). The leader of the KSČ was de facto the most powerful person in the country during this period. Gustáv Husák was elected first secretary of the KSČ in 1969 (changed to general secretary in 1971) and president of Czechoslovakia in 1975. Other parties and organizations existed but functioned in subordinate roles to the KSČ. All political parties, as well as numerous mass organizations, were grouped under the umbrella of the National Front. Human rights activists and religious activists were severely repressed.
Czechoslovakia had the following constitutions during its history (1918–1992):
In the 1930s, the nation formed a military alliance with France, which collapsed in the Munich Agreement of 1938. After World War II, Czechoslovakia was an active participant in the Council for Mutual Economic Assistance (Comecon), the Warsaw Pact, and the United Nations and its specialized agencies, and was a signatory of the Conference on Security and Cooperation in Europe.
Before World War II, the economy was about the fourth largest among the industrial countries of Europe. The state was based on a strong economy, manufacturing cars (Škoda, Tatra), trams, aircraft (Aero, Avia), ships, ship engines (Škoda), cannons, shoes (Baťa), turbines and guns (Zbrojovka Brno). It was the industrial workshop of the Austro-Hungarian Empire. The Slovak lands relied more heavily on agriculture than the Czech lands.
After World War II, the economy was centrally planned, with command links controlled by the communist party, similarly to the Soviet Union. The large metallurgical industry was dependent on imports of iron and non-ferrous ores.
After World War II, the country was short of energy, relying on imported crude oil and natural gas from the Soviet Union, domestic brown coal, and nuclear and hydroelectric energy. Energy constraints were a major factor in the 1980s.
Shortly after the foundation of Czechoslovakia in 1918, there was a lack of essential infrastructure in many areas – paved roads, railways, bridges, etc. Massive improvement in the following years enabled Czechoslovakia to develop its industry. Prague's civil airport in Ruzyně became one of the most modern terminals in the world when it was finished in 1937. Tomáš Baťa, a Czech entrepreneur and visionary, outlined his ideas in the publication "Budujme stát pro 40 milionů lidí" (Let's Build a State for 40 Million People), where he described the future motorway system. Construction of the first motorways in Czechoslovakia began in 1939; nevertheless, it was stopped after the German occupation during World War II.
Education was free at all levels and compulsory from ages 6 to 15. The vast majority of the population was literate. A highly developed system of apprenticeship training and vocational schools supplemented general secondary schools and institutions of higher education.
In 1991, 46% of the population were Roman Catholics, 5.3% were Evangelical Lutheran, 30% were Atheist, and other religions made up 17% of the country, but there were huge differences in religious practices between the two constituent republics; see Czech Republic and Slovakia.
After World War II, free health care was available to all citizens. National health planning emphasized preventive medicine; factory and local health care centres supplemented hospitals and other inpatient institutions. There was a substantial improvement in rural health care during the 1960s and 1970s.
During the era between the World Wars, Czechoslovak democracy and liberalism facilitated conditions for free publication. The most significant daily newspapers in these times were Lidové noviny, Národní listy, Český deník and Československá Republika.
During Communist rule, the mass media in Czechoslovakia were controlled by the Communist Party. Private ownership of any publication or agency of the mass media was generally forbidden, although churches and other organizations published small periodicals and newspapers. Even with this information monopoly in the hands of organizations under KSČ control, all publications were reviewed by the government's Office for Press and Information.
The Czechoslovakia national football team was a consistent performer on the international scene, with eight appearances in the FIFA World Cup Finals, finishing in second place in 1934 and 1962. The team also won the European Football Championship in 1976, came in third in 1980 and won the Olympic gold in 1980.
Well-known football players such as Pavel Nedvěd, Antonín Panenka, Milan Baroš, Tomáš Rosický, Vladimír Šmicer or Petr Čech were all born in Czechoslovakia.
The International Olympic Committee code for Czechoslovakia is TCH, which is still used in historical listings of results.
The Czechoslovak national ice hockey team won many medals from the world championships and Olympic Games. Peter Šťastný, Jaromír Jágr, Dominik Hašek, Peter Bondra, Petr Klíma, Marián Gáborík, Marián Hossa, Miroslav Šatan and Pavol Demitra all come from Czechoslovakia.
Emil Zátopek, winner of four Olympic gold medals in athletics, is considered one of the top athletes in Czechoslovak history.
Věra Čáslavská was an Olympic gold medallist in gymnastics, winning seven gold medals and four silver medals. She represented Czechoslovakia in three consecutive Olympics.
Several accomplished professional tennis players including Jaroslav Drobný, Ivan Lendl, Jan Kodeš, Miloslav Mečíř, Hana Mandlíková, Martina Hingis, Martina Navratilova, Jana Novotna, Petra Kvitová and Daniela Hantuchová were born in Czechoslovakia.
Maps with Hungarian-language rubrics: | [
{
"paragraph_id": 0,
"text": "Czechoslovakia (/ˌtʃɛkoʊsloʊˈvækiə, -kə-, -slə-, -ˈvɑː-/ ; Czech and Slovak: Československo, Česko-Slovensko) was a landlocked state in Central Europe, created in 1918, when it declared its independence from Austria-Hungary. In 1938, after the Munich Agreement, the Sudetenland became part of Nazi Germany, while the country lost further territories to Hungary and Poland (Carpathian Ruthenia to Hungary and Zaolzie to Poland). Between 1939 and 1945, the state ceased to exist, as Slovakia proclaimed its independence and the remaining territories in the east became part of Hungary, while in the remainder of the Czech Lands, the German Protectorate of Bohemia and Moravia was proclaimed. In 1939, after the outbreak of World War II, former Czechoslovak President Edvard Beneš formed a government-in-exile and sought recognition from the Allies.",
"title": ""
},
{
"paragraph_id": 1,
"text": "After World War II, Czechoslovakia was reestablished under its pre-1938 borders, with the exception of Carpathian Ruthenia, which became part of the Ukrainian SSR (a republic of the Soviet Union). The Communist Party seized power in a coup in 1948. From 1948 to 1989, Czechoslovakia was part of the Eastern Bloc with a planned economy. Its economic status was formalized in membership of Comecon from 1949 and its defense status in the Warsaw Pact of 1955. A period of political liberalization in 1968, the Prague Spring, ended violently when the Soviet Union, assisted by other Warsaw Pact countries, invaded Czechoslovakia. In 1989, as Marxist–Leninist governments and communism were ending all over Central and Eastern Europe, Czechoslovaks peacefully deposed their communist government during the Velvet Revolution, which began on 17 November 1989 and ended 11 days later on 28 November when all of the top Communist leaders and Communist party itself resigned. On 31 December 1992, Czechoslovakia peacefully split into the two sovereign states of the Czech Republic and Slovakia.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The country was of generally irregular terrain. The western area was part of the north-central European uplands. The eastern region was composed of the northern reaches of the Carpathian Mountains and lands of the Danube River basin.",
"title": "Characteristics"
},
{
"paragraph_id": 3,
"text": "The weather is mild winters and mild summers. Influenced by the Atlantic Ocean from the west, the Baltic Sea from the north, and Mediterranean Sea from the south. There is no continental weather.",
"title": "Characteristics"
},
{
"paragraph_id": 4,
"text": "The area was part of the Austro-Hungarian Empire until it collapsed at the end of World War I. The new state was founded by Tomáš Garrigue Masaryk, who served as its first president from 14 November 1918 to 14 December 1935. He was succeeded by his close ally Edvard Beneš (1884–1948).",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The roots of Czech nationalism go back to the 19th century, when philologists and educators, influenced by Romanticism, promoted the Czech language and pride in the Czech people. Nationalism became a mass movement in the second half of the 19th century. Taking advantage of the limited opportunities for participation in political life under Austrian rule, Czech leaders such as historian František Palacký (1798–1876) founded various patriotic, self-help organizations which provided a chance for many of their compatriots to participate in communal life before independence. Palacký supported Austro-Slavism and worked for a reorganized federal Austrian Empire, which would protect the Slavic speaking peoples of Central Europe against Russian and German threats.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "An advocate of democratic reform and Czech autonomy within Austria-Hungary, Masaryk was elected twice to the Reichsrat (Austrian Parliament), from 1891 to 1893 for the Young Czech Party, and from 1907 to 1914 for the Czech Realist Party, which he had founded in 1889 with Karel Kramář and Josef Kaizl.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "During World War I a number of Czechs and Slovaks, the Czechoslovak Legions, fought with the Allies in France and Italy, while large numbers deserted to Russia in exchange for its support for the independence of Czechoslovakia from the Austrian Empire. With the outbreak of World War I, Masaryk began working for Czech independence in a union with Slovakia. With Edvard Beneš and Milan Rastislav Štefánik, Masaryk visited several Western countries and won support from influential publicists. The Czechoslovak National Council was the main organization that advanced the claims for a Czechoslovak state.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The Bohemian Kingdom ceased to exist in 1918 when it was incorporated into Czechoslovakia. Czechoslovakia was founded in October 1918, as one of the successor states of the Austro-Hungarian Empire at the end of World War I and as part of the Treaty of Saint-Germain-en-Laye. It consisted of the present day territories of Bohemia, Moravia, Slovakia and Carpathian Ruthenia. Its territory included some of the most industrialized regions of the former Austria-Hungary. The land consisted of modern day Czechia, Slovakia, and a region of Ukraine called Carpathian Ruthenia",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The new country was a multi-ethnic state, with Czechs and Slovaks as constituent peoples. The population consisted of Czechs (51%), Slovaks (16%), Germans (22%), Hungarians (5%) and Rusyns (4%). Many of the Germans, Hungarians, Ruthenians and Poles and some Slovaks, felt oppressed because the political elite did not generally allow political autonomy for minority ethnic groups. This policy led to unrest among the non-Czech population, particularly in German-speaking Sudetenland, which initially had proclaimed itself part of the Republic of German-Austria in accordance with the self-determination principle.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The state proclaimed the official ideology that there were no separate Czech and Slovak nations, but only one nation of Czechoslovaks (see Czechoslovakism), to the disagreement of Slovaks and other ethnic groups. Once a unified Czechoslovakia was restored after World War II (after the country had been divided during the war), the conflict between the Czechs and the Slovaks surfaced again. The governments of Czechoslovakia and other Central European nations deported ethnic Germans, reducing the presence of minorities in the nation. Most of the Jews had been killed during the war by the Nazis.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "*Jews identified themselves as Germans or Hungarians (and Jews only by religion not ethnicity), the sum is, therefore, more than 100%.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "During the period between the two world wars Czechoslovakia was a democratic state. The population was generally literate, and contained fewer alienated groups. The influence of these conditions was augmented by the political values of Czechoslovakia's leaders and the policies they adopted. Under Tomas Masaryk, Czech and Slovak politicians promoted progressive social and economic conditions that served to defuse discontent.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Foreign minister Beneš became the prime architect of the Czechoslovak-Romanian-Yugoslav alliance (the \"Little Entente\", 1921–38) directed against Hungarian attempts to reclaim lost areas. Beneš worked closely with France. Far more dangerous was the German element, which after 1933 became allied with the Nazis in Germany.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Czech-Slovak relations came to be a central issue in Czechoslovak politics during the 1930s. The increasing feeling of inferiority among the Slovaks, who were hostile to the more numerous Czechs, weakened the country in the late 1930s. Slovakia became autonomous in the fall of 1938, and by mid-1939, Slovakia had become independent, with the First Slovak Republic set up as a satellite state of Nazi Germany and the far-right Slovak People's Party in power .",
"title": "History"
},
{
"paragraph_id": 15,
"text": "After 1933, Czechoslovakia remained the only democracy in central and eastern Europe.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In September 1938, Adolf Hitler demanded control of the Sudetenland. On 29 September 1938, Britain and France ceded control in the Appeasement at the Munich Conference; France ignored the military alliance it had with Czechoslovakia. During October 1938, Nazi Germany occupied the Sudetenland border region, effectively crippling Czechoslovak defences.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "The First Vienna Award assigned a strip of southern Slovakia and Carpathian Ruthenia to Hungary. Poland occupied Zaolzie, an area whose population was majority Polish, in October 1938.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "On 14 March 1939, the remainder (\"rump\") of Czechoslovakia was dismembered by the proclamation of the Slovak State, the next day the rest of Carpathian Ruthenia was occupied and annexed by Hungary, while the following day the German Protectorate of Bohemia and Moravia was proclaimed.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The eventual goal of the German state under Nazi leadership was to eradicate Czech nationality through assimilation, deportation, and extermination of the Czech intelligentsia; the intellectual elites and middle class made up a considerable number of the 200,000 people who passed through concentration camps and the 250,000 who died during German occupation. Under Generalplan Ost, it was assumed that around 50% of Czechs would be fit for Germanization. The Czech intellectual elites were to be removed not only from Czech territories but from Europe completely. The authors of Generalplan Ost believed it would be best if they emigrated overseas, as even in Siberia they were considered a threat to German rule. Just like Jews, Poles, Serbs, and several other nations, Czechs were considered to be untermenschen by the Nazi state. In 1940, in a secret Nazi plan for the Germanization of the Protectorate of Bohemia and Moravia it was declared that those considered to be of racially Mongoloid origin and the Czech intelligentsia were not to be Germanized.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "The deportation of Jews to concentration camps was organized under the direction of Reinhard Heydrich, and the fortress town of Terezín was made into a ghetto way station for Jewish families. On 4 June 1942 Heydrich died after being wounded by an assassin in Operation Anthropoid. Heydrich's successor, Colonel General Kurt Daluege, ordered mass arrests and executions and the destruction of the villages of Lidice and Ležáky. In 1943 the German war effort was accelerated. Under the authority of Karl Hermann Frank, German minister of state for Bohemia and Moravia, some 350,000 Czech laborers were dispatched to the Reich. Within the protectorate, all non-war-related industry was prohibited. Most of the Czech population obeyed quiescently up until the final months preceding the end of the war, while thousands were involved in the resistance movement.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "For the Czechs of the Protectorate Bohemia and Moravia, German occupation was a period of brutal oppression. Czech losses resulting from political persecution and deaths in concentration camps totaled between 36,000 and 55,000. The Jewish populations of Bohemia and Moravia (118,000 according to the 1930 census) were virtually annihilated. Many Jews emigrated after 1939; more than 70,000 were killed; 8,000 survived at Terezín. Several thousand Jews managed to live in freedom or in hiding throughout the occupation.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Despite the estimated 136,000 deaths at the hands of the Nazi regime, the population in the Reichsprotektorate saw a net increase during the war years of approximately 250,000 in line with an increased birth rate.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "On 6 May 1945, the third US Army of General Patton entered Pilsen from the south west. On 9 May 1945, Soviet Red Army troops entered Prague.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "After World War II, prewar Czechoslovakia was reestablished, with the exception of Subcarpathian Ruthenia, which was annexed by the Soviet Union and incorporated into the Ukrainian Soviet Socialist Republic. The Beneš decrees were promulgated concerning ethnic Germans (see Potsdam Agreement) and ethnic Hungarians. Under the decrees, citizenship was abrogated for people of German and Hungarian ethnic origin who had accepted German or Hungarian citizenship during the occupations. In 1948, this provision was cancelled for the Hungarians, but only partially for the Germans. The government then confiscated the property of the Germans and expelled about 90% of the ethnic German population, over 2 million people. Those who remained were collectively accused of supporting the Nazis after the Munich Agreement, as 97.32% of Sudeten Germans had voted for the NSDAP in the December 1938 elections. Almost every decree explicitly stated that the sanctions did not apply to antifascists. Some 250,000 Germans, many married to Czechs, some antifascists, and also those required for the post-war reconstruction of the country, remained in Czechoslovakia. The Beneš Decrees still cause controversy among nationalist groups in the Czech Republic, Germany, Austria and Hungary.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Following the expulsion of the ethnic German population from Czechoslovakia, parts of the former Sudetenland, especially around Krnov and the surrounding villages of the Jesenik mountain region in northeastern Czechoslovakia, were settled in 1949 by Communist refugees from Northern Greece who had left their homeland as a result of the Greek Civil War. These Greeks made up a large proportion of the town and region's population until the late 1980s/early 1990s. Although defined as \"Greeks\", the Greek Communist community of Krnov and the Jeseniky region actually consisted of an ethnically diverse population, including Greek Macedonians, Macedonians, Vlachs, Pontic Greeks and Turkish speaking Urums or Caucasus Greeks.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Carpathian Ruthenia (Podkarpatská Rus) was occupied by (and in June 1945 formally ceded to) the Soviet Union. In the 1946 parliamentary election, the Communist Party of Czechoslovakia was the winner in the Czech lands, and the Democratic Party won in Slovakia. In February 1948 the Communists seized power. Although they would maintain the fiction of political pluralism through the existence of the National Front, except for a short period in the late 1960s (the Prague Spring) the country had no liberal democracy. Since citizens lacked significant electoral methods of registering protest against government policies, periodically there were street protests that became violent. For example, there were riots in the town of Plzeň in 1953, reflecting economic discontent. Police and army units put down the rebellion, and hundreds were injured but no one was killed. While its economy remained more advanced than those of its neighbors in Eastern Europe, Czechoslovakia grew increasingly economically weak relative to Western Europe.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "The currency reform of 1953 caused dissatisfaction among Czechoslovak laborers. To equalize the wage rate, Czechoslovaks had to turn in their old money for new at a decreased value. The banks also confiscated savings and bank deposits to control the amount of money in circulation. In the 1950s, Czechoslovakia experienced high economic growth (averaging 7% per year), which allowed for a substantial increase in wages and living standards, thus promoting the stability of the regime.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "In 1968, when the reformer Alexander Dubček was appointed to the key post of First Secretary of the Czechoslovak Communist Party, there was a brief period of liberalization known as the Prague Spring. In response, after failing to persuade the Czechoslovak leaders to change course, five other members of the Warsaw Pact invaded. Soviet tanks rolled into Czechoslovakia on the night of 20–21 August 1968. Soviet Communist Party General Secretary Leonid Brezhnev viewed this intervention as vital for the preservation of the Soviet, socialist system and vowed to intervene in any state that sought to replace Marxism-Leninism with capitalism.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "In the week after the invasion there was a spontaneous campaign of civil resistance against the occupation. This resistance involved a wide range of acts of non-cooperation and defiance: this was followed by a period in which the Czechoslovak Communist Party leadership, having been forced in Moscow to make concessions to the Soviet Union, gradually put the brakes on their earlier liberal policies.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "Meanwhile, one plank of the reform program had been carried out: in 1968–69, Czechoslovakia was turned into a federation of the Czech Socialist Republic and Slovak Socialist Republic. The theory was that under the federation, social and economic inequities between the Czech and Slovak halves of the state would be largely eliminated. A number of ministries, such as education, now became two formally equal bodies in the two formally equal republics. However, the centralized political control by the Czechoslovak Communist Party severely limited the effects of federalization.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "The 1970s saw the rise of the dissident movement in Czechoslovakia, represented among others by Václav Havel. The movement sought greater political participation and expression in the face of official disapproval, manifested in limitations on work activities, which went as far as a ban on professional employment, the refusal of higher education for the dissidents' children, police harassment and prison.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "During the 1980s, Czechoslovakia became one of the most tightly controlled Communist regimes in the Warsaw Pact in resistance to the mitigation of controls notified by Soviet president Mikhail Gorbachev.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "In 1989, the Velvet Revolution restored democracy. This occurred around the same time as the fall of communism in Romania, Bulgaria, Hungary, East Germany and Poland.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "The word \"socialist\" was removed from the country's full name on 29 March 1990 and replaced by \"federal\".",
"title": "History"
},
{
"paragraph_id": 35,
"text": "Pope John Paul II made a papal visit to Czechoslovakia on 21 April 1990, hailing it as a symbolic step of reviving Christianity in the newly-formed post-communist state.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "Czechoslovakia participated in the Gulf War with a small force of 200 troops under the command of the U.S.-led coalition.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "In 1992, because of growing nationalist tensions in the government, Czechoslovakia was peacefully dissolved by parliament. On 31 December 1992 it formally separated into two independent countries, the Czech Republic and the Slovak Republic.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "After World War II, a political monopoly was held by the Communist Party of Czechoslovakia (KSČ). The leader of the KSČ was de facto the most powerful person in the country during this period. Gustáv Husák was elected first secretary of the KSČ in 1969 (changed to general secretary in 1971) and president of Czechoslovakia in 1975. Other parties and organizations existed but functioned in subordinate roles to the KSČ. All political parties, as well as numerous mass organizations, were grouped under umbrella of the National Front. Human rights activists and religious activists were severely repressed.",
"title": "Government and politics"
},
{
"paragraph_id": 39,
"text": "Czechoslovakia had the following constitutions during its history (1918–1992):",
"title": "Government and politics"
},
{
"paragraph_id": 40,
"text": "In the 1930s, the nation formed a military alliance with France, which collapsed in the Munich Agreement of 1938. After World War II, an active participant in Council for Mutual Economic Assistance (Comecon), Warsaw Pact, United Nations and its specialized agencies; signatory of conference on Security and Cooperation in Europe.",
"title": "Government and politics"
},
{
"paragraph_id": 41,
"text": "Before World War II, the economy was about the fourth in all industrial countries in Europe. The state was based on strong economy, manufacturing cars (Škoda, Tatra), trams, aircraft (Aero, Avia), ships, ship engines (Škoda), cannons, shoes (Baťa), turbines, guns (Zbrojovka Brno). It was the industrial workshop for the Austro-Hungarian empire. The Slovak lands relied more heavily on agriculture than the Czech lands.",
"title": "Economy"
},
{
"paragraph_id": 42,
"text": "After World War II, the economy was centrally planned, with command links controlled by the communist party, similarly to the Soviet Union. The large metallurgical industry was dependent on imports of iron and non-ferrous ores.",
"title": "Economy"
},
{
"paragraph_id": 43,
"text": "After World War II, the country was short of energy, relying on imported crude oil and natural gas from the Soviet Union, domestic brown coal, and nuclear and hydroelectric energy. Energy constraints were a major factor in the 1980s.",
"title": "Resource base"
},
{
"paragraph_id": 44,
"text": "Slightly after the foundation of Czechoslovakia in 1918, there was a lack of essential infrastructure in many areas – paved roads, railways, bridges, etc. Massive improvement in the following years enabled Czechoslovakia to develop its industry. Prague's civil airport in Ruzyně became one of the most modern terminals in the world when it was finished in 1937. Tomáš Baťa, a Czech entrepreneur and visionary, outlined his ideas in the publication \"Budujme stát pro 40 milionů lidí\", where he described the future motorway system. Construction of the first motorways in Czechoslovakia begun in 1939, nevertheless, they were stopped after German occupation during World War II.",
"title": "Transport and communications"
},
{
"paragraph_id": 45,
"text": "Education was free at all levels and compulsory from ages 6 to 15. The vast majority of the population was literate. There was a highly developed system of apprenticeship training and vocational schools supplemented general secondary schools and institutions of higher education.",
"title": "Education"
},
{
"paragraph_id": 46,
"text": "In 1991, 46% of the population were Roman Catholics, 5.3% were Evangelical Lutheran, 30% were Atheist, and other religions made up 17% of the country, but there were huge differences in religious practices between the two constituent republics; see Czech Republic and Slovakia.",
"title": "Religion"
},
{
"paragraph_id": 47,
"text": "After World War II, free health care was available to all citizens. National health planning emphasized preventive medicine; factory and local health care centres supplemented hospitals and other inpatient institutions. There was a substantial improvement in rural health care during the 1960s and 1970s.",
"title": "Health, social welfare and housing"
},
{
"paragraph_id": 48,
"text": "During the era between the World Wars, Czechoslovak democracy and liberalism facilitated conditions for free publication. The most significant daily newspapers in these times were Lidové noviny, Národní listy, Český deník and Československá Republika.",
"title": "Mass media"
},
{
"paragraph_id": 49,
"text": "During Communist rule, the mass media in Czechoslovakia were controlled by the Communist Party. Private ownership of any publication or agency of the mass media was generally forbidden, although churches and other organizations published small periodicals and newspapers. Even with this information monopoly in the hands of organizations under KSČ control, all publications were reviewed by the government's Office for Press and Information.",
"title": "Mass media"
},
{
"paragraph_id": 50,
"text": "The Czechoslovakia national football team was a consistent performer on the international scene, with eight appearances in the FIFA World Cup Finals, finishing in second place in 1934 and 1962. The team also won the European Football Championship in 1976, came in third in 1980 and won the Olympic gold in 1980.",
"title": "Sports"
},
{
"paragraph_id": 51,
"text": "Well-known football players such as Pavel Nedvěd, Antonín Panenka, Milan Baroš, Tomáš Rosický, Vladimír Šmicer or Petr Čech were all born in Czechoslovakia.",
"title": "Sports"
},
{
"paragraph_id": 52,
"text": "The International Olympic Committee code for Czechoslovakia is TCH, which is still used in historical listings of results.",
"title": "Sports"
},
{
"paragraph_id": 53,
"text": "The Czechoslovak national ice hockey team won many medals from the world championships and Olympic Games. Peter Šťastný, Jaromír Jágr, Dominik Hašek, Peter Bondra, Petr Klíma, Marián Gáborík, Marián Hossa, Miroslav Šatan and Pavol Demitra all come from Czechoslovakia.",
"title": "Sports"
},
{
"paragraph_id": 54,
"text": "Emil Zátopek, winner of four Olympic gold medals in athletics, is considered one of the top athletes in Czechoslovak history.",
"title": "Sports"
},
{
"paragraph_id": 55,
"text": "Věra Čáslavská was an Olympic gold medallist in gymnastics, winning seven gold medals and four silver medals. She represented Czechoslovakia in three consecutive Olympics.",
"title": "Sports"
},
{
"paragraph_id": 56,
"text": "Several accomplished professional tennis players including Jaroslav Drobný, Ivan Lendl, Jan Kodeš, Miloslav Mečíř, Hana Mandlíková, Martina Hingis, Martina Navratilova, Jana Novotna, Petra Kvitová and Daniela Hantuchová were born in Czechoslovakia.",
"title": "Sports"
},
{
"paragraph_id": 57,
"text": "Maps with Hungarian-language rubrics:",
"title": "External links"
}
] | Czechoslovakia was a landlocked state in Central Europe, created in 1918, when it declared its independence from Austria-Hungary. In 1938, after the Munich Agreement, the Sudetenland became part of Nazi Germany, while the country lost further territories to Hungary and Poland. Between 1939 and 1945, the state ceased to exist, as Slovakia proclaimed its independence and the remaining territories in the east became part of Hungary, while in the remainder of the Czech Lands, the German Protectorate of Bohemia and Moravia was proclaimed. In 1939, after the outbreak of World War II, former Czechoslovak President Edvard Beneš formed a government-in-exile and sought recognition from the Allies. After World War II, Czechoslovakia was reestablished under its pre-1938 borders, with the exception of Carpathian Ruthenia, which became part of the Ukrainian SSR. The Communist Party seized power in a coup in 1948. From 1948 to 1989, Czechoslovakia was part of the Eastern Bloc with a planned economy. Its economic status was formalized in membership of Comecon from 1949 and its defense status in the Warsaw Pact of 1955. A period of political liberalization in 1968, the Prague Spring, ended violently when the Soviet Union, assisted by other Warsaw Pact countries, invaded Czechoslovakia. In 1989, as Marxist–Leninist governments and communism were ending all over Central and Eastern Europe, Czechoslovaks peacefully deposed their communist government during the Velvet Revolution, which began on 17 November 1989 and ended 11 days later on 28 November when all of the top Communist leaders and Communist party itself resigned. On 31 December 1992, Czechoslovakia peacefully split into the two sovereign states of the Czech Republic and Slovakia. | 2001-03-24T16:54:01Z | 2023-12-30T01:51:23Z | [
"Template:Section expand",
"Template:Refbegin",
"Template:Redirect",
"Template:IPAc-en",
"Template:Failed verification",
"Template:Cite book",
"Template:Council of Europe",
"Template:Main",
"Template:Citation needed",
"Template:See also",
"Template:Commons category",
"Template:Dissolution of Austria–Hungary",
"Template:Use dmy dates",
"Template:Use American English",
"Template:Infobox country",
"Template:Czechoslovakia topics",
"Template:Cite journal",
"Template:EB1922 poster",
"Template:Citation",
"Template:IPA-sk",
"Template:Cite web",
"Template:ISBN",
"Template:Doi",
"Template:Refend",
"Template:Lang",
"Template:Czechoslovakia timeline",
"Template:\\",
"Template:Reflist",
"Template:IPA-cs",
"Template:Webarchive",
"Template:Authority control",
"Template:Short description",
"Template:Clarify",
"Template:Cite AV media",
"Template:Lang-sk",
"Template:Notelist"
] | https://en.wikipedia.org/wiki/Czechoslovakia |
5,323 | Computer science | Computer science is the study of computation, information, and automation. Computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software). Though more often considered an academic discipline, computer science is closely related to computer programming.
Algorithms and data structures are central to computer science. The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and for preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, planning and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural language processing aims to understand and process textual and linguistic data.
The fundamental concern of computer science is determining what can and cannot be automated. The Turing Award is generally recognized as the highest distinction in computer science.
The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment.
Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist, because of various reasons, including the fact that he documented the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he invented his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer". "A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first published algorithm ever specifically tailored for implementation on a computer. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published the 2nd of the only two designs for mechanical analytical engines in history. In 1914, the Spanish engineer Leonardo Torres Quevedo published his Essays on Automatics, and designed, inspired by Babbage, a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, a prototype that demonstrated the feasibility of an electromechanical analytical engine, on which commands could be typed and the results printed automatically. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true".
During the 1940s, with the development of new and more powerful computing machines such as the Atanasoff–Berry computer and ENIAC, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world. Ultimately, the close relationship between IBM and Columbia University was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science department in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own rights.
Although first proposed in 1956, the term "computer science" appears in a 1959 article in Communications of the ACM, in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921. Louis justifies the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline. His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such departments, starting with Purdue in 1962. Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including statistics and databases.
In the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM—turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist. Three months later in the same journal, comptologist was suggested, followed next year by hypologist. The term computics has also been suggested. In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics, University of Edinburgh). "In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain."
A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes." The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been exchange of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as cognitive science, linguistics, mathematics, physics, biology, Earth science, statistics, philosophy, and logic.
Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science. Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter and Alonzo Church and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.
The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.
The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.
Despite the word "science" in its name, there is debate over whether or not computer science is a discipline of science, mathematics, or engineering. Allen Newell and Herbert A. Simon argued in 1975,
Computer science is an empirical discipline. We would have called it an experimental science, but like astronomy, economics, and geology, some of its unique forms of observation and experience do not fit a narrow stereotype of the experimental method. Nonetheless, they are experiments. Each new machine that is built is an experiment. Actually constructing the machine poses a question to nature; and we listen for the answer by observing the machine in operation and analyzing it by all analytical and measurement means available.
It has since been argued that computer science can be classified as an empirical science since it makes use of empirical testing to evaluate the correctness of programs, but a problem remains in defining the laws and theorems of computer science (if any exist) and defining the nature of experiments in computer science. Proponents of classifying computer science as an engineering discipline argue that the reliability of computational systems is investigated in the same way as bridges in civil engineering and airplanes in aerospace engineering. They also argue that while empirical sciences observe what presently exists, computer science observes what is possible to exist and while scientists discover laws from observation, no proper laws have been found in computer science and it is instead concerned with creating phenomena.
Proponents of classifying computer science as a mathematical discipline argue that computer programs are physical realizations of mathematical entities and programs can be deductively reasoned through mathematical formal methods. Computer scientists Edsger W. Dijkstra and Tony Hoare regard instructions for computer programs as mathematical sentences and interpret formal semantics for programming languages as mathematical axiomatic systems.
A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics. Peter Denning's working group argued that they are theory, abstraction (modeling), and design. Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence). Computer science focuses on methods involved in design, specification, programming, verification, implementation and testing of human-made computing systems.
As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software. CSAB, formerly called Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE CS)—identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.
Theoretical computer science is mathematical and abstract in spirit, but it derives its motivation from practical, everyday computation. Its aim is to understand the nature of computation and, as a consequence of this understanding, to provide more efficient methodologies.
According to Peter Denning, the fundamental question underlying computer science is, "What can be automated?" Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems.
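For illustration, the abstract machines studied in computability theory can be simulated directly; the following is a minimal, hedged sketch of a single-tape Turing machine simulator in Python. The particular machine (one that flips every bit of its input) and all names are invented for the example.

```python
# Minimal single-tape Turing machine simulator (illustrative sketch).
# delta maps (state, symbol) -> (new_state, new_symbol, move), move in {-1, 0, +1}.

def run_tm(delta, tape, state="q0", accept="qa", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == accept:
            # Read back the contiguous non-blank tape contents.
            return "".join(tape[i] for i in sorted(tape) if tape[i] != blank)
        symbol = tape.get(head, blank)
        state, tape[head], move = delta[(state, symbol)]
        head += move
    raise RuntimeError("step limit exceeded (machine may not halt)")

# Example machine: walk right, flipping 0 <-> 1, and accept at the first blank.
FLIP = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("qa", "_", 0),
}

print(run_tm(FLIP, "10110"))   # -> 01001
```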
The famous P = NP? problem, one of the Millennium Prize Problems, is an open problem in the theory of computation.
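The asymmetry behind the P versus NP question can be illustrated with the NP-complete subset-sum problem: a proposed solution can be verified in time linear in its size, whereas the obvious search examines every subset. A minimal sketch, with numbers invented for the example:

```python
from itertools import combinations

def verify(numbers, target, certificate):
    """Polynomial-time check of a proposed solution (a subset of indices)."""
    return sum(numbers[i] for i in certificate) == target

def brute_force(numbers, target):
    """Exponential-time search: tries all 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(range(len(numbers)), r):
            if verify(numbers, target, subset):
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(brute_force(nums, 9))        # (2, 4), since 4 + 5 = 9
print(verify(nums, 9, (2, 4)))     # True, checked quickly
```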
Information theory, closely related to probability and statistics, concerns the quantification of information. It was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.
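For illustration, a minimal sketch of both ideas in Python: the Shannon entropy of a message, and a single parity bit as the simplest error-detecting code. The message and bit pattern are invented for the example.

```python
import math
from collections import Counter

def shannon_entropy(message):
    """Average information content in bits per symbol: H = -sum(p_i * log2(p_i))."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def parity_bit(bits):
    """Simplest error-detecting code: append one bit so the total parity is even."""
    return bits + [sum(bits) % 2]

print(round(shannon_entropy("abracadabra"), 3))   # about 2.04 bits per symbol
codeword = parity_bit([1, 0, 1, 1])               # -> [1, 0, 1, 1, 1]
print(sum(codeword) % 2 == 0)                     # True; one flipped bit would fail this check
```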
Data structures and algorithms are the studies of commonly used computational methods and their computational efficiency.
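A minimal, hedged illustration of this trade-off: linear search over a list runs in O(n) time, while binary search over the same data, kept sorted, runs in O(log n). The data is invented for the example.

```python
def linear_search(items, target):
    """O(n): examine every element until the target is found."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): repeatedly halve the search interval of a sorted list."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(0, 1_000_000, 3))      # sorted by construction
print(linear_search(data, 999_999))      # 333333, after ~333334 comparisons
print(binary_search(data, 999_999))      # 333333, after roughly 20 comparisons
```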
Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals.
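The concerns of syntax and semantics can be made concrete with a toy interpreter; the following hedged sketch evaluates a tiny expression language represented as nested tuples. The mini-language and its operators are invented for the example.

```python
# A toy expression language: numbers, ("add", a, b), ("mul", a, b), ("neg", a).
# The interpreter gives the language its semantics by recursion over the syntax tree.

def evaluate(expr):
    if isinstance(expr, (int, float)):        # literal
        return expr
    op, *args = expr
    if op == "add":
        return evaluate(args[0]) + evaluate(args[1])
    if op == "mul":
        return evaluate(args[0]) * evaluate(args[1])
    if op == "neg":
        return -evaluate(args[0])
    raise ValueError(f"unknown operator: {op}")

# (2 + 3) * -4  ==  -20
program = ("mul", ("add", 2, 3), ("neg", 4))
print(evaluate(program))   # -20
```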
Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.
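Industrial formal verification relies on dedicated tools, but the flavour of checking an implementation against a specification can be sketched with a bounded, exhaustive check; the contract, function and bounds below are invented for the example, and a bounded check is of course not a proof for all inputs.

```python
# Specification: for n >= 0 and d > 0, (q, r) = floor_divmod(n, d) must satisfy
#   n == q * d + r  and  0 <= r < d     (the usual quotient/remainder contract)

def floor_divmod(n, d):
    """Implementation under test."""
    q = n // d
    r = n - q * d
    return q, r

def check_bounded(limit=200):
    """Exhaustively check the contract over a small finite domain."""
    for n in range(limit):
        for d in range(1, limit):
            q, r = floor_divmod(n, d)
            assert n == q * d + r and 0 <= r < d, (n, d, q, r)
    return True

print(check_bounded())   # True: the contract holds on the checked domain
```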
Computer graphics is the study of digital visual contents and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.
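At its lowest level, image synthesis decides pixel values; as a hedged illustration, the following rasterizes a line segment with Bresenham's classic integer algorithm into a small in-memory character grid. The grid size and endpoints are invented for the example.

```python
def bresenham(x0, y0, x1, y1):
    """Integer line rasterization: yields the pixels of the line segment."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield x0, y0
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

# Draw a line into a tiny 10x10 character "framebuffer".
grid = [["." for _ in range(10)] for _ in range(10)]
for x, y in bresenham(0, 0, 9, 4):
    grid[y][x] = "#"
print("\n".join("".join(row) for row in grid))
```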
Information can take the form of images, sound, video or other multimedia. Bits of information can be streamed via signals. Its processing is the central notion of informatics, the European view on computing, which studies information-processing algorithms independently of the type of information carrier – whether it is electrical, mechanical or biological. This field plays an important role in information theory, telecommunications and information engineering, and has applications in medical image computing and speech synthesis, among others. The question "What is the lower bound on the complexity of fast Fourier transform algorithms?" is one of the unsolved problems in theoretical computer science.
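The fast Fourier transform mentioned above is a standard example of an algorithmic speed-up, reducing the naive O(n²) discrete Fourier transform to O(n log n). A hedged textbook sketch of the recursive radix-2 form, for input lengths that are a power of two:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    result = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        result[k] = even[k] + twiddle
        result[k + n // 2] = even[k] - twiddle
    return result

# A constant signal transforms to a single spike at frequency 0.
spectrum = fft([1, 1, 1, 1, 1, 1, 1, 1])
print([round(abs(c), 6) for c in spectrum])   # [8.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```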
Scientific computing (or computational science) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. A major usage of scientific computing is simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, as well as societies and social situations (notably war games) along with their habitats, among many others. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE, as well as software for physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits.
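A minimal, hedged sketch of the simulation idea: integrating the differential equation dy/dt = -k*y with the explicit Euler method and comparing against the closed-form solution y(t) = y0 * exp(-k*t). The constants are invented for the example.

```python
import math

def euler_decay(y0, k, dt, steps):
    """Integrate dy/dt = -k*y with the explicit Euler method."""
    y = y0
    for _ in range(steps):
        y += dt * (-k * y)
    return y

y0, k, t = 1000.0, 0.5, 4.0
numeric = euler_decay(y0, k, dt=0.001, steps=int(t / 0.001))
exact = y0 * math.exp(-k * t)
print(round(numeric, 2), round(exact, 2))   # ~135.27 vs 135.34: Euler approximates the analytic value
```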
Social computing is an area that is concerned with the intersection of social behavior and computational systems. Human–computer interaction research develops theories, principles, and guidelines for user interface designers.
Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software—it does not just deal with the creation or manufacture of new software, but also with its internal arrangement and maintenance. Examples of its concerns include software testing, systems engineering, technical debt and software development processes.
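Of the practices listed, software testing is the easiest to show in miniature; the hedged sketch below uses Python's standard unittest module on a small invented function to show the shape of an automated test.

```python
import unittest

def moving_average(values, window):
    """Return the averages of each consecutive window of the given size."""
    if window <= 0:
        raise ValueError("window must be positive")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

class MovingAverageTest(unittest.TestCase):
    def test_basic_window(self):
        self.assertEqual(moving_average([1, 2, 3, 4], 2), [1.5, 2.5, 3.5])

    def test_rejects_bad_window(self):
        with self.assertRaises(ValueError):
            moving_average([1, 2, 3], 0)

if __name__ == "__main__":
    unittest.main()
```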
Artificial intelligence (AI) aims to or is required to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning, and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the Turing test is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.
Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory. Computer engineers study computational logic and design of computer hardware, from individual processor components, microcontrollers, personal computers to supercomputers and embedded systems. The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members of the Machine Organization department in IBM's main research center in 1959.
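The fetch-decode-execute cycle performed by a central processing unit can be sketched with a toy accumulator machine; the instruction set and program below are invented purely for illustration and do not correspond to any real architecture.

```python
# A toy accumulator machine: each instruction is (opcode, operand).
# LOAD n  : acc = memory[n]      ADD n : acc += memory[n]
# STORE n : memory[n] = acc      HALT  : stop

def run(program, memory):
    acc, pc = 0, 0
    while True:
        opcode, operand = program[pc]       # fetch
        pc += 1
        if opcode == "LOAD":                # decode + execute
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc
        elif opcode == "HALT":
            return memory
        else:
            raise ValueError(f"illegal instruction: {opcode}")

# Compute memory[2] = memory[0] + memory[1].
mem = [7, 35, 0]
prog = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
print(run(prog, mem))   # [7, 35, 42]
```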
Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation including Petri nets, process calculi and the Parallel Random Access Machine model. When multiple computers are connected in a network while using concurrency, this is known as a distributed system. Computers within that distributed system have their own private memory, and information can be exchanged to achieve common goals.
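A minimal, hedged example of concurrent computation with shared state: several Python threads increment a shared counter, and a lock ensures that the interleaved updates do not lose increments. The counts and names are invented for the example.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:                 # mutual exclusion around the shared update
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 400000: no updates are lost because of the lock
```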
Computer networking is the branch of computer science concerned with designing and managing the networks that connect computers worldwide.
Computer security is a branch of computer technology with the objective of protecting information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users.
Historical cryptography is the art of writing and deciphering secret messages. Modern cryptography is the scientific study of problems relating to distributed computations that can be attacked. Technologies studied in modern cryptography include symmetric and asymmetric encryption, digital signatures, cryptographic hash functions, key-agreement protocols, blockchain, zero-knowledge proofs, and garbled circuits.
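Two of the primitives listed above, cryptographic hash functions and keyed message authentication, are available in Python's standard library; the hedged sketch below applies them to an invented message and key and demonstrates integrity checking only, not encryption.

```python
import hashlib
import hmac

message = b"transfer 100 units to account 42"
key = b"shared-secret-key"          # invented key, for illustration only

digest = hashlib.sha256(message).hexdigest()               # collision-resistant fingerprint
tag = hmac.new(key, message, hashlib.sha256).hexdigest()   # keyed authentication tag
print(digest[:16], tag[:16])

# The receiver recomputes the tag and compares it in constant time.
received_tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, received_tag))   # True: the message is authentic and unmodified
```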
A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages. Data mining is a process of discovering patterns in large data sets.
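A minimal sketch of the store-and-query workflow using Python's built-in sqlite3 module; the table and rows are invented for the example.

```python
import sqlite3

# An in-memory relational database managed through SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO books VALUES (?, ?)",
    [("On Computable Numbers", 1936), ("A Mathematical Theory of Communication", 1948)],
)

# Declarative query: the database system decides how to retrieve the rows.
rows = conn.execute("SELECT title FROM books WHERE year < 1940").fetchall()
print(rows)   # [('On Computable Numbers',)]
conn.close()
```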
The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science:
Programming languages can be used to accomplish different tasks in different ways. Common programming paradigms include:
Many languages offer support for multiple paradigms, making the distinction more a matter of style than of technical capabilities.
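A hedged illustration of the multi-paradigm point: the same task, summing the squares of the even numbers in a list, written first in an imperative style and then in a functional style in Python, which supports both.

```python
data = [1, 2, 3, 4, 5, 6]

# Imperative style: explicit state and step-by-step mutation.
total = 0
for n in data:
    if n % 2 == 0:
        total += n * n

# Functional style: expressions composed without mutable state.
functional_total = sum(n * n for n in data if n % 2 == 0)

print(total, functional_total)   # 56 56
```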
Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science, the prestige of conference papers is greater than that of journal publications. One proposed explanation for this is the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals.
Computer science, also known by near-synonyms such as computing and computer studies, has been taught in UK schools since the days of batch processing, mark-sensitive cards and paper tape, but usually to a select few students. In 1981, the BBC produced a microcomputer and classroom network, and Computer Studies became common for GCE O level students (11–16-year-olds) and Computer Science for A level students. Its importance was recognised, and it became a compulsory part of the National Curriculum for Key Stage 3 & 4. In September 2014 it became an entitlement for all pupils over the age of 4.
In the US, with 14,000 school districts deciding the curriculum, provision was fractured. According to a 2010 report by the Association for Computing Machinery (ACM) and Computer Science Teachers Association (CSTA), only 14 out of 50 states have adopted significant education standards for high school computer science. According to a 2021 report, only 51% of high schools in the US offer computer science.
Israel, New Zealand, and South Korea have included computer science in their national secondary education curricula, and several others are following. | [
{
"paragraph_id": 0,
"text": "Computer science is the study of computation, information, and automation. Computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software). Though more often considered an academic discipline, computer science is closely related to computer programming.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Algorithms and data structures are central to computer science. The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and for preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, planning and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural language processing aims to understand and process textual and linguistic data.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The fundamental concern of computer science is determining what can and cannot be automated. The Turing Award is generally recognized as the highest distinction in computer science.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist, because of various reasons, including the fact that he documented the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he invented his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and \"in less than two years, he had sketched out many of the salient features of the modern computer\". \"A crucial step was the adoption of a punched card system derived from the Jacquard loom\" making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first published algorithm ever specifically tailored for implementation on a computer. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published the 2nd of the only two designs for mechanical analytical engines in history. In 1914, the Spanish engineer Leonardo Torres Quevedo published his Essays on Automatics, and designed, inspired by Babbage, a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, a prototype that demonstrated the feasibility of an electromechanical analytical engine, on which commands could be typed and the results printed automatically. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as \"Babbage's dream come true\".",
"title": "History"
},
{
"paragraph_id": 5,
"text": "During the 1940s, with the development of new and more powerful computing machines such as the Atanasoff–Berry computer and ENIAC, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world. Ultimately, the close relationship between IBM and Columbia University was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science department in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own rights.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Although first proposed in 1956, the term \"computer science\" appears in a 1959 article in Communications of the ACM, in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921. Louis justifies the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline. His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such departments, starting with Purdue in 1962. Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including statistics and databases.",
"title": "Etymology"
},
{
"paragraph_id": 7,
"text": "In the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM—turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist. Three months later in the same journal, comptologist was suggested, followed next year by hypologist. The term computics has also been suggested. In Europe, terms derived from contracted translations of the expression \"automatic information\" (e.g. \"informazione automatica\" in Italian) or \"information and mathematics\" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics, University of Edinburgh). \"In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain.\"",
"title": "Etymology"
},
{
"paragraph_id": 8,
"text": "A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that \"computer science is no more about computers than astronomy is about telescopes.\" The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been exchange of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as cognitive science, linguistics, mathematics, physics, biology, Earth science, statistics, philosophy, and logic.",
"title": "Etymology"
},
{
"paragraph_id": 9,
"text": "Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science. Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter and Alonzo Church and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.",
"title": "Etymology"
},
{
"paragraph_id": 10,
"text": "The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term \"software engineering\" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.",
"title": "Etymology"
},
{
"paragraph_id": 11,
"text": "The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.",
"title": "Etymology"
},
{
"paragraph_id": 12,
"text": "Despite the word \"science\" in its name, there is debate over whether or not computer science is a discipline of science, mathematics, or engineering. Allen Newell and Herbert A. Simon argued in 1975,",
"title": "Philosophy"
},
{
"paragraph_id": 13,
"text": "Computer science is an empirical discipline. We would have called it an experimental science, but like astronomy, economics, and geology, some of its unique forms of observation and experience do not fit a narrow stereotype of the experimental method. Nonetheless, they are experiments. Each new machine that is built is an experiment. Actually constructing the machine poses a question to nature; and we listen for the answer by observing the machine in operation and analyzing it by all analytical and measurement means available.",
"title": "Philosophy"
},
{
"paragraph_id": 14,
"text": "It has since been argued that computer science can be classified as an empirical science since it makes use of empirical testing to evaluate the correctness of programs, but a problem remains in defining the laws and theorems of computer science (if any exist) and defining the nature of experiments in computer science. Proponents of classifying computer science as an engineering discipline argue that the reliability of computational systems is investigated in the same way as bridges in civil engineering and airplanes in aerospace engineering. They also argue that while empirical sciences observe what presently exists, computer science observes what is possible to exist and while scientists discover laws from observation, no proper laws have been found in computer science and it is instead concerned with creating phenomena.",
"title": "Philosophy"
},
{
"paragraph_id": 15,
"text": "Proponents of classifying computer science as a mathematical discipline argue that computer programs are physical realizations of mathematical entities and programs can be deductively reasoned through mathematical formal methods. Computer scientists Edsger W. Dijkstra and Tony Hoare regard instructions for computer programs as mathematical sentences and interpret formal semantics for programming languages as mathematical axiomatic systems.",
"title": "Philosophy"
},
{
"paragraph_id": 16,
"text": "A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics. Peter Denning's working group argued that they are theory, abstraction (modeling), and design. Amnon H. Eden described them as the \"rationalist paradigm\" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the \"technocratic paradigm\" (which might be found in engineering approaches, most prominently in software engineering), and the \"scientific paradigm\" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence). Computer science focuses on methods involved in design, specification, programming, verification, implementation and testing of human-made computing systems.",
"title": "Philosophy"
},
{
"paragraph_id": 17,
"text": "As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software. CSAB, formerly called Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE CS)—identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.",
"title": "Fields"
},
{
"paragraph_id": 18,
"text": "Theoretical Computer Science is mathematical and abstract in spirit, but it derives its motivation from the practical and everyday computation. Its aim is to understand the nature of computation and, as a consequence of this understanding, provide more efficient methodologies.",
"title": "Fields"
},
{
"paragraph_id": 19,
"text": "According to Peter Denning, the fundamental question underlying computer science is, \"What can be automated?\" Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems.",
"title": "Fields"
},
{
"paragraph_id": 20,
"text": "The famous P = NP? problem, one of the Millennium Prize Problems, is an open problem in the theory of computation.",
"title": "Fields"
},
{
"paragraph_id": 21,
"text": "Information theory, closely related to probability and statistics, is related to the quantification of information. This was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.",
"title": "Fields"
},
{
"paragraph_id": 22,
"text": "Data structures and algorithms are the studies of commonly used computational methods and their computational efficiency.",
"title": "Fields"
},
{
"paragraph_id": 23,
"text": "Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals.",
"title": "Fields"
},
{
"paragraph_id": 24,
"text": "Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.",
"title": "Fields"
},
{
"paragraph_id": 25,
"text": "Computer graphics is the study of digital visual contents and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.",
"title": "Fields"
},
{
"paragraph_id": 26,
"text": "Information can take the form of images, sound, video or other multimedia. Bits of information can be streamed via signals. Its processing is the central notion of informatics, the European view on computing, which studies information processing algorithms independently of the type of information carrier – whether it is electrical, mechanical or biological. This field plays important role in information theory, telecommunications, information engineering and has applications in medical image computing and speech synthesis, among others. What is the lower bound on the complexity of fast Fourier transform algorithms? is one of unsolved problems in theoretical computer science.",
"title": "Fields"
},
{
"paragraph_id": 27,
"text": "Scientific computing (or computational science) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. A major usage of scientific computing is simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, as well as societies and social situations (notably war games) along with their habitats, among many others. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE, as well as software for physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits.",
"title": "Fields"
},
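As a minimal example of the modelling-and-simulation workflow described above, the sketch below steps a simple cooling law with the explicit Euler method; the equation, constants, and function name are illustrative assumptions, not drawn from the article.

```python
# Minimal sketch of scientific computing: discretise a differential equation
# and step it numerically. Here, cooling dT/dt = -k (T - T_env) is advanced
# with the explicit Euler method. Constants are arbitrary and illustrative.

def simulate_cooling(T0=90.0, T_env=20.0, k=0.1, dt=0.5, steps=100):
    T = T0
    history = [T]
    for _ in range(steps):
        T += dt * (-k * (T - T_env))   # Euler update
        history.append(T)
    return history

temps = simulate_cooling()
print(f"temperature after {len(temps) - 1} steps: {temps[-1]:.2f}")
```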
{
"paragraph_id": 28,
"text": "Social computing is an area that is concerned with the intersection of social behavior and computational systems. Human–computer interaction research develops theories, principles, and guidelines for user interface designers.",
"title": "Fields"
},
{
"paragraph_id": 29,
"text": "Software engineering is the study of designing, implementing, and modifying the software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software—it does not just deal with the creation or manufacture of new software, but its internal arrangement and maintenance. For example software testing, systems engineering, technical debt and software development processes.",
"title": "Fields"
},
{
"paragraph_id": 30,
"text": "Artificial intelligence (AI) aims to or is required to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning, and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting point in the late 1940s was Alan Turing's question \"Can computers think?\", and the question remains effectively unanswered, although the Turing test is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.",
"title": "Fields"
},
{
"paragraph_id": 31,
"text": "Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory. Computer engineers study computational logic and design of computer hardware, from individual processor components, microcontrollers, personal computers to supercomputers and embedded systems. The term \"architecture\" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members of the Machine Organization department in IBM's main research center in 1959.",
"title": "Fields"
},
{
"paragraph_id": 32,
"text": "Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation including Petri nets, process calculi and the Parallel Random Access Machine model. When multiple computers are connected in a network while using concurrency, this is known as a distributed system. Computers within that distributed system have their own private memory, and information can be exchanged to achieve common goals.",
"title": "Fields"
},
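A minimal sketch of the concurrency ideas above: several threads execute simultaneously and interact through shared memory, so a lock coordinates access. This is a plain shared-memory example, not a Petri net, process-calculus, or PRAM model, and the variable names are invented.

```python
# Several computations run at once and interact through shared state;
# a lock makes the interaction safe.
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:               # coordinate access to shared memory
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 40_000         # deterministic only because of the lock
```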
{
"paragraph_id": 33,
"text": "This branch of computer science aims to manage networks between computers worldwide.",
"title": "Fields"
},
{
"paragraph_id": 34,
"text": "Computer security is a branch of computer technology with the objective of protecting information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users.",
"title": "Fields"
},
{
"paragraph_id": 35,
"text": "Historical cryptography is the art of writing and deciphering secret messages. Modern cryptography is the scientific study of problems relating to distributed computations that can be attacked. Technologies studied in modern cryptography include symmetric and asymmetric encryption, digital signatures, cryptographic hash functions, key-agreement protocols, blockchain, zero-knowledge proofs, and garbled circuits.",
"title": "Fields"
},
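As a small illustration of two of the primitives listed above, the sketch below uses Python's standard hashlib and hmac modules to compute a cryptographic hash and a keyed authentication tag built from it. The key and message are placeholders; real systems also need key management and higher-level protocols.

```python
# Cryptographic hash (SHA-256) and a keyed tag (HMAC) built from it.
import hashlib
import hmac

message = b"transfer 100 to account 42"
digest = hashlib.sha256(message).hexdigest()      # any change alters the digest
print("SHA-256:", digest)

key = b"shared-secret-key"                        # assumed pre-shared key
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver, holding the same key, verifies integrity and authenticity.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)
```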
{
"paragraph_id": 36,
"text": "A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages. Data mining is a process of discovering patterns in large data sets.",
"title": "Fields"
},
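A minimal sketch of the store-and-query role of a database management system, using Python's built-in SQLite driver; the schema and rows are invented for the example.

```python
# Create, populate, and query a small relational database in memory.
import sqlite3

conn = sqlite3.connect(":memory:")          # throwaway in-memory database
cur = conn.cursor()
cur.execute("CREATE TABLE papers (id INTEGER PRIMARY KEY, title TEXT, year INTEGER)")
cur.executemany(
    "INSERT INTO papers (title, year) VALUES (?, ?)",
    [("A Mathematical Theory of Communication", 1948),
     ("On Computable Numbers", 1936)],
)
conn.commit()

# A declarative query: the DBMS decides how to retrieve the matching rows.
cur.execute("SELECT title FROM papers WHERE year < 1940")
print(cur.fetchall())                       # [('On Computable Numbers',)]
conn.close()
```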
{
"paragraph_id": 37,
"text": "The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science:",
"title": "Discoveries"
},
{
"paragraph_id": 38,
"text": "Programming languages can be used to accomplish different tasks in different ways. Common programming paradigms include:",
"title": "Programming paradigms"
},
{
"paragraph_id": 39,
"text": "Many languages offer support for multiple paradigms, making the distinction more a matter of style than of technical capabilities.",
"title": "Programming paradigms"
},
{
"paragraph_id": 40,
"text": "Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science, the prestige of conference papers is greater than that of journal publications. One proposed explanation for this is the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals.",
"title": "Research"
},
{
"paragraph_id": 41,
"text": "Computer Science, known by its near synonyms, Computing, Computer Studies, has been taught in UK schools since the days of batch processing, mark sensitive cards and paper tape but usually to a select few students. In 1981, the BBC produced a micro-computer and classroom network and Computer Studies became common for GCE O level students (11–16-year-old), and Computer Science to A level students. Its importance was recognised, and it became a compulsory part of the National Curriculum, for Key Stage 3 & 4. In September 2014 it became an entitlement for all pupils over the age of 4.",
"title": "Education"
},
{
"paragraph_id": 42,
"text": "In the US, with 14,000 school districts deciding the curriculum, provision was fractured. According to a 2010 report by the Association for Computing Machinery (ACM) and Computer Science Teachers Association (CSTA), only 14 out of 50 states have adopted significant education standards for high school computer science. According to a 2021 report, only 51% of high schools in the US offer computer science.",
"title": "Education"
},
{
"paragraph_id": 43,
"text": "Israel, New Zealand, and South Korea have included computer science in their national secondary education curricula, and several others are following.",
"title": "Education"
}
] | Computer science is the study of computation, information, and automation. Computer science spans theoretical disciplines to applied disciplines. Though more often considered an academic discipline, computer science is closely related to computer programming. Algorithms and data structures are central to computer science. The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and for preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, planning and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural language processing aims to understand and process textual and linguistic data. The fundamental concern of computer science is determining what can and cannot be automated. The Turing Award is generally recognized as the highest distinction in computer science. | 2001-11-18T22:41:50Z | 2023-12-30T03:37:39Z | [
"Template:TOClimit",
"Template:Webarchive",
"Template:Refend",
"Template:Software engineering",
"Template:Use mdy dates",
"Template:Cite web",
"Template:Cite journal",
"Template:Citation",
"Template:Cbignore",
"Template:Authority control",
"Template:Use American English",
"Template:Blockquote",
"Template:Cite encyclopedia",
"Template:Wikibooks",
"Template:Div col",
"Template:Library resources box",
"Template:Pp-vandalism",
"Template:Pp-move-indef",
"Template:TopicTOC-Computer science",
"Template:History of computing",
"Template:Sfn",
"Template:See also",
"Template:Dynamic list",
"Template:ISBN",
"Template:Cite news",
"Template:Glossaries of science and engineering",
"Template:Other uses",
"Template:Anchor",
"Template:Further",
"Template:Div col end",
"Template:Reflist",
"Template:Cite conference",
"Template:Short description",
"Template:Cite book",
"Template:Refbegin",
"Template:Computer science",
"Template:Multiple image",
"Template:Main",
"Template:Math",
"Template:Refn",
"Template:Sister project links"
] | https://en.wikipedia.org/wiki/Computer_science |
5,324 | Catalan | Catalan may refer to:
From, or related to Catalonia:
Mathematical concepts named after mathematician Eugène Catalan: | [
{
"paragraph_id": 0,
"text": "Catalan may refer to:",
"title": ""
},
{
"paragraph_id": 1,
"text": "From, or related to Catalonia:",
"title": "Catalonia"
},
{
"paragraph_id": 2,
"text": "Mathematical concepts named after mathematician Eugène Catalan:",
"title": "Mathematics"
}
] | Catalan may refer to: | 2023-03-10T17:18:17Z | [
"Template:Wiktionary",
"Template:TOC right",
"Template:Canned search",
"Template:Lookfrom",
"Template:Intitle",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/Catalan |
|
5,326 | Creationism | Creationism is the religious belief that nature, and aspects such as the universe, Earth, life, and humans, originated with supernatural acts of divine creation. In its broadest sense, creationism includes a continuum of religious views, which vary in their acceptance or rejection of scientific explanations such as evolution that describe the origin and development of natural phenomena.
The term creationism most often refers to belief in special creation; the claim that the universe and lifeforms were created as they exist today by divine action, and that the only true explanations are those which are compatible with a Christian fundamentalist literal interpretation of the creation myth found in the Bible's Genesis creation narrative. Since the 1970s, the most common form of this has been Young Earth creationism which posits special creation of the universe and lifeforms within the last 10,000 years on the basis of flood geology, and promotes pseudoscientific creation science. From the 18th century onward, Old Earth creationism accepted geological time harmonized with Genesis through gap or day-age theory, while supporting anti-evolution. Modern old-Earth creationists support progressive creationism and continue to reject evolutionary explanations. Following political controversy, creation science was reformulated as intelligent design and neo-creationism.
Mainline Protestants and the Catholic Church reconcile modern science with their faith in Creation through forms of theistic evolution which hold that God purposefully created through the laws of nature, and accept evolution. Some groups call their belief evolutionary creationism. Less prominently, there are also members of the Islamic and Hindu faiths who are creationists. Use of the term "creationist" in this context dates back to Charles Darwin's unpublished 1842 sketch draft for what became On the Origin of Species, and he used the term later in letters to colleagues. In 1873, Asa Gray published an article in The Nation saying a "special creationist" who held that species "were supernaturally originated just as they are, by the very terms of his doctrine places them out of the reach of scientific explanation."
The basis for many creationists' beliefs is a literal or quasi-literal interpretation of the Book of Genesis. The Genesis creation narratives (Genesis 1–2) describe how God brings the Universe into being in a series of creative acts over six days and places the first man and woman (Adam and Eve) in the Garden of Eden. This story is the basis of creationist cosmology and biology. The Genesis flood narrative (Genesis 6–9) tells how God destroys the world and all life through a great flood, saving representatives of each form of life by means of Noah's Ark. This forms the basis of creationist geology, better known as flood geology.
Recent decades have seen attempts to de-link creationism from the Bible and recast it as science; these include creation science and intelligent design.
To counter the common misunderstanding that the creation–evolution controversy was a simple dichotomy of views, with "creationists" set against "evolutionists", Eugenie Scott of the National Center for Science Education produced a diagram and description of a continuum of religious views as a spectrum ranging from extreme literal biblical creationism to materialist evolution, grouped under main headings. This was used in public presentations, then published in 1999 in Reports of the NCSE. Other versions of a taxonomy of creationists were produced, and comparisons were made between the different groupings. In 2009 Scott produced a revised continuum taking account of these issues, emphasizing that intelligent design creationism overlaps other types, and each type is a grouping of various beliefs and positions. The revised diagram is labelled to show a spectrum relating to positions on the age of the Earth, and the part played by special creation as against evolution. This was published in the book Evolution Vs. Creationism: An Introduction, and the NCSE website was rewritten on the basis of the book version.
The main general types are listed below.
Young Earth creationists such as Ken Ham and Doug Phillips believe that God created the Earth within the last ten thousand years, with a literalist interpretation of the Genesis creation narrative, within the approximate time-frame of biblical genealogies. Most young Earth creationists believe that the universe has a similar age to the Earth. A few assign a much older age to the universe than to Earth. Young Earth creationism gives the universe an age consistent with the Ussher chronology and other young Earth time frames. Other young Earth creationists believe that the Earth and the universe were created with the appearance of age, so that the world appears to be much older than it is, and that this appearance is what gives the geological findings and other methods of dating the Earth and the universe their much longer timelines.
The Christian organizations Answers in Genesis (AiG), Institute for Creation Research (ICR) and the Creation Research Society (CRS) promote young Earth creationism in the United States. Carl Baugh's Creation Evidence Museum in Texas, United States, and AiG's Creation Museum and Ark Encounter in Kentucky, United States, were opened to promote young Earth creationism. Creation Ministries International promotes young Earth views in Australia, Canada, South Africa, New Zealand, the United States, and the United Kingdom.
Among Roman Catholics, the Kolbe Center for the Study of Creation promotes similar ideas.
Old Earth creationism holds that the physical universe was created by God, but that the creation event described in the Book of Genesis is to be taken figuratively. This group generally believes that the age of the universe and the age of the Earth are as described by astronomers and geologists, but that details of modern evolutionary theory are questionable.
Old Earth creationism itself comes in at least three types:
Gap creationism (also known as ruin-restoration creationism, restoration creationism, or the Gap Theory) is a form of old Earth creationism that posits that the six-yom creation period, as described in the Book of Genesis, involved six literal 24-hour days, but that there was a gap of time between two distinct creations in the first and the second verses of Genesis, which the theory states explains many scientific observations, including the age of the Earth. Thus, the six days of creation (verse 3 onwards) start sometime after the Earth was "without form and void." This allows an indefinite gap of time to be inserted after the original creation of the universe, but prior to the Genesis creation narrative, (when present biological species and humanity were created). Gap theorists can therefore agree with the scientific consensus regarding the age of the Earth and universe, while maintaining a literal interpretation of the biblical text.
Some gap creationists expand the basic version of creationism by proposing a "primordial creation" of biological life within the "gap" of time. This is thought to be "the world that then was" mentioned in 2 Peter 3:3–6. Discoveries of fossils and archaeological ruins older than 10,000 years are generally ascribed to this "world that then was," which may also be associated with Lucifer's rebellion.
Day-age creationism, a type of old Earth creationism, is a metaphorical interpretation of the creation accounts in Genesis. It holds that the six days referred to in the Genesis account of creation are not ordinary 24-hour days, but are much longer periods (from thousands to billions of years). The Genesis account is then reconciled with the age of the Earth. Proponents of the day-age theory can be found among both theistic evolutionists, who accept the scientific consensus on evolution, and progressive creationists, who reject it. The theories are said to be built on the understanding that the Hebrew word yom is also used to refer to a time period, with a beginning and an end and not necessarily that of a 24-hour day.
The day-age theory attempts to reconcile the Genesis creation narrative and modern science by asserting that the creation "days" were not ordinary 24-hour days, but actually lasted for long periods of time (as day-age implies, the "days" each lasted an age). According to this view, the sequence and duration of the creation "days" may be paralleled to the scientific consensus for the age of the earth and the universe.
Progressive creationism is the religious belief that God created new forms of life gradually over a period of hundreds of millions of years. As a form of old Earth creationism, it accepts mainstream geological and cosmological estimates for the age of the Earth, some tenets of biology such as microevolution as well as archaeology to make its case. In this view creation occurred in rapid bursts in which all "kinds" of plants and animals appear in stages lasting millions of years. The bursts are followed by periods of stasis or equilibrium to accommodate new arrivals. These bursts represent instances of God creating new types of organisms by divine intervention. As viewed from the archaeological record, progressive creationism holds that "species do not gradually appear by the steady transformation of its ancestors; [but] appear all at once and 'fully formed'."
The view rejects macroevolution, claiming it is biologically untenable and not supported by the fossil record, as well as rejects the concept of common descent from a last universal common ancestor. Thus the evidence for macroevolution is claimed to be false, but microevolution is accepted as a genetic parameter designed by the Creator into the fabric of genetics to allow for environmental adaptations and survival. Generally, it is viewed by proponents as a middle ground between literal creationism and evolution. Organizations such as Reasons To Believe, founded by Hugh Ross, promote this version of creationism.
Progressive creationism can be held in conjunction with hermeneutic approaches to the Genesis creation narrative such as the day-age creationism or framework/metaphoric/poetic views.
Creation science, or initially scientific creationism, is a pseudoscience that emerged in the 1960s with proponents aiming to have young Earth creationist beliefs taught in school science classes as a counter to teaching of evolution. Common features of creation science argument include: creationist cosmologies which accommodate a universe on the order of thousands of years old, criticism of radiometric dating through a technical argument about radiohalos, explanations for the fossil record as a record of the Genesis flood narrative (see flood geology), and explanations for the present diversity as a result of pre-designed genetic variability and partially due to the rapid degradation of the perfect genomes God placed in "created kinds" or "baramins" due to mutations.
Neo-creationism is a pseudoscientific movement which aims to restate creationism in terms more likely to be well received by the public, by policy makers, by educators and by the scientific community. It aims to re-frame the debate over the origins of life in non-religious terms and without appeals to scripture. This comes in response to the 1987 ruling by the United States Supreme Court in Edwards v. Aguillard that creationism is an inherently religious concept and that advocating it as correct or accurate in public-school curricula violates the Establishment Clause of the First Amendment.
One of the principal claims of neo-creationism propounds that ostensibly objective orthodox science, with a foundation in naturalism, is actually a dogmatically atheistic religion. Its proponents argue that the scientific method excludes certain explanations of phenomena, particularly where they point towards supernatural elements, thus effectively excluding religious insight from contributing to understanding the universe. This leads to an open and often hostile opposition to what neo-creationists term "Darwinism", which they generally mean to refer to evolution, but which they may extend to include such concepts as abiogenesis, stellar evolution and the Big Bang theory.
Unlike their philosophical forebears, neo-creationists largely do not believe in many of the traditional cornerstones of creationism such as a young Earth, or in a dogmatically literal interpretation of the Bible.
Intelligent design (ID) is the pseudoscientific view that "certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection." All of its leading proponents are associated with the Discovery Institute, a think tank whose wedge strategy aims to replace the scientific method with "a science consonant with Christian and theistic convictions" which accepts supernatural explanations. It is widely accepted in the scientific and academic communities that intelligent design is a form of creationism, and is sometimes referred to as "intelligent design creationism."
ID originated as a re-branding of creation science in an attempt to avoid a series of court decisions ruling out the teaching of creationism in American public schools, and the Discovery Institute has run a series of campaigns to change school curricula. In Australia, where curricula are under the control of state governments rather than local school boards, there was a public outcry when the notion of ID being taught in science classes was raised by the Federal Education Minister Brendan Nelson; the minister quickly conceded that the correct forum for ID, if it were to be taught, is in religious or philosophy classes.
In the US, teaching of intelligent design in public schools has been decisively ruled by a federal district court to be in violation of the Establishment Clause of the First Amendment to the United States Constitution. In Kitzmiller v. Dover, the court found that intelligent design is not science and "cannot uncouple itself from its creationist, and thus religious, antecedents," and hence cannot be taught as an alternative to evolution in public school science classrooms under the jurisdiction of that court. This sets a persuasive precedent, based on previous US Supreme Court decisions in Edwards v. Aguillard and Epperson v. Arkansas (1968), and by the application of the Lemon test, that creates a legal hurdle to teaching intelligent design in public school districts in other federal court jurisdictions.
In astronomy, the geocentric model (also known as geocentrism, or the Ptolemaic system), is a description of the cosmos where Earth is at the orbital center of all celestial bodies. This model served as the predominant cosmological system in many ancient civilizations such as ancient Greece. As such, they assumed that the Sun, Moon, stars, and naked eye planets circled Earth, including the noteworthy systems of Aristotle (see Aristotelian physics) and Ptolemy.
Articles arguing that geocentrism was the biblical perspective appeared in some early creation science newsletters associated with the Creation Research Society pointing to some passages in the Bible, which, when taken literally, indicate that the daily apparent motions of the Sun and the Moon are due to their actual motions around the Earth rather than due to the rotation of the Earth about its axis. For example, Joshua 10:12–13 where the Sun and Moon are said to stop in the sky, and Psalms 93:1 where the world is described as immobile. Contemporary advocates for such religious beliefs include Robert Sungenis, co-author of the self-published Galileo Was Wrong: The Church Was Right (2006). These people subscribe to the view that a plain reading of the Bible contains an accurate account of the manner in which the universe was created and requires a geocentric worldview. Most contemporary creationist organizations reject such perspectives.
The Omphalos hypothesis is one attempt to reconcile the scientific evidence that the universe is billions of years old with a literal interpretation of the Genesis creation narrative, which implies that the Earth is only a few thousand years old. It is based on the religious belief that the universe was created by a divine being, within the past six to ten thousand years (in keeping with flood geology), and that the presence of objective, verifiable evidence that the universe is older than approximately ten millennia is due to the creator introducing false evidence that makes the universe appear significantly older.
The idea was named after the title of an 1857 book, Omphalos by Philip Henry Gosse, in which Gosse argued that in order for the world to be functional God must have created the Earth with mountains and canyons, trees with growth rings, Adam and Eve with fully grown hair, fingernails, and navels (ὀμφαλός omphalos is Greek for "navel"), and all living creatures with fully formed evolutionary features, and so on, and that, therefore, no empirical evidence about the age of the Earth or universe can be taken as reliable.
Various supporters of Young Earth creationism have given different explanations for their belief that the universe is filled with false evidence of the universe's age, including a belief that some things needed to be created at a certain age for the ecosystems to function, or their belief that the creator was deliberately planting deceptive evidence. The idea has seen some revival in the 20th century by some modern creationists, who have extended the argument to address the "starlight problem". The idea has been criticised as Last Thursdayism, and on the grounds that it requires a deliberately deceptive creator.
Theistic evolution, or evolutionary creation, is a belief that "the personal God of the Bible created the universe and life through evolutionary processes." According to the American Scientific Affiliation:
A theory of theistic evolution (TE) – also called evolutionary creation – proposes that God's method of creation was to cleverly design a universe in which everything would naturally evolve. Usually the "evolution" in "theistic evolution" means Total Evolution – astronomical evolution (to form galaxies, solar systems,...) and geological evolution (to form the earth's geology) plus chemical evolution (to form the first life) and biological evolution (for the development of life) – but it can refer only to biological evolution.
Through the 19th century the term creationism most commonly referred to direct creation of individual souls, in contrast to traducianism. Following the publication of Vestiges of the Natural History of Creation, there was interest in ideas of Creation by divine law. In particular, the liberal theologian Baden Powell argued that this illustrated the Creator's power better than the idea of miraculous creation, which he thought ridiculous. When On the Origin of Species was published, the cleric Charles Kingsley wrote of evolution as "just as noble a conception of Deity." Darwin's view at the time was of God creating life through the laws of nature, and the book makes several references to "creation," though he later regretted using the term rather than calling it an unknown process. In America, Asa Gray argued that evolution is the secondary effect, or modus operandi, of the first cause, design, and published a pamphlet defending the book in theistic terms, Natural Selection not inconsistent with Natural Theology. Theistic evolution, also called evolutionary creation, became a popular compromise, and St. George Jackson Mivart was among those accepting evolution but attacking Darwin's naturalistic mechanism. Eventually it was realised that supernatural intervention could not be a scientific explanation, and naturalistic mechanisms such as neo-Lamarckism were favoured as being more compatible with purpose than natural selection.
Some theists took the general view that, instead of faith being in opposition to biological evolution, some or all classical religious teachings about Christian God and creation are compatible with some or all of modern scientific theory, including specifically evolution; it is also known as "evolutionary creation." In Evolution versus Creationism, Eugenie Scott and Niles Eldredge state that it is in fact a type of evolution.
It generally views evolution as a tool used by God, who is both the first cause and immanent sustainer/upholder of the universe; it is therefore well accepted by people of strong theistic (as opposed to deistic) convictions. Theistic evolution can synthesize with the day-age creationist interpretation of the Genesis creation narrative; however most adherents consider that the first chapters of the Book of Genesis should not be interpreted as a "literal" description, but rather as a literary framework or allegory.
From a theistic viewpoint, the underlying laws of nature were designed by God for a purpose, and are so self-sufficient that the complexity of the entire physical universe evolved from fundamental particles in processes such as stellar evolution, life forms developed in biological evolution, and in the same way the origin of life by natural causes has resulted from these laws.
In one form or another, theistic evolution is the view of creation taught at the majority of mainline Protestant seminaries. For Roman Catholics, human evolution is not a matter of religious teaching, and must stand or fall on its own scientific merits. Evolution and the Roman Catholic Church are not in conflict. The Catechism of the Catholic Church comments positively on the theory of evolution, which is neither precluded nor required by the sources of faith, stating that scientific studies "have splendidly enriched our knowledge of the age and dimensions of the cosmos, the development of life-forms and the appearance of man." Roman Catholic schools teach evolution without controversy on the basis that scientific knowledge does not extend beyond the physical, and scientific truth and religious truth cannot be in conflict. Theistic evolution can be described as "creationism" in holding that divine intervention brought about the origin of life or that divine laws govern formation of species, though many creationists (in the strict sense) would deny that the position is creationism at all. In the creation–evolution controversy, its proponents generally take the "evolutionist" side. This sentiment was expressed by Fr. George Coyne, (the Vatican's chief astronomer between 1978 and 2006):
...in America, creationism has come to mean some fundamentalistic, literal, scientific interpretation of Genesis. Judaic-Christian faith is radically creationist, but in a totally different sense. It is rooted in a belief that everything depends upon God, or better, all is a gift from God.
While supporting the methodological naturalism inherent in modern science, the proponents of theistic evolution reject the implication taken by some atheists that this gives credence to ontological materialism. In fact, many modern philosophers of science, including atheists, refer to the long-standing convention in the scientific method that observable events in nature should be explained by natural causes, with the distinction that it does not assume the actual existence or non-existence of the supernatural.
There are also non-Christian forms of creationism, notably Islamic creationism and Hindu creationism.
In the creation myth taught by Bahá'u'lláh, the Bahá'í Faith founder, the universe has "neither beginning nor ending," and that the component elements of the material world have always existed and will always exist. With regard to evolution and the origin of human beings, 'Abdu'l-Bahá gave extensive comments on the subject when he addressed western audiences in the beginning of the 20th century. Transcripts of these comments can be found in Some Answered Questions, Paris Talks and The Promulgation of Universal Peace. 'Abdu'l-Bahá described the human species as having evolved from a primitive form to modern man, but that the capacity to form human intelligence was always in existence.
Buddhism denies a creator deity and posits that mundane deities such as Mahabrahma are sometimes misperceived to be a creator. While Buddhism includes belief in divine beings called devas, it holds that they are mortal, limited in their power, and that none of them are creators of the universe. In the Saṃyutta Nikāya, the Buddha also states that the cycle of rebirths stretches back hundreds of thousands of eons, without discernible beginning.
Major Buddhist Indian philosophers such as Nagarjuna, Vasubandhu, Dharmakirti and Buddhaghosa, consistently critiqued Creator God views put forth by Hindu thinkers.
As of 2006, most Christians around the world accepted evolution as the most likely explanation for the origins of species, and did not take a literal view of the Genesis creation narrative. The United States is an exception where belief in religious fundamentalism is much more likely to affect attitudes towards evolution than it is for believers elsewhere. Political partisanship affecting religious belief may be a factor because political partisanship in the US is highly correlated with fundamentalist thinking, unlike in Europe.
Most contemporary Christian leaders and scholars from mainstream churches, such as Anglicans and Lutherans, consider that there is no conflict between the spiritual meaning of creation and the science of evolution. According to the former archbishop of Canterbury, Rowan Williams, "for most of the history of Christianity, and I think this is fair enough, most of the history of the Christianity there's been an awareness that a belief that everything depends on the creative act of God, is quite compatible with a degree of uncertainty or latitude about how precisely that unfolds in creative time."
Leaders of the Anglican and Roman Catholic churches have made statements in favor of evolutionary theory, as have scholars such as the physicist John Polkinghorne, who argues that evolution is one of the principles through which God created living beings. Earlier supporters of evolutionary theory include Frederick Temple, Asa Gray and Charles Kingsley who were enthusiastic supporters of Darwin's theories upon their publication, and the French Jesuit priest and geologist Pierre Teilhard de Chardin saw evolution as confirmation of his Christian beliefs, despite condemnation from Church authorities for his more speculative theories. Another example is that of Liberal theology, not providing any creation models, but instead focusing on the symbolism in beliefs of the time of authoring Genesis and the cultural environment.
Many Christians and Jews had been considering the idea of the creation history as an allegory (instead of historical) long before the development of Darwin's theory of evolution. For example, Philo, whose works were taken up by early Church writers, wrote that it would be a mistake to think that creation happened in six days, or in any set amount of time. Augustine, writing in the late fourth century and himself a former neoplatonist, argued that everything in the universe was created by God at the same moment in time (and not in six days as a literal reading of the Book of Genesis would seem to require). It appears that both Philo and Augustine felt uncomfortable with the idea of a seven-day creation because it detracted from the notion of God's omnipotence. In 1950, Pope Pius XII stated limited support for the idea in his encyclical Humani generis. In 1996, Pope John Paul II stated that "new knowledge has led to the recognition of the theory of evolution as more than a hypothesis," but, referring to previous papal writings, he concluded that "if the human body takes its origin from pre-existent living matter, the spiritual soul is immediately created by God."
In the US, Evangelical Christians have continued to believe in a literal Genesis. As of 2008, members of evangelical Protestant (70%), Mormon (76%) and Jehovah's Witnesses (90%) denominations were the most likely to reject the evolutionary interpretation of the origins of life.
Jehovah's Witnesses adhere to a combination of gap creationism and day-age creationism, asserting that scientific evidence about the age of the universe is compatible with the Bible, but that the 'days' after Genesis 1:1 were each thousands of years in length.
The historic Christian literal interpretation of creation requires the harmonization of the two creation stories, Genesis 1:1–2:3 and Genesis 2:4–25, for there to be a consistent interpretation. They sometimes seek to ensure that their belief is taught in science classes, mainly in American schools. Opponents reject the claim that the literalistic biblical view meets the criteria required to be considered scientific. Many religious groups teach that God created the Cosmos. From the days of the early Christian Church Fathers there were allegorical interpretations of the Book of Genesis as well as literal aspects.
Christian Science, a system of thought and practice derived from the writings of Mary Baker Eddy, interprets the Book of Genesis figuratively rather than literally. It holds that the material world is an illusion, and consequently not created by God: the only real creation is the spiritual realm, of which the material world is a distorted version. Christian Scientists regard the story of the creation in the Book of Genesis as having symbolic rather than literal meaning. According to Christian Science, both creationism and evolution are false from an absolute or "spiritual" point of view, as they both proceed from a (false) belief in the reality of a material universe. However, Christian Scientists do not oppose the teaching of evolution in schools, nor do they demand that alternative accounts be taught: they believe that both material science and literalist theology are concerned with the illusory, mortal and material, rather than the real, immortal and spiritual. With regard to material theories of creation, Eddy showed a preference for Darwin's theory of evolution over others.
Hindu creationists claim that species of plants and animals are material forms adopted by pure consciousness which live an endless cycle of births and rebirths. Ronald Numbers says that: "Hindu Creationists have insisted on the antiquity of humans, who they believe appeared fully formed as long, perhaps, as trillions of years ago." Hindu creationism is a form of old Earth creationism, according to Hindu creationists the universe may even be older than billions of years. These views are based on the Vedas, the creation myths of which depict an extreme antiquity of the universe and history of the Earth.
In Hindu cosmology, time cyclically repeats general events of creation and destruction, with many "first men", each known as a Manu, the progenitor of mankind. Each Manu successively reigns over a 306.72 million year period known as a manvantara, each ending with the destruction of mankind followed by a sandhya (period of non-activity) before the next manvantara. 120.53 million years have elapsed in the current manvantara (current mankind) according to calculations on Hindu units of time. The universe is cyclically created at the start and destroyed at the end of a kalpa (day of Brahma), lasting for 4.32 billion years, which is followed by a pralaya (period of dissolution) of equal length. 1.97 billion years have elapsed in the current kalpa (current universe). The universal elements or building blocks (unmanifest matter) exist for a period known as a maha-kalpa, lasting for 311.04 trillion years, which is followed by a maha-pralaya (period of great dissolution) of equal length. 155.52 trillion years have elapsed in the current maha-kalpa.
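The durations quoted above are internally consistent with the conversion factors usually given for Hindu units of time. The short check below assumes those factors (1 maha-yuga = 4.32 million years, 71 maha-yugas per manvantara, 1,000 maha-yugas per kalpa, a year of Brahma of 360 days each comprising a kalpa plus a pralaya, and a maha-kalpa of 100 such years); these factors are assumptions drawn from standard accounts, not stated in the paragraph itself.

```python
# Consistency check of the durations quoted above, under the assumed
# conversion factors for Hindu units of time (see the note preceding
# this sketch). All figures are in years.
MAHA_YUGA = 4.32e6                       # assumed: 4.32 million years

manvantara = 71 * MAHA_YUGA              # 306.72 million years
kalpa = 1_000 * MAHA_YUGA                # 4.32 billion years (day of Brahma)
maha_kalpa = 100 * 360 * 2 * kalpa       # 311.04 trillion years

assert round(manvantara / 1e6, 2) == 306.72
assert round(kalpa / 1e9, 2) == 4.32
assert round(maha_kalpa / 1e12, 2) == 311.04
```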
Islamic creationism is the belief that the universe (including humanity) was directly created by God as explained in the Quran. It usually views the Book of Genesis as a corrupted version of God's message. The creation myths in the Quran are vaguer and allow for a wider range of interpretations similar to those in other Abrahamic religions.
Islam also has its own school of theistic evolutionism, which holds that mainstream scientific analysis of the origin of the universe is supported by the Quran. Some Muslims believe in evolutionary creation, especially among liberal movements within Islam.
Writing for The Boston Globe, Drake Bennett noted: "Without a Book of Genesis to account for [...] Muslim creationists have little interest in proving that the age of the Earth is measured in the thousands rather than the billions of years, nor do they show much interest in the problem of the dinosaurs. And the idea that animals might evolve into other animals also tends to be less controversial, in part because there are passages of the Koran that seem to support it. But the issue of whether human beings are the product of evolution is just as fraught among Muslims." Khalid Anees, president of the Islamic Society of Britain, states that Muslims do not agree that one species can develop from another.
Since the 1980s, Turkey has been a site of strong advocacy for creationism, supported by American adherents.
There are several verses in the Qur'an which some modern writers have interpreted as being compatible with the expansion of the universe, Big Bang and Big Crunch theories:
Do not the Unbelievers see that the heavens and the earth were joined together (as one unit of creation), before we clove them asunder? We made from water every living thing. Will they not then believe?
Moreover He comprehended in His design the sky, and it had been (as) smoke: He said to it and to the earth: 'Come ye together, willingly or unwillingly.' They said: 'We do come (together), in willing obedience.'
With power and skill did We construct the Firmament: for it is We Who create the vastness of space.
The Day that We roll up the heavens like a scroll rolled up for books (completed),- even as We produced the first creation, so shall We produce a new one: a promise We have undertaken: truly shall We fulfil it.
The Ahmadiyya movement actively promotes evolutionary theory. Ahmadis interpret scripture from the Qur'an to support the concept of macroevolution and give precedence to scientific theories. Furthermore, unlike orthodox Muslims, Ahmadis believe that humans have gradually evolved from different species. Ahmadis regard Adam as being the first Prophet of God – as opposed to him being the first man on Earth. Rather than wholly adopting the theory of natural selection, Ahmadis promote the idea of a "guided evolution," viewing each stage of the evolutionary process as having been selectively woven by God. Mirza Tahir Ahmad, Fourth Caliph of the Ahmadiyya Muslim Community has stated in his magnum opus Revelation, Rationality, Knowledge & Truth (1998) that evolution did occur but only through God being the One who brings it about. It does not occur itself, according to the Ahmadiyya Muslim Community.
For Orthodox Jews who seek to reconcile discrepancies between science and the creation myths in the Bible, the notion that science and the Bible should even be reconciled through traditional scientific means is questioned. To these groups, science is as true as the Torah and if there seems to be a problem, epistemological limits are to blame for apparently irreconcilable points. They point to discrepancies between what is expected and what actually is to demonstrate that things are not always as they appear. They note that even the root word for 'world' in the Hebrew language, עולם, Olam, means 'hidden' (נעלם, Neh-Eh-Lahm). Just as they know from the Torah that God created man and trees and the light on its way from the stars in their observed state, so too can they know that the world was created over the six days of Creation in a way that reflects progression to its currently-observed state, with the understanding that physical ways to verify this may eventually be identified. This knowledge has been advanced by Rabbi Dovid Gottlieb, former philosophy professor at Johns Hopkins University. Relatively old Kabbalistic sources from well before the scientifically apparent age of the universe was first determined are also in close concord with modern scientific estimates of the age of the universe, according to Rabbi Aryeh Kaplan, and based on Sefer Temunah, an early kabbalistic work attributed to the first-century Tanna Nehunya ben HaKanah. Many kabbalists accepted the teachings of the Sefer HaTemunah, including the medieval Jewish scholar Nahmanides, his close student Isaac ben Samuel of Acre, and David ben Solomon ibn Abi Zimra. Other parallels are derived, among other sources, from Nahmanides, who expounds that there was a Neanderthal-like species with which Adam mated (he did this long before Neanderthals had even been discovered scientifically). Reform Judaism does not take the Torah as a literal text, but rather as a symbolic or open-ended work.
Some contemporary writers such as Rabbi Gedalyah Nadel have sought to reconcile the discrepancy between the account in the Torah and scientific findings by arguing that each day referred to in the Bible was not 24 hours, but billions of years long. Others claim that the Earth was created a few thousand years ago, but was deliberately made to look as if it was five billion years old, e.g. by being created with ready-made fossils. The best known exponent of this approach was Rabbi Menachem Mendel Schneerson. Others state that although the world was physically created in six 24-hour days, the Torah accounts can be interpreted to mean that there was a period of billions of years before the six days of creation.
Most vocal literalist creationists are from the US, and strict creationist views are much less common in other developed countries. According to a study published in Science, a survey of the US, Turkey, Japan and Europe showed that public acceptance of evolution is most prevalent in Iceland, Denmark and Sweden at 80% of the population. There seems to be no significant correlation between believing in evolution and understanding evolutionary science.
A 2009 Nielsen poll showed that 23% of Australians believe "the biblical account of human origins," 42% believe in a "wholly scientific" explanation for the origins of life, while 32% believe in an evolutionary process "guided by God".
A 2013 survey conducted by Auspoll and the Australian Academy of Science found that 80% of Australians believe in evolution (70% believe it is currently occurring, 10% believe in evolution but do not think it is currently occurring), 12% were not sure and 9% stated they do not believe in evolution.
A 2011 Ipsos survey found that 47% of responders in Brazil identified themselves as "creationists and believe that human beings were in fact created by a spiritual force such as the God they believe in and do not believe that the origin of man came from evolving from other species such as apes".
In 2004, IBOPE conducted a poll in Brazil that asked questions about creationism and the teaching of creationism in schools. When asked if creationism should be taught in schools, 89% of people said that creationism should be taught in schools. When asked if the teaching of creationism should replace the teaching of evolution in schools, 75% of people said that the teaching of creationism should replace the teaching of evolution in schools.
A 2012 survey by Angus Reid Public Opinion revealed that 61 percent of Canadians believe in evolution. The poll asked "Where did human beings come from – did we start as singular cells millions of years ago and evolve into our present form, or did God create us in his image 10,000 years ago?"
In 2019, a Research Co. poll asked people in Canada if creationism "should be part of the school curriculum in their province". 38% of Canadians said that creationism should be part of the school curriculum, 39% of Canadians said that it should not be part of the school curriculum, and 23% of Canadians were undecided.
In 2023, a Research Co. poll found that 21% of Canadians "believe God created human beings in their present form within the last 10,000 years". The poll also found that "More than two-in-five Canadians (43%) think creationism should be part of the school curriculum in their province."
In Europe, literalist creationism is more widely rejected, though regular opinion polls are not available. Most people accept that evolution is the most widely accepted scientific theory as taught in most schools. In countries with a Roman Catholic majority, papal acceptance of evolutionary creationism as worthy of study has essentially ended debate on the matter for many people.
In the UK, a 2006 poll on the "origin and development of life", asked participants to choose between three different perspectives on the origin of life: 22% chose creationism, 17% opted for intelligent design, 48% selected evolutionary theory, and the rest did not know. A subsequent 2010 YouGov poll on the correct explanation for the origin of humans found that 9% opted for creationism, 12% intelligent design, 65% evolutionary theory and 13% didn't know. The former Archbishop of Canterbury Rowan Williams, head of the worldwide Anglican Communion, views the idea of teaching creationism in schools as a mistake. In 2009, an Ipsos Mori survey in the United Kingdom found that 54% of Britons agreed with the view: "Evolutionary theories should be taught in science lessons in schools together with other possible perspectives, such as intelligent design and creationism."
In Italy, Education Minister Letizia Moratti wanted to retire evolution from the secondary school level; after one week of massive protests, she reversed her opinion.
There continues to be scattered and possibly mounting efforts on the part of religious groups throughout Europe to introduce creationism into public education. In response, the Parliamentary Assembly of the Council of Europe has released a draft report titled The dangers of creationism in education on June 8, 2007, reinforced by a further proposal of banning it in schools dated October 4, 2007.
Serbia suspended the teaching of evolution for one week in September 2004, under education minister Ljiljana Čolić, only allowing schools to reintroduce evolution into the curriculum if they also taught creationism. "After a deluge of protest from scientists, teachers and opposition parties" says the BBC report, Čolić's deputy made the statement, "I have come here to confirm Charles Darwin is still alive" and announced that the decision was reversed. Čolić resigned after the government said that she had caused "problems that had started to reflect on the work of the entire government."
Poland saw a major controversy over creationism in 2006, when the Deputy Education Minister, Mirosław Orzechowski, denounced evolution as "one of many lies" taught in Polish schools. His superior, Minister of Education Roman Giertych, has stated that the theory of evolution would continue to be taught in Polish schools, "as long as most scientists in our country say that it is the right theory." Giertych's father, Member of the European Parliament Maciej Giertych, has opposed the teaching of evolution and has claimed that dinosaurs and humans co-existed.
A June 2015 - July 2016 Pew poll of Eastern European countries found that 56% of people from Armenia say that humans and other living things have "Existed in present state since the beginning of time". Armenia is followed by 52% from Bosnia, 42% from Moldova, 37% from Lithuania, 34% from Georgia and Ukraine, 33% from Croatia and Romania, 31% from Bulgaria, 29% from Greece and Serbia, 26% from Russia, 25% from Latvia, 23% from Belarus and Poland, 21% from Estonia and Hungary, and 16% from the Czech Republic.
A 2011 Ipsos survey found that 56% of responders in South Africa identified themselves as "creationists and believe that human beings were in fact created by a spiritual force such as the God they believe in and do not believe that the origin of man came from evolving from other species such as apes".
In 2009, an EBS survey in South Korea found that 63% of people believed that creation and evolution should both be taught in schools simultaneously.
A 2017 poll by Pew Research found that 62% of Americans believe humans have evolved over time and 34% of Americans believe humans and other living things have existed in their present form since the beginning of time. A 2019 Gallup creationism survey found that 40% of adults in the United States inclined to the view that "God created humans in their present form at one time within the last 10,000 years" when asked for their views on the origin and development of human beings.
According to a 2014 Gallup poll, about 42% of Americans believe that "God created human beings pretty much in their present form at one time within the last 10,000 years or so." Another 31% believe that "human beings have developed over millions of years from less advanced forms of life, but God guided this process," and 19% believe that "human beings have developed over millions of years from less advanced forms of life, but God had no part in this process."
Belief in creationism is inversely correlated to education; of those with postgraduate degrees, 74% accept evolution. In 1987, Newsweek reported: "By one count there are some 700 scientists with respectable academic credentials (out of a total of 480,000 U.S. earth and life scientists) who give credence to creation-science, the general theory that complex life forms did not evolve but appeared 'abruptly.'"
A 2000 poll for People for the American Way found 70% of the US public felt that evolution was compatible with a belief in God.
According to a study published in Science, between 1985 and 2005 the proportion of adult North Americans who accept evolution declined from 45% to 40%, the proportion who reject evolution declined from 48% to 39%, and the proportion who were unsure increased from 7% to 21%. Besides the US, the study also compared data from 32 European countries, Turkey, and Japan. The only country where acceptance of evolution was lower than in the US was Turkey (25%).
According to a 2011 Fox News poll, 45% of Americans believe in creationism, down from 50% in a similar poll in 1999. 21% believe in 'the theory of evolution as outlined by Darwin and other scientists' (up from 15% in 1999), and 27% answered that both are true (up from 26% in 1999).
In September 2012, educator and television personality Bill Nye spoke with the Associated Press and aired his fears about acceptance of creationism, believing that teaching children that creationism is the only true answer without letting them understand the way science works will prevent any future innovation in the world of science. In February 2014, Nye defended evolution in the classroom in a debate with creationist Ken Ham on the topic of whether creation is a viable model of origins in today's modern, scientific era.
In the US, creationism has become centered in the political controversy over creation and evolution in public education, and whether teaching creationism in science classes conflicts with the separation of church and state. Currently, the controversy comes in the form of whether advocates of the intelligent design movement who wish to "Teach the Controversy" in science classes have conflated science with religion.
People for the American Way polled 1500 North Americans about the teaching of evolution and creationism in November and December 1999. They found that most North Americans were not familiar with creationism, and most North Americans had heard of evolution, but many did not fully understand the basics of the theory. The main findings were:
In such political contexts, creationists argue that their particular religiously based origin belief is superior to those of other belief systems, in particular those made through secular or scientific rationale. Political creationists are opposed by many individuals and organizations who have made detailed critiques and given testimony in various court cases that the alternatives to scientific reasoning offered by creationists are opposed by the consensus of the scientific community.
Most Christians disagree with the teaching of creationism as an alternative to evolution in schools. Several religious organizations, among them the Catholic Church, hold that their faith does not conflict with the scientific consensus regarding evolution. The Clergy Letter Project, which has collected more than 13,000 signatures, is an "endeavor designed to demonstrate that religion and science can be compatible."
In his 2002 article "Intelligent Design as a Theological Problem," George Murphy argues against the view that life on Earth, in all its forms, is direct evidence of God's act of creation (Murphy quotes Phillip E. Johnson's claim that he is speaking "of a God who acted openly and left his fingerprints on all the evidence."). Murphy argues that this view of God is incompatible with the Christian understanding of God as "the one revealed in the cross and resurrection of Christ." The basis of this theology is Isaiah 45:15, "Verily thou art a God that hidest thyself, O God of Israel, the Saviour."
Murphy observes that the execution of a Jewish carpenter by Roman authorities is in and of itself an ordinary event and did not require divine action. On the contrary, for the crucifixion to occur, God had to limit or "empty" himself. It was for this reason that Paul the Apostle wrote, in Philippians 2:5-8:
Let this mind be in you, which was also in Christ Jesus: Who, being in the form of God, thought it not robbery to be equal with God: But made himself of no reputation, and took upon him the form of a servant, and was made in the likeness of men: And being found in fashion as a man, he humbled himself, and became obedient unto death, even the death of the cross.
Murphy concludes that,
Just as the Son of God limited himself by taking human form and dying on a cross, God limits divine action in the world to be in accord with rational laws which God has chosen. This enables us to understand the world on its own terms, but it also means that natural processes hide God from scientific observation.
For Murphy, a theology of the cross requires that Christians accept a methodological naturalism, meaning that one cannot invoke God to explain natural phenomena, while recognizing that such acceptance does not require one to accept a metaphysical naturalism, which proposes that nature is all that there is.
The Jesuit priest George Coyne has stated that it is "unfortunate that, especially here in America, creationism has come to mean ... some literal interpretation of Genesis." He argues that "... Judaic-Christian faith is radically creationist, but in a totally different sense. It is rooted in belief that everything depends on God, or better, all is a gift from God."
Other Christians have expressed qualms about teaching creationism. In March 2006, then Archbishop of Canterbury Rowan Williams, the leader of the world's Anglicans, stated his discomfort about teaching creationism, saying that creationism was "a kind of category mistake, as if the Bible were a theory like other theories." He also said: "My worry is creationism can end up reducing the doctrine of creation rather than enhancing it." The views of the Episcopal Church – a major American-based branch of the Anglican Communion – on teaching creationism resemble those of Williams.
The National Science Teachers Association is opposed to teaching creationism as a science, as is the Association for Science Teacher Education, the National Association of Biology Teachers, the American Anthropological Association, the American Geosciences Institute, the Geological Society of America, the American Geophysical Union, and numerous other professional teaching and scientific societies.
In April 2010, the American Academy of Religion issued Guidelines for Teaching About Religion in K-12 Public Schools in the United States, which included guidance that creation science or intelligent design should not be taught in science classes, as "Creation science and intelligent design represent worldviews that fall outside of the realm of science that is defined as (and limited to) a method of inquiry based on gathering observable and measurable evidence subject to specific principles of reasoning." The guidelines add, however, that these worldviews, as well as other "worldviews that focus on speculation regarding the origins of life represent another important and relevant form of human inquiry that is appropriately studied in literature or social sciences courses. Such study, however, must include a diversity of worldviews representing a variety of religious and philosophical perspectives and must avoid privileging one view as more legitimate than others."
Randy Moore and Sehoya Cotner, from the biology program at the University of Minnesota, reflect on the relevance of teaching creationism in the article "The Creationist Down the Hall: Does It Matter When Teachers Teach Creationism?", in which they write: "Despite decades of science education reform, numerous legal decisions declaring the teaching of creationism in public-school science classes to be unconstitutional, overwhelming evidence supporting evolution, and the many denunciations of creationism as nonscientific by professional scientific societies, creationism remains popular throughout the United States."
Science is a system of knowledge based on observation, empirical evidence, and the development of theories that yield testable explanations and predictions of natural phenomena. By contrast, creationism is often based on literal interpretations of the narratives of particular religious texts. Creationist beliefs involve purported forces that lie outside of nature, such as supernatural intervention, and often do not allow predictions at all. Therefore, these can neither be confirmed nor disproved by scientists. However, many creationist beliefs can be framed as testable predictions about phenomena such as the age of the Earth, its geological history and the origins, distributions and relationships of living organisms found on it. Early science incorporated elements of these beliefs, but as science developed these beliefs were gradually falsified and were replaced with understandings based on accumulated and reproducible evidence that often allows the accurate prediction of future results.
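As an illustrative aside (not drawn from the cited sources), the following worked example sketches what a testable, quantitative claim about the age of the Earth looks like in practice: a radiometric age computed from the standard exponential decay law, assuming a closed system with no initial daughter isotope and taking the uranium-238 half-life as the input value.

$$N(t) = N_0 e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}},$$

so a mineral's measured daughter-to-parent ratio $D/P$ fixes its age:

$$t = \frac{1}{\lambda}\ln\!\left(1 + \frac{D}{P}\right) = \frac{t_{1/2}}{\ln 2}\,\ln\!\left(1 + \frac{D}{P}\right).$$

With $t_{1/2} \approx 4.47$ billion years for uranium-238, a measured ratio $D/P = 1$ gives $t = t_{1/2} \approx 4.5$ billion years, i.e. exactly one half-life has elapsed. The point of the sketch is that the formula yields a definite number that independent samples and independent isotope systems can confirm or contradict, which is the sense in which statements about the age of the Earth are empirically testable.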
Some scientists, such as Stephen Jay Gould, consider science and religion to be two compatible and complementary fields, with authorities in distinct areas of human experience, so-called non-overlapping magisteria. This view is also held by many theologians, who believe that ultimate origins and meaning are addressed by religion, but favor verifiable scientific explanations of natural phenomena over those of creationist beliefs. Other scientists, such as Richard Dawkins, reject the non-overlapping magisteria and argue that, in disproving literal interpretations of creationists, the scientific method also undermines religious texts as a source of truth. Irrespective of this diversity in viewpoints, since creationist beliefs are not supported by empirical evidence, the scientific consensus is that any attempt to teach creationism as science should be rejected. | [
{
"paragraph_id": 0,
"text": "Creationism is the religious belief that nature, and aspects such as the universe, Earth, life, and humans, originated with supernatural acts of divine creation. In its broadest sense, creationism includes a continuum of religious views, which vary in their acceptance or rejection of scientific explanations such as evolution that describe the origin and development of natural phenomena.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The term creationism most often refers to belief in special creation; the claim that the universe and lifeforms were created as they exist today by divine action, and that the only true explanations are those which are compatible with a Christian fundamentalist literal interpretation of the creation myth found in the Bible's Genesis creation narrative. Since the 1970s, the most common form of this has been Young Earth creationism which posits special creation of the universe and lifeforms within the last 10,000 years on the basis of flood geology, and promotes pseudoscientific creation science. From the 18th century onward, Old Earth creationism accepted geological time harmonized with Genesis through gap or day-age theory, while supporting anti-evolution. Modern old-Earth creationists support progressive creationism and continue to reject evolutionary explanations. Following political controversy, creation science was reformulated as intelligent design and neo-creationism.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Mainline Protestants and the Catholic Church reconcile modern science with their faith in Creation through forms of theistic evolution which hold that God purposefully created through the laws of nature, and accept evolution. Some groups call their belief evolutionary creationism. Less prominently, there are also members of the Islamic and Hindu faiths who are creationists. Use of the term \"creationist\" in this context dates back to Charles Darwin's unpublished 1842 sketch draft for what became On the Origin of Species, and he used the term later in letters to colleagues. In 1873, Asa Gray published an article in The Nation saying a \"special creationist\" who held that species \"were supernaturally originated just as they are, by the very terms of his doctrine places them out of the reach of scientific explanation.\"",
"title": ""
},
{
"paragraph_id": 3,
"text": "The basis for many creationists' beliefs is a literal or quasi-literal interpretation of the Book of Genesis. The Genesis creation narratives (Genesis 1–2) describe how God brings the Universe into being in a series of creative acts over six days and places the first man and woman (Adam and Eve) in the Garden of Eden. This story is the basis of creationist cosmology and biology. The Genesis flood narrative (Genesis 6–9) tells how God destroys the world and all life through a great flood, saving representatives of each form of life by means of Noah's Ark. This forms the basis of creationist geology, better known as flood geology.",
"title": "Biblical basis"
},
{
"paragraph_id": 4,
"text": "Recent decades have seen attempts to de-link creationism from the Bible and recast it as science; these include creation science and intelligent design.",
"title": "Biblical basis"
},
{
"paragraph_id": 5,
"text": "To counter the common misunderstanding that the creation–evolution controversy was a simple dichotomy of views, with \"creationists\" set against \"evolutionists\", Eugenie Scott of the National Center for Science Education produced a diagram and description of a continuum of religious views as a spectrum ranging from extreme literal biblical creationism to materialist evolution, grouped under main headings. This was used in public presentations, then published in 1999 in Reports of the NCSE. Other versions of a taxonomy of creationists were produced, and comparisons made between the different groupings. In 2009 Scott produced a revised continuum taking account of these issues, emphasizing that intelligent design creationism overlaps other types, and each type is a grouping of various beliefs and positions. The revised diagram is labelled to shows a spectrum relating to positions on the age of the Earth, and the part played by special creation as against evolution. This was published in the book Evolution Vs. Creationism: An Introduction, and the NCSE website rewritten on the basis of the book version.",
"title": "Types"
},
{
"paragraph_id": 6,
"text": "The main general types are listed below.",
"title": "Types"
},
{
"paragraph_id": 7,
"text": "Young Earth creationists such as Ken Ham and Doug Phillips believe that God created the Earth within the last ten thousand years, with a literalist interpretation of the Genesis creation narrative, within the approximate time-frame of biblical genealogies. Most young Earth creationists believe that the universe has a similar age as the Earth. A few assign a much older age to the universe than to Earth. Young Earth creationism gives the universe an age consistent with the Ussher chronology and other young Earth time frames. Other young Earth creationists believe that the Earth and the universe were created with the appearance of age, so that the world appears to be much older than it is, and that this appearance is what gives the geological findings and other methods of dating the Earth and the universe their much longer timelines.",
"title": "Types"
},
{
"paragraph_id": 8,
"text": "The Christian organizations Answers in Genesis (AiG), Institute for Creation Research (ICR) and the Creation Research Society (CRS) promote young Earth creationism in the United States. Carl Baugh's Creation Evidence Museum in Texas, United States AiG's Creation Museum and Ark Encounter in Kentucky, United States were opened to promote young Earth creationism. Creation Ministries International promotes young Earth views in Australia, Canada, South Africa, New Zealand, the United States, and the United Kingdom.",
"title": "Types"
},
{
"paragraph_id": 9,
"text": "Among Roman Catholics, the Kolbe Center for the Study of Creation promotes similar ideas.",
"title": "Types"
},
{
"paragraph_id": 10,
"text": "Old Earth creationism holds that the physical universe was created by God, but that the creation event described in the Book of Genesis is to be taken figuratively. This group generally believes that the age of the universe and the age of the Earth are as described by astronomers and geologists, but that details of modern evolutionary theory are questionable.",
"title": "Types"
},
{
"paragraph_id": 11,
"text": "Old Earth creationism itself comes in at least three types:",
"title": "Types"
},
{
"paragraph_id": 12,
"text": "Gap creationism (also known as ruin-restoration creationism, restoration creationism, or the Gap Theory) is a form of old Earth creationism that posits that the six-yom creation period, as described in the Book of Genesis, involved six literal 24-hour days, but that there was a gap of time between two distinct creations in the first and the second verses of Genesis, which the theory states explains many scientific observations, including the age of the Earth. Thus, the six days of creation (verse 3 onwards) start sometime after the Earth was \"without form and void.\" This allows an indefinite gap of time to be inserted after the original creation of the universe, but prior to the Genesis creation narrative, (when present biological species and humanity were created). Gap theorists can therefore agree with the scientific consensus regarding the age of the Earth and universe, while maintaining a literal interpretation of the biblical text.",
"title": "Types"
},
{
"paragraph_id": 13,
"text": "Some gap creationists expand the basic version of creationism by proposing a \"primordial creation\" of biological life within the \"gap\" of time. This is thought to be \"the world that then was\" mentioned in 2 Peter 3:3–6. Discoveries of fossils and archaeological ruins older than 10,000 years are generally ascribed to this \"world that then was,\" which may also be associated with Lucifer's rebellion.",
"title": "Types"
},
{
"paragraph_id": 14,
"text": "Day-age creationism, a type of old Earth creationism, is a metaphorical interpretation of the creation accounts in Genesis. It holds that the six days referred to in the Genesis account of creation are not ordinary 24-hour days, but are much longer periods (from thousands to billions of years). The Genesis account is then reconciled with the age of the Earth. Proponents of the day-age theory can be found among both theistic evolutionists, who accept the scientific consensus on evolution, and progressive creationists, who reject it. The theories are said to be built on the understanding that the Hebrew word yom is also used to refer to a time period, with a beginning and an end and not necessarily that of a 24-hour day.",
"title": "Types"
},
{
"paragraph_id": 15,
"text": "The day-age theory attempts to reconcile the Genesis creation narrative and modern science by asserting that the creation \"days\" were not ordinary 24-hour days, but actually lasted for long periods of time (as day-age implies, the \"days\" each lasted an age). According to this view, the sequence and duration of the creation \"days\" may be paralleled to the scientific consensus for the age of the earth and the universe.",
"title": "Types"
},
{
"paragraph_id": 16,
"text": "Progressive creationism is the religious belief that God created new forms of life gradually over a period of hundreds of millions of years. As a form of old Earth creationism, it accepts mainstream geological and cosmological estimates for the age of the Earth, some tenets of biology such as microevolution as well as archaeology to make its case. In this view creation occurred in rapid bursts in which all \"kinds\" of plants and animals appear in stages lasting millions of years. The bursts are followed by periods of stasis or equilibrium to accommodate new arrivals. These bursts represent instances of God creating new types of organisms by divine intervention. As viewed from the archaeological record, progressive creationism holds that \"species do not gradually appear by the steady transformation of its ancestors; [but] appear all at once and \"fully formed.\"",
"title": "Types"
},
{
"paragraph_id": 17,
"text": "The view rejects macroevolution, claiming it is biologically untenable and not supported by the fossil record, as well as rejects the concept of common descent from a last universal common ancestor. Thus the evidence for macroevolution is claimed to be false, but microevolution is accepted as a genetic parameter designed by the Creator into the fabric of genetics to allow for environmental adaptations and survival. Generally, it is viewed by proponents as a middle ground between literal creationism and evolution. Organizations such as Reasons To Believe, founded by Hugh Ross, promote this version of creationism.",
"title": "Types"
},
{
"paragraph_id": 18,
"text": "Progressive creationism can be held in conjunction with hermeneutic approaches to the Genesis creation narrative such as the day-age creationism or framework/metaphoric/poetic views.",
"title": "Types"
},
{
"paragraph_id": 19,
"text": "Creation science, or initially scientific creationism, is a pseudoscience that emerged in the 1960s with proponents aiming to have young Earth creationist beliefs taught in school science classes as a counter to teaching of evolution. Common features of creation science argument include: creationist cosmologies which accommodate a universe on the order of thousands of years old, criticism of radiometric dating through a technical argument about radiohalos, explanations for the fossil record as a record of the Genesis flood narrative (see flood geology), and explanations for the present diversity as a result of pre-designed genetic variability and partially due to the rapid degradation of the perfect genomes God placed in \"created kinds\" or \"baramins\" due to mutations.",
"title": "Types"
},
{
"paragraph_id": 20,
"text": "Neo-creationism is a pseudoscientific movement which aims to restate creationism in terms more likely to be well received by the public, by policy makers, by educators and by the scientific community. It aims to re-frame the debate over the origins of life in non-religious terms and without appeals to scripture. This comes in response to the 1987 ruling by the United States Supreme Court in Edwards v. Aguillard that creationism is an inherently religious concept and that advocating it as correct or accurate in public-school curricula violates the Establishment Clause of the First Amendment.",
"title": "Types"
},
{
"paragraph_id": 21,
"text": "One of the principal claims of neo-creationism propounds that ostensibly objective orthodox science, with a foundation in naturalism, is actually a dogmatically atheistic religion. Its proponents argue that the scientific method excludes certain explanations of phenomena, particularly where they point towards supernatural elements, thus effectively excluding religious insight from contributing to understanding the universe. This leads to an open and often hostile opposition to what neo-creationists term \"Darwinism\", which they generally mean to refer to evolution, but which they may extend to include such concepts as abiogenesis, stellar evolution and the Big Bang theory.",
"title": "Types"
},
{
"paragraph_id": 22,
"text": "Unlike their philosophical forebears, neo-creationists largely do not believe in many of the traditional cornerstones of creationism such as a young Earth, or in a dogmatically literal interpretation of the Bible.",
"title": "Types"
},
{
"paragraph_id": 23,
"text": "Intelligent design (ID) is the pseudoscientific view that \"certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.\" All of its leading proponents are associated with the Discovery Institute, a think tank whose wedge strategy aims to replace the scientific method with \"a science consonant with Christian and theistic convictions\" which accepts supernatural explanations. It is widely accepted in the scientific and academic communities that intelligent design is a form of creationism, and is sometimes referred to as \"intelligent design creationism.\"",
"title": "Types"
},
{
"paragraph_id": 24,
"text": "ID originated as a re-branding of creation science in an attempt to avoid a series of court decisions ruling out the teaching of creationism in American public schools, and the Discovery Institute has run a series of campaigns to change school curricula. In Australia, where curricula are under the control of state governments rather than local school boards, there was a public outcry when the notion of ID being taught in science classes was raised by the Federal Education Minister Brendan Nelson; the minister quickly conceded that the correct forum for ID, if it were to be taught, is in religious or philosophy classes.",
"title": "Types"
},
{
"paragraph_id": 25,
"text": "In the US, teaching of intelligent design in public schools has been decisively ruled by a federal district court to be in violation of the Establishment Clause of the First Amendment to the United States Constitution. In Kitzmiller v. Dover, the court found that intelligent design is not science and \"cannot uncouple itself from its creationist, and thus religious, antecedents,\" and hence cannot be taught as an alternative to evolution in public school science classrooms under the jurisdiction of that court. This sets a persuasive precedent, based on previous US Supreme Court decisions in Edwards v. Aguillard and Epperson v. Arkansas (1968), and by the application of the Lemon test, that creates a legal hurdle to teaching intelligent design in public school districts in other federal court jurisdictions.",
"title": "Types"
},
{
"paragraph_id": 26,
"text": "In astronomy, the geocentric model (also known as geocentrism, or the Ptolemaic system), is a description of the cosmos where Earth is at the orbital center of all celestial bodies. This model served as the predominant cosmological system in many ancient civilizations such as ancient Greece. As such, they assumed that the Sun, Moon, stars, and naked eye planets circled Earth, including the noteworthy systems of Aristotle (see Aristotelian physics) and Ptolemy.",
"title": "Types"
},
{
"paragraph_id": 27,
"text": "Articles arguing that geocentrism was the biblical perspective appeared in some early creation science newsletters associated with the Creation Research Society pointing to some passages in the Bible, which, when taken literally, indicate that the daily apparent motions of the Sun and the Moon are due to their actual motions around the Earth rather than due to the rotation of the Earth about its axis. For example, Joshua 10:12–13 where the Sun and Moon are said to stop in the sky, and Psalms 93:1 where the world is described as immobile. Contemporary advocates for such religious beliefs include Robert Sungenis, co-author of the self-published Galileo Was Wrong: The Church Was Right (2006). These people subscribe to the view that a plain reading of the Bible contains an accurate account of the manner in which the universe was created and requires a geocentric worldview. Most contemporary creationist organizations reject such perspectives.",
"title": "Types"
},
{
"paragraph_id": 28,
"text": "The Omphalos hypothesis is one attempt to reconcile the scientific evidence that the universe is billions of years old with a literal interpretation of the Genesis creation narrative, which implies that the Earth is only a few thousand years old. It is based on the religious belief that the universe was created by a divine being, within the past six to ten thousand years (in keeping with flood geology), and that the presence of objective, verifiable evidence that the universe is older than approximately ten millennia is due to the creator introducing false evidence that makes the universe appear significantly older.",
"title": "Types"
},
{
"paragraph_id": 29,
"text": "The idea was named after the title of an 1857 book, Omphalos by Philip Henry Gosse, in which Gosse argued that in order for the world to be functional God must have created the Earth with mountains and canyons, trees with growth rings, Adam and Eve with fully grown hair, fingernails, and navels (ὀμφαλός omphalos is Greek for \"navel\"), and all living creatures with fully formed evolutionary features, etc..., and that, therefore, no empirical evidence about the age of the Earth or universe can be taken as reliable.",
"title": "Types"
},
{
"paragraph_id": 30,
"text": "Various supporters of Young Earth creationism have given different explanations for their belief that the universe is filled with false evidence of the universe's age, including a belief that some things needed to be created at a certain age for the ecosystems to function, or their belief that the creator was deliberately planting deceptive evidence. The idea has seen some revival in the 20th century by some modern creationists, who have extended the argument to address the \"starlight problem\". The idea has been criticised as Last Thursdayism, and on the grounds that it requires a deliberately deceptive creator.",
"title": "Types"
},
{
"paragraph_id": 31,
"text": "Theistic evolution, or evolutionary creation, is a belief that \"the personal God of the Bible created the universe and life through evolutionary processes.\" According to the American Scientific Affiliation:",
"title": "Theistic evolution"
},
{
"paragraph_id": 32,
"text": "A theory of theistic evolution (TE) – also called evolutionary creation – proposes that God's method of creation was to cleverly design a universe in which everything would naturally evolve. Usually the \"evolution\" in \"theistic evolution\" means Total Evolution – astronomical evolution (to form galaxies, solar systems,...) and geological evolution (to form the earth's geology) plus chemical evolution (to form the first life) and biological evolution (for the development of life) – but it can refer only to biological evolution.",
"title": "Theistic evolution"
},
{
"paragraph_id": 33,
"text": "Through the 19th century the term creationism most commonly referred to direct creation of individual souls, in contrast to traducianism. Following the publication of Vestiges of the Natural History of Creation, there was interest in ideas of Creation by divine law. In particular, the liberal theologian Baden Powell argued that this illustrated the Creator's power better than the idea of miraculous creation, which he thought ridiculous. When On the Origin of Species was published, the cleric Charles Kingsley wrote of evolution as \"just as noble a conception of Deity.\" Darwin's view at the time was of God creating life through the laws of nature, and the book makes several references to \"creation,\" though he later regretted using the term rather than calling it an unknown process. In America, Asa Gray argued that evolution is the secondary effect, or modus operandi, of the first cause, design, and published a pamphlet defending the book in theistic terms, Natural Selection not inconsistent with Natural Theology. Theistic evolution, also called, evolutionary creation, became a popular compromise, and St. George Jackson Mivart was among those accepting evolution but attacking Darwin's naturalistic mechanism. Eventually it was realised that supernatural intervention could not be a scientific explanation, and naturalistic mechanisms such as neo-Lamarckism were favoured as being more compatible with purpose than natural selection.",
"title": "Theistic evolution"
},
{
"paragraph_id": 34,
"text": "Some theists took the general view that, instead of faith being in opposition to biological evolution, some or all classical religious teachings about Christian God and creation are compatible with some or all of modern scientific theory, including specifically evolution; it is also known as \"evolutionary creation.\" In Evolution versus Creationism, Eugenie Scott and Niles Eldredge state that it is in fact a type of evolution.",
"title": "Theistic evolution"
},
{
"paragraph_id": 35,
"text": "It generally views evolution as a tool used by God, who is both the first cause and immanent sustainer/upholder of the universe; it is therefore well accepted by people of strong theistic (as opposed to deistic) convictions. Theistic evolution can synthesize with the day-age creationist interpretation of the Genesis creation narrative; however most adherents consider that the first chapters of the Book of Genesis should not be interpreted as a \"literal\" description, but rather as a literary framework or allegory.",
"title": "Theistic evolution"
},
{
"paragraph_id": 36,
"text": "From a theistic viewpoint, the underlying laws of nature were designed by God for a purpose, and are so self-sufficient that the complexity of the entire physical universe evolved from fundamental particles in processes such as stellar evolution, life forms developed in biological evolution, and in the same way the origin of life by natural causes has resulted from these laws.",
"title": "Theistic evolution"
},
{
"paragraph_id": 37,
"text": "In one form or another, theistic evolution is the view of creation taught at the majority of mainline Protestant seminaries. For Roman Catholics, human evolution is not a matter of religious teaching, and must stand or fall on its own scientific merits. Evolution and the Roman Catholic Church are not in conflict. The Catechism of the Catholic Church comments positively on the theory of evolution, which is neither precluded nor required by the sources of faith, stating that scientific studies \"have splendidly enriched our knowledge of the age and dimensions of the cosmos, the development of life-forms and the appearance of man.\" Roman Catholic schools teach evolution without controversy on the basis that scientific knowledge does not extend beyond the physical, and scientific truth and religious truth cannot be in conflict. Theistic evolution can be described as \"creationism\" in holding that divine intervention brought about the origin of life or that divine laws govern formation of species, though many creationists (in the strict sense) would deny that the position is creationism at all. In the creation–evolution controversy, its proponents generally take the \"evolutionist\" side. This sentiment was expressed by Fr. George Coyne, (the Vatican's chief astronomer between 1978 and 2006):",
"title": "Theistic evolution"
},
{
"paragraph_id": 38,
"text": "...in America, creationism has come to mean some fundamentalistic, literal, scientific interpretation of Genesis. Judaic-Christian faith is radically creationist, but in a totally different sense. It is rooted in a belief that everything depends upon God, or better, all is a gift from God.",
"title": "Theistic evolution"
},
{
"paragraph_id": 39,
"text": "While supporting the methodological naturalism inherent in modern science, the proponents of theistic evolution reject the implication taken by some atheists that this gives credence to ontological materialism. In fact, many modern philosophers of science, including atheists, refer to the long-standing convention in the scientific method that observable events in nature should be explained by natural causes, with the distinction that it does not assume the actual existence or non-existence of the supernatural.",
"title": "Theistic evolution"
},
{
"paragraph_id": 40,
"text": "There are also non-Christian forms of creationism, notably Islamic creationism and Hindu creationism.",
"title": "Religious views"
},
{
"paragraph_id": 41,
"text": "In the creation myth taught by Bahá'u'lláh, the Bahá'í Faith founder, the universe has \"neither beginning nor ending,\" and that the component elements of the material world have always existed and will always exist. With regard to evolution and the origin of human beings, 'Abdu'l-Bahá gave extensive comments on the subject when he addressed western audiences in the beginning of the 20th century. Transcripts of these comments can be found in Some Answered Questions, Paris Talks and The Promulgation of Universal Peace. 'Abdu'l-Bahá described the human species as having evolved from a primitive form to modern man, but that the capacity to form human intelligence was always in existence.",
"title": "Religious views"
},
{
"paragraph_id": 42,
"text": "Buddhism denies a creator deity and posits that mundane deities such as Mahabrahma are sometimes misperceived to be a creator. While Buddhism includes belief in divine beings called devas, it holds that they are mortal, limited in their power, and that none of them are creators of the universe. In the Saṃyutta Nikāya, the Buddha also states that the cycle of rebirths stretches back hundreds of thousands of eons, without discernible beginning.",
"title": "Religious views"
},
{
"paragraph_id": 43,
"text": "Major Buddhist Indian philosophers such as Nagarjuna, Vasubandhu, Dharmakirti and Buddhaghosa, consistently critiqued Creator God views put forth by Hindu thinkers.",
"title": "Religious views"
},
{
"paragraph_id": 44,
"text": "As of 2006, most Christians around the world accepted evolution as the most likely explanation for the origins of species, and did not take a literal view of the Genesis creation narrative. The United States is an exception where belief in religious fundamentalism is much more likely to affect attitudes towards evolution than it is for believers elsewhere. Political partisanship affecting religious belief may be a factor because political partisanship in the US is highly correlated with fundamentalist thinking, unlike in Europe.",
"title": "Religious views"
},
{
"paragraph_id": 45,
"text": "Most contemporary Christian leaders and scholars from mainstream churches, such as Anglicans and Lutherans, consider that there is no conflict between the spiritual meaning of creation and the science of evolution. According to the former archbishop of Canterbury, Rowan Williams, \"for most of the history of Christianity, and I think this is fair enough, most of the history of the Christianity there's been an awareness that a belief that everything depends on the creative act of God, is quite compatible with a degree of uncertainty or latitude about how precisely that unfolds in creative time.\"",
"title": "Religious views"
},
{
"paragraph_id": 46,
"text": "Leaders of the Anglican and Roman Catholic churches have made statements in favor of evolutionary theory, as have scholars such as the physicist John Polkinghorne, who argues that evolution is one of the principles through which God created living beings. Earlier supporters of evolutionary theory include Frederick Temple, Asa Gray and Charles Kingsley who were enthusiastic supporters of Darwin's theories upon their publication, and the French Jesuit priest and geologist Pierre Teilhard de Chardin saw evolution as confirmation of his Christian beliefs, despite condemnation from Church authorities for his more speculative theories. Another example is that of Liberal theology, not providing any creation models, but instead focusing on the symbolism in beliefs of the time of authoring Genesis and the cultural environment.",
"title": "Religious views"
},
{
"paragraph_id": 47,
"text": "Many Christians and Jews had been considering the idea of the creation history as an allegory (instead of historical) long before the development of Darwin's theory of evolution. For example, Philo, whose works were taken up by early Church writers, wrote that it would be a mistake to think that creation happened in six days, or in any set amount of time. Augustine of the late fourth century who was also a former neoplatonist argued that everything in the universe was created by God at the same moment in time (and not in six days as a literal reading of the Book of Genesis would seem to require); It appears that both Philo and Augustine felt uncomfortable with the idea of a seven-day creation because it detracted from the notion of God's omnipotence. In 1950, Pope Pius XII stated limited support for the idea in his encyclical Humani generis. In 1996, Pope John Paul II stated that \"new knowledge has led to the recognition of the theory of evolution as more than a hypothesis,\" but, referring to previous papal writings, he concluded that \"if the human body takes its origin from pre-existent living matter, the spiritual soul is immediately created by God.\"",
"title": "Religious views"
},
{
"paragraph_id": 48,
"text": "In the US, Evangelical Christians have continued to believe in a literal Genesis. As of 2008, members of evangelical Protestant (70%), Mormon (76%) and Jehovah's Witnesses (90%) denominations were the most likely to reject the evolutionary interpretation of the origins of life.",
"title": "Religious views"
},
{
"paragraph_id": 49,
"text": "Jehovah's Witnesses adhere to a combination of gap creationism and day-age creationism, asserting that scientific evidence about the age of the universe is compatible with the Bible, but that the 'days' after Genesis 1:1 were each thousands of years in length.",
"title": "Religious views"
},
{
"paragraph_id": 50,
"text": "The historic Christian literal interpretation of creation requires the harmonization of the two creation stories, Genesis 1:1–2:3 and Genesis 2:4–25, for there to be a consistent interpretation. They sometimes seek to ensure that their belief is taught in science classes, mainly in American schools. Opponents reject the claim that the literalistic biblical view meets the criteria required to be considered scientific. Many religious groups teach that God created the Cosmos. From the days of the early Christian Church Fathers there were allegorical interpretations of the Book of Genesis as well as literal aspects.",
"title": "Religious views"
},
{
"paragraph_id": 51,
"text": "Christian Science, a system of thought and practice derived from the writings of Mary Baker Eddy, interprets the Book of Genesis figuratively rather than literally. It holds that the material world is an illusion, and consequently not created by God: the only real creation is the spiritual realm, of which the material world is a distorted version. Christian Scientists regard the story of the creation in the Book of Genesis as having symbolic rather than literal meaning. According to Christian Science, both creationism and evolution are false from an absolute or \"spiritual\" point of view, as they both proceed from a (false) belief in the reality of a material universe. However, Christian Scientists do not oppose the teaching of evolution in schools, nor do they demand that alternative accounts be taught: they believe that both material science and literalist theology are concerned with the illusory, mortal and material, rather than the real, immortal and spiritual. With regard to material theories of creation, Eddy showed a preference for Darwin's theory of evolution over others.",
"title": "Religious views"
},
{
"paragraph_id": 52,
"text": "Hindu creationists claim that species of plants and animals are material forms adopted by pure consciousness which live an endless cycle of births and rebirths. Ronald Numbers says that: \"Hindu Creationists have insisted on the antiquity of humans, who they believe appeared fully formed as long, perhaps, as trillions of years ago.\" Hindu creationism is a form of old Earth creationism, according to Hindu creationists the universe may even be older than billions of years. These views are based on the Vedas, the creation myths of which depict an extreme antiquity of the universe and history of the Earth.",
"title": "Religious views"
},
{
"paragraph_id": 53,
"text": "In Hindu cosmology, time cyclically repeats general events of creation and destruction, with many \"first man\", each known as Manu, the progenitor of mankind. Each Manu successively reigns over a 306.72 million year period known as a manvantara, each ending with the destruction of mankind followed by a sandhya (period of non-activity) before the next manvantara. 120.53 million years have elapsed in the current manvantara (current mankind) according to calculations on Hindu units of time. The universe is cyclically created at the start and destroyed at the end of a kalpa (day of Brahma), lasting for 4.32 billion years, which is followed by a pralaya (period of dissolution) of equal length. 1.97 billion years have elapsed in the current kalpa (current universe). The universal elements or building blocks (unmanifest matter) exists for a period known as a maha-kalpa, lasting for 311.04 trillion years, which is followed by a maha-pralaya (period of great dissolution) of equal length. 155.52 trillion years have elapsed in the current maha-kalpa.",
"title": "Religious views"
},
{
"paragraph_id": 54,
"text": "Islamic creationism is the belief that the universe (including humanity) was directly created by God as explained in the Quran. It usually views the Book of Genesis as a corrupted version of God's message. The creation myths in the Quran are vaguer and allow for a wider range of interpretations similar to those in other Abrahamic religions.",
"title": "Religious views"
},
{
"paragraph_id": 55,
"text": "Islam also has its own school of theistic evolutionism, which holds that mainstream scientific analysis of the origin of the universe is supported by the Quran. Some Muslims believe in evolutionary creation, especially among liberal movements within Islam.",
"title": "Religious views"
},
{
"paragraph_id": 56,
"text": "Writing for The Boston Globe, Drake Bennett noted: \"Without a Book of Genesis to account for [...] Muslim creationists have little interest in proving that the age of the Earth is measured in the thousands rather than the billions of years, nor do they show much interest in the problem of the dinosaurs. And the idea that animals might evolve into other animals also tends to be less controversial, in part because there are passages of the Koran that seem to support it. But the issue of whether human beings are the product of evolution is just as fraught among Muslims.\" Khalid Anees, president of the Islamic Society of Britain, states that Muslims do not agree that one species can develop from another.",
"title": "Religious views"
},
{
"paragraph_id": 57,
"text": "Since the 1980s, Turkey has been a site of strong advocacy for creationism, supported by American adherents.",
"title": "Religious views"
},
{
"paragraph_id": 58,
"text": "There are several verses in the Qur'an which some modern writers have interpreted as being compatible with the expansion of the universe, Big Bang and Big Crunch theories:",
"title": "Religious views"
},
{
"paragraph_id": 59,
"text": "Do not the Unbelievers see that the heavens and the earth were joined together (as one unit of creation), before we clove them asunder? We made from water every living thing. Will they not then believe?",
"title": "Religious views"
},
{
"paragraph_id": 60,
"text": "Moreover He comprehended in His design the sky, and it had been (as) smoke: He said to it and to the earth: 'Come ye together, willingly or unwillingly.' They said: 'We do come (together), in willing obedience.'",
"title": "Religious views"
},
{
"paragraph_id": 61,
"text": "With power and skill did We construct the Firmament: for it is We Who create the vastness of space.",
"title": "Religious views"
},
{
"paragraph_id": 62,
"text": "The Day that We roll up the heavens like a scroll rolled up for books (completed),- even as We produced the first creation, so shall We produce a new one: a promise We have undertaken: truly shall We fulfil it.",
"title": "Religious views"
},
{
"paragraph_id": 63,
"text": "The Ahmadiyya movement actively promotes evolutionary theory. Ahmadis interpret scripture from the Qur'an to support the concept of macroevolution and give precedence to scientific theories. Furthermore, unlike orthodox Muslims, Ahmadis believe that humans have gradually evolved from different species. Ahmadis regard Adam as being the first Prophet of God – as opposed to him being the first man on Earth. Rather than wholly adopting the theory of natural selection, Ahmadis promote the idea of a \"guided evolution,\" viewing each stage of the evolutionary process as having been selectively woven by God. Mirza Tahir Ahmad, Fourth Caliph of the Ahmadiyya Muslim Community has stated in his magnum opus Revelation, Rationality, Knowledge & Truth (1998) that evolution did occur but only through God being the One who brings it about. It does not occur itself, according to the Ahmadiyya Muslim Community.",
"title": "Religious views"
},
{
"paragraph_id": 64,
"text": "For Orthodox Jews who seek to reconcile discrepancies between science and the creation myths in the Bible, the notion that science and the Bible should even be reconciled through traditional scientific means is questioned. To these groups, science is as true as the Torah and if there seems to be a problem, epistemological limits are to blame for apparently irreconcilable points. They point to discrepancies between what is expected and what actually is to demonstrate that things are not always as they appear. They note that even the root word for 'world' in the Hebrew language, עולם, Olam, means 'hidden' (נעלם, Neh-Eh-Lahm). Just as they know from the Torah that God created man and trees and the light on its way from the stars in their observed state, so too can they know that the world was created in its over the six days of Creation that reflects progression to its currently-observed state, with the understanding that physical ways to verify this may eventually be identified. This knowledge has been advanced by Rabbi Dovid Gottlieb, former philosophy professor at Johns Hopkins University. Relatively old Kabbalistic sources from well before the scientifically apparent age of the universe was first determined are also in close concord with modern scientific estimates of the age of the universe, according to Rabbi Aryeh Kaplan, and based on Sefer Temunah, an early kabbalistic work attributed to the first-century Tanna Nehunya ben HaKanah. Many kabbalists accepted the teachings of the Sefer HaTemunah, including the medieval Jewish scholar Nahmanides, his close student Isaac ben Samuel of Acre, and David ben Solomon ibn Abi Zimra. Other parallels are derived, among other sources, from Nahmanides, who expounds that there was a Neanderthal-like species with which Adam mated (he did this long before Neanderthals had even been discovered scientifically). Reform Judaism does not take the Torah as a literal text, but rather as a symbolic or open-ended work.",
"title": "Religious views"
},
{
"paragraph_id": 65,
"text": "Some contemporary writers such as Rabbi Gedalyah Nadel have sought to reconcile the discrepancy between the account in the Torah, and scientific findings by arguing that each day referred to in the Bible was not 24 hours, but billions of years long. Others claim that the Earth was created a few thousand years ago, but was deliberately made to look as if it was five billion years old, e.g. by being created with ready made fossils. The best known exponent of this approach being Rabbi Menachem Mendel Schneerson. Others state that although the world was physically created in six 24-hour days, the Torah accounts can be interpreted to mean that there was a period of billions of years before the six days of creation.",
"title": "Religious views"
},
{
"paragraph_id": 66,
"text": "Most vocal literalist creationists are from the US, and strict creationist views are much less common in other developed countries. According to a study published in Science, a survey of the US, Turkey, Japan and Europe showed that public acceptance of evolution is most prevalent in Iceland, Denmark and Sweden at 80% of the population. There seems to be no significant correlation between believing in evolution and understanding evolutionary science.",
"title": "Prevalence"
},
{
"paragraph_id": 67,
"text": "A 2009 Nielsen poll showed that 23% of Australians believe \"the biblical account of human origins,\" 42% believe in a \"wholly scientific\" explanation for the origins of life, while 32% believe in an evolutionary process \"guided by God\".",
"title": "Prevalence"
},
{
"paragraph_id": 68,
"text": "A 2013 survey conducted by Auspoll and the Australian Academy of Science found that 80% of Australians believe in evolution (70% believe it is currently occurring, 10% believe in evolution but do not think it is currently occurring), 12% were not sure and 9% stated they do not believe in evolution.",
"title": "Prevalence"
},
{
"paragraph_id": 69,
"text": "A 2011 Ipsos survey found that 47% of responders in Brazil identified themselves as \"creationists and believe that human beings were in fact created by a spiritual force such as the God they believe in and do not believe that the origin of man came from evolving from other species such as apes\".",
"title": "Prevalence"
},
{
"paragraph_id": 70,
"text": "In 2004, IBOPE conducted a poll in Brazil that asked questions about creationism and the teaching of creationism in schools. When asked if creationism should be taught in schools, 89% of people said that creationism should be taught in schools. When asked if the teaching of creationism should replace the teaching of evolution in schools, 75% of people said that the teaching of creationism should replace the teaching of evolution in schools.",
"title": "Prevalence"
},
{
"paragraph_id": 71,
"text": "A 2012 survey, by Angus Reid Public Opinion revealed that 61 percent of Canadians believe in evolution. The poll asked \"Where did human beings come from – did we start as singular cells millions of year ago and evolve into our present form, or did God create us in his image 10,000 years ago?\"",
"title": "Prevalence"
},
{
"paragraph_id": 72,
"text": "In 2019, a Research Co. poll asked people in Canada if creationism \"should be part of the school curriculum in their province\". 38% of Canadians said that creationism should be part of the school curriculum, 39% of Canadians said that it should not be part of the school curriculum, and 23% of Canadians were undecided.",
"title": "Prevalence"
},
{
"paragraph_id": 73,
"text": "In 2023, a Research Co. poll found that 21% of Canadians \"believe God created human beings in their present form within the last 10,000 years\". The poll also found that \"More than two-in-five Canadians (43%) think creationism should be part of the school curriculum in their province.\"",
"title": "Prevalence"
},
{
"paragraph_id": 74,
"text": "In Europe, literalist creationism is more widely rejected, though regular opinion polls are not available. Most people accept that evolution is the most widely accepted scientific theory as taught in most schools. In countries with a Roman Catholic majority, papal acceptance of evolutionary creationism as worthy of study has essentially ended debate on the matter for many people.",
"title": "Prevalence"
},
{
"paragraph_id": 75,
"text": "In the UK, a 2006 poll on the \"origin and development of life\", asked participants to choose between three different perspectives on the origin of life: 22% chose creationism, 17% opted for intelligent design, 48% selected evolutionary theory, and the rest did not know. A subsequent 2010 YouGov poll on the correct explanation for the origin of humans found that 9% opted for creationism, 12% intelligent design, 65% evolutionary theory and 13% didn't know. The former Archbishop of Canterbury Rowan Williams, head of the worldwide Anglican Communion, views the idea of teaching creationism in schools as a mistake. In 2009, an Ipsos Mori survey in the United Kingdom found that 54% of Britons agreed with the view: \"Evolutionary theories should be taught in science lessons in schools together with other possible perspectives, such as intelligent design and creationism.\"",
"title": "Prevalence"
},
{
"paragraph_id": 76,
"text": "In Italy, Education Minister Letizia Moratti wanted to retire evolution from the secondary school level; after one week of massive protests, she reversed her opinion.",
"title": "Prevalence"
},
{
"paragraph_id": 77,
"text": "There continues to be scattered and possibly mounting efforts on the part of religious groups throughout Europe to introduce creationism into public education. In response, the Parliamentary Assembly of the Council of Europe has released a draft report titled The dangers of creationism in education on June 8, 2007, reinforced by a further proposal of banning it in schools dated October 4, 2007.",
"title": "Prevalence"
},
{
"paragraph_id": 78,
"text": "Serbia suspended the teaching of evolution for one week in September 2004, under education minister Ljiljana Čolić, only allowing schools to reintroduce evolution into the curriculum if they also taught creationism. \"After a deluge of protest from scientists, teachers and opposition parties\" says the BBC report, Čolić's deputy made the statement, \"I have come here to confirm Charles Darwin is still alive\" and announced that the decision was reversed. Čolić resigned after the government said that she had caused \"problems that had started to reflect on the work of the entire government.\"",
"title": "Prevalence"
},
{
"paragraph_id": 79,
"text": "Poland saw a major controversy over creationism in 2006, when the Deputy Education Minister, Mirosław Orzechowski, denounced evolution as \"one of many lies\" taught in Polish schools. His superior, Minister of Education Roman Giertych, has stated that the theory of evolution would continue to be taught in Polish schools, \"as long as most scientists in our country say that it is the right theory.\" Giertych's father, Member of the European Parliament Maciej Giertych, has opposed the teaching of evolution and has claimed that dinosaurs and humans co-existed.",
"title": "Prevalence"
},
{
"paragraph_id": 80,
"text": "A June 2015 - July 2016 Pew poll of Eastern European countries found that 56% of people from Armenia say that humans and other living things have \"Existed in present state since the beginning of time\". Armenia is followed by 52% from Bosnia, 42% from Moldova, 37% from Lithuania, 34% from Georgia and Ukraine, 33% from Croatia and Romania, 31% from Bulgaria, 29% from Greece and Serbia, 26% from Russia, 25% from Latvia, 23% from Belarus and Poland, 21% from Estonia and Hungary, and 16% from the Czech Republic.",
"title": "Prevalence"
},
{
"paragraph_id": 81,
"text": "A 2011 Ipsos survey found that 56% of responders in South Africa identified themselves as \"creationists and believe that human beings were in fact created by a spiritual force such as the God they believe in and do not believe that the origin of man came from evolving from other species such as apes\".",
"title": "Prevalence"
},
{
"paragraph_id": 82,
"text": "In 2009, an EBS survey in South Korea found that 63% of people believed that creation and evolution should both be taught in schools simultaneously.",
"title": "Prevalence"
},
{
"paragraph_id": 83,
"text": "A 2017 poll by Pew Research found that 62% of Americans believe humans have evolved over time and 34% of Americans believe humans and other living things have existed in their present form since the beginning of time. A 2019 Gallup creationism survey found that 40% of adults in the United States inclined to the view that \"God created humans in their present form at one time within the last 10,000 years\" when asked for their views on the origin and development of human beings.",
"title": "Prevalence"
},
{
"paragraph_id": 84,
"text": "According to a 2014 Gallup poll, about 42% of Americans believe that \"God created human beings pretty much in their present form at one time within the last 10,000 years or so.\" Another 31% believe that \"human beings have developed over millions of years from less advanced forms of life, but God guided this process,\"and 19% believe that \"human beings have developed over millions of years from less advanced forms of life, but God had no part in this process.\"",
"title": "Prevalence"
},
{
"paragraph_id": 85,
"text": "Belief in creationism is inversely correlated to education; of those with postgraduate degrees, 74% accept evolution. In 1987, Newsweek reported: \"By one count there are some 700 scientists with respectable academic credentials (out of a total of 480,000 U.S. earth and life scientists) who give credence to creation-science, the general theory that complex life forms did not evolve but appeared 'abruptly.'\"",
"title": "Prevalence"
},
{
"paragraph_id": 86,
"text": "A 2000 poll for People for the American Way found 70% of the US public felt that evolution was compatible with a belief in God.",
"title": "Prevalence"
},
{
"paragraph_id": 87,
"text": "According to a study published in Science, between 1985 and 2005 the number of adult North Americans who accept evolution declined from 45% to 40%, the number of adults who reject evolution declined from 48% to 39% and the number of people who were unsure increased from 7% to 21%. Besides the US the study also compared data from 32 European countries, Turkey, and Japan. The only country where acceptance of evolution was lower than in the US was Turkey (25%).",
"title": "Prevalence"
},
{
"paragraph_id": 88,
"text": "According to a 2011 Fox News poll, 45% of Americans believe in creationism, down from 50% in a similar poll in 1999. 21% believe in 'the theory of evolution as outlined by Darwin and other scientists' (up from 15% in 1999), and 27% answered that both are true (up from 26% in 1999).",
"title": "Prevalence"
},
{
"paragraph_id": 89,
"text": "In September 2012, educator and television personality Bill Nye spoke with the Associated Press and aired his fears about acceptance of creationism, believing that teaching children that creationism is the only true answer without letting them understand the way science works will prevent any future innovation in the world of science. In February 2014, Nye defended evolution in the classroom in a debate with creationist Ken Ham on the topic of whether creation is a viable model of origins in today's modern, scientific era.",
"title": "Prevalence"
},
{
"paragraph_id": 90,
"text": "In the US, creationism has become centered in the political controversy over creation and evolution in public education, and whether teaching creationism in science classes conflicts with the separation of church and state. Currently, the controversy comes in the form of whether advocates of the intelligent design movement who wish to \"Teach the Controversy\" in science classes have conflated science with religion.",
"title": "Prevalence"
},
{
"paragraph_id": 91,
"text": "People for the American Way polled 1500 North Americans about the teaching of evolution and creationism in November and December 1999. They found that most North Americans were not familiar with creationism, and most North Americans had heard of evolution, but many did not fully understand the basics of the theory. The main findings were:",
"title": "Prevalence"
},
{
"paragraph_id": 92,
"text": "In such political contexts, creationists argue that their particular religiously based origin belief is superior to those of other belief systems, in particular those made through secular or scientific rationale. Political creationists are opposed by many individuals and organizations who have made detailed critiques and given testimony in various court cases that the alternatives to scientific reasoning offered by creationists are opposed by the consensus of the scientific community.",
"title": "Prevalence"
},
{
"paragraph_id": 93,
"text": "Most Christians disagree with the teaching of creationism as an alternative to evolution in schools. Several religious organizations, among them the Catholic Church, hold that their faith does not conflict with the scientific consensus regarding evolution. The Clergy Letter Project, which has collected more than 13,000 signatures, is an \"endeavor designed to demonstrate that religion and science can be compatible.\"",
"title": "Criticism"
},
{
"paragraph_id": 94,
"text": "In his 2002 article \"Intelligent Design as a Theological Problem,\" George Murphy argues against the view that life on Earth, in all its forms, is direct evidence of God's act of creation (Murphy quotes Phillip E. Johnson's claim that he is speaking \"of a God who acted openly and left his fingerprints on all the evidence.\"). Murphy argues that this view of God is incompatible with the Christian understanding of God as \"the one revealed in the cross and resurrection of Christ.\" The basis of this theology is Isaiah 45:15, \"Verily thou art a God that hidest thyself, O God of Israel, the Saviour.\"",
"title": "Criticism"
},
{
"paragraph_id": 95,
"text": "Murphy observes that the execution of a Jewish carpenter by Roman authorities is in and of itself an ordinary event and did not require divine action. On the contrary, for the crucifixion to occur, God had to limit or \"empty\" himself. It was for this reason that Paul the Apostle wrote, in Philippians 2:5-8:",
"title": "Criticism"
},
{
"paragraph_id": 96,
"text": "Let this mind be in you, which was also in Christ Jesus: Who, being in the form of God, thought it not robbery to be equal with God: But made himself of no reputation, and took upon him the form of a servant, and was made in the likeness of men: And being found in fashion as a man, he humbled himself, and became obedient unto death, even the death of the cross.",
"title": "Criticism"
},
{
"paragraph_id": 97,
"text": "Murphy concludes that,",
"title": "Criticism"
},
{
"paragraph_id": 98,
"text": "Just as the Son of God limited himself by taking human form and dying on a cross, God limits divine action in the world to be in accord with rational laws which God has chosen. This enables us to understand the world on its own terms, but it also means that natural processes hide God from scientific observation.",
"title": "Criticism"
},
{
"paragraph_id": 99,
"text": "For Murphy, a theology of the cross requires that Christians accept a methodological naturalism, meaning that one cannot invoke God to explain natural phenomena, while recognizing that such acceptance does not require one to accept a metaphysical naturalism, which proposes that nature is all that there is.",
"title": "Criticism"
},
{
"paragraph_id": 100,
"text": "The Jesuit priest George Coyne has stated that it is \"unfortunate that, especially here in America, creationism has come to mean...some literal interpretation of Genesis.\" He argues that \"...Judaic-Christian faith is radically creationist, but in a totally different sense. It is rooted in belief that everything depends on God, or better, all is a gift from God.\"",
"title": "Criticism"
},
{
"paragraph_id": 101,
"text": "Other Christians have expressed qualms about teaching creationism. In March 2006, then Archbishop of Canterbury Rowan Williams, the leader of the world's Anglicans, stated his discomfort about teaching creationism, saying that creationism was \"a kind of category mistake, as if the Bible were a theory like other theories.\" He also said: \"My worry is creationism can end up reducing the doctrine of creation rather than enhancing it.\" The views of the Episcopal Church – a major American-based branch of the Anglican Communion – on teaching creationism resemble those of Williams.",
"title": "Criticism"
},
{
"paragraph_id": 102,
"text": "The National Science Teachers Association is opposed to teaching creationism as a science, as is the Association for Science Teacher Education, the National Association of Biology Teachers, the American Anthropological Association, the American Geosciences Institute, the Geological Society of America, the American Geophysical Union, and numerous other professional teaching and scientific societies.",
"title": "Criticism"
},
{
"paragraph_id": 103,
"text": "In April 2010, the American Academy of Religion issued Guidelines for Teaching About Religion in K‐12 Public Schools in the United States, which included guidance that creation science or intelligent design should not be taught in science classes, as \"Creation science and intelligent design represent worldviews that fall outside of the realm of science that is defined as (and limited to) a method of inquiry based on gathering observable and measurable evidence subject to specific principles of reasoning.\" However, they, as well as other \"worldviews that focus on speculation regarding the origins of life represent another important and relevant form of human inquiry that is appropriately studied in literature or social sciences courses. Such study, however, must include a diversity of worldviews representing a variety of religious and philosophical perspectives and must avoid privileging one view as more legitimate than others.\"",
"title": "Criticism"
},
{
"paragraph_id": 104,
"text": "Randy Moore and Sehoya Cotner, from the biology program at the University of Minnesota, reflect on the relevance of teaching creationism in the article \"The Creationist Down the Hall: Does It Matter When Teachers Teach Creationism?\", in which they write: \"Despite decades of science education reform, numerous legal decisions declaring the teaching of creationism in public-school science classes to be unconstitutional, overwhelming evidence supporting evolution, and the many denunciations of creationism as nonscientific by professional scientific societies, creationism remains popular throughout the United States.\"",
"title": "Criticism"
},
{
"paragraph_id": 105,
"text": "Science is a system of knowledge based on observation, empirical evidence, and the development of theories that yield testable explanations and predictions of natural phenomena. By contrast, creationism is often based on literal interpretations of the narratives of particular religious texts. Creationist beliefs involve purported forces that lie outside of nature, such as supernatural intervention, and often do not allow predictions at all. Therefore, these can neither be confirmed nor disproved by scientists. However, many creationist beliefs can be framed as testable predictions about phenomena such as the age of the Earth, its geological history and the origins, distributions and relationships of living organisms found on it. Early science incorporated elements of these beliefs, but as science developed these beliefs were gradually falsified and were replaced with understandings based on accumulated and reproducible evidence that often allows the accurate prediction of future results.",
"title": "Criticism"
},
{
"paragraph_id": 106,
"text": "Some scientists, such as Stephen Jay Gould, consider science and religion to be two compatible and complementary fields, with authorities in distinct areas of human experience, so-called non-overlapping magisteria. This view is also held by many theologians, who believe that ultimate origins and meaning are addressed by religion, but favor verifiable scientific explanations of natural phenomena over those of creationist beliefs. Other scientists, such as Richard Dawkins, reject the non-overlapping magisteria and argue that, in disproving literal interpretations of creationists, the scientific method also undermines religious texts as a source of truth. Irrespective of this diversity in viewpoints, since creationist beliefs are not supported by empirical evidence, the scientific consensus is that any attempt to teach creationism as science should be rejected.",
"title": "Criticism"
}
] | Creationism is the religious belief that nature, and aspects such as the universe, Earth, life, and humans, originated with supernatural acts of divine creation. In its broadest sense, creationism includes a continuum of religious views, which vary in their acceptance or rejection of scientific explanations such as evolution that describe the origin and development of natural phenomena. The term creationism most often refers to belief in special creation; the claim that the universe and lifeforms were created as they exist today by divine action, and that the only true explanations are those which are compatible with a Christian fundamentalist literal interpretation of the creation myth found in the Bible's Genesis creation narrative. Since the 1970s, the most common form of this has been Young Earth creationism which posits special creation of the universe and lifeforms within the last 10,000 years on the basis of flood geology, and promotes pseudoscientific creation science. From the 18th century onward, Old Earth creationism accepted geological time harmonized with Genesis through gap or day-age theory, while supporting anti-evolution. Modern old-Earth creationists support progressive creationism and continue to reject evolutionary explanations. Following political controversy, creation science was reformulated as intelligent design and neo-creationism. Mainline Protestants and the Catholic Church reconcile modern science with their faith in Creation through forms of theistic evolution which hold that God purposefully created through the laws of nature, and accept evolution. Some groups call their belief evolutionary creationism. Less prominently, there are also members of the Islamic and Hindu faiths who are creationists. Use of the term "creationist" in this context dates back to Charles Darwin's unpublished 1842 sketch draft for what became On the Origin of Species, and he used the term later in letters to colleagues. In 1873, Asa Gray published an article in The Nation saying a "special creationist" who held that species "were supernaturally originated just as they are, by the very terms of his doctrine places them out of the reach of scientific explanation." | 2001-10-19T17:58:25Z | 2023-12-26T23:15:06Z | [
"Template:Cite web",
"Template:Genesis 1",
"Template:Authority control",
"Template:Creationism2",
"Template:Harvnb",
"Template:Cite video",
"Template:Which",
"Template:Bibleref2",
"Template:Cite AV media",
"Template:Notelist",
"Template:Wikiquote",
"Template:Cn",
"Template:Lang",
"Template:Spaced ndash",
"Template:Div col",
"Template:Div col end",
"Template:Cite report",
"Template:Portal bar",
"Template:Main",
"Template:Efn",
"Template:Rp",
"Template:Cite news",
"Template:Webarchive",
"Template:For",
"Template:Excessive citations inline",
"Template:Bibleverse",
"Template:Transliteration",
"Template:Reflist",
"Template:Refbegin",
"Template:Small",
"Template:Hatnote",
"Template:Blockquote",
"Template:Cite journal",
"Template:Pp-protect",
"Template:See",
"Template:Bar box",
"Template:Cbignore",
"Template:See also",
"Template:Snd",
"Template:Harv",
"Template:Further",
"Template:Cite court",
"Template:Refend",
"Template:Cite press release",
"Template:Philosophy of religion",
"Template:Refn",
"Template:Quote",
"Template:Lang-hbo",
"Template:Nbsp",
"Template:Citation needed",
"Template:Cite book",
"Template:Cite interview",
"Template:Cite encyclopedia",
"Template:Intelligent Design",
"Template:Sfn",
"Template:As of",
"Template:Creationism topics",
"Template:Short description",
"Template:Cite conference",
"Template:Commons"
] | https://en.wikipedia.org/wiki/Creationism |
5,329 | History of Chad | Chad (Arabic: تشاد; French: Tchad), officially the Republic of Chad, is a landlocked country in Central Africa. It borders Libya to the north, Sudan to the east, the Central African Republic to the south, Cameroon and Nigeria to the southwest, and Niger to the west. Due to its distance from the sea and its largely desert climate, the country is sometimes referred to as the "Dead Heart of Africa".
The territory now known as Chad possesses some of the richest archaeological sites in Africa. A hominid skull found by Michel Brunet, more than 7 million years old, is the oldest discovered anywhere in the world; it has been given the name Sahelanthropus tchadensis. In 1996, Michel Brunet unearthed a hominid jaw which he named Australopithecus bahrelghazali and unofficially dubbed Abel. Beryllium-based radiometric dating indicates that it lived circa 3.6 million years ago.
During the 7th millennium BC, the northern half of Chad was part of a broad expanse of land, stretching from the Indus River in the east to the Atlantic Ocean in the west, in which ecological conditions favored early human settlement. Rock art of the "Round Head" style, found in the Ennedi region, has been dated to before the 7th millennium BC and, because of the tools with which the rocks were carved and the scenes they depict, may represent the oldest evidence in the Sahara of Neolithic industries. Many of the pottery-making and Neolithic activities in Ennedi date back further than any of those of the Nile Valley to the east.
In the prehistoric period, Chad was much wetter than it is today, as evidenced by large game animals depicted in rock paintings in the Tibesti and Borkou regions.
Recent linguistic research suggests that all of Africa's major language groupings south of the Sahara Desert (except Khoisan, which is not considered a valid genetic grouping anyway), i.e. the Afro-Asiatic, Nilo-Saharan and Niger–Congo phyla, originated in prehistoric times in a narrow band between Lake Chad and the Nile Valley. The origins of Chad's peoples, however, remain unclear. Several of the proven archaeological sites have been only partially studied, and other sites of great potential have yet to be mapped.
At the end of the 1st millennium AD, the formation of states began across central Chad in the sahelian zone between the desert and the savanna. For almost the next 1,000 years, these states, their relations with each other, and their effects on the peoples who lived in stateless societies along their peripheries dominated Chad's political history. Recent research suggests that indigenous Africans founded these states, not migrating Arabic-speaking groups as was previously believed. Nonetheless, immigrants, Arabic-speaking or otherwise, played a significant role, along with Islam, in the formation and early evolution of these states.
Most states began as kingdoms, in which the king was considered divine and endowed with temporal and spiritual powers. All states were militaristic (or they did not survive long), but none was able to expand far into southern Chad, where forests and the tsetse fly complicated the use of cavalry. Control over the trans-Saharan trade routes that passed through the region formed the economic basis of these kingdoms. Although many states rose and fell, the most important and durable of the empires were Kanem–Bornu, Baguirmi, and Ouaddai, according to most written sources (mainly court chronicles and writings of Arab traders and travelers).
The Kanem Empire originated in the 9th century AD to the northeast of Lake Chad. Historians agree that the leaders of the new state were ancestors of the Kanembu people. Toward the end of the 11th century, the Sayfawa king (or mai, the title of the Sayfawa rulers) Hummay converted to Islam. In the following century the Sayfawa rulers expanded southward into Kanem, where their first capital, Njimi, rose. Kanem's expansion peaked during the long and energetic reign of Mai Dunama Dabbalemi (c. 1221–1259).
By the end of the 14th century, internal struggles and external attacks had torn Kanem apart. Finally, around 1396 the Bulala invaders forced Mai Umar Idrismi to abandon Njimi and move the Kanembu people to Bornu on the western edge of Lake Chad. Over time, the intermarriage of the Kanembu and Bornu peoples created a new people and language, the Kanuri, and founded a new capital, Ngazargamu.
Kanem–Bornu peaked during the reign of the outstanding statesman Mai Idris Aluma (c. 1571–1603). Aluma is remembered for his military skills, administrative reforms, and Islamic piety. The administrative reforms and military brilliance of Aluma sustained the empire until the mid-17th century, when its power began to fade. By the early 19th century, Kanem–Bornu was clearly an empire in decline, and in 1808 Fulani warriors conquered Ngazargamu. Bornu survived, but the Sayfawa dynasty ended in 1846 and the Empire itself fell in 1893.
The Kingdom of Baguirmi, located southeast of Kanem-Bornu, was founded in the late 15th or early 16th century, and adopted Islam in the reign of Abdullah IV (1568-98). Baguirmi was in a tributary relationship with Kanem–Bornu at various points in the 17th and 18th centuries, and then with Ouaddai in the 19th century. In 1893, Baguirmi sultan Abd ar Rahman Gwaranga surrendered the territory to France, and it became a French protectorate.
The Ouaddai Kingdom, west of Kanem–Bornu, was established in the early 16th century by Tunjur rulers. In the 1630s, Abd al Karim invaded and established an Islamic sultanate. Among its most impactful rulers for the next three centuries were Muhammad Sabun, who controlled a new trade route to the north and established a currency during the early 19th century, and Muhammad Sharif, whose military campaigns in the mid 19th century fended off an assimilation attempt from Darfur, conquered Baguirmi, and successfully resisted French colonization. However, Ouaddai lost its independence to France after a war from 1909 to 1912.
The French first invaded Chad in 1891, establishing their authority through military expeditions primarily against the Muslim kingdoms. The decisive colonial battle for Chad was fought on April 22, 1900, at the Battle of Kousséri between the forces of French Major Amédée-François Lamy and those of the Sudanese warlord Rabih az-Zubayr. Both leaders were killed in the battle.
In 1905, administrative responsibility for Chad was placed under a governor-general stationed at Brazzaville, capital of French Equatorial Africa (FEA). Chad did not have a separate colonial status until 1920, when it was placed under a lieutenant-governor stationed in Fort-Lamy (today N'Djamena).
Two fundamental themes dominated Chad's colonial experience with the French: an absence of policies designed to unify the territory and an exceptionally slow pace of modernization. In the French scale of priorities, the colony of Chad ranked near the bottom, and the French came to perceive Chad primarily as a source of raw cotton and untrained labour to be used in the more productive colonies to the south.
Throughout the colonial period, large areas of Chad were never governed effectively: in the huge BET Prefecture, the handful of French military administrators usually left the people alone, and in central Chad, French rule was only slightly more substantive. In truth, France managed to govern effectively only the south.
During World War II, Chad was the first French colony to rejoin the Allies (August 26, 1940), after the defeat of France by Germany. Under the administration of Félix Éboué, France's first black colonial governor, a military column, commanded by Colonel Philippe Leclerc de Hauteclocque, and including two battalions of Sara troops, moved north from N'Djamena (then Fort Lamy) to engage Axis forces in Libya, where, in partnership with the British Army's Long Range Desert Group, they captured Kufra. On 21 January 1942, N'Djamena was bombed by a German aircraft.
After the war ended, local parties started to develop in Chad. The first to emerge, in February 1947, was the radical Chadian Progressive Party (PPT), initially headed by the Panamanian-born Gabriel Lisette and from 1959 headed by François Tombalbaye. The more conservative Chadian Democratic Union (UDT) was founded in November 1947 and represented French commercial interests and a bloc of traditional leaders composed primarily of Muslim and Ouaddaïan nobility. The confrontation between the PPT and UDT was more than simply ideological; it represented different regional identities, with the PPT representing the Christian and animist south and the UDT the Islamic north.
The PPT won the May 1957 pre-independence elections thanks to a greatly expanded franchise, and Lisette led the government of the Territorial Assembly until he lost a confidence vote on 11 February 1959. After a referendum on territorial autonomy on 28 September 1958, French Equatorial Africa was dissolved, and its four constituent states – Gabon, Congo (Brazzaville), the Central African Republic, and Chad – became autonomous members of the French Community from 28 November 1958. Following Lisette's fall in February 1959, the opposition leaders Gontchome Sahoulba and Ahmed Koulamallah could not form a stable government, so the PPT was again asked to form an administration – which it did under the leadership of François Tombalbaye on 26 March 1959. On 12 July 1960 France agreed to Chad becoming fully independent. On 11 August 1960, Chad became an independent country and François Tombalbaye became its first president.
One of the most prominent aspects of Tombalbaye's rule was his authoritarianism and distrust of democracy. As early as January 1962 he banned all political parties except his own PPT, and immediately began concentrating all power in his own hands. His treatment of opponents, real or imagined, was extremely harsh, filling the prisons with thousands of political prisoners.
Even more damaging was his constant discrimination against the central and northern regions of Chad, where the southern Chadian administrators came to be perceived as arrogant and incompetent. This resentment finally exploded in a tax revolt on September 2, 1965, in the Guéra Prefecture, causing 500 deaths. The following year saw the birth in Sudan of the National Liberation Front of Chad (FROLINAT), created to oust Tombalbaye and end southern dominance by force. It was the start of a bloody civil war.
Tombalbaye resorted to calling in French troops; while moderately successful, they were not fully able to quell the insurgency. More fortunate was his decision to break with the French and seek friendly ties with the Libyan leader Gaddafi, taking away the rebels' principal source of supplies.
But while he had achieved some success against the rebels, Tombalbaye began behaving more and more irrationally and brutally, steadily eroding his support among the southern elites, who dominated all key positions in the army, the civil service and the ruling party. As a consequence, on April 13, 1975, several units of N'Djamena's gendarmerie killed Tombalbaye during a coup.
The coup d'état that terminated Tombalbaye's government received an enthusiastic response in N'Djamena. The southerner General Félix Malloum emerged early as the chairman of the new junta.
The new military leaders were unable to retain for long the popularity that they had gained through their overthrow of Tombalbaye. Malloum proved unable to cope with FROLINAT and in the end decided that his only chance lay in co-opting some of the rebels: in 1978 he allied himself with the insurgent leader Hissène Habré, who entered the government as prime minister.
Internal dissent within the government led Prime Minister Habré to send his forces against Malloum's national army in the capital in February 1979. Malloum was ousted from the presidency, but the resulting civil war amongst the 11 emergent factions was so widespread that it rendered the central government largely irrelevant. At that point, other African governments decided to intervene.
A series of four international conferences held first under Nigerian and then Organization of African Unity (OAU) sponsorship attempted to bring the Chadian factions together. At the fourth conference, held in Lagos, Nigeria, in August 1979, the Lagos Accord was signed. This accord established a transitional government pending national elections. In November 1979, the Transitional Government of National Unity (GUNT) was created with a mandate to govern for 18 months. Goukouni Oueddei, a northerner, was named president; Colonel Kamougué, a southerner, Vice President; and Habré, Minister of Defense. This coalition proved fragile; in January 1980, fighting broke out again between Goukouni's and Habré's forces. With assistance from Libya, Goukouni regained control of the capital and other urban centers by year's end. However, Goukouni's January 1981 statement that Chad and Libya had agreed to work for the realization of complete unity between the two countries generated intense international pressure, prompting Goukouni's subsequent call for the complete withdrawal of external forces.
Libya's partial withdrawal to the Aozou Strip in northern Chad cleared the way for Habré's forces to enter N’Djamena in June. French troops and an OAU peacekeeping force of 3,500 Nigerian, Senegalese, and Zairian troops (partially funded by the United States) remained neutral during the conflict.
Habré continued to face armed opposition on various fronts, and was brutal in his repression of suspected opponents, massacring and torturing many during his rule. In the summer of 1983, GUNT forces launched an offensive against government positions in northern and eastern Chad with heavy Libyan support. In response to Libya's direct intervention, French and Zairian forces intervened to defend Habré, pushing Libyan and rebel forces north of the 16th parallel. In September 1984, the French and the Libyan governments announced an agreement for the mutual withdrawal of their forces from Chad. By the end of the year, all French and Zairian troops were withdrawn. Libya did not honor the withdrawal accord, and its forces continued to occupy the northern third of Chad.
Rebel commando groups (Codos) in southern Chad were broken up by government massacres in 1984. In 1985 Habré briefly reconciled with some of his opponents, including the Democratic Front of Chad (FDT) and the Coordinating Action Committee of the Democratic Revolutionary Council. Goukouni also began to rally toward Habré, and with his support Habré successfully expelled Libyan forces from most of Chadian territory. A cease-fire between Chad and Libya held from 1987 to 1988, and negotiations over the next several years led to the 1994 International Court of Justice decision granting Chad sovereignty over the Aouzou strip, effectively ending Libyan occupation.
However, rivalry between Hadjerai, Zaghawa and Gorane groups within the government grew in the late 1980s. In April 1989, Idriss Déby, one of Habré's leading generals and a Zaghawa, defected and fled to Darfur in Sudan, from which he mounted a Zaghawa-supported series of attacks on Habré (a Gorane). In December 1990, with Libyan assistance and no opposition from French troops stationed in Chad, Déby's forces successfully marched on N’Djamena. After 3 months of provisional government, Déby's Patriotic Salvation Movement (MPS) approved a national charter on February 28, 1991, with Déby as president.
During the next two years, Déby faced at least two coup attempts. Government forces clashed violently with rebel forces, including the Movement for Democracy and Development (MDD), the National Revival Committee for Peace and Democracy (CSNPD), the Chadian National Front (FNT) and the Western Armed Forces (FAO), near Lake Chad and in southern regions of the country. Earlier French demands for the country to hold a National Conference resulted in the gathering of 750 delegates representing political parties (which were legalized in 1992), the government, trade unions and the army to discuss the creation of a pluralist democratic regime.
However, unrest continued, sparked in part by large-scale killings of civilians in southern Chad. The CSNPD, led by Kette Moise, and other southern groups entered into a peace agreement with government forces in 1994, which later broke down. Two new groups, the Armed Forces for a Federal Republic (FARF), led by former Kette ally Laokein Barde, and the Democratic Front for Renewal (FDR), as well as a reformulated MDD, clashed with government forces from 1994 to 1995.
Talks with political opponents in early 1996 did not go well, but Déby announced his intent to hold presidential elections in June. Déby won the country's first multi-party presidential elections with support in the second round from opposition leader Kebzabo, defeating General Kamougue (leader of the 1975 coup against Tombalbaye). Déby's MPS party won 63 of 125 seats in the January 1997 legislative elections. International observers noted numerous serious irregularities in presidential and legislative election proceedings.
By mid-1997 the government had signed peace deals with FARF and the MDD leadership and succeeded in cutting off the groups from their rear bases in the Central African Republic and Cameroon. Agreements also were struck with rebels from the National Front of Chad (FNT) and the Movement for Social Justice and Democracy in October 1997. However, peace was short-lived, as FARF rebels clashed with government soldiers, finally surrendering to government forces in May 1998. Barde was killed in the fighting, as were hundreds of other southerners, most of them civilians.
Since October 1998, rebels of the Chadian Movement for Justice and Democracy (MDJT), led by Youssuf Togoimi until his death in September 2002, have skirmished with government troops in the Tibesti region, resulting in hundreds of civilian, government, and rebel casualties, but little ground won or lost. No active armed opposition has emerged in other parts of Chad, although Kette Moise, following senior postings at the Ministry of Interior, mounted a small-scale local operation near Moundou which was quickly and violently suppressed by government forces in late 2000.
Déby, in the mid-1990s, gradually restored basic functions of government and entered into agreements with the World Bank and IMF to carry out substantial economic reforms. Oil exploitation in the southern Doba region began in June 2000, with World Bank Board approval to finance a small portion of a project, the Chad-Cameroon Petroleum Development Project, aimed at transport of Chadian crude through a 1000-km buried pipeline through Cameroon to the Gulf of Guinea. The project established unique mechanisms for World Bank, private sector, government, and civil society collaboration to guarantee that future oil revenues benefit local populations and result in poverty alleviation. Success of the project depended on multiple monitoring efforts to ensure that all parties keep their commitments. These "unique" mechanisms for monitoring and revenue management have faced intense criticism from the beginning. Debt relief was accorded to Chad in May 2001.
Déby won a flawed 63% first-round victory in the May 2001 presidential elections after legislative elections were postponed until spring 2002. Six opposition leaders who accused the government of fraud were arrested (twice), and one opposition party activist was killed following the announcement of the election results. However, despite claims of government corruption, favoritism of Zaghawas, and abuses by the security forces, opposition party and labor union calls for general strikes and more active demonstrations against the government have been unsuccessful. Despite movement toward democratic reform, power remains in the hands of a northern ethnic oligarchy.
In 2003, Chad began receiving refugees from the Darfur region of western Sudan. More than 200,000 refugees fled the fighting between two rebel groups and government-supported militias known as Janjaweed. A number of border incidents led to the Chadian-Sudanese War.
Chad became an oil producer in 2003. In order to avoid the resource curse and corruption, elaborate plans sponsored by the World Bank were made. This plan ensured transparency in payments, as well as that 80% of the money from oil exports would be spent on five priority development sectors, the two most important being education and healthcare. However, money started being diverted towards the military even before the civil war broke out. In 2006, when the civil war escalated, Chad abandoned the previous World Bank-sponsored economic plans and added "national security" as a priority development sector; money from this sector was used to improve the military. During the civil war, more than 600 million dollars were used to buy fighter jets, attack helicopters, and armored personnel carriers.
Chad earned between 10 and 11 billion dollars from oil production, and an estimated 4 billion dollars was invested in the army.
The war started on December 23, 2005, when the government of Chad declared a state of war with Sudan and called for the citizens of Chad to mobilize themselves against the "common enemy," which the Chadian government sees as the Rally for Democracy and Liberty (RDL) militants, Chadian rebels backed by the Sudanese government, and Sudanese militiamen. Militants have attacked villages and towns in eastern Chad, stealing cattle, murdering citizens, and burning houses. Over 200,000 refugees from the Darfur region of western Sudan currently claim asylum in eastern Chad. Chadian president Idriss Déby accuses Sudanese President Omar Hasan Ahmad al-Bashir of trying to "destabilize our country, to drive our people into misery, to create disorder and export the war from Darfur to Chad."
An attack on the Chadian town of Adre near the Sudanese border led to the deaths of either one hundred rebels, as reported by every news source other than CNN, or three hundred rebels. The Sudanese government was blamed for the attack, which was the second in the region in three days, but Sudanese foreign ministry spokesman Jamal Mohammed Ibrahim denied any Sudanese involvement: "We are not for any escalation with Chad. We technically deny involvement in Chadian internal affairs." This attack was the final straw that led to the declaration of war by Chad and the alleged deployment of the Chadian air force into Sudanese airspace, which the Chadian government denies.
An attack on N'Djamena was defeated on April 13, 2006 in the Battle of N'Djamena. The President stated on national radio that the situation was under control, but residents, diplomats and journalists reportedly heard weapons fire.
On November 25, 2006, rebels captured the eastern town of Abeche, capital of the Ouaddaï Region and center for humanitarian aid to the Darfur region in Sudan. On the same day, a separate rebel group, the Rally of Democratic Forces, had captured Biltine. On November 26, 2006, the Chadian government claimed to have recaptured both towns, although rebels still claimed control of Biltine. Government buildings and humanitarian aid offices in Abeche were said to have been looted. The Chadian government denied a warning issued by the French Embassy in N'Djamena that a group of rebels was making its way through the Batha Prefecture in central Chad. Chad insists that both rebel groups are supported by the Sudanese government.
Nearly 100 children at the center of an international scandal that left them stranded at an orphanage in remote eastern Chad returned home on March 14, 2008, after nearly five months. The 97 children were taken from their homes in October 2007 by a then-obscure French charity, Zoé's Ark, which claimed they were orphans from Sudan's war-torn Darfur region.
On Friday, February 1, 2008, rebels of an opposition alliance led by Mahamat Nouri, a former defense minister, and Timane Erdimi, a nephew of Idriss Déby who had been his chief of staff, attacked the Chadian capital of N'Djamena, even surrounding the Presidential Palace. Idriss Déby and government troops fought back. French forces flew in ammunition for Chadian government troops but took no active part in the fighting. The UN said that up to 20,000 people left the region, taking refuge in nearby Cameroon and Nigeria. Hundreds of people were killed, mostly civilians. The rebels accused Déby of corruption and embezzling millions in oil revenue. While many Chadians may share that assessment, the uprising appeared to be a power struggle within the elite that has long controlled Chad. The French government believed that the opposition had regrouped east of the capital. Déby blamed Sudan for the unrest in Chad.
During the Déby era, Chad intervened in conflicts in Mali, Central African Republic, Niger and Nigeria.
In 2013, Chad sent 2000 men from its military to help France in Operation Serval during the Mali War. Later in the same year Chad sent 850 troops to the Central African Republic to support the peacekeeping operation MISCA; those troops withdrew in April 2014 after allegations of human rights violations.
During the Boko Haram insurgency, Chad repeatedly sent troops to assist the fight against Boko Haram in Niger and Nigeria.
In August 2018, rebel fighters of the Military Command Council for the Salvation of the Republic (CCMSR) attacked government forces in northern Chad. Chad experienced threats from jihadists fleeing the Libyan conflict. Chad had been an ally of the West in the fight against Islamist militants in West Africa.
In January 2019, after 47 years, Chad restored diplomatic relations with Israel. It was announced during a visit to N’Djamena by Israeli Prime Minister Benjamin Netanyahu.
In April 2021, Chad's army announced that President Idriss Déby had died of his injuries following clashes with rebels in the north of the country. Déby had ruled the country for more than 30 years, since 1990. It was also announced that a military council led by Déby's son, Mahamat Idriss Déby, a 37-year-old four-star general, would govern for the next 18 months. | [
{
"paragraph_id": 0,
"text": "Chad (Arabic: تشاد; French: Tchad), officially the Republic of Chad, is a landlocked country in Central Africa. It borders Libya to the north, Sudan to the east, the Central African Republic to the south, Cameroon and Nigeria to the southwest, and Niger to the west. Due to its distance from the sea and its largely desert climate, the country is sometimes referred to as the \"Dead Heart of Africa\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "The territory now known as Chad possesses some of the richest archaeological sites in Africa. A hominid skull was found by Michel Brunet, that is more than 7 million years old, the oldest discovered anywhere in the world; it has been given the name Sahelanthropus tchadensis. In 1996 Michel Brunet had unearthed a hominid jaw which he named Australopithecus bahrelghazali, and unofficially dubbed Abel. It was dated using Beryllium based Radiometric dating as living circa. 3.6 million years ago.",
"title": "Prehistory"
},
{
"paragraph_id": 2,
"text": "During the 7th millennium BC, the northern half of Chad was part of a broad expanse of land, stretching from the Indus River in the east to the Atlantic Ocean in the west, in which ecological conditions favored early human settlement. Rock art of the \"Round Head\" style, found in the Ennedi region, has been dated to before the 7th millennium BC and, because of the tools with which the rocks were carved and the scenes they depict, may represent the oldest evidence in the Sahara of Neolithic industries. Many of the pottery-making and Neolithic activities in Ennedi date back further than any of those of the Nile Valley to the east.",
"title": "Prehistory"
},
{
"paragraph_id": 3,
"text": "In the prehistoric period, Chad was much wetter than it is today, as evidenced by large game animals depicted in rock paintings in the Tibesti and Borkou regions.",
"title": "Prehistory"
},
{
"paragraph_id": 4,
"text": "Recent linguistic research suggests that all of Africa's major language groupings south of the Sahara Desert (except Khoisan, which is not considered a valid genetic grouping anyway), i.e. the Afro-Asiatic, Nilo-Saharan and Niger–Congo phyla, originated in prehistoric times in a narrow band between Lake Chad and the Nile Valley. The origins of Chad's peoples, however, remain unclear. Several of the proven archaeological sites have been only partially studied, and other sites of great potential have yet to be mapped.",
"title": "Prehistory"
},
{
"paragraph_id": 5,
"text": "At the end of the 1st millennium AD, the formation of states began across central Chad in the sahelian zone between the desert and the savanna. For almost the next 1,000 years, these states, their relations with each other, and their effects on the peoples who lived in stateless societies along their peripheries dominated Chad's political history. Recent research suggests that indigenous Africans founded of these states, not migrating Arabic-speaking groups, as was believed previously. Nonetheless, immigrants, Arabic-speaking or otherwise, played a significant role, along with Islam, in the formation and early evolution of these states.",
"title": "Era of Empires (AD 900–1900)"
},
{
"paragraph_id": 6,
"text": "Most states began as kingdoms, in which the king was considered divine and endowed with temporal and spiritual powers. All states were militaristic (or they did not survive long), but none was able to expand far into southern Chad, where forests and the tsetse fly complicated the use of cavalry. Control over the trans-Saharan trade routes that passed through the region formed the economic basis of these kingdoms. Although many states rose and fell, the most important and durable of the empires were Kanem–Bornu, Baguirmi, and Ouaddai, according to most written sources (mainly court chronicles and writings of Arab traders and travelers).Chad - ERA OF EMPIRES, A.D. 900–1900",
"title": "Era of Empires (AD 900–1900)"
},
{
"paragraph_id": 7,
"text": "The Kanem Empire originated in the 9th century AD to the northeast of Lake Chad. Historians agree that the leaders of the new state were ancestors of the Kanembu people. Toward the end of the 11th century the Sayfawa king (or mai, the title of the Sayfawa rulers) Hummay, converted to Islam. In the following century the Sayfawa rulers expanded southward into Kanem, where was to rise their first capital, Njimi. Kanem's expansion peaked during the long and energetic reign of Mai Dunama Dabbalemi (c. 1221–1259).",
"title": "Era of Empires (AD 900–1900)"
},
{
"paragraph_id": 8,
"text": "By the end of the 14th century, internal struggles and external attacks had torn Kanem apart. Finally, around 1396 the Bulala invaders forced Mai Umar Idrismi to abandon Njimi and move the Kanembu people to Bornu on the western edge of Lake Chad. Over time, the intermarriage of the Kanembu and Bornu peoples created a new people and language, the Kanuri, and founded a new capital, Ngazargamu.",
"title": "Era of Empires (AD 900–1900)"
},
{
"paragraph_id": 9,
"text": "Kanem–Bornu peaked during the reign of the outstanding statesman Mai Idris Aluma (c. 1571–1603). Aluma is remembered for his military skills, administrative reforms, and Islamic piety. The administrative reforms and military brilliance of Aluma sustained the empire until the mid-17th century, when its power began to fade. By the early 19th century, Kanem–Bornu was clearly an empire in decline, and in 1808 Fulani warriors conquered Ngazargamu. Bornu survived, but the Sayfawa dynasty ended in 1846 and the Empire itself fell in 1893.",
"title": "Era of Empires (AD 900–1900)"
},
{
"paragraph_id": 10,
"text": "The Kingdom of Baguirmi, located southeast of Kanem-Bornu, was founded in the late 15th or early 16th century, and adopted Islam in the reign of Abdullah IV (1568-98). Baguirmi was in a tributary relationship with Kanem–Bornu at various points in the 17th and 18th centuries, then to Ouaddai in the 19th century. In 1893, Baguirmi sultan Abd ar Rahman Gwaranga surrendered the territory to France, and it became a French protectorate.",
"title": "Era of Empires (AD 900–1900)"
},
{
"paragraph_id": 11,
"text": "The Ouaddai Kingdom, west of Kanem–Bornu, was established in the early 16th century by Tunjur rulers. In the 1630s, Abd al Karim invaded and established an Islamic sultanate. Among its most impactful rulers for the next three centuries were Muhammad Sabun, who controlled a new trade route to the north and established a currency during the early 19th century, and Muhammad Sharif, whose military campaigns in the mid 19th century fended off an assimilation attempt from Darfur, conquered Baguirmi, and successfully resisted French colonization. However, Ouaddai lost its independence to France after a war from 1909 to 1912.",
"title": "Era of Empires (AD 900–1900)"
},
{
"paragraph_id": 12,
"text": "The French first invaded Chad in 1891, establishing their authority through military expeditions primarily against the Muslim kingdoms. The decisive colonial battle for Chad was fought on April 22, 1900 at Battle of Kousséri between forces of French Major Amédée-François Lamy and forces of the Sudanese warlord Rabih az-Zubayr. Both leaders were killed in the battle.",
"title": "Colonialism (1900–1940)"
},
{
"paragraph_id": 13,
"text": "In 1905, administrative responsibility for Chad was placed under a governor-general stationed at Brazzaville, capital of French Equatorial Africa (FEA). Chad did not have a separate colonial status until 1920, when it was placed under a lieutenant-governor stationed in Fort-Lamy (today N'Djamena).",
"title": "Colonialism (1900–1940)"
},
{
"paragraph_id": 14,
"text": "Two fundamental themes dominated Chad's colonial experience with the French: an absence of policies designed to unify the territory and an exceptionally slow pace of modernization. In the French scale of priorities, the colony of Chad ranked near the bottom, and the French came to perceive Chad primarily as a source of raw cotton and untrained labour to be used in the more productive colonies to the south.",
"title": "Colonialism (1900–1940)"
},
{
"paragraph_id": 15,
"text": "Throughout the colonial period, large areas of Chad were never governed effectively: in the huge BET Prefecture, the handful of French military administrators usually left the people alone, and in central Chad, French rule was only slightly more substantive. Truly speaking, France managed to govern effectively only the south.",
"title": "Colonialism (1900–1940)"
},
{
"paragraph_id": 16,
"text": "During World War II, Chad was the first French colony to rejoin the Allies (August 26, 1940), after the defeat of France by Germany. Under the administration of Félix Éboué, France's first black colonial governor, a military column, commanded by Colonel Philippe Leclerc de Hauteclocque, and including two battalions of Sara troops, moved north from N'Djamena (then Fort Lamy) to engage Axis forces in Libya, where, in partnership with the British Army's Long Range Desert Group, they captured Kufra. On 21 January 1942, N'Djamena was bombed by a German aircraft.",
"title": "Decolonization (1940–1960)"
},
{
"paragraph_id": 17,
"text": "After the war ended, local parties started to develop in Chad. The first to be born was the radical Chadian Progressive Party (PPT) in February 1947, initially headed by Panamanian born Gabriel Lisette, but from 1959 headed by François Tombalbaye. The more conservative Chadian Democratic Union (UDT) was founded in November 1947 and represented French commercial interests and a bloc of traditional leaders composed primarily of Muslim and Ouaddaïan nobility. The confrontation between the PPT and UDT was more than simply ideological; it represented different regional identities, with the PPT representing the Christian and animist south and the UDT the Islamic north.",
"title": "Decolonization (1940–1960)"
},
{
"paragraph_id": 18,
"text": "The PPT won the May 1957 pre-independence elections thanks to a greatly expanded franchise, and Lisette led the government of the Territorial Assembly until he lost a confidence vote on 11 February 1959. After a referendum on territorial autonomy on 28 September 1958, French Equatorial Africa was dissolved, and its four constituent states – Gabon, Congo (Brazzaville), the Central African Republic, and Chad became autonomous members of the French Community from 28 November 1958. Following Lisette's fall in February 1959 the opposition leaders Gontchome Sahoulba and Ahmed Koulamallah could not form a stable government, so the PPT was again asked to form an administration - which it did under the leadership of François Tombalbaye on 26 March 1959. On 12 July 1960 France agreed to Chad becoming fully independent. On 11 August 1960, Chad became an independent country and François Tombalbaye became its first president.",
"title": "Decolonization (1940–1960)"
},
{
"paragraph_id": 19,
"text": "One of the most prominent aspects of Tombalbaye's rule to prove itself was his authoritarianism and distrust of democracy. Already in January 1962 he banned all political parties except his own PPT, and started immediately concentrating all power in his own hands. His treatment of opponents, real or imagined, was extremely harsh, filling the prisons with thousands of political prisoners.",
"title": "The Tombalbaye era (1960–1975)"
},
{
"paragraph_id": 20,
"text": "What was even worse was his constant discrimination against the central and northern regions of Chad, where the southern Chadian administrators came to be perceived as arrogant and incompetent. This resentment at last exploded in a tax revolt on September 2, 1965 in the Guéra Prefecture, causing 500 deaths. The year after saw the birth in Sudan of the National Liberation Front of Chad (FROLINAT), created to militarily oust Tombalbaye and the Southern dominance. It was the start of a bloody civil war.",
"title": "The Tombalbaye era (1960–1975)"
},
{
"paragraph_id": 21,
"text": "Tombalbaye resorted to calling in French troops; while moderately successful, they were not fully able to quell the insurgency. Proving more fortunate was his choice to break with the French and seek friendly ties with Libyan Brotherly Leader Gaddafi, taking away the rebels' principal source of supplies.",
"title": "The Tombalbaye era (1960–1975)"
},
{
"paragraph_id": 22,
"text": "But while he had reported some success against the rebels, Tombalbaye started behaving more and more irrationally and brutally, continuously eroding his consensus among the southern elites, which dominated all key positions in the army, the civil service and the ruling party. As a consequence on April 13, 1975, several units of N'Djamena's gendarmerie killed Tombalbaye during a coup.",
"title": "The Tombalbaye era (1960–1975)"
},
{
"paragraph_id": 23,
"text": "The coup d'état that terminated Tombalbaye's government received an enthusiastic response in N'Djamena. The southerner General Félix Malloum emerged early as the chairman of the new junta.",
"title": "Military rule (1975–1978)"
},
{
"paragraph_id": 24,
"text": "The new military leaders were unable to retain for long the popularity that they had gained through their overthrow of Tombalbaye. Malloum proved himself unable to cope with the FROLINAT and at the end decided his only chance was in coopting some of the rebels: in 1978 he allied himself with the insurgent leader Hissène Habré, who entered the government as prime minister.",
"title": "Military rule (1975–1978)"
},
{
"paragraph_id": 25,
"text": "Internal dissent within the government led Prime Minister Habré to send his forces against Malloum's national army in the capital in February 1979. Malloum was ousted from the presidency, but the resulting civil war amongst the 11 emergent factions was so widespread that it rendered the central government largely irrelevant. At that point, other African governments decided to intervene.",
"title": "Civil war (1979-1982)"
},
{
"paragraph_id": 26,
"text": "A series of four international conferences held first under Nigerian and then Organization of African Unity (OAU) sponsorship attempted to bring the Chadian factions together. At the fourth conference, held in Lagos, Nigeria, in August 1979, the Lagos Accord was signed. This accord established a transitional government pending national elections. In November 1979, the Transitional Government of National Unity (GUNT) was created with a mandate to govern for 18 months. Goukouni Oueddei, a northerner, was named president; Colonel Kamougué, a southerner, Vice President; and Habré, Minister of Defense. This coalition proved fragile; in January 1980, fighting broke out again between Goukouni's and Habré's forces. With assistance from Libya, Goukouni regained control of the capital and other urban centers by year's end. However, Goukouni's January 1981 statement that Chad and Libya had agreed to work for the realization of complete unity between the two countries generated intense international pressure and Goukouni's subsequent call for the complete withdrawal of external forces.",
"title": "Civil war (1979-1982)"
},
{
"paragraph_id": 27,
"text": "Libya's partial withdrawal to the Aozou Strip in northern Chad cleared the way for Habré's forces to enter N’Djamena in June. French troops and an OAU peacekeeping force of 3,500 Nigerian, Senegalese, and Zairian troops (partially funded by the United States) remained neutral during the conflict.",
"title": "The Habré era (1982–1990)"
},
{
"paragraph_id": 28,
"text": "Habré continued to face armed opposition on various fronts, and was brutal in his repression of suspected opponents, massacring and torturing many during his rule. In the summer of 1983, GUNT forces launched an offensive against government positions in northern and eastern Chad with heavy Libyan support. In response to Libya's direct intervention, French and Zairian forces intervened to defend Habré, pushing Libyan and rebel forces north of the 16th parallel. In September 1984, the French and the Libyan governments announced an agreement for the mutual withdrawal of their forces from Chad. By the end of the year, all French and Zairian troops were withdrawn. Libya did not honor the withdrawal accord, and its forces continued to occupy the northern third of Chad.",
"title": "The Habré era (1982–1990)"
},
{
"paragraph_id": 29,
"text": "Rebel commando groups (Codos) in southern Chad were broken up by government massacres in 1984. In 1985 Habré briefly reconciled with some of his opponents, including the Democratic Front of Chad (FDT) and the Coordinating Action Committee of the Democratic Revolutionary Council. Goukouni also began to rally toward Habré, and with his support Habré successfully expelled Libyan forces from most of Chadian territory. A cease-fire between Chad and Libya held from 1987 to 1988, and negotiations over the next several years led to the 1994 International Court of Justice decision granting Chad sovereignty over the Aouzou strip, effectively ending Libyan occupation.",
"title": "The Habré era (1982–1990)"
},
{
"paragraph_id": 30,
"text": "However, rivalry between Hadjerai, Zaghawa and Gorane groups within the government grew in the late 1980s. In April 1989, Idriss Déby, one of Habré's leading generals and a Zaghawa, defected and fled to Darfur in Sudan, from which he mounted a Zaghawa-supported series of attacks on Habré (a Gorane). In December 1990, with Libyan assistance and no opposition from French troops stationed in Chad, Déby's forces successfully marched on N’Djamena. After 3 months of provisional government, Déby's Patriotic Salvation Movement (MPS) approved a national charter on February 28, 1991, with Déby as president.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 31,
"text": "During the next two years, Déby faced at least two coup attempts. Government forces clashed violently with rebel forces, including the Movement for Democracy and Development, MDD, National Revival Committee for Peace and Democracy (CSNPD), Chadian National Front (FNT) and the Western Armed Forces (FAO), near Lake Chad and in southern regions of the country. Earlier French demands for the country to hold a National Conference resulted in the gathering of 750 delegates representing political parties (which were legalized in 1992), the government, trade unions and the army to discuss the creation of a pluralist democratic regime.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 32,
"text": "However, unrest continued, sparked in part by large-scale killings of civilians in southern Chad. The CSNPD, led by Kette Moise and other southern groups entered into a peace agreement with government forces in 1994, which later broke down. Two new groups, the Armed Forces for a Federal Republic (FARF) led by former Kette ally Laokein Barde and the Democratic Front for Renewal (FDR), and a reformulated MDD clashed with government forces from 1994 to 1995.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 33,
"text": "Talks with political opponents in early 1996 did not go well, but Déby announced his intent to hold presidential elections in June. Déby won the country's first multi-party presidential elections with support in the second round from opposition leader Kebzabo, defeating General Kamougue (leader of the 1975 coup against Tombalbaye). Déby's MPS party won 63 of 125 seats in the January 1997 legislative elections. International observers noted numerous serious irregularities in presidential and legislative election proceedings.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 34,
"text": "By mid-1997 the government signed peace deals with FARF and the MDD leadership and succeeded in cutting off the groups from their rear bases in the Central African Republic and Cameroon. Agreements also were struck with rebels from the National Front of Chad (FNT) and Movement for Social Justice and Democracy in October 1997. However, peace was short-lived, as FARF rebels clashed with government soldiers, finally surrendering to government forces in May 1998. Barde was killed in the fighting, as were hundreds of other southerners, most civilians.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 35,
"text": "Since October 1998, Chadian Movement for Justice and Democracy (MDJT) rebels, led by Youssuf Togoimi until his death in September 2002, have skirmished with government troops in the Tibesti region, resulting in hundreds of civilian, government, and rebel casualties, but little ground won or lost. No active armed opposition has emerged in other parts of Chad, although Kette Moise, following senior postings at the Ministry of Interior, mounted a smallscale local operation near Moundou which was quickly and violently suppressed by government forces in late 2000.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 36,
"text": "Déby, in the mid-1990s, gradually restored basic functions of government and entered into agreements with the World Bank and IMF to carry out substantial economic reforms. Oil exploitation in the southern Doba region began in June 2000, with World Bank Board approval to finance a small portion of a project, the Chad-Cameroon Petroleum Development Project, aimed at transport of Chadian crude through a 1000-km buried pipeline through Cameroon to the Gulf of Guinea. The project established unique mechanisms for World Bank, private sector, government, and civil society collaboration to guarantee that future oil revenues benefit local populations and result in poverty alleviation. Success of the project depended on multiple monitoring efforts to ensure that all parties keep their commitments. These \"unique\" mechanisms for monitoring and revenue management have faced intense criticism from the beginning. Debt relief was accorded to Chad in May 2001.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 37,
"text": "Déby won a flawed 63% first-round victory in May 2001 presidential elections after legislative elections were postponed until spring 2002. Having accused the government of fraud, six opposition leaders were arrested (twice) and one opposition party activist was killed following the announcement of election results. However, despite claims of government corruption, favoritism of Zaghawas, and abuses by the security forces, opposition party and labor union calls for general strikes and more active demonstrations against the government have been unsuccessful. Despite movement toward democratic reform, power remains in the hands of a northern ethnic oligarchy.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 38,
"text": "In 2003, Chad began receiving refugees from the Darfur region of western Sudan. More than 200,000 refugees fled the fighting between two rebel groups and government-supported militias known as Janjaweed. A number of border incidents led to the Chadian-Sudanese War.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 39,
"text": "Chad become an oil producer in 2003. In order to avoid resource curse and corruption, elaborate plans sponsored by World Bank were made. This plan ensured transparency in payments, as well as that 80% of money from oil exports would be spent on five priority development sectors, two most important of these being: education and healthcare. However money started getting diverted towards the military even before the civil war broke out. In 2006 when the civil war escalated, Chad abandoned previous economic plans sponsored by World Bank and added \"national security\" as priority development sector, money from this sector was used to improve the military. During the civil war, more than 600 million dollars were used to buy fighter jets, attack helicopters, and armored personnel carriers.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 40,
"text": "Chad earned between 10 and 11 billion dollars from oil production, and estimated 4 billion dollars were invested in the army.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 41,
"text": "The war started on December 23, 2005, when the government of Chad declared a state of war with Sudan and called for the citizens of Chad to mobilize themselves against the \"common enemy,\" which the Chadian government sees as the Rally for Democracy and Liberty (RDL) militants, Chadian rebels, backed by the Sudanese government, and Sudanese militiamen. Militants have attacked villages and towns in eastern Chad, stealing cattle, murdering citizens, and burning houses. Over 200,000 refugees from the Darfur region of northwestern Sudan currently claim asylum in eastern Chad. Chadian president Idriss Déby accuses Sudanese President Omar Hasan Ahmad al-Bashir of trying to \"destabilize our country, to drive our people into misery, to create disorder and export the war from Darfur to Chad.\"",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 42,
"text": "An attack on the Chadian town of Adre near the Sudanese border led to the deaths of either one hundred rebels, as every news source other than CNN has reported, or three hundred rebels. The Sudanese government was blamed for the attack, which was the second in the region in three days, but Sudanese foreign ministry spokesman Jamal Mohammed Ibrahim denies any Sudanese involvement, \"We are not for any escalation with Chad. We technically deny involvement in Chadian internal affairs.\" This attack was the final straw that led to the declaration of war by Chad and the alleged deployment of the Chadian airforce into Sudanese airspace, which the Chadian government denies.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 43,
"text": "An attack on N'Djamena was defeated on April 13, 2006 in the Battle of N'Djamena. The President on national radio stated that the situation was under control, but residents, diplomats and journalists reportedly heard shots of weapons fire.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 44,
"text": "On November 25, 2006, rebels captured the eastern town of Abeche, capital of the Ouaddaï Region and center for humanitarian aid to the Darfur region in Sudan. On the same day, a separate rebel group Rally of Democratic Forces had captured Biltine. On November 26, 2006, the Chadian government claimed to have recaptured both towns, although rebels still claimed control of Biltine. Government buildings and humanitarian aid offices in Abeche were said to have been looted. The Chadian government denied a warning issued by the French Embassy in N'Djamena that a group of rebels was making its way through the Batha Prefecture in central Chad. Chad insists that both rebel groups are supported by the Sudanese government.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 45,
"text": "Nearly 100 children at the center of an international scandal that left them stranded at an orphanage in remote eastern Chad returned home after nearly five months March 14, 2008. The 97 children were taken from their homes in October 2007 by a then-obscure French charity, Zoé's Ark, which claimed they were orphans from Sudan's war-torn Darfur region.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 46,
"text": "On Friday, February 1, 2008, rebels, an opposition alliance of leaders Mahamat Nouri, a former defense minister, and Timane Erdimi, a nephew of Idriss Déby who was his chief of staff, attacked the Chadian capital of Ndjamena - even surrounding the Presidential Palace. But Idris Deby with government troops fought back. French forces flew in ammunition for Chadian government troops but took no active part in the fighting. UN has said that up to 20,000 people left the region, taking refuge in nearby Cameroon and Nigeria. Hundreds of people were killed, mostly civilians. The rebels accuse Deby of corruption and embezzling millions in oil revenue. While many Chadians may share that assessment, the uprising appears to be a power struggle within the elite that has long controlled Chad. The French government believes that the opposition has regrouped east of the capital. Déby has blamed Sudan for the current unrest in Chad.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 47,
"text": "During the Déby era, Chad intervened in conflicts in Mali, Central African Republic, Niger and Nigeria.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 48,
"text": "In 2013, Chad sent 2000 men from its military to help France in Operation Serval during the Mali War. Later in the same year Chad sent 850 troops to Central African Republic to help peacekeeping operation MISCA, those troops withdrew in April 2014 after allegations of human rights violations.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 49,
"text": "During the Boko Haram insurgency, Chad multiple times sent troops to assist the fight against Boko Haram in Niger and Nigeria.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 50,
"text": "In August 2018, rebel fighters of the Military Command Council for the Salvation of the Republic (CCMSR) attacked government forces in northern Chad. Chad experienced threats from jihadists fleeing the Libyan conflict. Chad had been an ally of the West in the fight against Islamist militants in West Africa.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 51,
"text": "In January 2019, after 47 years, Chad restored diplomatic relations with Israel. It was announced during a visit to N’Djamena by Israeli Prime Minister Benjamin Netanyahu.",
"title": "The Idriss Déby era (1990–2021)"
},
{
"paragraph_id": 52,
"text": "In April 2021, Chad's army announced that President Idriss Déby had died of his injuries following clashes with rebels in the north of the country. Idriss Deby ruled the country for more than 30 years since 1990. It was also announced that a military council led by Déby's son, Mahamat Idriss Déby a 37-year-old four star general, will govern for the next 18 months.",
"title": "After Idriss Déby (2021–present)"
}
] | Chad, officially the Republic of Chad, is a landlocked country in Central Africa. It borders Libya to the north, Sudan to the east, the Central African Republic to the south, Cameroon and Nigeria to the southwest, and Niger to the west. Due to its distance from the sea and its largely desert climate, the country is sometimes referred to as the "Dead Heart of Africa". | 2001-04-23T17:45:00Z | 2023-11-21T04:54:46Z | [
"Template:Main",
"Template:History of Africa",
"Template:Lang-ar",
"Template:Lang-fr",
"Template:Cite web",
"Template:Chad topics",
"Template:No citations",
"Template:ISBN",
"Template:Citation",
"Template:Cite book",
"Template:Cite journal",
"Template:Infobox country",
"Template:See also",
"Template:Illm",
"Template:Reflist",
"Template:Cite news",
"Template:Former French colonies",
"Template:Short description",
"Template:History of Chad"
] | https://en.wikipedia.org/wiki/History_of_Chad |
5,330 | Geography of Chad | Chad is one of the 47 landlocked countries in the world and is located in North Central Africa, measuring 1,284,000 square kilometers (495,755 sq mi), nearly twice the size of France and slightly more than three times the size of California. Most of its ethnically and linguistically diverse population lives in the south, with densities ranging from 54 persons per square kilometer in the Logone River basin to 0.1 persons in the northern B.E.T. (Borkou-Ennedi-Tibesti) desert region, which itself is larger than France. The capital city of N'Djaména, situated at the confluence of the Chari and Logone Rivers, is cosmopolitan in nature, with a current population in excess of 700,000 people.
Chad has four climatic zones. The northernmost Saharan zone averages less than 200 mm (7.9 in) of rainfall annually. The sparse human population is largely nomadic, with some livestock, mostly small ruminants and camels. The central Sahelian zone receives between 200 and 700 mm (7.9 and 27.6 in) rainfall and has vegetation ranging from grass/shrub steppe to thorny, open savanna. The southern zone, often referred to as the Sudan zone, receives between 700 and 1,000 mm (27.6 and 39.4 in), with woodland savanna and deciduous forests for vegetation. Rainfall in the Guinea zone, located in Chad's southwestern tip, ranges between 1,000 and 1,200 mm (39.4 and 47.2 in).
The country's topography is generally flat, with the elevation gradually rising as one moves north and east away from Lake Chad. The highest point in Chad is Emi Koussi, a mountain that rises 3,100 m (10,171 ft) in the northern Tibesti Mountains. The Ennedi Plateau and the Ouaddaï highlands in the east complete the image of a gradually sloping basin, which descends towards Lake Chad. There are also central highlands in the Guera region rising to 1,500 m (4,921 ft).
Lake Chad is the second largest lake in west Africa and is one of the most important wetlands on the continent. Home to 120 species of fish and at least that many species of birds, the lake has shrunk dramatically in the last four decades due to increased water usage from an expanding population and low rainfall. Bordered by Chad, Niger, Nigeria, and Cameroon, Lake Chad currently covers only 1350 square kilometers, down from 25,000 square kilometers in 1963. The Chari and Logone Rivers, both of which originate in the Central African Republic and flow northward, provide most of the surface water entering Lake Chad. Chad is also next to Niger.
Located in north-central Africa, Chad stretches for about 1,800 kilometers from its northernmost point to its southern boundary. Except in the far northwest and south, where its borders converge, Chad's average width is about 800 kilometers. Its area of 1,284,000 square kilometers is roughly equal to the combined areas of Idaho, Wyoming, Utah, Nevada, and Arizona. Chad's neighbors include Libya to the north, Niger and Nigeria to the west, Sudan to the east, Central African Republic to the south, and Cameroon to the southwest.
Chad exhibits two striking geographical characteristics. First, the country is landlocked. N'Djamena, the capital, is located more than 1,100 kilometers northeast of the Atlantic Ocean; Abéché, a major city in the east, lies 2,650 kilometers from the Red Sea; and Faya-Largeau, a much smaller but strategically important center in the north, is in the middle of the Sahara Desert, 1,550 kilometers from the Mediterranean Sea. These vast distances from the sea have had a profound impact on Chad's historical and contemporary development.
The second noteworthy characteristic is that the country borders on very different parts of the African continent: North Africa, with its Islamic culture and economic orientation toward the Mediterranean Basin; and West Africa, with its diverse religions and cultures and its history of highly developed states and regional economies.
Chad also borders Northeast Africa, oriented toward the Nile Valley and the Red Sea region - and Central or Equatorial Africa, some of whose people have retained classical African religions while others have adopted Christianity, and whose economies were part of the great Congo River system. Although much of Chad's distinctiveness comes from this diversity of influences, since independence the diversity has also been an obstacle to the creation of a national identity.
Although Chadian society is economically, socially, and culturally fragmented, the country's geography is unified by the Lake Chad Basin. Once a huge inland sea (the Pale-Chadian Sea) whose only remnant is shallow Lake Chad, this vast depression extends west into Nigeria and Niger. The larger, northern portion of the basin is bounded within Chad by the Tibesti Mountains in the northwest, the Ennedi Plateau in the northeast, the Ouaddaï Highlands in the east along the border with Sudan, the Guéra Massif in central Chad, and the Mandara Mountains along Chad's southwestern border with Cameroon. The smaller, southern part of the basin falls almost exclusively in Chad. It is delimited in the north by the Guéra Massif, in the south by highlands 250 kilometers south of the border with Central African Republic, and in the southwest by the Mandara Mountains.
Lake Chad, located in the southwestern part of the basin at an altitude of 282 meters, surprisingly does not mark the basin's lowest point; instead, this is found in the Bodele and Djourab regions in the north-central and northeastern parts of the country, respectively. This oddity arises because the great stationary dunes (ergs) of the Kanem region create a dam, preventing lake waters from flowing to the basin's lowest point. At various times in the past, and as late as the 1870s, the Bahr el Ghazal Depression, which extends from the northeastern part of the lake to the Djourab, acted as an overflow canal; since independence, climatic conditions have made overflows impossible.
North and northeast of Lake Chad, the basin extends for more than 800 kilometers, passing through regions characterized by great rolling dunes separated by very deep depressions. Although vegetation holds the dunes in place in the Kanem region, farther north they are bare and have a fluid, rippling character. From its low point in the Djourab, the basin then rises to the plateaus and peaks of the Tibesti Mountains in the north. The summit of this formation—as well as the highest point in the Sahara Desert—is Emi Koussi, a dormant volcano that reaches 3,414 meters above sea level.
The basin's northeastern limit is the Ennedi Plateau, whose limestone bed rises in steps etched by erosion. East of the lake, the basin rises gradually to the Ouaddaï Highlands, which mark Chad's eastern border and also divide the Chad and Nile watersheds. These highland areas are part of the East Saharan montane xeric woodlands ecoregion.
Southeast of Lake Chad, the regular contours of the terrain are broken by the Guéra Massif, which divides the basin into its northern and southern parts. South of the lake lie the floodplains of the Chari and Logone rivers, much of which are inundated during the rainy season. Farther south, the basin floor slopes upward, forming a series of low sand and clay plateaus, called koros, which eventually climb to 615 meters above sea level. South of the Chadian border, the koros divide the Lake Chad Basin from the Ubangi-Zaire river system.
Permanent streams do not exist in northern or central Chad. Following infrequent rains in the Ennedi Plateau and Ouaddaï Highlands, water may flow through depressions called enneris and wadis. Often the result of flash floods, such streams usually dry out within a few days as the remaining puddles seep into the sandy clay soil. The most important of these streams is the Batha, which in the rainy season carries water west from the Ouaddaï Highlands and the Guéra Massif to Lake Fitri.
Chad's major rivers are the Chari and the Logone and their tributaries, which flow from the southeast into Lake Chad. Both river systems rise in the highlands of Central African Republic and Cameroon, regions that receive more than 1,250 millimeters of rainfall annually. Fed by rivers of Central African Republic, as well as by the Bahr Salamat, Bahr Aouk, and Bahr Sara rivers of southeastern Chad, the Chari River is about 1,200 kilometers long. From its origins near the city of Sarh, the middle course of the Chari makes its way through swampy terrain; the lower Chari is joined by the Logone River near N'Djamena. The Chari's volume varies greatly, from 17 cubic meters per second during the dry season to 340 cubic meters per second during the wettest part of the year.
The Logone River is formed by tributaries flowing from Cameroon and Central African Republic. Both shorter and smaller in volume than the Chari, it flows northeast for 960 kilometers; its volume ranges from five to eighty-five cubic meters per second. At N'Djamena the Logone empties into the Chari, and the combined rivers flow together for thirty kilometers through a large delta and into Lake Chad. At the end of the rainy season in the fall, the river overflows its banks and creates a huge floodplain in the delta.
The seventh largest lake in the world (and the fourth largest in Africa), Lake Chad is located in the sahelian zone, a region just south of the Sahara Desert. The Chari River contributes 95 percent of Lake Chad's water, an average annual volume of 40 billion cubic meters, 95% of which is lost to evaporation. The size of the lake is determined by rains in the southern highlands bordering the basin and by temperatures in the Sahel. Fluctuations in both cause the lake to change dramatically in size, from 9,800 square kilometers in the dry season to 25,500 at the end of the rainy season.
Lake Chad also changes greatly in size from one year to another. In 1870 its maximum area was 28,000 square kilometers. The measurement dropped to 12,700 in 1908. In the 1940s and 1950s, the lake remained small, but it grew again to 26,000 square kilometers in 1963. The droughts of the late 1960s, early 1970s, and mid-1980s caused Lake Chad to shrink once again, however. The only other lakes of importance in Chad are Lake Fitri, in Batha Prefecture, and Lake Iro, in the marshy southeast.
The Lake Chad Basin embraces a great range of tropical climates from north to south, although most of these climates tend to be dry. Apart from the far north, most regions are characterized by a cycle of alternating rainy and dry seasons. In any given year, the duration of each season is determined largely by the positions of two great air masses—a maritime mass over the Atlantic Ocean to the southwest and a much drier continental mass.
During the rainy season, winds from the southwest push the moister maritime system north over the African continent where it meets and slips under the continental mass along a front called the "intertropical convergence zone". At the height of the rainy season, the front may reach as far as Kanem Prefecture. By the middle of the dry season, the intertropical convergence zone moves south of Chad, taking the rain with it. This weather system contributes to the formation of three major regions of climate and vegetation.
The Saharan region covers roughly the northern half of the country, including Borkou-Ennedi-Tibesti Prefecture along with the northern parts of Kanem, Batha, and Biltine prefectures. Much of this area receives only traces of rain during the entire year; at Faya-Largeau, for example, annual rainfall averages less than 12 millimeters (0.47 in), and there are nearly 3800 hours of sunshine. Scattered small oases and occasional wells provide water for a few date palms or small plots of millet and garden crops.
In much of the north, the average daily maximum temperature is about 32 °C (89.6 °F) during January, the coolest month of the year, and about 45 °C (113 °F) during May, the hottest month. On occasion, strong winds from the northeast produce violent sandstorms. In northern Biltine Prefecture, a region called the Mortcha plays a major role in animal husbandry. Dry for eight months of the year, it receives 350 millimeters (13.8 in) or more of rain, mostly during July and August.
A carpet of green springs from the desert during this brief wet season, attracting herders from throughout the region who come to pasture their cattle and camels. Because very few wells and springs have water throughout the year, the herders leave with the end of the rains, turning over the land to the antelopes, gazelles, and ostriches that can survive with little groundwater. Northern Chad averages over 3500 hours of sunlight per year, the south somewhat less.
The semiarid sahelian zone, or Sahel, forms a belt about 500 kilometers (311 mi) wide that runs from Lac and Chari-Baguirmi prefectures eastward through Guéra, Ouaddaï, and northern Salamat prefectures to the Sudanese frontier. The climate in this transition zone between the desert and the southern sudanian zone is divided into a rainy season (from June to September) and a dry period (from October to May).
In the northern Sahel, thorny shrubs and acacia trees grow wild, while date palms, cereals, and garden crops are raised in scattered oases. Outside these settlements, nomads tend their flocks during the rainy season, moving southward as forage and surface water disappear with the onset of the dry part of the year. The central Sahel is characterized by drought-resistant grasses and small woods. Rainfall is more abundant there than in the Saharan region. For example, N'Djamena records a maximum annual average rainfall of 580 millimeters (22.8 in), while Ouaddaï Prefecture receives just a bit less.
During the hot season, in April and May, maximum temperatures frequently rise above 40 °C (104 °F). In the southern part of the Sahel, rainfall is sufficient to permit crop production on unirrigated land, and millet and sorghum are grown. Agriculture is also common in the marshlands east of Lake Chad and near swamps or wells. Many farmers in the region combine subsistence agriculture with the raising of cattle, sheep, goats, and poultry.
The humid sudanian zone includes the Sahel, the southern prefectures of Mayo-Kebbi, Tandjilé, Logone Occidental, Logone Oriental, Moyen-Chari, and southern Salamat. Between April and October, the rainy season brings between 750 and 1,250 millimeters (29.5 and 49.2 in) of precipitation. Temperatures are high throughout the year. Daytime readings in Moundou, the major city in the southwest, range from 27 °C (80.6 °F) in the middle of the cool season in January to about 40 °C (104 °F) in the hot months of March, April, and May.
The sudanian region is predominantly East Sudanian savanna, or plains covered with a mixture of tropical or subtropical grasses and woodlands. The growth is lush during the rainy season but turns brown and dormant during the five-month dry season between November and March. Over a large part of the region, however, natural vegetation has yielded to agriculture.
On 22 June 2010, the temperature reached 47.6 °C (117.7 °F) in Faya, breaking a record set in 1961 at the same location. Similar temperature rises were also reported in Niger, which began to enter a famine situation.
On 26 July the heat reached near-record levels over Chad and Niger.
Area: total: 1.284 million km² land: 1,259,200 km² water: 24,800 km²
Area - comparative: Canada: smaller than the Northwest Territories US: slightly more than three times the size of California
Land boundaries: total: 6,406 km border countries: Cameroon 1,116 km, Central African Republic 1,556 km, Libya 1,050 km, Niger 1,196 km, Nigeria 85 km, Sudan 1,403 km
Coastline: 0 km (landlocked)
Maritime claims: none (landlocked)
Elevation extremes: lowest point: Bodélé Depression 160 m highest point: Emi Koussi 3,415 m
Natural resources: petroleum, uranium, natron, kaolin, fish (Chari River, Logone River), gold, limestone, sand and gravel, salt
Land use: arable land: 3.89% permanent crops: 0.03% other: 96.08% (2012)
Irrigated land: 302.7 km² (2003)
Total renewable water resources: 43 km³ (2011)
Freshwater withdrawal (domestic/industrial/agricultural): total: 0.88 km³/yr (12%/12%/76%) per capita: 84.81 m³/yr (2005)
Natural hazards: hot, dry, dusty, Harmattan winds occur in north; periodic droughts; locust plagues
Environment - current issues: inadequate supplies of potable water; improper waste disposal in rural areas contributes to soil and water pollution; desertification
This is a list of the extreme points of Chad, the points that are farther north, south, east or west than any other location.
*Note: technically Chad does not have a single easternmost point, as the easternmost section of the border follows the 24° east meridian of longitude | [
{
"paragraph_id": 0,
"text": "Chad is one of the 47 landlocked countries in the world and is located in North Central Africa, measuring 1,284,000 square kilometers (495,755 sq mi), nearly twice the size of France and slightly more than three times the size of California. Most of its ethnically and linguistically diverse population lives in the south, with densities ranging from 54 persons per square kilometer in the Logone River basin to 0.1 persons in the northern B.E.T. (Borkou-Ennedi-Tibesti) desert region, which itself is larger than France. The capital city of N'Djaména, situated at the confluence of the Chari and Logone Rivers, is cosmopolitan in nature, with a current population in excess of 700,000 people.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Chad has four climatic zones. The northernmost Saharan zone averages less than 200 mm (7.9 in) of rainfall annually. The sparse human population is largely nomadic, with some livestock, mostly small ruminants and camels. The central Sahelian zone receives between 200 and 700 mm (7.9 and 27.6 in) rainfall and has vegetation ranging from grass/shrub steppe to thorny, open savanna. The southern zone, often referred to as the Sudan zone, receives between 700 and 1,000 mm (27.6 and 39.4 in), with woodland savanna and deciduous forests for vegetation. Rainfall in the Guinea zone, located in Chad's southwestern tip, ranges between 1,000 and 1,200 mm (39.4 and 47.2 in).",
"title": ""
},
{
"paragraph_id": 2,
"text": "The country's topography is generally flat, with the elevation gradually rising as one moves north and east away from Lake Chad. The highest point in Chad is Emi Koussi, a mountain that rises 3,100 m (10,171 ft) in the northern Tibesti Mountains. The Ennedi Plateau and the Ouaddaï highlands in the east complete the image of a gradually sloping basin, which descends towards Lake Chad. There are also central highlands in the Guera region rising to 1,500 m (4,921 ft).",
"title": ""
},
{
"paragraph_id": 3,
"text": "Lake Chad is the second largest lake in west Africa and is one of the most important wetlands on the continent. Home to 120 species of fish and at least that many species of birds, the lake has shrunk dramatically in the last four decades due to increased water usage from an expanding population and low rainfall. Bordered by Chad, Niger, Nigeria, and Cameroon, Lake Chad currently covers only 1350 square kilometers, down from 25,000 square kilometers in 1963. The Chari and Logone Rivers, both of which originate in the Central African Republic and flow northward, provide most of the surface water entering Lake Chad. Chad is also next to Niger.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Located in north-central Africa, Chad stretches for about 1,800 kilometers from its northernmost point to its southern boundary. Except in the far northwest and south, where its borders converge, Chad's average width is about 800 kilometers. Its area of 1,284,000 square kilometers is roughly equal to the combined areas of Idaho, Wyoming, Utah, Nevada, and Arizona. Chad's neighbors include Libya to the north, Niger and Nigeria to the west, Sudan to the east, Central African Republic to the south, and Cameroon to the southwest.",
"title": "Geographical placement"
},
{
"paragraph_id": 5,
"text": "Chad exhibits two striking geographical characteristics. First, the country is landlocked. N'Djamena, the capital, is located more than 1,100 kilometers northeast of the Atlantic Ocean; Abéché, a major city in the east, lies 2,650 kilometers from the Red Sea; and Faya-Largeau, a much smaller but strategically important center in the north, is in the middle of the Sahara Desert, 1,550 kilometers from the Mediterranean Sea. These vast distances from the sea have had a profound impact on Chad's historical and contemporary development.",
"title": "Geographical placement"
},
{
"paragraph_id": 6,
"text": "The second noteworthy characteristic is that the country borders on very different parts of the African continent: North Africa, with its Islamic culture and economic orientation toward the Mediterranean Basin; and West Africa, with its diverse religions and cultures and its history of highly developed states and regional economies.",
"title": "Geographical placement"
},
{
"paragraph_id": 7,
"text": "Chad also borders Northeast Africa, oriented toward the Nile Valley and the Red Sea region - and Central or Equatorial Africa, some of whose people have retained classical African religions while others have adopted Christianity, and whose economies were part of the great Congo River system. Although much of Chad's distinctiveness comes from this diversity of influences, since independence the diversity has also been an obstacle to the creation of a national identity.",
"title": "Geographical placement"
},
{
"paragraph_id": 8,
"text": "Although Chadian society is economically, socially, and culturally fragmented, the country's geography is unified by the Lake Chad Basin. Once a huge inland sea (the Pale-Chadian Sea) whose only remnant is shallow Lake Chad, this vast depression extends west into Nigeria and Niger. The larger, northern portion of the basin is bounded within Chad by the Tibesti Mountains in the northwest, the Ennedi Plateau in the northeast, the Ouaddaï Highlands in the east along the border with Sudan, the Guéra Massif in central Chad, and the Mandara Mountains along Chad's southwestern border with Cameroon. The smaller, southern part of the basin falls almost exclusively in Chad. It is delimited in the north by the Guéra Massif, in the south by highlands 250 kilometers south of the border with Central African Republic, and in the southwest by the Mandara Mountains.",
"title": "Land"
},
{
"paragraph_id": 9,
"text": "Lake Chad, located in the southwestern part of the basin at an altitude of 282 meters, surprisingly does not mark the basin's lowest point; instead, this is found in the Bodele and Djourab regions in the north-central and northeastern parts of the country, respectively. This oddity arises because the great stationary dunes (ergs) of the Kanem region create a dam, preventing lake waters from flowing to the basin's lowest point. At various times in the past, and as late as the 1870s, the Bahr el Ghazal Depression, which extends from the northeastern part of the lake to the Djourab, acted as an overflow canal; since independence, climatic conditions have made overflows impossible.",
"title": "Land"
},
{
"paragraph_id": 10,
"text": "North and northeast of Lake Chad, the basin extends for more than 800 kilometers, passing through regions characterized by great rolling dunes separated by very deep depressions. Although vegetation holds the dunes in place in the Kanem region, farther north they are bare and have a fluid, rippling character. From its low point in the Djourab, the basin then rises to the plateaus and peaks of the Tibesti Mountains in the north. The summit of this formation—as well as the highest point in the Sahara Desert—is Emi Koussi, a dormant volcano that reaches 3,414 meters above sea level.",
"title": "Land"
},
{
"paragraph_id": 11,
"text": "The basin's northeastern limit is the Ennedi Plateau, whose limestone bed rises in steps etched by erosion. East of the lake, the basin rises gradually to the Ouaddaï Highlands, which mark Chad's eastern border and also divide the Chad and Nile watersheds. These highland areas are part of the East Saharan montane xeric woodlands ecoregion.",
"title": "Land"
},
{
"paragraph_id": 12,
"text": "Southeast of Lake Chad, the regular contours of the terrain are broken by the Guéra Massif, which divides the basin into its northern and southern parts. South of the lake lie the floodplains of the Chari and Logone rivers, much of which are inundated during the rainy season. Farther south, the basin floor slopes upward, forming a series of low sand and clay plateaus, called koros, which eventually climb to 615 meters above sea level. South of the Chadian border, the koros divide the Lake Chad Basin from the Ubangi-Zaire river system.",
"title": "Land"
},
{
"paragraph_id": 13,
"text": "Permanent streams do not exist in northern or central Chad. Following infrequent rains in the Ennedi Plateau and Ouaddaï Highlands, water may flow through depressions called enneris and wadis. Often the result of flash floods, such streams usually dry out within a few days as the remaining puddles seep into the sandy clay soil. The most important of these streams is the Batha, which in the rainy season carries water west from the Ouaddaï Highlands and the Guéra Massif to Lake Fitri.",
"title": "Water systems"
},
{
"paragraph_id": 14,
"text": "Chad's major rivers are the Chari and the Logone and their tributaries, which flow from the southeast into Lake Chad. Both river systems rise in the highlands of Central African Republic and Cameroon, regions that receive more than 1,250 millimeters of rainfall annually. Fed by rivers of Central African Republic, as well as by the Bahr Salamat, Bahr Aouk, and Bahr Sara rivers of southeastern Chad, the Chari River is about 1,200 kilometers long. From its origins near the city of Sarh, the middle course of the Chari makes its way through swampy terrain; the lower Chari is joined by the Logone River near N'Djamena. The Chari's volume varies greatly, from 17 cubic meters per second during the dry season to 340 cubic meters per second during the wettest part of the year.",
"title": "Water systems"
},
{
"paragraph_id": 15,
"text": "The Logone River is formed by tributaries flowing from Cameroon and Central African Republic. Both shorter and smaller in volume than the Chari, it flows northeast for 960 kilometers; its volume ranges from five to eighty-five cubic meters per second. At N'Djamena the Logone empties into the Chari, and the combined rivers flow together for thirty kilometers through a large delta and into Lake Chad. At the end of the rainy season in the fall, the river overflows its banks and creates a huge floodplain in the delta.",
"title": "Water systems"
},
{
"paragraph_id": 16,
"text": "The seventh largest lake in the world (and the fourth largest in Africa), Lake Chad is located in the sahelian zone, a region just south of the Sahara Desert. The Chari River contributes 95 percent of Lake Chad's water, an average annual volume of 40 billion cubic meters, 95% of which is lost to evaporation. The size of the lake is determined by rains in the southern highlands bordering the basin and by temperatures in the Sahel. Fluctuations in both cause the lake to change dramatically in size, from 9,800 square kilometers in the dry season to 25,500 at the end of the rainy season.",
"title": "Water systems"
},
{
"paragraph_id": 17,
"text": "Lake Chad also changes greatly in size from one year to another. In 1870 its maximum area was 28,000 square kilometers. The measurement dropped to 12,700 in 1908. In the 1940s and 1950s, the lake remained small, but it grew again to 26,000 square kilometers in 1963. The droughts of the late 1960s, early 1970s, and mid-1980s caused Lake Chad to shrink once again, however. The only other lakes of importance in Chad are Lake Fitri, in Batha Prefecture, and Lake Iro, in the marshy southeast.",
"title": "Water systems"
},
{
"paragraph_id": 18,
"text": "The Lake Chad Basin embraces a great range of tropical climates from north to south, although most of these climates tend to be dry. Apart from the far north, most regions are characterized by a cycle of alternating rainy and dry seasons. In any given year, the duration of each season is determined largely by the positions of two great air masses—a maritime mass over the Atlantic Ocean to the southwest and a much drier continental mass.",
"title": "Climate"
},
{
"paragraph_id": 19,
"text": "During the rainy season, winds from the southwest push the moister maritime system north over the African continent where it meets and slips under the continental mass along a front called the \"intertropical convergence zone\". At the height of the rainy season, the front may reach as far as Kanem Prefecture. By the middle of the dry season, the intertropical convergence zone moves south of Chad, taking the rain with it. This weather system contributes to the formation of three major regions of climate and vegetation.",
"title": "Climate"
},
{
"paragraph_id": 20,
"text": "The Saharan region covers roughly the northern half of the country, including Borkou-Ennedi-Tibesti Prefecture along with the northern parts of Kanem, Batha, and Biltine prefectures. Much of this area receives only traces of rain during the entire year; at Faya-Largeau, for example, annual rainfall averages less than 12 millimeters (0.47 in), and there are nearly 3800 hours of sunshine. Scattered small oases and occasional wells provide water for a few date palms or small plots of millet and garden crops.",
"title": "Climate"
},
{
"paragraph_id": 21,
"text": "In much of the north, the average daily maximum temperature is about 32 °C (89.6 °F) during January, the coolest month of the year, and about 45 °C (113 °F) during May, the hottest month. On occasion, strong winds from the northeast produce violent sandstorms. In northern Biltine Prefecture, a region called the Mortcha plays a major role in animal husbandry. Dry for eight months of the year, it receives 350 millimeters (13.8 in) or more of rain, mostly during July and August.",
"title": "Climate"
},
{
"paragraph_id": 22,
"text": "A carpet of green springs from the desert during this brief wet season, attracting herders from throughout the region who come to pasture their cattle and camels. Because very few wells and springs have water throughout the year, the herders leave with the end of the rains, turning over the land to the antelopes, gazelles, and ostriches that can survive with little groundwater. Northern Chad averages over 3500 hours of sunlight per year, the south somewhat less.",
"title": "Climate"
},
{
"paragraph_id": 23,
"text": "The semiarid sahelian zone, or Sahel, forms a belt about 500 kilometers (311 mi) wide that runs from Lac and Chari-Baguirmi prefectures eastward through Guéra, Ouaddaï, and northern Salamat prefectures to the Sudanese frontier. The climate in this transition zone between the desert and the southern sudanian zone is divided into a rainy season (from June to September) and a dry period (from October to May).",
"title": "Climate"
},
{
"paragraph_id": 24,
"text": "In the northern Sahel, thorny shrubs and acacia trees grow wild, while date palms, cereals, and garden crops are raised in scattered oases. Outside these settlements, nomads tend their flocks during the rainy season, moving southward as forage and surface water disappear with the onset of the dry part of the year. The central Sahel is characterized by drought-resistant grasses and small woods. Rainfall is more abundant there than in the Saharan region. For example, N'Djamena records a maximum annual average rainfall of 580 millimeters (22.8 in), while Ouaddaï Prefecture receives just a bit less.",
"title": "Climate"
},
{
"paragraph_id": 25,
"text": "During the hot season, in April and May, maximum temperatures frequently rise above 40 °C (104 °F). In the southern part of the Sahel, rainfall is sufficient to permit crop production on unirrigated land, and millet and sorghum are grown. Agriculture is also common in the marshlands east of Lake Chad and near swamps or wells. Many farmers in the region combine subsistence agriculture with the raising of cattle, sheep, goats, and poultry.",
"title": "Climate"
},
{
"paragraph_id": 26,
"text": "The humid sudanian zone includes the Sahel, the southern prefectures of Mayo-Kebbi, Tandjilé, Logone Occidental, Logone Oriental, Moyen-Chari, and southern Salamat. Between April and October, the rainy season brings between 750 and 1,250 millimeters (29.5 and 49.2 in) of precipitation. Temperatures are high throughout the year. Daytime readings in Moundou, the major city in the southwest, range from 27 °C (80.6 °F) in the middle of the cool season in January to about 40 °C (104 °F) in the hot months of March, April, and May.",
"title": "Climate"
},
{
"paragraph_id": 27,
"text": "The sudanian region is predominantly East Sudanian savanna, or plains covered with a mixture of tropical or subtropical grasses and woodlands. The growth is lush during the rainy season but turns brown and dormant during the five-month dry season between November and March. Over a large part of the region, however, natural vegetation has yielded to agriculture.",
"title": "Climate"
},
{
"paragraph_id": 28,
"text": "On 22 June, the temperature reached 47.6 °C (117.7 °F) in Faya, breaking a record set in 1961 at the same location. Similar temperature rises were also reported in Niger, which began to enter a famine situation.",
"title": "Climate"
},
{
"paragraph_id": 29,
"text": "On 26 July the heat reached near-record levels over Chad and Niger.",
"title": "Climate"
},
{
"paragraph_id": 30,
"text": "Area: total: 1.284 million km land: 1,259,200 km water: 24,800 km",
"title": "Area"
},
{
"paragraph_id": 31,
"text": "Area - comparative: Canada: smaller than the Northwest Territories US: slightly more than three times the size of California",
"title": "Area"
},
{
"paragraph_id": 32,
"text": "Land boundaries: total: 6,406 km border countries: Cameroon 1,116 km, Central African Republic 1,556 km, Libya 1,050 km, Niger 1,196 km, Nigeria 85 km, Sudan 1,403 km",
"title": "Boundaries"
},
{
"paragraph_id": 33,
"text": "Coastline: 0 km (landlocked)",
"title": "Boundaries"
},
{
"paragraph_id": 34,
"text": "Maritime claims: none (landlocked)",
"title": "Boundaries"
},
{
"paragraph_id": 35,
"text": "Elevation extremes: lowest point: Bodélé Depression 160 m highest point: Emi Koussi 3,415 m",
"title": "Boundaries"
},
{
"paragraph_id": 36,
"text": "Natural resources: petroleum, uranium, natron, kaolin, fish (Chari River, Logone River), gold, limestone, sand and gravel, salt",
"title": "Land use and resources"
},
{
"paragraph_id": 37,
"text": "Land use: arable land: 3.89% permanent crops: 0.03% other: 96.08% (2012)",
"title": "Land use and resources"
},
{
"paragraph_id": 38,
"text": "Irrigated land: 302.7 km (2003)",
"title": "Land use and resources"
},
{
"paragraph_id": 39,
"text": "Total renewable water resources: 43 km (2011)",
"title": "Land use and resources"
},
{
"paragraph_id": 40,
"text": "Freshwater withdrawal (domestic/industrial/agricultural): total: 0.88 km/yr (12%/12%/76%) per capita: 84.81 m/yr (2005)",
"title": "Land use and resources"
},
{
"paragraph_id": 41,
"text": "Natural hazards: hot, dry, dusty, Harmattan winds occur in north; periodic droughts; locust plagues",
"title": "Environmental issues"
},
{
"paragraph_id": 42,
"text": "Environment - current issues: inadequate supplies of potable water; improper waste disposal in rural areas contributes to soil and water pollution; desertification",
"title": "Environmental issues"
},
{
"paragraph_id": 43,
"text": "This is a list of the extreme points of Chad, the points that are farther north, south, east or west than any other location.",
"title": "Extreme points"
},
{
"paragraph_id": 44,
"text": "*Note: technically Chad does not have an easternmost point, the easternmost section of the border being formed by the 24° of longitude",
"title": "Extreme points"
}
] | Chad is one of the 47 landlocked countries in the world and is located in North Central Africa, measuring 1,284,000 square kilometers (495,755 sq mi), nearly twice the size of France and slightly more than three times the size of California. Most of its ethnically and linguistically diverse population lives in the south, with densities ranging from 54 persons per square kilometer in the Logone River basin to 0.1 persons in the northern B.E.T. (Borkou-Ennedi-Tibesti) desert region, which itself is larger than France. The capital city of N'Djaména, situated at the confluence of the Chari and Logone Rivers, is cosmopolitan in nature, with a current population in excess of 700,000 people. Chad has four climatic zones. The northernmost Saharan zone averages less than 200 mm (7.9 in) of rainfall annually. The sparse human population is largely nomadic, with some livestock, mostly small ruminants and camels. The central Sahelian zone receives between 200 and 700 mm rainfall and has vegetation ranging from grass/shrub steppe to thorny, open savanna. The southern zone, often referred to as the Sudan zone, receives between 700 and 1,000 mm, with woodland savanna and deciduous forests for vegetation. Rainfall in the Guinea zone, located in Chad's southwestern tip, ranges between 1,000 and 1,200 mm. The country's topography is generally flat, with the elevation gradually rising as one moves north and east away from Lake Chad. The highest point in Chad is Emi Koussi, a mountain that rises 3,100 m (10,171 ft) in the northern Tibesti Mountains. The Ennedi Plateau and the Ouaddaï highlands in the east complete the image of a gradually sloping basin, which descends towards Lake Chad. There are also central highlands in the Guera region rising to 1,500 m (4,921 ft). Lake Chad is the second largest lake in west Africa and is one of the most important wetlands on the continent. Home to 120 species of fish and at least that many species of birds, the lake has shrunk dramatically in the last four decades due to increased water usage from an expanding population and low rainfall. Bordered by Chad, Niger, Nigeria, and Cameroon, Lake Chad currently covers only 1350 square kilometers, down from 25,000 square kilometers in 1963. The Chari and Logone Rivers, both of which originate in the Central African Republic and flow northward, provide most of the surface water entering Lake Chad. Chad is also next to Niger. | 2001-04-23T17:45:30Z | 2023-08-09T16:00:36Z | [
"Template:Use dmy dates",
"Template:Convert",
"Template:Reflist",
"Template:Chad topics",
"Template:Cite web",
"Template:Cite gvp",
"Template:CIA World Factbook",
"Template:Coord",
"Template:Short description",
"Template:More footnotes needed",
"Template:Further",
"Template:Unreferenced section",
"Template:Cite journal",
"Template:Geography of Africa",
"Template:See also",
"Template:Weather box",
"Template:Main",
"Template:Citation-attribution",
"Template:Cite book",
"Template:Africa topic"
] | https://en.wikipedia.org/wiki/Geography_of_Chad |
5,331 | Demographics of Chad | The people of Chad speak more than 100 languages and divide themselves into many ethnic groups. However, language and ethnicity are not the same. Moreover, neither element can be tied to a particular physical type.
Although the possession of a common language shows that its speakers have lived together and have a common history, peoples also change languages. This is particularly so in Chad, where the openness of the terrain, marginal rainfall, frequent drought and famine, and low population densities have encouraged physical and linguistic mobility. Slave raids among non-Muslim peoples, internal slave trade, and exports of captives northward from the ninth to the twentieth centuries also have resulted in language changes.
Anthropologists view ethnicity as being more than genetics. Like language, ethnicity implies a shared heritage, partly economic, where people of the same ethnic group may share a livelihood, and partly social, taking the form of shared ways of doing things and organizing relations among individuals and groups. Ethnicity also involves a cultural component made up of shared values and a common worldview. Like language, ethnicity is not immutable. Shared ways of doing things change over time and alter a group's perception of its own identity.
Not only do the social aspects of ethnic identity change but the biological composition (or gene pool) also may change over time. Although most ethnic groups emphasize intermarriage, people are often proscribed from seeking partners among close relatives—a prohibition that promotes biological variation. In all groups, the departure of some individuals or groups and the integration of others also changes the biological component.
The Chadian government has avoided official recognition of ethnicity. With the exception of a few surveys conducted shortly after independence, little data were available on this important aspect of Chadian society. Nonetheless, ethnic identity was a significant component of life in Chad.
The peoples of Chad carry significant ancestry from Eastern, Central, Western, and Northern Africa.
Chad's languages fall into ten major groups, each of which belongs to either the Nilo-Saharan, Afro-Asiatic, or Niger–Congo language family. These represent three of the four major language families in Africa; only the Khoisan languages of southern Africa are not represented. The presence of such different languages suggests that the Lake Chad Basin may have been an important point of dispersal in ancient times.
According to the 2022 revision of the World Population Prospects, the total population was 17,179,740 in 2021, compared to only 2,429,000 in 1950. The proportion of children below the age of 15 in 2010 was 45.4%, 51.7% was between 15 and 65 years of age, while 2.9% was 65 years or older. The country is projected to have a population of 34 million people in 2050 and 61 million people in 2100.
Population by Sex and Age Group (Census 20.V.2009):
Registration of vital events in Chad is not complete. The Population Department of the United Nations prepared the following estimates.
Source: UN DESA, World Population Prospects, 2022
Total Fertility Rate (TFR) (Wanted Fertility Rate) and Crude Birth Rate (CBR):
Fertility data as of 2014-2015 (DHS Program):
The separation of religion from social structure in Chad represents a false dichotomy, for they are perceived as two sides of the same coin. Three religious traditions coexist in Chad- classical African religions, Islam, and Christianity. None is monolithic. The first tradition includes a variety of ancestor and/or place-oriented religions whose expression is highly specific. Islam, although characterized by an orthodox set of beliefs and observances, also is expressed in diverse ways. Christianity arrived in Chad much more recently with the arrival of Europeans. Its followers are divided into Roman Catholics and Protestants (including several denominations); as with Chadian Islam, Chadian Christianity retains aspects of pre-Christian religious belief.
The number of followers of each tradition in Chad is unknown. Estimates made in 1962 suggested that 35 percent of Chadians practiced classical African religions, 55 percent were Muslims, and 10 percent were Christians. In the 1970s and 1980s, this distribution undoubtedly changed. Observers report that Islam has spread among the Hajerai and among other non-Muslim populations of the Saharan and sahelian zones. However, the proportion of Muslims may have fallen because the birthrate among the followers of traditional religions and Christians in southern Chad is thought to be higher than that among Muslims. In addition, the upheavals since the mid-1970s have resulted in the departure of some missionaries; whether or not Chadian Christians have been numerous enough and organized enough to have attracted more converts since that time is unknown.
Demographic statistics according to the World Population Review in 2022.
The following demographic statistics are from the CIA World Factbook.
Muslim 52.1%, Protestant 23.9%, Roman Catholic 20%, animist 0.3%, other Christian 0.2%, none 2.8%, unspecified 0.7% (2014-15 est.)
note: on 21 March 2022, the US Centers for Disease Control and Prevention (CDC) issued a Travel Alert for polio in Africa; Chad is currently considered a high risk to travelers for circulating vaccine-derived polioviruses (cVDPV); vaccine-derived poliovirus (VDPV) is a strain of the weakened poliovirus that was initially included in oral polio vaccine (OPV) and that has changed over time and behaves more like the wild or naturally occurring virus; this means it can be spread more easily to people who are unvaccinated against polio and who come in contact with the stool or respiratory secretions, such as from a sneeze, of an “infected” person who received oral polio vaccine; the CDC recommends that before any international travel, anyone unvaccinated, incompletely vaccinated, or with an unknown polio vaccination status should complete the routine polio vaccine series; before travel to any high-risk destination, CDC recommends that adults who previously completed the full, routine polio vaccine series receive a single, lifetime booster dose of polio vaccine
The peoples of Chad carry significant ancestry from Eastern, Central, Western, and Northern Africa.
About 5,000 French citizens live in Chad.
Attribution: This article incorporates public domain material from The World Factbook (2023 ed.). CIA. (Archived 2006 edition) | [
{
"paragraph_id": 0,
"text": "The people of Chad speak more than 100 languages and divide themselves into many ethnic groups. However, language and ethnicity are not the same. Moreover, neither element can be tied to a particular physical type.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Although the possession of a common language shows that its speakers have lived together and have a common history, peoples also change languages. This is particularly so in Chad, where the openness of the terrain, marginal rainfall, frequent drought and famine, and low population densities have encouraged physical and linguistic mobility. Slave raids among non-Muslim peoples, internal slave trade, and exports of captives northward from the ninth to the twentieth centuries also have resulted in language changes.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Anthropologists view ethnicity as being more than genetics. Like language, ethnicity implies a shared heritage, partly economic, where people of the same ethnic group may share a livelihood, and partly social, taking the form of shared ways of doing things and organizing relations among individuals and groups. Ethnicity also involves a cultural component made up of shared values and a common worldview. Like language, ethnicity is not immutable. Shared ways of doing things change over time and alter a group's perception of its own identity.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Not only do the social aspects of ethnic identity change but the biological composition (or gene pool) also may change over time. Although most ethnic groups emphasize intermarriage, people are often proscribed from seeking partners among close relatives—a prohibition that promotes biological variation. In all groups, the departure of some individuals or groups and the integration of others also changes the biological component.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The Chadian government has avoided official recognition of ethnicity. With the exception of a few surveys conducted shortly after independence, little data were available on this important aspect of Chadian society. Nonetheless, ethnic identity was a significant component of life in Chad.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The peoples of Chad carry significant ancestry from Eastern, Central, Western, and Northern Africa.",
"title": ""
},
{
"paragraph_id": 6,
"text": "Chad's languages fall into ten major groups, each of which belongs to either the Nilo-Saharan, Afro-Asiatic, or Niger–Congo language family. These represent three of the four major language families in Africa; only the Khoisan languages of southern Africa are not represented. The presence of such different languages suggests that the Lake Chad Basin may have been an important point of dispersal in ancient times.",
"title": ""
},
{
"paragraph_id": 7,
"text": "According to the 2022 revision of the World Population Prospects the total population was 17,179,740 in 2021, compared to only 2 429 000 in 1950. The proportion of children below the age of 15 in 2010 was 45.4%, 51.7% was between 15 and 65 years of age, while 2.9% was 65 years or the country is projected to have a population of 34 millions peoples in 2050 and 61 millions peoples in 2100 .",
"title": "Population"
},
{
"paragraph_id": 8,
"text": "Population by Sex and Age Group (Census 20.V.2009):",
"title": "Population"
},
{
"paragraph_id": 9,
"text": "Registration of vital events is in Chad not complete. The Population Departement of the United Nations prepared the following estimates.",
"title": "Vital statistics"
},
{
"paragraph_id": 10,
"text": "Source: UN DESA, World Population Prospects, 2022",
"title": "Vital statistics"
},
{
"paragraph_id": 11,
"text": "Total Fertility Rate (TFR) (Wanted Fertility Rate) and Crude Birth Rate (CBR):",
"title": "Vital statistics"
},
{
"paragraph_id": 12,
"text": "Fertility data as of 2014-2015 (DHS Program):",
"title": "Vital statistics"
},
{
"paragraph_id": 13,
"text": "The separation of religion from social structure in Chad represents a false dichotomy, for they are perceived as two sides of the same coin. Three religious traditions coexist in Chad- classical African religions, Islam, and Christianity. None is monolithic. The first tradition includes a variety of ancestor and/or place-oriented religions whose expression is highly specific. Islam, although characterized by an orthodox set of beliefs and observances, also is expressed in diverse ways. Christianity arrived in Chad much more recently with the arrival of Europeans. Its followers are divided into Roman Catholics and Protestants (including several denominations); as with Chadian Islam, Chadian Christianity retains aspects of pre-Christian religious belief.",
"title": "Religions"
},
{
"paragraph_id": 14,
"text": "The number of followers of each tradition in Chad is unknown. Estimates made in 1962 suggested that 35 percent of Chadians practiced classical African religions, 55 percent were Muslims, and 10 percent were Christians. In the 1970s and 1980s, this distribution undoubtedly changed. Observers report that Islam has spread among the Hajerai and among other non-Muslim populations of the Saharan and sahelian zones. However, the proportion of Muslims may have fallen because the birthrate among the followers of traditional religions and Christians in southern Chad is thought to be higher than that among Muslims. In addition, the upheavals since the mid-1970s have resulted in the departure of some missionaries; whether or not Chadian Christians have been numerous enough and organized enough to have attracted more converts since that time is unknown.",
"title": "Religions"
},
{
"paragraph_id": 15,
"text": "Demographic statistics according to the World Population Review in 2022.",
"title": "Other demographic statistics"
},
{
"paragraph_id": 16,
"text": "The following demographic statistics are from the CIA World Factbook.",
"title": "Other demographic statistics"
},
{
"paragraph_id": 17,
"text": "Muslim 52.1%, Protestant 23.9%, Roman Catholic 20%, animist 0.3%, other Christian 0.2%, none 2.8%, unspecified 0.7% (2014-15 est.)",
"title": "Other demographic statistics"
},
{
"paragraph_id": 18,
"text": "note: on 21 March 2022, the US Centers for Disease Control and Prevention (CDC) issued a Travel Alert for polio in Africa; Chad is currently considered a high risk to travelers for circulating vaccine-derived polioviruses (cVDPV); vaccine-derived poliovirus (VDPV) is a strain of the weakened poliovirus that was initially included in oral polio vaccine (OPV) and that has changed over time and behaves more like the wild or naturally occurring virus; this means it can be spread more easily to people who are unvaccinated against polio and who come in contact with the stool or respiratory secretions, such as from a sneeze, of an “infected” person who received oral polio vaccine; the CDC recommends that before any international travel, anyone unvaccinated, incompletely vaccinated, or with an unknown polio vaccination status should complete the routine polio vaccine series; before travel to any high-risk destination, CDC recommends that adults who previously completed the full, routine polio vaccine series receive a single, lifetime booster dose of polio vaccine",
"title": "Other demographic statistics"
},
{
"paragraph_id": 19,
"text": "The peoples of Chad carry significant ancestry from Eastern, Central, Western, and Northern Africa.",
"title": "Other demographic statistics"
},
{
"paragraph_id": 20,
"text": "About 5,000 French citizens live in Chad.",
"title": "Other demographic statistics"
},
{
"paragraph_id": 21,
"text": "",
"title": "Notes"
},
{
"paragraph_id": 22,
"text": "Attribution: This article incorporates public domain material from The World Factbook (2023 ed.). CIA. (Archived 2006 edition)",
"title": "References"
}
] | The people of Chad speak more than 100 languages and divide themselves into many ethnic groups. However, language and ethnicity are not the same. Moreover, neither element can be tied to a particular physical type. Although the possession of a common language shows that its speakers have lived together and have a common history, peoples also change languages. This is particularly so in Chad, where the openness of the terrain, marginal rainfall, frequent drought and famine, and low population densities have encouraged physical and linguistic mobility. Slave raids among non-Muslim peoples, internal slave trade, and exports of captives northward from the ninth to the twentieth centuries also have resulted in language changes. Anthropologists view ethnicity as being more than genetics. Like language, ethnicity implies a shared heritage, partly economic, where people of the same ethnic group may share a livelihood, and partly social, taking the form of shared ways of doing things and organizing relations among individuals and groups. Ethnicity also involves a cultural component made up of shared values and a common worldview. Like language, ethnicity is not immutable. Shared ways of doing things change over time and alter a group's perception of its own identity. Not only do the social aspects of ethnic identity change but the biological composition also may change over time. Although most ethnic groups emphasize intermarriage, people are often proscribed from seeking partners among close relatives—a prohibition that promotes biological variation. In all groups, the departure of some individuals or groups and the integration of others also changes the biological component. The Chadian government has avoided official recognition of ethnicity. With the exception of a few surveys conducted shortly after independence, little data were available on this important aspect of Chadian society. Nonetheless, ethnic identity was a significant component of life in Chad. The peoples of Chad carry significant ancestry from Eastern, Central, Western, and Northern Africa. Chad's languages fall into ten major groups, each of which belongs to either the
Nilo-Saharan, Afro-Asiatic, or Niger–Congo language family. These represent three of the four major language families in Africa; only the Khoisan languages of southern Africa are not represented. The presence of such different languages suggests that the Lake Chad Basin may have been an important point of dispersal in ancient times. | 2002-02-25T15:43:11Z | 2023-12-15T14:22:14Z | [
"Template:Cite web",
"Template:Cite journal",
"Template:Infobox place demographics",
"Template:Specify",
"Template:UN Population",
"Template:Reflist",
"Template:Main",
"Template:Citation",
"Template:Ethnic groups in Chad",
"Template:Africa in topic",
"Template:Efn-lr",
"Template:Commons category",
"Template:Source-attribution",
"Template:CIA World Factbook",
"Template:More citations needed",
"Template:Notelist-lr",
"Template:GraphChart",
"Template:Cite UN WPP"
] | https://en.wikipedia.org/wiki/Demographics_of_Chad |
5,332 | Politics of Chad | The Politics of Chad take place in a framework of a presidential republic, whereby the President of Chad is both head of state and head of government. Executive power is exercised by the government. Legislative power is vested in both the government and parliament. Chad is one of the most corrupt countries in the world.
In May 2013, security forces in Chad foiled a coup against President Idriss Déby that had been in preparation for several months. In April 2021, President Déby was injured by the rebel group Front Pour l'Alternance et La Concorde au Tchad (FACT). He succumbed to his injuries on April 20, 2021. He was succeeded as president by his son Mahamat Déby in April 2021. This resulted in both the National Assembly and Chadian Government being dissolved and replaced with a Transitional Military Council.
The National Transitional Council will oversee the transition to democracy.
Chad's executive branch is headed by the President and dominates the Chadian political system. Following the military overthrow of Hissène Habré in December 1990, Idriss Déby won the presidential elections in 1996 and 2001. The constitutional basis for the government is the 1996 constitution, under which the president was limited to two terms of office until Déby had that provision repealed in 2005. The president has the power to appoint the Council of State (or cabinet), and exercises considerable influence over appointments of judges, generals, provincial officials and heads of Chad's parastatal firms. In cases of grave and immediate threat, the president, in consultation with the National Assembly President and Council of State, may declare a state of emergency. Most of the key advisors for former president Déby were members of the Zaghawa clan, although some southern and opposition personalities were represented in his government.
According to the 1996 constitution, the National Assembly deputies are elected by universal suffrage for 4-year terms. The Assembly holds regular sessions twice a year, starting in March and October, and can hold special sessions as necessary and called by the prime minister. Deputies elect a president of the National Assembly every 2 years. Assembly deputies or members of the executive branch may introduce legislation; once passed by the Assembly, the president must take action to either sign or reject the law within 15 days. The National Assembly must approve the prime minister's plan of government and may force the prime minister to resign through a majority vote of no-confidence. However, if the National Assembly rejects the executive branch's program twice in one year, the president may disband the Assembly and call for new legislative elections. In practice, the president exercises considerable influence over the National Assembly through the MPS party structure.
Despite the constitution's guarantee of judicial independence from the executive branch, the president names most key judicial officials. The Supreme Court is made up of a chief justice, named by the president, and 15 councilors chosen by the president and National Assembly; appointments are for life. The Constitutional Council, with nine judges elected to 9-year terms, has the power to review all legislation, treaties and international agreements prior to their adoption. The constitution recognizes customary and traditional law in locales where it is recognized and to the extent it does not interfere with public order or constitutional guarantees of equality for all citizens.
ACCT, ACP, AfDB, AU, BDEAC, CEMAC, FAO, FZ, G-77, IBRD, ICAO, ICCt, ICFTU, ICRM, IDA, IDB, IFAD, IFC, IFRCS, ILO, IMF, Interpol, IOC, ITU, MIGA, NAM, OIC, ONUB, OPCW, UN, UNCTAD, UNESCO, UNIDO, UNOCI, UPU, WCL, WHO, WIPO, WMO, WToO, WTrO
On 20 April 2021, following the death of longtime Chad President Idriss Déby, the Military of Chad released a statement confirming that both the Government of Chad and the nation's National Assembly had been dissolved and that a Transitional Military Council led by Déby's son Mahamat would lead the nation for at least 18 months.
Following protests on 14 May 2022, the authorities in Chad detained several members of civil society organizations. The protests were organized in N’Djamena, and other cities across the country by Chadian civil society organizations, united under the coalition Wakit Tamma. | [
{
"paragraph_id": 0,
"text": "The Politics of Chad take place in a framework of a presidential republic, whereby the President of Chad is both head of state and head of government. Executive power is exercised by the government. Legislative power is vested in both the government and parliament. Chad is one of the most corrupt countries in the world.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In May 2013, security forces in Chad foiled a coup against the President Idriss Deby that had been in preparation for several months. In April 2021, President Déby was injured by the rebel group Front Pour l'Alternance et La Concorde au Tchad (FACT). He succumbed to his injuries on April 20, 2021. His presidency was taken by his family member Mahamat Déby in April of 2021. This resulted in both the National Assembly and Chadian Government being dissolved and replaced with a Transitional Military Council.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The National Transitional Council will oversee the transition to democracy.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Chad's executive branch is headed by the President and dominates the Chadian political system. Following the military overthrow of Hissène Habré in December 1990, Idriss Déby won the presidential elections in 1996 and 2001. The constitutional basis for the government is the 1996 constitution, under which the president was limited to two terms of office until Déby had that provision repealed in 2005. The president has the power to appoint the Council of State (or cabinet), and exercises considerable influence over appointments of judges, generals, provincial officials and heads of Chad's parastatal firms. In cases of grave and immediate threat, the president, in consultation with the National Assembly President and Council of State, may declare a state of emergency. Most of the key advisors for former president Déby were members of the Zaghawa clan, although some southern and opposition personalities were represented in his government.",
"title": "Executive branch"
},
{
"paragraph_id": 4,
"text": "According to the 1996 constitution, the National Assembly deputies are elected by universal suffrage for 4-year terms. The Assembly holds regular sessions twice a year, starting in March and October, and can hold special sessions as necessary and called by the prime minister. Deputies elect a president of the National Assembly every 2 years. Assembly deputies or members of the executive branch may introduce legislation; once passed by the Assembly, the president must take action to either sign or reject the law within 15 days. The National Assembly must approve the prime minister's plan of government and may force the prime minister to resign through a majority vote of no-confidence. However, if the National Assembly rejects the executive branch's program twice in one year, the president may disband the Assembly and call for new legislative elections. In practice, the president exercises considerable influence over the National Assembly through the MPS party structure.",
"title": "Legislative branch"
},
{
"paragraph_id": 5,
"text": "Despite the constitution's guarantee of judicial independence from the executive branch, the president names most key judicial officials. The Supreme Court is made up of a chief justice, named by the president, and 15 councilors chosen by the president and National Assembly; appointments are for life. The Constitutional Council, with nine judges elected to 9-year terms, has the power to review all legislation, treaties and international agreements prior to their adoption. The constitution recognizes customary and traditional law in locales where it is recognized and to the extent it does not interfere with public order or constitutional guarantees of equality for all citizens.",
"title": "Judicial branch"
},
{
"paragraph_id": 6,
"text": "ACCT, ACP, AfDB, AU, BDEAC, CEMAC, FAO, FZ, G-77, IBRD, ICAO, ICCt, ICFTU, ICRM, IDA, IDB, IFAD, IFC, IFRCS, ILO, IMF, Interpol, IOC, ITU, MIGA, NAM, OIC, ONUB, OPCW, UN, UNCTAD, UNESCO, UNIDO, UNOCI, UPU, WCL, WHO, WIPO, WMO, WToO, WTrO",
"title": "International organization participation"
},
{
"paragraph_id": 7,
"text": "On 20 April 2021, following the death of longtime Chad President Idriss Déby, the Military of Chad released a statement confirming that both the Government of Chad and the nation's National Assembly had been dissolved and that a Transitional Military Council led by Déby's son Mahamat would lead the nation for at least 18 months.",
"title": "2021 government shakeup"
},
{
"paragraph_id": 8,
"text": "Following protests on 14 May 2022, the authorities in Chad detained several members of civil society organizations. The protests were organized in N’Djamena, and other cities across the country by Chadian civil society organizations, united under the coalition Wakit Tamma.",
"title": "2021 government shakeup"
}
] | The Politics of Chad take place in a framework of a presidential republic, whereby the President of Chad is both head of state and head of government. Executive power is exercised by the government. Legislative power is vested in both the government and parliament. Chad is one of the most corrupt countries in the world. In May 2013, security forces in Chad foiled a coup against President Idriss Déby that had been in preparation for several months. In April 2021, President Déby was injured by the rebel group Front Pour l'Alternance et La Concorde au Tchad (FACT). He succumbed to his injuries on April 20, 2021. He was succeeded as president by his son Mahamat Déby in April 2021. This resulted in both the National Assembly and Chadian Government being dissolved and replaced with a Transitional Military Council. The National Transitional Council will oversee the transition to democracy. | 2002-02-25T15:43:11Z | 2023-12-17T21:09:08Z | [
"Template:Office-table",
"Template:Reflist",
"Template:Cite web",
"Template:Cite news",
"Template:Chad topics",
"Template:Africa in topic",
"Template:More citations needed",
"Template:Politics of Chad",
"Template:Citation",
"Template:Elect",
"Template:Main"
] | https://en.wikipedia.org/wiki/Politics_of_Chad |
5,333 | Economy of Chad | The economy of Chad suffers from the landlocked country's geographic remoteness, drought, lack of infrastructure, and political turmoil. About 85% of the population depends on agriculture, including the herding of livestock. Of Africa's Francophone countries, Chad benefited least from the 50% devaluation of their currencies in January 1994. Financial aid from the World Bank, the African Development Bank, and other sources is directed largely at the improvement of agriculture, especially livestock production. Because of a lack of financing, the development of oil fields near Doba, originally due to finish in 2000, was delayed until 2003. It was finally developed and is now operated by ExxonMobil. In terms of gross domestic product, Chad ranks 147th globally with $11.051 billion as of 2018.
Chad produced in 2018:
In addition to smaller productions of other agricultural products.
The following table shows the main economic indicators in 1980–2017.
GDP: purchasing power parity – $28.62 billion (2017 est.)
GDP – real growth rate: -3.1% (2017 est.)
GDP – per capita: $2,300 (2017 est.)
Gross national saving: 15.5% of GDP (2017 est.)
GDP – composition by sector: agriculture: 52.3% (2017 est.) industry: 14.7% (2017 est.) services: 33.1% (2017 est.)
Population below poverty line: 46.7% (2011 est.)
Distribution of family income – Gini index: 43.3 (2011 est.)
Inflation rate (consumer prices): -0.9% (2017 est.)
Labor force: 5.654 million (2017 est.)
Labor force – by occupation: agriculture 80%, industry and services 20% (2006 est.)
Budget: revenues: 1.337 billion (2017 est.) expenditures: 1.481 billion (2017 est.)
Budget surplus (+) or deficit (-): -1.5% (of GDP) (2017 est.)
Public debt: 52.5% of GDP (2017 est.)
Industries: oil, cotton textiles, brewing, natron (sodium carbonate), soap, cigarettes, construction materials
Industrial production growth rate: -4% (2017 est.)
electrification: total population: 4% (2013)
electrification: urban areas: 14% (2013)
electrification: rural areas: 1% (2013)
Electricity – production: 224.3 million kWh (2016 est.)
Electricity – production by source: fossil fuel: 98% hydro: 0% nuclear: 0% other renewable: 3% (2017)
Electricity – consumption: 208.6 million kWh (2016 est.)
Electricity – exports: 0 kWh (2016 est.)
Electricity – imports: 0 kWh (2016 est.)
Agriculture – products: cotton, sorghum, millet, peanuts, sesame, corn, rice, potatoes, onions, cassava (manioc, tapioca), cattle, sheep, goats, camels
Exports: $2.464 billion (2017 est.)
Exports – commodities: oil, livestock, cotton, sesame, gum arabic, shea butter
Exports – partners: US 38.7%, China 16.6%, Netherlands 15.7%, UAE 12.2%, India 6.3% (2017)
Imports: $2.16 billion (2017 est.)
Imports – commodities: machinery and transportation equipment, industrial goods, foodstuffs, textiles
Imports – partners: China 19.9%, Cameroon 17.2%, France 17%, US 5.4%, India 4.9%, Senegal 4.5% (2017)
Debt – external: $1.724 billion (31 December 2017 est.)
Reserves of foreign exchange and gold: $22.9 million (31 December 2017 est.) | [
{
"paragraph_id": 0,
"text": "The economy of Chad suffers from the landlocked country's geographic remoteness, drought, lack of infrastructure, and political turmoil. About 85% of the population depends on agriculture, including the herding of livestock. Of Africa's Francophone countries, Chad benefited least from the 50% devaluation of their currencies in January 1994. Financial aid from the World Bank, the African Development Bank, and other sources is directed largely at the improvement of agriculture, especially livestock production. Because of lack of financing, the development of oil fields near Doba, originally due to finish in 2000, was delayed until 2003. It was finally developed and is now operated by ExxonMobil. In terms of gross domestic product, Chad ranks 147th globally with $11.051 billion dollars as of 2018.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Chad produced in 2018:",
"title": "Agriculture"
},
{
"paragraph_id": 2,
"text": "In addition to smaller productions of other agricultural products.",
"title": "Agriculture"
},
{
"paragraph_id": 3,
"text": "The following table shows the main economic indicators in 1980–2017.",
"title": "Macro-economic trend"
},
{
"paragraph_id": 4,
"text": "GDP: purchasing power parity – $28.62 billion (2017 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 5,
"text": "GDP – real growth rate: -3.1% (2017 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 6,
"text": "GDP – per capita: $2,300 (2017 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 7,
"text": "Gross national saving: 15.5% of GDP (2017 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 8,
"text": "GDP – composition by sector: agriculture: 52.3% (2017 est.) industry: 14.7% (2017 est.) services: 33.1% (2017 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 9,
"text": "Population below poverty line:: 46.7% (2011 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 10,
"text": "Distribution of family income – Gini index: 43.3 (2011 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 11,
"text": "Inflation rate (consumer prices): -0.9% (2017 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 12,
"text": "Labor force: 5.654 million (2017 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 13,
"text": "Labor force – by occupation: agriculture 80%, industry and services 20% (2006 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 14,
"text": "Budget: revenues: 1.337 billion (2017 est.) expenditures: 1.481 billion (2017 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 15,
"text": "Budget surplus (+) or deficit (-): -1.5% (of GDP) (2017 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 16,
"text": "Public debt: 52.5% of GDP (2017 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 17,
"text": "Industries: oil, cotton textiles, brewing, natron (sodium carbonate), soap, cigarettes, construction materials",
"title": "Other statistics"
},
{
"paragraph_id": 18,
"text": "Industrial production growth rate: -4% (2017 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 19,
"text": "electrification: total population: 4% (2013)",
"title": "Other statistics"
},
{
"paragraph_id": 20,
"text": "electrification: urban areas: 14% (2013)",
"title": "Other statistics"
},
{
"paragraph_id": 21,
"text": "electrification: rural areas: 1% (2013)",
"title": "Other statistics"
},
{
"paragraph_id": 22,
"text": "Electricity – production: 224.3 million kWh (2016 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 23,
"text": "Electricity – production by source: fossil fuel: 98% hydro: 0% nuclear: 0% other renewable: 3% (2017)",
"title": "Other statistics"
},
{
"paragraph_id": 24,
"text": "Electricity – consumption: 208.6 million kWh (2016 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 25,
"text": "Electricity – exports: 0 kWh (2016 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 26,
"text": "Electricity – imports: 0 kWh (2016 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 27,
"text": "Agriculture – products: cotton, sorghum, millet, peanuts, sesame, corn, rice, potatoes, onions, cassava (manioc, tapioca), cattle, sheep, goats, camels",
"title": "Other statistics"
},
{
"paragraph_id": 28,
"text": "Exports: $2.464 billion (2017 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 29,
"text": "Exports – commodities: oil, livestock, cotton, sesame, gum arabic, shea butter",
"title": "Other statistics"
},
{
"paragraph_id": 30,
"text": "Exports – partners: US 38.7%, China 16.6%, Netherlands 15.7%, UAE 12.2%, India 6.3% (2017)",
"title": "Other statistics"
},
{
"paragraph_id": 31,
"text": "Imports: $2.16 billion (2017 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 32,
"text": "Imports – commodities: machinery and transportation equipment, industrial goods, foodstuffs, textiles",
"title": "Other statistics"
},
{
"paragraph_id": 33,
"text": "Imports – partners: China 19.9%, Cameroon 17.2%, France 17%, US 5.4%, India 4.9%, Senegal 4.5% (2017)",
"title": "Other statistics"
},
{
"paragraph_id": 34,
"text": "Debt – external: $1.724 billion (31 December 2017 est.)",
"title": "Other statistics"
},
{
"paragraph_id": 35,
"text": "Reserves of foreign exchange and gold: $22.9 million (31 December 2017 est.)",
"title": "Other statistics"
}
] | The economy of Chad suffers from the landlocked country's geographic remoteness, drought, lack of infrastructure, and political turmoil. About 85% of the population depends on agriculture, including the herding of livestock. Of Africa's Francophone countries, Chad benefited least from the 50% devaluation of their currencies in January 1994. Financial aid from the World Bank, the African Development Bank, and other sources is directed largely at the improvement of agriculture, especially livestock production. Because of a lack of financing, the development of oil fields near Doba, originally due to finish in 2000, was delayed until 2003. It was finally developed and is now operated by ExxonMobil. In terms of gross domestic product, Chad ranks 147th globally with $11.051 billion as of 2018. | 2002-06-15T12:27:55Z | 2023-12-18T17:39:04Z | [
"Template:Reflist",
"Template:Cite web",
"Template:Chad topics",
"Template:Africa in topic",
"Template:Curlie",
"Template:Use dmy dates",
"Template:Short description",
"Template:Multiple",
"Template:Infobox economy",
"Template:Main",
"Template:CIA World Factbook"
] | https://en.wikipedia.org/wiki/Economy_of_Chad |
5,334 | Telecommunications in Chad | Telecommunications in Chad include radio, television, fixed and mobile telephones, and the Internet.
Radio stations:
Radios: 1.7 million (1997).
Television stations:
Television sets: 10,000 (1997).
Radio is the most important medium of mass communication. State-run Radiodiffusion Nationale Tchadienne operates national and regional radio stations. Around a dozen private radio stations are on the air, despite high licensing fees, some run by religious or other non-profit groups. The BBC World Service (FM 90.6) and Radio France Internationale (RFI) broadcast in the capital, N'Djamena. The only television station, Tele Tchad, is state-owned.
State control of many broadcasting outlets allows few dissenting views. Journalists are harassed and attacked. On rare occasions journalists are warned in writing by the High Council for Communication to produce more "responsible" journalism or face fines. Some journalists and publishers practice self-censorship. On 10 October 2012, the High Council on Communications issued a formal warning to La Voix du Paysan, claiming that the station's live broadcast on 30 September incited the public to "insurrection against the government." The station had broadcast a sermon by a bishop who criticized the government for allegedly failing to use oil wealth to benefit the region.
Calling code: +235
International call prefix: 00
Main lines:
Mobile cellular:
Telephone system: inadequate system of radiotelephone communication stations with high costs and low telephone density; fixed-line connections for less than 1 per 100 persons coupled with mobile-cellular subscribership base of only about 35 per 100 persons (2011).
Satellite earth stations: 1 Intelsat (Atlantic Ocean) (2011).
Top-level domain: .td
Internet users:
Fixed broadband: 18,000 subscriptions, 132nd in the world; 0.2% of the population, 161st in the world (2012).
Wireless broadband: Unknown (2012).
Internet hosts:
IPv4: 4,096 addresses allocated, less than 0.05% of the world total, 0.4 addresses per 1000 people (2012).
There are no government restrictions on access to the Internet or credible reports that the government monitors e-mail or Internet chat rooms.
The constitution provides for freedom of opinion, expression, and press, but the government does not always respect these rights. Private individuals are generally free to criticize the government without reprisal, but reporters and publishers risk harassment from authorities when publishing critical articles. The 2010 media law abolished prison sentences for defamation and insult, but prohibits "inciting racial, ethnic, or religious hatred," which is punishable by one to two years in prison and a fine of one to three million CFA francs ($2,000 to $6,000). | [
{
"paragraph_id": 0,
"text": "Telecommunications in Chad include radio, television, fixed and mobile telephones, and the Internet.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Radio stations:",
"title": "Radio and television"
},
{
"paragraph_id": 2,
"text": "Radios: 1.7 million (1997).",
"title": "Radio and television"
},
{
"paragraph_id": 3,
"text": "Television stations:",
"title": "Radio and television"
},
{
"paragraph_id": 4,
"text": "Television sets: 10,000 (1997).",
"title": "Radio and television"
},
{
"paragraph_id": 5,
"text": "Radio is the most important medium of mass communication. State-run Radiodiffusion Nationale Tchadienne operates national and regional radio stations. Around a dozen private radio stations are on the air, despite high licensing fees, some run by religious or other non-profit groups. The BBC World Service (FM 90.6) and Radio France Internationale (RFI) broadcast in the capital, N'Djamena. The only television station, Tele Tchad, is state-owned.",
"title": "Radio and television"
},
{
"paragraph_id": 6,
"text": "State control of many broadcasting outlets allows few dissenting views. Journalists are harassed and attacked. On rare occasions journalists are warned in writing by the High Council for Communication to produce more \"responsible\" journalism or face fines. Some journalists and publishers practice self-censorship. On 10 October 2012, the High Council on Communications issued a formal warning to La Voix du Paysan, claiming that the station's live broadcast on 30 September incited the public to \"insurrection against the government.\" The station had broadcast a sermon by a bishop who criticized the government for allegedly failing to use oil wealth to benefit the region.",
"title": "Radio and television"
},
{
"paragraph_id": 7,
"text": "Calling code: +235",
"title": "Telephones"
},
{
"paragraph_id": 8,
"text": "International call prefix: 00",
"title": "Telephones"
},
{
"paragraph_id": 9,
"text": "Main lines:",
"title": "Telephones"
},
{
"paragraph_id": 10,
"text": "Mobile cellular:",
"title": "Telephones"
},
{
"paragraph_id": 11,
"text": "Telephone system: inadequate system of radiotelephone communication stations with high costs and low telephone density; fixed-line connections for less than 1 per 100 persons coupled with mobile-cellular subscribership base of only about 35 per 100 persons (2011).",
"title": "Telephones"
},
{
"paragraph_id": 12,
"text": "Satellite earth stations: 1 Intelsat (Atlantic Ocean) (2011).",
"title": "Telephones"
},
{
"paragraph_id": 13,
"text": "Top-level domain: .td",
"title": "Internet"
},
{
"paragraph_id": 14,
"text": "Internet users:",
"title": "Internet"
},
{
"paragraph_id": 15,
"text": "Fixed broadband: 18,000 subscriptions, 132nd in the world; 0.2% of the population, 161st in the world (2012).",
"title": "Internet"
},
{
"paragraph_id": 16,
"text": "Wireless broadband: Unknown (2012).",
"title": "Internet"
},
{
"paragraph_id": 17,
"text": "Internet hosts:",
"title": "Internet"
},
{
"paragraph_id": 18,
"text": "IPv4: 4,096 addresses allocated, less than 0.05% of the world total, 0.4 addresses per 1000 people (2012).",
"title": "Internet"
},
{
"paragraph_id": 19,
"text": "There are no government restrictions on access to the Internet or credible reports that the government monitors e-mail or Internet chat rooms.",
"title": "Internet"
},
{
"paragraph_id": 20,
"text": "The constitution provides for freedom of opinion, expression, and press, but the government does not always respect these rights. Private individuals are generally free to criticize the government without reprisal, but reporters and publishers risk harassment from authorities when publishing critical articles. The 2010 media law abolished prison sentences for defamation and insult, but prohibits \"inciting racial, ethnic, or religious hatred,\" which is punishable by one to two years in prison and a fine of one to three million CFA francs ($2,000 to $6,000).",
"title": "Internet"
}
] | Telecommunications in Chad include radio, television, fixed and mobile telephones, and the Internet. | 2023-01-31T17:14:34Z | [
"Template:See also",
"Template:CIA World Factbook",
"Template:Reflist",
"Template:Economy of Chad",
"Template:Chad topics",
"Template:Update after",
"Template:US DOS",
"Template:Webarchive",
"Template:-",
"Template:Africa topic",
"Template:Telecommunications",
"Template:Internet censorship by country",
"Template:Short description"
] | https://en.wikipedia.org/wiki/Telecommunications_in_Chad |
|
5,335 | Transport in Chad | Transport infrastructure within Chad is generally poor, especially in the north and east of the country. River transport is limited to the south-west corner. As of 2011 Chad had no railways, though two lines are planned from the capital to the Sudanese and Cameroonian borders. Roads are mostly unpaved and often impassable during the wet season, especially in the southern half of the country. In the north, roads are merely tracks across the desert and land mines continue to present a danger. Draft animals (horses, donkeys and camels) remain important in much of the country.
Fuel supplies can be erratic, even in the south-west of the country, and are expensive. Elsewhere they are practically non-existent.
As of 2011 Chad had no railways. Two lines were planned to Sudan and Cameroon from the capital, with construction expected to start in 2012. No operative lines were listed as of 2019.
In 2021, an ADB study was funded for that rail link from Cameroon to Chad.
As at 2018 Chad had a total of 44,000 km of roads of which approximately 260 km are paved. Some, but not all of the roads in the capital N'Djamena are paved. Outside of N'Djamena there is one paved road which runs from Massakory in the north, through N'Djamena and then south, through the cities of Guélengdeng, Bongor, Kélo and Moundou, with a short spur leading in the direction of Kousseri, Cameroon, near N'Djamena. Expansion of the road towards Cameroon through Pala and Léré is reportedly in the preparatory stages.
As at 2012, Chari and Logone Rivers were navigable only in wet season (2002). Both flow northwards, from the south of Chad, into Lake Chad.
Since 2003, a 1,070 km pipeline has been used to export crude oil from the oil fields around Doba to offshore oil-loading facilities on Cameroon's Atlantic coast at Kribi. The CIA World Factbook however cites only 582 km of pipeline in Chad itself as at 2013.
None (landlocked).
Chad's main routes to the sea are:
In colonial times, the main access was by road to Bangui, in the Central African Republic, then by river boat to Brazzaville, and onwards by rail from Brazzaville to Pointe Noire, on Congo's Atlantic coast. This route is now little used.
There is also a route across Sudan, to the Red Sea, but very little trade goes this way.
Links with Niger, north of Lake Chad, are practically nonexistent; it is easier to reach Niger via Cameroon and Nigeria.
As of 2012 Chad had an estimated 58 airports, only 9 of which had paved runways. In 2015, scheduled airlines in Chad carried approximately 28,332 passengers.
Statistics on airports with paved runways as of 2017:
List of airports with paved runways:
Statistics on airports with unpaved runways as of 2013:
SAGA Airline of Chad - see http://www.airsaga.com
The Ministry is represented at the regional level by the Regional Delegations, which have jurisdiction over a part of the National Territory as defined by Decree No. 003 / PCE / CTPT / 91. Their organization and responsibilities are defined by Order No. 006 / MTPT / SE / DG / 92.
The Regional Delegations are:
Each Regional Delegation is organized into regional services, namely the Regional Roads Service, the Regional Transport Service, and the Civilian Buildings Regional Service; as needed, other regional services may be established in one or more Delegations.
This article incorporates public domain material from The World Factbook. CIA. | [
{
"paragraph_id": 0,
"text": "Transport infrastructure within Chad is generally poor, especially in the north and east of the country. River transport is limited to the south-west corner. As of 2011 Chad had no railways though two lines are planned - from the capital to the Sudanese and Cameroonian borders during the wet season, especially in the southern half of the country. In the north, roads are merely tracks across the desert and land mines continue to present a danger. Draft animals (horses, donkeys and camels) remain important in much of the country.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Fuel supplies can be erratic, even in the south-west of the country, and are expensive. Elsewhere they are practically non-existent.",
"title": ""
},
{
"paragraph_id": 2,
"text": "As of 2011 Chad had no railways. Two lines were planned to Sudan and Cameroon from the capital, with construction expected to start in 2012. No operative lines were listed as of 2019.",
"title": "Railways"
},
{
"paragraph_id": 3,
"text": "In 2021, an ADB study was funded for that rail link from Cameroon to Chad.",
"title": "Railways"
},
{
"paragraph_id": 4,
"text": "As at 2018 Chad had a total of 44,000 km of roads of which approximately 260 km are paved. Some, but not all of the roads in the capital N'Djamena are paved. Outside of N'Djamena there is one paved road which runs from Massakory in the north, through N'Djamena and then south, through the cities of Guélengdeng, Bongor, Kélo and Moundou, with a short spur leading in the direction of Kousseri, Cameroon, near N'Djamena. Expansion of the road towards Cameroon through Pala and Léré is reportedly in the preparatory stages.",
"title": "Highways"
},
{
"paragraph_id": 5,
"text": "As at 2012, Chari and Logone Rivers were navigable only in wet season (2002). Both flow northwards, from the south of Chad, into Lake Chad.",
"title": "Waterways"
},
{
"paragraph_id": 6,
"text": "Since 2003, a 1,070 km pipeline has been used to export crude oil from the oil fields around Doba to offshore oil-loading facilities on Cameroon's Atlantic coast at Kribi. The CIA World Factbook however cites only 582 km of pipeline in Chad itself as at 2013.",
"title": "Pipelines"
},
{
"paragraph_id": 7,
"text": "None (landlocked).",
"title": "Seaports and harbors"
},
{
"paragraph_id": 8,
"text": "Chad's main routes to the sea are:",
"title": "Seaports and harbors"
},
{
"paragraph_id": 9,
"text": "In colonial times, the main access was by road to Bangui, in the Central African Republic, then by river boat to Brazzaville, and onwards by rail from Brazzaville to Pointe Noire, on Congo's Atlantic coast. This route is now little used.",
"title": "Seaports and harbors"
},
{
"paragraph_id": 10,
"text": "There is also a route across Sudan, to the Red Sea, but very little trade goes this way.",
"title": "Seaports and harbors"
},
{
"paragraph_id": 11,
"text": "Links with Niger, north of Lake Chad, are practically nonexistent; it is easier to reach Niger via Cameroon and Nigeria.",
"title": "Seaports and harbors"
},
{
"paragraph_id": 12,
"text": "As of 2012 Chad had an estimated 58 airports, only 9 of which had paved runways. In 2015, scheduled airlines in Chad carried approximately 28,332 passengers.",
"title": "Airports"
},
{
"paragraph_id": 13,
"text": "Statistics on airports with paved runways as of 2017:",
"title": "Airports"
},
{
"paragraph_id": 14,
"text": "List of airports with paved runways:",
"title": "Airports"
},
{
"paragraph_id": 15,
"text": "Statistics on airports with unpaved runways as of 2013:",
"title": "Airports"
},
{
"paragraph_id": 16,
"text": "SAGA Airline of Chad - see http://www.airsaga.com",
"title": "Airports"
},
{
"paragraph_id": 17,
"text": "The Ministry is represented at the regional level by the Regional Delegations, which have jurisdiction over a part of the National Territory as defined by Decree No. 003 / PCE / CTPT / 91. Their organization and responsibilities are defined by Order No. 006 / MTPT / SE / DG / 92.",
"title": "Ministry of Transport"
},
{
"paragraph_id": 18,
"text": "The Regional Delegations are:",
"title": "Ministry of Transport"
},
{
"paragraph_id": 19,
"text": "Each Regional Delegation is organized into regional services, namely: the Regional Roads Service, the Regional Transport Service, the Civilian Buildings Regional Service and, as needed, other regional services may be established in one or more Delegations .",
"title": "Ministry of Transport"
},
{
"paragraph_id": 20,
"text": "This article incorporates public domain material from The World Factbook. CIA.",
"title": "External links"
}
] | Transport infrastructure within Chad is generally poor, especially in the north and east of the country. River transport is limited to the south-west corner. As of 2011 Chad had no railways, though two lines are planned from the capital to the Sudanese and Cameroonian borders. Roads are mostly unpaved and often impassable during the wet season, especially in the southern half of the country. In the north, roads are merely tracks across the desert and land mines continue to present a danger. Draft animals remain important in much of the country. Fuel supplies can be erratic, even in the south-west of the country, and are expensive. Elsewhere they are practically non-existent. | 2002-02-25T15:43:11Z | 2023-08-17T07:42:38Z | [
"Template:Citation needed",
"Template:Cite book",
"Template:Use dmy dates",
"Template:Main",
"Template:Chad topics",
"Template:More citations needed",
"Template:See also",
"Template:Reflist",
"Template:CIA World Factbook",
"Template:Africa in topic",
"Template:Dead link",
"Template:Convert",
"Template:Cite web",
"Template:Economy of Chad",
"Template:As of"
] | https://en.wikipedia.org/wiki/Transport_in_Chad |
5,336 | Chad National Army | The Chad National Army (Arabic: الجيش الوطني التشادي, romanized: Al-Jaish al-Watani at-Tshadi; French: Armée nationale tchadienne, ANT) consists of the five Defence and Security Forces listed in Article 185 of the Chadian Constitution that came into effect on 4 May 2018. These are the National Army (including Ground Forces and Air Force), the National Gendarmerie, the National Police, the National and Nomadic Guard (GNNT) and the Judicial Police. Article 188 of the Constitution specifies that National Defence is the responsibility of the Army, Gendarmerie and GNNT, whilst the maintenance of public order and security is the responsibility of the Police, Gendarmerie and GNNT.
From independence through the period of the presidency of Félix Malloum (1975–79), the official national army was known as the Chadian Armed Forces (Forces Armées Tchadiennes—FAT). Composed mainly of soldiers from southern Chad, FAT had its roots in the army recruited by France and had military traditions dating back to World War I. FAT lost its status as the legal state army when Malloum's civil and military administration disintegrated in 1979. Although it remained a distinct military body for several years, FAT was eventually reduced to the status of a regional army representing the south.
After Habré consolidated his authority and assumed the presidency in 1982, his victorious army, the Armed Forces of the North (Forces Armées du Nord—FAN), became the nucleus of a new national army. The force was officially constituted in January 1983, when the various pro-Habré contingents were merged and renamed the Chadian National Armed Forces (Forces Armées Nationales Tchadiennes—FANT).
The Military of Chad was dominated by members of Toubou, Zaghawa, Kanembou, Hadjerai, and Massa ethnic groups during the presidency of Hissène Habré. Later Chadian president Idriss Déby revolted and fled to the Sudan, taking with him many Zaghawa and Hadjerai soldiers in 1989.
Chad's armed forces numbered about 36,000 at the end of the Habré regime, but swelled to an estimated 50,000 in the early days of Déby's rule. With French support, a reorganization of the armed forces was initiated early in 1991 with the goal of reducing its numbers and making its ethnic composition reflective of the country as a whole. Neither of these goals was achieved, and the military is still dominated by the Zaghawa.
In 2004, the government discovered that many of the soldiers it was paying did not exist and that there were only about 19,000 soldiers in the army, as opposed to the 24,000 that had been previously believed. Government crackdowns against the practice are thought to have been a factor in a failed military mutiny in May 2004.
Renewed conflict, in which the Chadian military is involved, came in the form of a civil war against Sudanese-backed rebels. Chad successfully managed to repel many rebel movements, albeit with some losses (see Battle of N'Djamena (2008)). The army used its artillery systems and tanks, but well-equipped insurgents probably managed to destroy over 20 of Chad's 60 T-55 tanks, and probably shot down a Mi-24 Hind gunship, which bombed enemy positions near the border with Sudan. In November 2006 Libya supplied Chad with four Aermacchi SF.260W light attack planes. They were used to strike enemy positions by the Chadian Air Force, but one was shot down by rebels. During the 2008 battle of N'Djamena, gunships and tanks were put to good use, pushing armed militia forces back from the Presidential palace. The battle impacted the highest levels of the army leadership, as Daoud Soumain, its Chief of Staff, was killed.
On March 23, 2020, a Chadian army base was ambushed by fighters of the jihadist insurgent group Boko Haram. The army lost 92 servicemen in one day. In response, President Déby launched an operation dubbed "Wrath of Boma". According to Canadian counter-terrorism expert St-Pierre, numerous external operations and rising insecurity in the neighboring countries had recently overstretched the capacities of the Chadian armed forces.
After the death of President Idriss Déby on 19 April 2021 in fighting with FACT rebels, his son General Mahamat Idriss Déby was named interim president and head of the armed forces.
The CIA World Factbook estimates the military budget of Chad to be 4.2% of GDP as of 2006. Given the country's GDP at the time ($7.095 billion), military spending was estimated to be about $300 million. This estimate, however, dropped to 2.0% after the end of the civil war in Chad (2005–2010), as estimated by the World Bank for 2011. No more recent estimates are available.
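As a rough consistency check (simple arithmetic on the figures quoted above, not an independent estimate): 4.2% of a GDP of $7.095 billion is 0.042 × 7.095 ≈ $0.298 billion, which matches the approximately $300 million figure given in the text.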
Chad participated in a peace mission under the authority of the African Union in the neighboring Central African Republic to try to pacify the recent conflict, but chose to withdraw after its soldiers were accused of shooting into a marketplace, unprovoked, according to the BBC.
This article incorporates public domain material from The World Factbook. CIA. | [
{
"paragraph_id": 0,
"text": "The Chad National Army (Arabic: الجيش الوطني التشادي, romanized: Al-Jaish al-Watani at-Tshadi; French: Armée nationale tchadienne, ANT) consists of the five Defence and Security Forces listed in Article 185 of the Chadian Constitution that came into effect on 4 May 2018. These are the National Army ((including Ground Forces, and Air Force), the National Gendarmerie), the National Police, the National and Nomadic Guard (GNNT) and the Judicial Police. Article 188 of the Constitution specifies that National Defence is the responsibility of the Army, Gendarmerie and GNNT, whilst the maintenance of public order and security is the responsibility of the Police, Gendarmerie and GNNT.",
"title": ""
},
{
"paragraph_id": 1,
"text": "From independence through the period of the presidency of Félix Malloum (1975–79), the official national army was known as the Chadian Armed Forces (Forces Armées Tchadiennes—FAT). Composed mainly of soldiers from southern Chad, FAT had its roots in the army recruited by France and had military traditions dating back to World War I. FAT lost its status as the legal state army when Malloum's civil and military administration disintegrated in 1979. Although it remained a distinct military body for several years, FAT was eventually reduced to the status of a regional army representing the south.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "After Habré consolidated his authority and assumed the presidency in 1982, his victorious army, the Armed Forces of the North (Forces Armées du Nord—FAN), became the nucleus of a new national army. The force was officially constituted in January 1983, when the various pro-Habré contingents were merged and renamed the Chadian National Armed Forces (Forces Armées Nationales Tchadiennes—FANT).",
"title": "History"
},
{
"paragraph_id": 3,
"text": "The Military of Chad was dominated by members of Toubou, Zaghawa, Kanembou, Hadjerai, and Massa ethnic groups during the presidency of Hissène Habré. Later Chadian president Idriss Déby revolted and fled to the Sudan, taking with him many Zaghawa and Hadjerai soldiers in 1989.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Chad's armed forces numbered about 36,000 at the end of the Habré regime, but swelled to an estimated 50,000 in the early days of Déby's rule. With French support, a reorganization of the armed forces was initiated early in 1991 with the goal of reducing its numbers and making its ethnic composition reflective of the country as a whole. Neither of these goals was achieved, and the military is still dominated by the Zaghawa.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In 2004, the government discovered that many of the soldiers it was paying did not exist and that there were only about 19,000 soldiers in the army, as opposed to the 24,000 that had been previously believed. Government crackdowns against the practice are thought to have been a factor in a failed military mutiny in May 2004.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Renewed conflict, in which the Chadian military is involved, came in the form of a civil war against Sudanese-backed rebels. Chad successfully managed to repel many rebel movements, albeit with some losses (see Battle of N'Djamena (2008)). The army used its artillery systems and tanks, but well-equipped insurgents probably managed to destroy over 20 of Chad's 60 T-55 tanks, and probably shot down a Mi-24 Hind gunship, which bombed enemy positions near the border with Sudan. In November 2006 Libya supplied Chad with four Aermacchi SF.260W light attack planes. They were used to strike enemy positions by the Chadian Air Force, but one was shot down by rebels. During the 2008 battle of N'Djamena, gunships and tanks were put to good use, pushing armed militia forces back from the Presidential palace. The battle impacted the highest levels of the army leadership, as Daoud Soumain, its Chief of Staff, was killed.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "On March 23, 2020 a Chadian army base was ambushed by fighters of the jihadist insurgent group Boko Haram. The army lost 92 servicemen in one day. In response, President Déby launched an operation dubbed \"Wrath of Boma\". According to Canadian counter terrorism St-Pierre, numerous external operations and rising insecurity in the neighboring countries had recently overstretched the capacities of the Chadian armed forces.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "After the death of President Idriss Déby on 19 April 2021 in fighting with FACT rebels, his son General Mahamat Idriss Déby was named interim president and head of the armed forces.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The CIA World Factbook estimates the military budget of Chad to be 4.2% of GDP as of 2006.. Given the then GDP ($7.095 bln) of the country, military spending was estimated to be about $300 million. This estimate however dropped after the end of the Civil war in Chad (2005–2010) to 2.0% as estimated by the World Bank for the year 2011. There aren't any more recent estimates available.",
"title": "Budget"
},
{
"paragraph_id": 10,
"text": "Chad participated in a peace mission under the authority of African Union in the neighboring Central African Republic to try to pacify the recent conflict, but has chosen to withdraw after its soldiers were accused of shooting into a marketplace, unprovoked, according to BBC.",
"title": "External deployments"
},
{
"paragraph_id": 11,
"text": "This article incorporates public domain material from The World Factbook. CIA.",
"title": "Notes"
}
] | The Chad National Army consists of the five Defence and Security Forces listed in Article 185 of the Chadian Constitution that came into effect on 4 May 2018. These are the National Army, the National Gendarmerie, the National Police, the National and Nomadic Guard (GNNT) and the Judicial Police. Article 188 of the Constitution specifies that National Defence is the responsibility of the Army, Gendarmerie and GNNT, whilst the maintenance of public order and security is the responsibility of the Police, Gendarmerie and GNNT. | 2001-04-23T17:48:35Z | 2023-09-19T01:47:32Z | [
"Template:Multiple issues",
"Template:Lang-fr",
"Template:Cite news",
"Template:CIA World Factbook",
"Template:Short description",
"Template:Infobox national military",
"Template:Lang-ar",
"Template:Cite web",
"Template:Chad topics",
"Template:Reflist",
"Template:Webarchive",
"Template:Military of Africa",
"Template:Main",
"Template:Citation-attribution",
"Template:ISBN",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Chad_National_Army |
5,337 | Foreign relations of Chad | The foreign relations of Chad are significantly influenced by the desire for oil revenue and investment in the Chadian oil industry and support for former Chadian President Idriss Déby. Chad is officially non-aligned but has close relations with France, the former colonial power. Relations with neighbouring Libya and Sudan vary periodically. Lately, the Idriss Déby regime waged an intermittent proxy war with Sudan. Aside from those two countries, Chad generally enjoys good relations with its neighbouring states.
Although relations with Libya improved with the presidency of Idriss Déby, strains persist. Chad has been an active champion of regional cooperation through the Central African Economic and Customs Union, the Lake Chad and Niger River Basin Commissions, and the Interstate Commission for the Fight Against Drought in the Sahel.
Delimitation of international boundaries in the vicinity of Lake Chad, the lack of which led to border incidents in the past, has been completed and awaits ratification by Cameroon, Chad, Niger, and Nigeria.
Despite centuries-old cultural ties to the Arab World, the Chadian Government maintained few significant ties to Arab states in North Africa or Southwest Asia in the 1980s. Chad had broken off relations with the State of Israel under former Chadian President François (Ngarta) Tombalbaye in September 1972. President Habré hoped to pursue closer relations with Arab states as a potential opportunity to break out of Chad's post-imperial dependence on France, and to assert Chad's unwillingness to serve as an arena for superpower rivalries. In addition, as a northern Muslim, Habré represented a constituency that favored Afro-Arab solidarity, and he hoped Islam would provide a basis for national unity in the long term. For these reasons, he was expected to seize opportunities during the 1990s to pursue closer ties with the Arab World. In 1988, Chad recognized the State of Palestine, which maintains a mission in N'Djamena. In November 2018, President Déby visited Israel and announced his intention to restore diplomatic relations. Chad and Israel re-established diplomatic relations in January 2019. In February 2023, Chad opened an embassy in Israel.
During the 1980s, Arab opinion on the Chadian-Libyan conflict over the Aouzou Strip was divided. Several Arab states supported Libyan territorial claims to the Strip, among the most outspoken of which was Algeria, which provided training for anti-Habré forces, although most recruits for its training programs were from Nigeria or Cameroon, recruited and flown to Algeria by Libya. Lebanon's Progressive Socialist Party also sent troops to support Qadhafi's efforts against Chad in 1987. In contrast, numerous other Arab states opposed the Libyan actions, and expressed their desire to see the dispute over the Aouzou Strip settled peacefully. By the end of 1987, Algiers and N'Djamena were negotiating to improve relations, and Algeria helped mediate the end of the Aouzou Strip conflict.
Chad is officially non-aligned but has close relations with France, the former colonial power, which has about 1,200 troops stationed in the capital N'Djamena. It receives economic aid from countries of the European Community, the United States, and various international organizations. Libya supplies aid and has an ambassador resident in N'Djamena. Traditionally strong ties with the Western community have weakened over the past two years due to a dispute between the Government of Chad and the World Bank over how the profits from Chad's petroleum reserves are allocated. Although oil output to the West has resumed and the dispute has officially been resolved, resentment towards what the Déby administration considered foreign meddling lingers.
Chad belongs to the following international organizations: | [
{
"paragraph_id": 0,
"text": "The foreign relations of Chad are significantly influenced by the desire for oil revenue and investment in Chadian oil industry and support for former Chadian President Idriss Déby. Chad is officially non-aligned but has close relations with France, the former colonial power. Relations with neighbouring Libya, and Sudan vary periodically. Lately, the Idris Déby regime waged an intermittent proxy war with Sudan. Aside from those two countries, Chad generally enjoys good relations with its neighbouring states.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Although relations with Libya improved with the presidency of Idriss Déby, strains persist. Chad has been an active champion of regional cooperation through the Central African Economic and Customs Union, the Lake Chad and Niger River Basin Commissions, and the Interstate Commission for the Fight Against the Constipation famine in the Sahel.",
"title": "Africa"
},
{
"paragraph_id": 2,
"text": "Delimitation of international boundaries in the vicinity of Lake Chad, the lack of which led to border incidents in the past, has been completed and awaits ratification by Cameroon, Chad, Niger, and Nigeria.",
"title": "Africa"
},
{
"paragraph_id": 3,
"text": "Despite centuries-old cultural ties to the Arab World, the Chadian Government maintained few significant ties to Arab states in North Africa or Southwest Asia in the 1980s. Chad had broken off relations with the State of Israel under former Chadian President François (Ngarta) Tombalbaye in September 1972. President Habré hoped to pursue closer relations with Arab states as a potential opportunity to break out of his Chad's post-imperial dependence on France, and to assert Chad's unwillingness to serve as an arena for superpower rivalries. In addition, as a northern Muslim, Habré represented a constituency that favored Afro-Arab solidarity, and he hoped Islam would provide a basis for national unity in the long term. For these reasons, he was expected to seize opportunities during the 1990s to pursue closer ties with the Arab World. In 1988, Chad recognized the State of Palestine, which maintains a mission in N'Djamena. In November 2018, President Deby visited Israel and announced his intention to restore diplomatic relations. Chad and Israel re-established diplomatic relations in January 2019. In February 2023, Chad opened an embassy in Israel.",
"title": "Asia"
},
{
"paragraph_id": 4,
"text": "During the 1980s, Arab opinion on the Chadian-Libyan conflict over the Aouzou Strip was divided. Several Arab states supported Libyan territorial claims to the Strip, among the most outspoken of which was Algeria, which provided training for anti-Habré forces, although most recruits for its training programs were from Nigeria or Cameroon, recruited and flown to Algeria by Libya. Lebanon's Progressive Socialist Party also sent troops to support Qadhafi's efforts against Chad in 1987. In contrast, numerous other Arab states opposed the Libyan actions, and expressed their desire to see the dispute over the Aouzou Strip settled peacefully. By the end of 1987, Algiers and N'Djamena were negotiating to improve relations and Algeria helped mediate the end of the Aouzou Strip conflict",
"title": "Asia"
},
{
"paragraph_id": 5,
"text": "Chad is officially non-aligned but has close relations with France, the former colonial power, which has about 1,200 troops stationed in the capital N'Djamena. It receives economic aid from countries of the European Community, the United States, and various international organizations. Libya supplies aid and has an ambassador resident in N'Djamena. Traditionally strong ties with the Western community have weakened over the past two years due to a dispute between the Government of Chad and the World Bank over how the profits from Chad's petroleum reserves are allocated. Although oil output to the West has resumed and the dispute has officially been resolved, resentment towards what the Déby administration considered foreign meddling lingers.",
"title": "Europe"
},
{
"paragraph_id": 6,
"text": "Chad belongs to the following international organizations:",
"title": "Membership of international organizations"
}
] | The foreign relations of Chad are significantly influenced by the desire for oil revenue and investment in the Chadian oil industry and support for former Chadian President Idriss Déby. Chad is officially non-aligned but has close relations with France, the former colonial power. Relations with neighbouring Libya and Sudan vary periodically. Lately, the Idriss Déby regime waged an intermittent proxy war with Sudan. Aside from those two countries, Chad generally enjoys good relations with its neighbouring states. | 2001-04-23T17:49:02Z | 2023-12-30T18:32:19Z | [
"Template:Update inline",
"Template:Citation",
"Template:Navboxes",
"Template:Africa in topic",
"Template:Col-2",
"Template:Col-end",
"Template:Cite book",
"Template:Chadian-Sudanese conflict",
"Template:Cite web",
"Template:Citation-attribution",
"Template:Use mdy dates",
"Template:Politics of Chad",
"Template:Primary source inline",
"Template:Asof",
"Template:Col-begin",
"Template:Short description",
"Template:Flag",
"Template:Reflist",
"Template:Cite news",
"Template:Foreign relations of Chad"
] | https://en.wikipedia.org/wiki/Foreign_relations_of_Chad |
5,342 | Commentary | Commentary or commentaries may refer to: | [
{
"paragraph_id": 0,
"text": "Commentary or commentaries may refer to:",
"title": ""
}
] | Commentary or commentaries may refer to: | 2022-11-20T19:41:42Z | [
"Template:Wiktionary",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/Commentary |
|
5,346 | Colloid | A colloid is a mixture in which one substance consisting of microscopically dispersed insoluble particles is suspended throughout another substance. Some definitions specify that the particles must be dispersed in a liquid, while others extend the definition to include substances like aerosols and gels. The term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word suspension is distinguished from colloids by larger particle size). A colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension). The dispersed phase particles have a diameter of approximately 1 nanometre to 1 micrometre.
Some colloids are translucent because of the Tyndall effect, which is the scattering of light by particles in the colloid. Other colloids may be opaque or have a slight color.
Colloidal suspensions are the subject of interface and colloid science. This field of study was introduced in 1845 by Francesco Selmi and expanded by Michael Faraday and Thomas Graham, who coined the term colloid in 1861.
Colloid: Short synonym for colloidal system.
Colloidal: State of subdivision such that the molecules or polymolecular particles dispersed in a medium have at least one dimension between approximately 1 nm and 1 μm, or that in a system discontinuities are found at distances of that order.
Colloids can be classified as follows:
Homogeneous mixtures with a dispersed phase in this size range may be called colloidal aerosols, colloidal emulsions, colloidal suspensions, colloidal foams, colloidal dispersions, or hydrosols.
Hydrocolloids are certain chemicals (mostly polysaccharides and proteins) that are colloidally dispersible in water. Becoming effectively "soluble", they change the rheology of water by raising the viscosity and/or inducing gelation. They may also provide other interactive effects with other chemicals, in some cases synergistic, in others antagonistic. Because of these attributes, hydrocolloids are very useful chemicals in many areas of technology, from foods through pharmaceuticals and personal care to industrial applications, where they can provide stabilization, destabilization and separation, gelation, flow control, crystallization control and numerous other effects. Apart from uses of the soluble forms, some hydrocolloids have additional useful functionality in a dry form if the water is removed after solubilization, as in the formation of films for breath strips, sausage casings or wound-dressing fibers, some being more compatible with skin than others. There are many different types of hydrocolloid, each with differences in structure, function and utility, generally best suited to particular application areas in the control of rheology and the physical modification of form and texture. Some hydrocolloids, such as starch and casein, are useful foods as well as rheology modifiers; others have limited nutritive value, usually providing a source of fiber.
The term hydrocolloids also refers to a type of dressing designed to lock moisture in the skin and help the natural healing process of skin to reduce scarring, itching and soreness.
Hydrocolloids contain some type of gel-forming agent, such as sodium carboxymethylcellulose (NaCMC) and gelatin. They are normally combined with some type of sealant, e.g. polyurethane, to 'stick' to the skin.
A colloid has a dispersed phase and a continuous phase, whereas in a solution, the solute and solvent constitute only one phase. The solute in a solution consists of individual molecules or ions, whereas colloidal particles are bigger. For example, in a solution of salt in water, the sodium chloride (NaCl) crystal dissolves, and the Na+ and Cl− ions are surrounded by water molecules. However, in a colloid such as milk, the colloidal particles are globules of fat, rather than individual fat molecules. Because a colloid has multiple phases, it has very different properties compared to a fully mixed, continuous solution.
The following forces play an important role in the interaction of colloid particles:
The Earth’s gravitational field acts upon colloidal particles. Therefore, if the colloidal particles are denser than the medium of suspension, they will sediment (fall to the bottom), or if they are less dense, they will cream (float to the top). Larger particles also have a greater tendency to sediment because they have smaller Brownian motion to counteract this movement.
The sedimentation or creaming velocity is found by equating the Stokes drag force with the gravitational force:
where
and v is the sedimentation or creaming velocity.
The mass of the colloidal particle is found using:
where
and ρ1 − ρ2 is the difference in mass density between the colloidal particle and the suspension medium.
By rearranging, the sedimentation or creaming velocity is:
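The display equations for this derivation were not preserved in the text above. A standard reconstruction, assuming rigid spherical particles of radius r moving slowly enough for Stokes drag to apply, balances the drag force against the buoyancy-corrected gravitational force,

6 \pi \eta r v = \frac{4}{3} \pi r^{3} (\rho_{1} - \rho_{2}) g

which rearranges to the familiar Stokes settling (or creaming) velocity

v = \frac{2 r^{2} (\rho_{1} - \rho_{2}) g}{9 \eta}

where η is the viscosity of the suspension medium and g the gravitational acceleration; the sign of ρ1 − ρ2 determines whether the particles sediment or cream.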
There is an upper size-limit for the diameter of colloidal particles because particles larger than 1 μm tend to sediment, and thus the substance would no longer be considered a colloidal suspension.
The colloidal particles are said to be in sedimentation equilibrium if the rate of sedimentation is equal to the rate of movement from Brownian motion.
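A standard way to make this balance quantitative (a textbook result given here as a supplement, not taken from the article) is the barometric-type height distribution exploited by Jean Perrin: at sedimentation equilibrium the number concentration of particles varies with height h as

n(h) = n(0) \exp\!\left(-\frac{m_{\mathrm{eff}}\, g\, h}{k T}\right), \qquad m_{\mathrm{eff}} = \frac{4}{3} \pi r^{3} (\rho_{1} - \rho_{2})

so particles with a large buoyant mass are confined to a thin layer near the bottom, while particles whose buoyant weight over the container height is small compared with kT remain almost uniformly distributed.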
There are two principal ways to prepare colloids:
The stability of a colloidal system is defined by particles remaining suspended in solution and depends on the interaction forces between the particles. These include electrostatic interactions and van der Waals forces, because they both contribute to the overall free energy of the system.
A colloid is stable if the interaction energy due to attractive forces between the colloidal particles is less than kT, where k is the Boltzmann constant and T is the absolute temperature. If this is the case, then the colloidal particles will repel or only weakly attract each other, and the substance will remain a suspension.
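As an illustration of this kT criterion, the sketch below compares a crude van der Waals attraction estimate for two equal spheres at close approach, V ≈ −AR/(12D), with the thermal energy kT. The Hamaker constant A, radius R and separation D are assumed example values, not figures from this article.

# Rough sketch of the kT stability criterion for a colloid.
# The sphere-sphere van der Waals expression V = -A*R/(12*D) (close-approach
# limit) and all numerical values below are illustrative assumptions.
k_B = 1.380649e-23           # Boltzmann constant, J/K
T = 298.0                    # absolute temperature, K
A = 1.0e-20                  # assumed Hamaker constant, J
R = 100e-9                   # assumed particle radius, m
D = 10e-9                    # assumed surface-to-surface separation, m

kT = k_B * T
V_vdw = -A * R / (12.0 * D)  # attractive van der Waals energy, J

print(f"kT      = {kT:.2e} J")
print(f"|V_vdw| = {abs(V_vdw):.2e} J ({abs(V_vdw) / kT:.1f} kT)")
print("likely stable: attraction weaker than kT" if abs(V_vdw) < kT
      else "attraction exceeds kT: particles tend to aggregate")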
If the interaction energy is greater than kT, the attractive forces will prevail, and the colloidal particles will begin to clump together. This process is referred to generally as aggregation, but is also referred to as flocculation, coagulation or precipitation. While these terms are often used interchangeably, for some definitions they have slightly different meanings. For example, coagulation can be used to describe irreversible, permanent aggregation where the forces holding the particles together are stronger than any external forces caused by stirring or mixing. Flocculation can be used to describe reversible aggregation involving weaker attractive forces, and the aggregate is usually called a floc. The term precipitation is normally reserved for describing a phase change from a colloid dispersion to a solid (precipitate) when it is subjected to a perturbation. Aggregation causes sedimentation or creaming; therefore, the colloid is unstable: if either of these processes occurs, the colloid will no longer be a suspension.
Electrostatic stabilization and steric stabilization are the two main mechanisms for stabilization against aggregation.
A combination of the two mechanisms is also possible (electrosteric stabilization).
A method called gel network stabilization represents the principal way to produce colloids stable to both aggregation and sedimentation. The method consists in adding to the colloidal suspension a polymer able to form a gel network. Particle settling is hindered by the stiffness of the polymeric matrix where particles are trapped, and the long polymeric chains can provide a steric or electrosteric stabilization to dispersed particles. Examples of such substances are xanthan and guar gum.
Destabilization can be accomplished by different methods:
Unstable colloidal suspensions of low-volume fraction form clustered liquid suspensions, wherein individual clusters of particles sediment if they are more dense than the suspension medium, or cream if they are less dense. However, colloidal suspensions of higher-volume fraction form colloidal gels with viscoelastic properties. Viscoelastic colloidal gels, such as bentonite and toothpaste, flow like liquids under shear, but maintain their shape when shear is removed. It is for this reason that toothpaste can be squeezed from a toothpaste tube, but stays on the toothbrush after it is applied.
The most widely used technique to monitor the dispersion state of a product, and to identify and quantify destabilization phenomena, is multiple light scattering coupled with vertical scanning. This method, known as turbidimetry, is based on measuring the fraction of light that, after being sent through the sample, is backscattered by the colloidal particles. The backscattering intensity is directly proportional to the average particle size and volume fraction of the dispersed phase. Therefore, local changes in concentration caused by sedimentation or creaming, and clumping together of particles caused by aggregation, are detected and monitored. These phenomena are associated with unstable colloids.
Dynamic light scattering can be used to detect the size of a colloidal particle by measuring how fast they diffuse. This method involves directing laser light towards a colloid. The scattered light will form an interference pattern, and the fluctuation in light intensity in this pattern is caused by the Brownian motion of the particles. If the apparent size of the particles increases due to them clumping together via aggregation, it will result in slower Brownian motion. This technique can confirm that aggregation has occurred if the apparent particle size is determined to be beyond the typical size range for colloidal particles.
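A minimal sketch of how such a measurement is commonly reduced to a size, assuming spherical particles and using the Stokes-Einstein relation r = kT/(6πηD); the diffusion coefficient below is an assumed example value, not data from this article.

import math

# Stokes-Einstein estimate of the hydrodynamic radius from a diffusion coefficient.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0            # temperature, K
eta = 0.89e-3        # viscosity of water at 25 degrees C, Pa*s
D = 4.0e-12          # assumed measured diffusion coefficient, m^2/s

r_h = k_B * T / (6.0 * math.pi * eta * D)   # hydrodynamic radius of an equivalent sphere, m
print(f"hydrodynamic radius ~ {r_h * 1e9:.0f} nm")   # ~61 nm for these assumed numbers

Slower diffusion (smaller D) gives a larger apparent radius, which is how aggregation shows up in such an analysis.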
The kinetic process of destabilisation can be rather long (up to several months or even years for some products), so formulators often use accelerating methods to reach a reasonable development time for new product design. Thermal methods are the most commonly used and consist of increasing the temperature to accelerate destabilisation (while staying below the critical temperatures of phase inversion or chemical degradation). Temperature affects not only the viscosity but also the interfacial tension in the case of non-ionic surfactants, and more generally the interaction forces inside the system. Storing a dispersion at high temperature simulates real-life conditions for a product (e.g. a tube of sunscreen cream in a car in the summer), but can also accelerate destabilisation processes by up to 200 times. Mechanical acceleration, including vibration, centrifugation and agitation, is also sometimes used. It subjects the product to forces that push the particles or droplets against one another, hence assisting film drainage. Some emulsions that would never coalesce in normal gravity do so under artificial gravity. Segregation of different populations of particles has also been observed when using centrifugation and vibration.
In physics, colloids are an interesting model system for atoms. Micrometre-scale colloidal particles are large enough to be observed by optical techniques such as confocal microscopy. Many of the forces that govern the structure and behavior of matter, such as excluded volume interactions or electrostatic forces, govern the structure and behavior of colloidal suspensions. For example, the same techniques used to model ideal gases can be applied to model the behavior of a hard sphere colloidal suspension. Phase transitions in colloidal suspensions can be studied in real time using optical techniques, and are analogous to phase transitions in liquids. In many interesting cases optical fluidity is used to control colloid suspensions.
A colloidal crystal is a highly ordered array of particles that can be formed over a very long range (typically on the order of a few millimeters to one centimeter) and that appear analogous to their atomic or molecular counterparts. One of the finest natural examples of this ordering phenomenon can be found in precious opal, in which brilliant regions of pure spectral color result from close-packed domains of amorphous colloidal spheres of silicon dioxide (or silica, SiO2). These spherical particles precipitate in highly siliceous pools in Australia and elsewhere, and form these highly ordered arrays after years of sedimentation and compression under hydrostatic and gravitational forces. The periodic arrays of submicrometre spherical particles provide similar arrays of interstitial voids, which act as a natural diffraction grating for visible light waves, particularly when the interstitial spacing is of the same order of magnitude as the incident lightwave.
Thus, it has been known for many years that, due to repulsive Coulombic interactions, electrically charged macromolecules in an aqueous environment can exhibit long-range crystal-like correlations, with interparticle separation distances often being considerably greater than the individual particle diameter. In all of these cases in nature, the same brilliant iridescence (or play of colors) can be attributed to the diffraction and constructive interference of visible lightwaves that satisfy Bragg’s law, in a manner analogous to the scattering of X-rays in crystalline solids.
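For orientation, the Bragg condition referred to above is

n \lambda = 2 d \sin\theta

so a colloidal crystal with lattice-plane spacing d on the order of a few hundred nanometres reflects first-order (n = 1) wavelengths in the visible range; for example, d ≈ 250 nm with sin θ ≈ 1 gives λ ≈ 500 nm (green). This simple estimate ignores the refractive-index correction introduced by the particles and the medium, which shifts the reflected colour in a real opal.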
The large number of experiments exploring the physics and chemistry of these so-called "colloidal crystals" has emerged as a result of the relatively simple methods that have evolved in the last 20 years for preparing synthetic monodisperse colloids (both polymer and mineral) and, through various mechanisms, implementing and preserving their long-range order formation.
Colloidal phase separation is an important organising principle for compartmentalisation of both the cytoplasm and nucleus of cells into biomolecular condensates—similar in importance to compartmentalisation via lipid bilayer membranes, a type of liquid crystal. The term biomolecular condensate has been used to refer to clusters of macromolecules that arise via liquid-liquid or liquid-solid phase separation within cells. Macromolecular crowding strongly enhances colloidal phase separation and formation of biomolecular condensates.
Colloidal particles can also serve as a transport vector for diverse contaminants in surface water (sea water, lakes, rivers, fresh water bodies) and in underground water circulating in fissured rocks (e.g. limestone, sandstone, granite). Radionuclides and heavy metals easily sorb onto colloids suspended in water. Various types of colloids are recognised: inorganic colloids (e.g. clay particles, silicates, iron oxy-hydroxides) and organic colloids (humic and fulvic substances). When heavy metals or radionuclides form their own pure colloids, the term "eigencolloid" is used to designate pure phases, e.g. pure Tc(OH)4, U(OH)4, or Am(OH)3. Colloids have been suspected of enabling the long-range transport of plutonium at the Nevada Nuclear Test Site. They have been the subject of detailed studies for many years. However, the mobility of inorganic colloids is very low in compacted bentonites and in deep clay formations because of the process of ultrafiltration occurring in dense clay membranes. The question is less clear for small organic colloids, which are often mixed in porewater with truly dissolved organic molecules.
In soil science, the colloidal fraction in soils consists of tiny clay and humus particles that are less than 1 μm in diameter and carry positive and/or negative electrostatic charges that vary depending on the chemical conditions of the soil sample, i.e. soil pH.
Colloid solutions used in intravenous therapy belong to a major group of volume expanders, and can be used for intravenous fluid replacement. Colloids preserve a high colloid osmotic pressure in the blood, and therefore, they should theoretically preferentially increase the intravascular volume, whereas other types of volume expanders called crystalloids also increase the interstitial volume and intracellular volume. However, there is still controversy about the actual difference in efficacy produced by this theoretical difference, and much of the research related to this use of colloids is based on fraudulent research by Joachim Boldt. Another difference is that crystalloids generally are much cheaper than colloids. | [
{
"paragraph_id": 0,
"text": "A colloid is a mixture in which one substance consisting of microscopically dispersed insoluble particles is suspended throughout another substance. Some definitions specify that the particles must be dispersed in a liquid, while others extend the definition to include substances like aerosols and gels. The term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word suspension is distinguished from colloids by larger particle size). A colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension). The dispersed phase particles have a diameter of approximately 1 nanometre to 1 micrometre.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Some colloids are translucent because of the Tyndall effect, which is the scattering of light by particles in the colloid. Other colloids may be opaque or have a slight color.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Colloidal suspensions are the subject of interface and colloid science. This field of study began in 1845 by Francesco Selmi and expanded by Michael Faraday and Thomas Graham, who coined the term colloid in 1861.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Colloid: Short synonym for colloidal system.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Colloidal: State of subdivision such that the molecules or polymolecular particles dispersed in a medium have at least one dimension between approximately 1 nm and 1 μm, or that in a system discontinuities are found at distances of that order.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Colloids can be classified as follows:",
"title": "Classification of colloids"
},
{
"paragraph_id": 6,
"text": "Homogeneous mixtures with a dispersed phase in this size range may be called colloidal aerosols, colloidal emulsions, colloidal suspensions, colloidal foams, colloidal dispersions, or hydrosols.",
"title": "Classification of colloids"
},
{
"paragraph_id": 7,
"text": "Hydrocolloids describe certain chemicals (mostly polysaccharides and proteins) that are colloidally dispersible in water. Thus becoming effectively \"soluble\" they change the rheology of water by raising the viscosity and/or inducing gelation. They may provide other interactive effects with other chemicals, in some cases synergistic, in others antagonistic. Using these attributes hydrocolloids are very useful chemicals since in many areas of technology from foods through pharmaceuticals, personal care and industrial applications, they can provide stabilization, destabilization and separation, gelation, flow control, crystallization control and numerous other effects. Apart from uses of the soluble forms some of the hydrocolloids have additional useful functionality in a dry form if after solubilization they have the water removed - as in the formation of films for breath strips or sausage casings or indeed, wound dressing fibers, some being more compatible with skin than others. There are many different types of hydrocolloids each with differences in structure function and utility that generally are best suited to particular application areas in the control of rheology and the physical modification of form and texture. Some hydrocolloids like starch and casein are useful foods as well as rheology modifiers, others have limited nutritive value, usually providing a source of fiber.",
"title": "Hydrocolloids"
},
{
"paragraph_id": 8,
"text": "The term hydrocolloids also refers to a type of dressing designed to lock moisture in the skin and help the natural healing process of skin to reduce scarring, itching and soreness.",
"title": "Hydrocolloids"
},
{
"paragraph_id": 9,
"text": "Hydrocolloids contain some type of gel-forming agent, such as sodium carboxymethylcellulose (NaCMC) and gelatin. They are normally combined with some type of sealant, i.e. polyurethane to 'stick' to the skin.",
"title": "Hydrocolloids"
},
{
"paragraph_id": 10,
"text": "A colloid has a dispersed phase and a continuous phase, whereas in a solution, the solute and solvent constitute only one phase. A solute in a solution are individual molecules or ions, whereas colloidal particles are bigger. For example, in a solution of salt in water, the sodium chloride (NaCl) crystal dissolves, and the Na and Cl ions are surrounded by water molecules. However, in a colloid such as milk, the colloidal particles are globules of fat, rather than individual fat molecules. Because colloid is multiple phases, it has very different properties compared to fully mixed, continuous solution.",
"title": "Colloid compared with solution"
},
{
"paragraph_id": 11,
"text": "The following forces play an important role in the interaction of colloid particles:",
"title": "Interaction between particles"
},
{
"paragraph_id": 12,
"text": "The Earth’s gravitational field acts upon colloidal particles. Therefore, if the colloidal particles are denser than the medium of suspension, they will sediment (fall to the bottom), or if they are less dense, they will cream (float to the top). Larger particles also have a greater tendency to sediment because they have smaller Brownian motion to counteract this movement.",
"title": "Sedimentation velocity"
},
{
"paragraph_id": 13,
"text": "The sedimentation or creaming velocity is found by equating the Stokes drag force with the gravitational force:",
"title": "Sedimentation velocity"
},
{
"paragraph_id": 14,
"text": "where",
"title": "Sedimentation velocity"
},
{
"paragraph_id": 15,
"text": "and v {\\displaystyle v} is the sedimentation or creaming velocity.",
"title": "Sedimentation velocity"
},
{
"paragraph_id": 16,
"text": "The mass of the colloidal particle is found using:",
"title": "Sedimentation velocity"
},
{
"paragraph_id": 17,
"text": "where",
"title": "Sedimentation velocity"
},
{
"paragraph_id": 18,
"text": "and ρ 1 − ρ 2 {\\displaystyle \\rho _{1}-\\rho _{2}} is the difference in mass density between the colloidal particle and the suspension medium.",
"title": "Sedimentation velocity"
},
{
"paragraph_id": 19,
"text": "By rearranging, the sedimentation or creaming velocity is:",
"title": "Sedimentation velocity"
},
{
"paragraph_id": 20,
"text": "There is an upper size-limit for the diameter of colloidal particles because particles larger than 1 μm tend to sediment, and thus the substance would no longer be considered a colloidal suspension.",
"title": "Sedimentation velocity"
},
{
"paragraph_id": 21,
"text": "The colloidal particles are said to be in sedimentation equilibrium if the rate of sedimentation is equal to the rate of movement from Brownian motion.",
"title": "Sedimentation velocity"
},
{
"paragraph_id": 22,
"text": "There are two principal ways to prepare colloids:",
"title": "Preparation"
},
{
"paragraph_id": 23,
"text": "The stability of a colloidal system is defined by particles remaining suspended in solution and depends on the interaction forces between the particles. These include electrostatic interactions and van der Waals forces, because they both contribute to the overall free energy of the system.",
"title": "Preparation"
},
{
"paragraph_id": 24,
"text": "A colloid is stable if the interaction energy due to attractive forces between the colloidal particles is less than kT, where k is the Boltzmann constant and T is the absolute temperature. If this is the case, then the colloidal particles will repel or only weakly attract each other, and the substance will remain a suspension.",
"title": "Preparation"
},
{
"paragraph_id": 25,
"text": "If the interaction energy is greater than kT, the attractive forces will prevail, and the colloidal particles will begin to clump together. This process is referred to generally as aggregation, but is also referred to as flocculation, coagulation or precipitation. While these terms are often used interchangeably, for some definitions they have slightly different meanings. For example, coagulation can be used to describe irreversible, permanent aggregation where the forces holding the particles together are stronger than any external forces caused by stirring or mixing. Flocculation can be used to describe reversible aggregation involving weaker attractive forces, and the aggregate is usually called a floc. The term precipitation is normally reserved for describing a phase change from a colloid dispersion to a solid (precipitate) when it is subjected to a perturbation. Aggregation causes sedimentation or creaming, therefore the colloid is unstable: if either of these processes occur the colloid will no longer be a suspension.",
"title": "Preparation"
},
{
"paragraph_id": 26,
"text": "Electrostatic stabilization and steric stabilization are the two main mechanisms for stabilization against aggregation.",
"title": "Preparation"
},
{
"paragraph_id": 27,
"text": "A combination of the two mechanisms is also possible (electrosteric stabilization).",
"title": "Preparation"
},
{
"paragraph_id": 28,
"text": "A method called gel network stabilization represents the principal way to produce colloids stable to both aggregation and sedimentation. The method consists in adding to the colloidal suspension a polymer able to form a gel network. Particle settling is hindered by the stiffness of the polymeric matrix where particles are trapped, and the long polymeric chains can provide a steric or electrosteric stabilization to dispersed particles. Examples of such substances are xanthan and guar gum.",
"title": "Preparation"
},
{
"paragraph_id": 29,
"text": "Destabilization can be accomplished by different methods:",
"title": "Preparation"
},
{
"paragraph_id": 30,
"text": "Unstable colloidal suspensions of low-volume fraction form clustered liquid suspensions, wherein individual clusters of particles sediment if they are more dense than the suspension medium, or cream if they are less dense. However, colloidal suspensions of higher-volume fraction form colloidal gels with viscoelastic properties. Viscoelastic colloidal gels, such as bentonite and toothpaste, flow like liquids under shear, but maintain their shape when shear is removed. It is for this reason that toothpaste can be squeezed from a toothpaste tube, but stays on the toothbrush after it is applied.",
"title": "Preparation"
},
{
"paragraph_id": 31,
"text": "The most widely used technique to monitor the dispersion state of a product, and to identify and quantify destabilization phenomena, is multiple light scattering coupled with vertical scanning. This method, known as turbidimetry, is based on measuring the fraction of light that, after being sent through the sample, it backscattered by the colloidal particles. The backscattering intensity is directly proportional to the average particle size and volume fraction of the dispersed phase. Therefore, local changes in concentration caused by sedimentation or creaming, and clumping together of particles caused by aggregation, are detected and monitored. These phenomena are associated with unstable colloids.",
"title": "Preparation"
},
{
"paragraph_id": 32,
"text": "Dynamic light scattering can be used to detect the size of a colloidal particle by measuring how fast they diffuse. This method involves directing laser light towards a colloid. The scattered light will form an interference pattern, and the fluctuation in light intensity in this pattern is caused by the Brownian motion of the particles. If the apparent size of the particles increases due to them clumping together via aggregation, it will result in slower Brownian motion. This technique can confirm that aggregation has occurred if the apparent particle size is determined to be beyond the typical size range for colloidal particles.",
"title": "Preparation"
},
{
"paragraph_id": 33,
"text": "The kinetic process of destabilisation can be rather long (up to several months or years for some products). Thus, it is often required for the formulator to use further accelerating methods to reach reasonable development time for new product design. Thermal methods are the most commonly used and consist of increasing temperature to accelerate destabilisation (below critical temperatures of phase inversion or chemical degradation). Temperature affects not only viscosity, but also interfacial tension in the case of non-ionic surfactants or more generally interactions forces inside the system. Storing a dispersion at high temperatures enables to simulate real life conditions for a product (e.g. tube of sunscreen cream in a car in the summer), but also to accelerate destabilisation processes up to 200 times. Mechanical acceleration including vibration, centrifugation and agitation are sometimes used. They subject the product to different forces that pushes the particles / droplets against one another, hence helping in the film drainage. Some emulsions would never coalesce in normal gravity, while they do under artificial gravity. Segregation of different populations of particles have been highlighted when using centrifugation and vibration.",
"title": "Preparation"
},
{
"paragraph_id": 34,
"text": "In physics, colloids are an interesting model system for atoms. Micrometre-scale colloidal particles are large enough to be observed by optical techniques such as confocal microscopy. Many of the forces that govern the structure and behavior of matter, such as excluded volume interactions or electrostatic forces, govern the structure and behavior of colloidal suspensions. For example, the same techniques used to model ideal gases can be applied to model the behavior of a hard sphere colloidal suspension. Phase transitions in colloidal suspensions can be studied in real time using optical techniques, and are analogous to phase transitions in liquids. In many interesting cases optical fluidity is used to control colloid suspensions.",
"title": "As a model system for atoms"
},
{
"paragraph_id": 35,
"text": "A colloidal crystal is a highly ordered array of particles that can be formed over a very long range (typically on the order of a few millimeters to one centimeter) and that appear analogous to their atomic or molecular counterparts. One of the finest natural examples of this ordering phenomenon can be found in precious opal, in which brilliant regions of pure spectral color result from close-packed domains of amorphous colloidal spheres of silicon dioxide (or silica, SiO2). These spherical particles precipitate in highly siliceous pools in Australia and elsewhere, and form these highly ordered arrays after years of sedimentation and compression under hydrostatic and gravitational forces. The periodic arrays of submicrometre spherical particles provide similar arrays of interstitial voids, which act as a natural diffraction grating for visible light waves, particularly when the interstitial spacing is of the same order of magnitude as the incident lightwave.",
"title": "Crystals"
},
{
"paragraph_id": 36,
"text": "Thus, it has been known for many years that, due to repulsive Coulombic interactions, electrically charged macromolecules in an aqueous environment can exhibit long-range crystal-like correlations with interparticle separation distances, often being considerably greater than the individual particle diameter. In all of these cases in nature, the same brilliant iridescence (or play of colors) can be attributed to the diffraction and constructive interference of visible lightwaves that satisfy Bragg’s law, in a matter analogous to the scattering of X-rays in crystalline solids.",
"title": "Crystals"
},
{
"paragraph_id": 37,
"text": "The large number of experiments exploring the physics and chemistry of these so-called \"colloidal crystals\" has emerged as a result of the relatively simple methods that have evolved in the last 20 years for preparing synthetic monodisperse colloids (both polymer and mineral) and, through various mechanisms, implementing and preserving their long-range order formation.",
"title": "Crystals"
},
{
"paragraph_id": 38,
"text": "Colloidal phase separation is an important organising principle for compartmentalisation of both the cytoplasm and nucleus of cells into biomolecular condensates—similar in importance to compartmentalisation via lipid bilayer membranes, a type of liquid crystal. The term biomolecular condensate has been used to refer to clusters of macromolecules that arise via liquid-liquid or liquid-solid phase separation within cells. Macromolecular crowding strongly enhances colloidal phase separation and formation of biomolecular condensates.",
"title": "In biology"
},
{
"paragraph_id": 39,
"text": "Colloidal particles can also serve as transport vector of diverse contaminants in the surface water (sea water, lakes, rivers, fresh water bodies) and in underground water circulating in fissured rocks (e.g. limestone, sandstone, granite). Radionuclides and heavy metals easily sorb onto colloids suspended in water. Various types of colloids are recognised: inorganic colloids (e.g. clay particles, silicates, iron oxy-hydroxides), organic colloids (humic and fulvic substances). When heavy metals or radionuclides form their own pure colloids, the term \"eigencolloid\" is used to designate pure phases, i.e., pure Tc(OH)4, U(OH)4, or Am(OH)3. Colloids have been suspected for the long-range transport of plutonium on the Nevada Nuclear Test Site. They have been the subject of detailed studies for many years. However, the mobility of inorganic colloids is very low in compacted bentonites and in deep clay formations because of the process of ultrafiltration occurring in dense clay membrane. The question is less clear for small organic colloids often mixed in porewater with truly dissolved organic molecules.",
"title": "In the environment"
},
{
"paragraph_id": 40,
"text": "In soil science, the colloidal fraction in soils consists of tiny clay and humus particles that are less than 1μm in diameter and carry either positive and/or negative electrostatic charges that vary depending on the chemical conditions of the soil sample, i.e. soil pH.",
"title": "In the environment"
},
{
"paragraph_id": 41,
"text": "Colloid solutions used in intravenous therapy belong to a major group of volume expanders, and can be used for intravenous fluid replacement. Colloids preserve a high colloid osmotic pressure in the blood, and therefore, they should theoretically preferentially increase the intravascular volume, whereas other types of volume expanders called crystalloids also increase the interstitial volume and intracellular volume. However, there is still controversy to the actual difference in efficacy by this difference, and much of the research related to this use of colloids is based on fraudulent research by Joachim Boldt. Another difference is that crystalloids generally are much cheaper than colloids.",
"title": "Intravenous therapy"
}
] | A colloid is a mixture in which one substance consisting of microscopically dispersed insoluble particles is suspended throughout another substance. Some definitions specify that the particles must be dispersed in a liquid, while others extend the definition to include substances like aerosols and gels. The term colloidal suspension refers unambiguously to the overall mixture. A colloid has a dispersed phase and a continuous phase. The dispersed phase particles have a diameter of approximately 1 nanometre to 1 micrometre. Some colloids are translucent because of the Tyndall effect, which is the scattering of light by particles in the colloid. Other colloids may be opaque or have a slight color. Colloidal suspensions are the subject of interface and colloid science. This field of study began in 1845 by Francesco Selmi and expanded by Michael Faraday and Thomas Graham, who coined the term colloid in 1861. | 2001-03-27T08:53:43Z | 2023-12-15T09:47:33Z | [
"Template:Short description",
"Template:Condensed matter physics",
"Template:Main",
"Template:Cite journal",
"Template:Cite web",
"Template:Authority control",
"Template:Quote box",
"Template:Nobr",
"Template:Reflist",
"Template:Cite news",
"Template:Phase of matter",
"Template:Cite book",
"Template:Use dmy dates",
"Template:Unknown",
"Template:Chemical solutions"
] | https://en.wikipedia.org/wiki/Colloid |
5,347 | Chinese | Good morning China. Right now I have BING CHILLING. I really like BING CHILLING. But Fast & Furious 9 is better than BING CHILLING. Fast & Furious, Fast & Furious 9, I like it the most. So... now it is music time. Get ready: 1, 2, 3. Two weeks from now, Fast & Furious 9 (×3). Don't forget, don't miss it. Remember to go to the cinema to watch Fast & Furious 9, because it is a very good movie; the action is very good, pretty much the same as BING CHILLING. Goodbye. 2011. | [
  {
    "paragraph_id": 0,
    "text": "Good morning China. Right now I have BING CHILLING. I really like BING CHILLING. But Fast & Furious 9 is better than BING CHILLING. Fast & Furious, Fast & Furious 9, I like it the most. So... now it is music time. Get ready: 1, 2, 3. Two weeks from now, Fast & Furious 9 (×3). Don't forget, don't miss it. Remember to go to the cinema to watch Fast & Furious 9, because it is a very good movie; the action is very good, pretty much the same as BING CHILLING. Goodbye. 2011.",
    "title": ""
  }
] | Good morning China. Right now I have BING CHILLING. I really like BING CHILLING. But Fast & Furious 9 is better than BING CHILLING. Fast & Furious, Fast & Furious 9, I like it the most. So... now it is music time. Get ready: 1, 2, 3. Two weeks from now, Fast & Furious 9 (×3). Don't forget, don't miss it. Remember to go to the cinema to watch Fast & Furious 9, because it is a very good movie; the action is very good, pretty much the same as BING CHILLING. Goodbye. 2011. | 2001-10-25T10:37:16Z | 2023-10-21T22:34:30Z | [] | https://en.wikipedia.org/wiki/Chinese |
5,350 | Riding shotgun | "Riding shotgun" was a phrase used to describe the bodyguard who rides alongside a stagecoach driver, typically armed with a break-action shotgun, called a coach gun, to ward off bandits or hostile Native Americans. In modern use, it refers to the practice of sitting alongside the driver in a moving vehicle. The coining of this phrase dates to 1905 at the latest.
The expression "riding shotgun" is derived from "shotgun messenger", a colloquial term for "express messenger", when stagecoach travel was popular during the American Wild West and the Colonial period in Australia. The person rode alongside the driver. The first known use of the phrase "riding shotgun" was in the 1905 novel The Sunset Trail by Alfred Henry Lewis.
Wyatt and Morgan Earp were in the service of The Express Company. They went often as guards—"riding shotgun," it was called—when the stage bore unusual treasure.
The term was later used in print, and especially in film depictions of stagecoaches and wagons in the Old West that were in danger of being robbed or attacked by bandits. A special armed employee of the express service using the stage for transportation of bullion or cash would sit beside the driver, carrying a short shotgun (or alternatively a rifle), to provide an armed response in case of threat to the cargo, which was usually a strongbox. The absence of an armed person in that position often signaled that the stage was not carrying a strongbox, but only passengers.
On the evening of March 15, 1881, a Kinnear & Company stagecoach carrying US$26,000 in silver bullion (equivalent to $788,000 in 2022) was en route from the boom town of Tombstone, Arizona Territory to Benson, Arizona, the nearest freight terminal. Bob Paul, who had run for Pima County Sheriff and was contesting the election he lost due to ballot-stuffing, was temporarily working once again as the Wells Fargo shotgun messenger. He had taken the reins and driver's seat in Contention City because the usual driver, a well-known and popular man named Eli "Budd" Philpot, was ill. Philpot was riding shotgun.
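As a rough arithmetic check of the inflation equivalence quoted above (a back-of-the-envelope ratio of the two stated figures, not an official price-index calculation), the implied 1881-to-2022 multiplier is about thirty:

$$\frac{\$788{,}000}{\$26{,}000} \approx 30.3$$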
Near Drew's Station, just outside Contention City, a man stepped into the road and commanded them to "Hold!" Three cowboys attempted to rob the stage. Paul, in the driver's seat, fired his shotgun and emptied his revolver at the robbers, wounding a cowboy later identified as Bill Leonard in the groin. Philpot, riding shotgun, and passenger Peter Roerig, riding in the rear dickey seat, were both shot and killed. The horses spooked and Paul wasn't able to bring the stage under control for almost a mile, leaving the robbers with nothing. Paul, who normally rode shotgun, later said he thought the first shot killing Philpot had been meant for him.
When Wyatt Earp first arrived in Tombstone in December 1879, he initially took a job as a stagecoach shotgun messenger for Wells Fargo, guarding shipments of silver bullion. When Earp was appointed Pima County Deputy Sheriff on July 27, 1881, his brother Morgan Earp took over his job.
When Wells, Fargo & Co. began regular stagecoach service from Tipton, Missouri, to San Francisco, California, in 1858, it issued shotguns to its drivers and guards for defense along the perilous 2,800-mile route. The guard was called a shotgun messenger and was issued a coach gun, typically a short, double-barreled 10-gauge or 12-gauge shotgun.
More recently, the term has been applied to a game, usually played by groups of friends, to determine who rides beside the driver in a car. Typically, this involves claiming the right to ride shotgun by being the first person to call out "shotgun" when everyone is in view of the vehicle; in some regions, calling shotgun too early disqualifies one from the game. Additional rules vary, such as requiring players to be within view of the car or on the same level as the car (the same parking lot, garage, etc.). The game is intended to make the choice fair by setting aside most considerations of seniority, apart from the common convention that parents and significant others automatically get shotgun, and it avoids the disputes that might otherwise arise over who gets to ride shotgun. | [
{
"paragraph_id": 0,
"text": "\"Riding shotgun\" was a phrase used to describe the bodyguard who rides alongside a stagecoach driver, typically armed with a break-action shotgun, called a coach gun, to ward off bandits or hostile Native Americans. In modern use, it refers to the practice of sitting alongside the driver in a moving vehicle. The coining of this phrase dates to 1905 at the latest.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The expression \"riding shotgun\" is derived from \"shotgun messenger\", a colloquial term for \"express messenger\", when stagecoach travel was popular during the American Wild West and the Colonial period in Australia. The person rode alongside the driver. The first known use of the phrase \"riding shotgun\" was in the 1905 novel The Sunset Trail by Alfred Henry Lewis.",
"title": "Etymology"
},
{
"paragraph_id": 2,
"text": "Wyatt and Morgan Earp were in the service of The Express Company. They went often as guards—\"riding shotgun,\" it was called—when the stage bore unusual treasure.",
"title": "Etymology"
},
{
"paragraph_id": 3,
"text": "It was later used in print and especially film depiction of stagecoaches and wagons in the Old West in danger of being robbed or attacked by bandits. A special armed employee of the express service using the stage for transportation of bullion or cash would sit beside the driver, carrying a short shotgun (or alternatively a rifle), to provide an armed response in case of threat to the cargo, which was usually a strongbox. Absence of an armed person in that position often signaled that the stage was not carrying a strongbox, but only passengers.",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "On the evening of March 15, 1881, a Kinnear & Company stagecoach carrying US$26,000 in silver bullion (equivalent to $788,000 in 2022) was en route from the boom town of Tombstone, Arizona Territory to Benson, Arizona, the nearest freight terminal. Bob Paul, who had run for Pima County Sheriff and was contesting the election he lost due to ballot-stuffing, was temporarily working once again as the Wells Fargo shotgun messenger. He had taken the reins and driver's seat in Contention City because the usual driver, a well-known and popular man named Eli \"Budd\" Philpot, was ill. Philpot was riding shotgun.",
"title": "Historical examples"
},
{
"paragraph_id": 5,
"text": "Near Drew's Station, just outside Contention City, a man stepped into the road and commanded them to \"Hold!\" Three cowboys attempted to rob the stage. Paul, in the driver's seat, fired his shotgun and emptied his revolver at the robbers, wounding a cowboy later identified as Bill Leonard in the groin. Philpot, riding shotgun, and passenger Peter Roerig, riding in the rear dickey seat, were both shot and killed. The horses spooked and Paul wasn't able to bring the stage under control for almost a mile, leaving the robbers with nothing. Paul, who normally rode shotgun, later said he thought the first shot killing Philpot had been meant for him.",
"title": "Historical examples"
},
{
"paragraph_id": 6,
"text": "When Wyatt Earp first arrived in Tombstone in December 1879, he initially took a job as a stagecoach shotgun messenger for Wells Fargo, guarding shipments of silver bullion. When Earp was appointed Pima County Deputy Sheriff on July 27, 1881, his brother Morgan Earp took over his job.",
"title": "Historical examples"
},
{
"paragraph_id": 7,
"text": "When Wells, Fargo & Co. began regular stagecoach service from Tipton, Missouri to San Francisco, California in 1858, they issued shotguns to its drivers and guards for defense along the perilous 2,800 mile route. The guard was called a shotgun messenger and they were issued a Coach gun, typically a 10-gauge or 12-gauge, short, double-barreled shotgun.",
"title": "Historical examples"
},
{
"paragraph_id": 8,
"text": "More recently, the term has been applied to a game, usually played by groups of friends to determine who rides beside the driver in a car. Typically, this involves claiming the right to ride shotgun by being the first person to call out \"shotgun\" when everyone is in view of the vehicle; in some regions, calling shotgun too early disqualifies one from the game. Variable rules may apply such as users needing to be within view of the car, or having to be on the same level as the car (the same parking lot, garage, etc.). The game creates an environment that is fair by forgetting and leaving out most seniority except that parents and significant others automatically get shotgun, and this meanwhile leaves out any conflicts that may have previously occurred when deciding who gets to ride shotgun.",
"title": "Modern usage"
}
] | "Riding shotgun" was a phrase used to describe the bodyguard who rides alongside a stagecoach driver, typically armed with a break-action shotgun, called a coach gun, to ward off bandits or hostile Native Americans. In modern use, it refers to the practice of sitting alongside the driver in a moving vehicle. The coining of this phrase dates to 1905 at the latest. | 2001-03-27T21:34:12Z | 2023-12-11T00:26:44Z | [
"Template:Other uses",
"Template:Infobox phrase",
"Template:Inflation",
"Template:Reflist",
"Template:Cite book",
"Template:Cite magazine",
"Template:Use mdy dates",
"Template:Blockquote",
"Template:Wiktionary",
"Template:Cite web",
"Template:Cite AV media",
"Template:Permanent dead link",
"Template:Short description"
] | https://en.wikipedia.org/wiki/Riding_shotgun |
5,355 | Cooking | Cooking, also known as cookery or professionally as the culinary arts, is the art, science and craft of using heat to make food more palatable, digestible, nutritious, or safe. Cooking techniques and ingredients vary widely, from grilling food over an open fire to using electric stoves, to baking in various types of ovens, reflecting local conditions.
Types of cooking also depend on the skill levels and training of the cooks. Cooking is done both by people in their own dwellings and by professional cooks and chefs in restaurants and other food establishments.
Preparing food with heat or fire is an activity unique to humans. Archeological evidence of cooking fires from at least 300,000 years ago exists, but some estimate that humans started cooking up to 2 million years ago.
The expansion of agriculture, commerce, trade, and transportation between civilizations in different regions offered cooks many new ingredients. New inventions and technologies, such as the invention of pottery for holding and boiling of water, expanded cooking techniques. Some modern cooks apply advanced scientific techniques to food preparation to further enhance the flavor of the dish served.
Phylogenetic analysis suggests that early hominids may have adopted cooking 1 million to 2 million years ago. Re-analysis of burnt bone fragments and plant ashes from the Wonderwerk Cave in South Africa has provided evidence supporting control of fire by early humans by 1 million years ago. In his seminal work Catching Fire: How Cooking Made Us Human, Richard Wrangham suggested that evolution of bipedalism and a large cranial capacity meant that early Homo habilis regularly cooked food. However, unequivocal evidence in the archaeological record for the controlled use of fire begins at 400,000 BCE, long after Homo erectus. Archaeological evidence from 300,000 years ago, in the form of ancient hearths, earth ovens, burnt animal bones, and flint, is found across Europe and the Middle East. The oldest evidence (via heated fish teeth from a deep cave) of controlled use of fire to cook food by archaic humans was dated to ~780,000 years ago. Anthropologists think that widespread cooking fires began about 250,000 years ago when hearths first appeared.
Recently, the earliest hearths have been reported to be at least 790,000 years old.
Communication between the Old World and the New World in the Columbian Exchange influenced the history of cooking. The movement of foods across the Atlantic from the New World, such as potatoes, tomatoes, maize, beans, bell pepper, chili pepper, vanilla, pumpkin, cassava, avocado, peanut, pecan, cashew, pineapple, blueberry, sunflower, chocolate, gourds, and squash, had a profound effect on Old World cooking. The movement of foods across the Atlantic from the Old World, such as cattle, sheep, pigs, wheat, oats, barley, rice, apples, pears, peas, chickpeas, green beans, mustard, and carrots, similarly changed New World cooking.
In the 17th and 18th centuries, food was a classic marker of identity in Europe. In the 19th-century "Age of Nationalism", cuisine became a defining symbol of national identity.
The Industrial Revolution brought mass-production, mass-marketing, and standardization of food. Factories processed, preserved, canned, and packaged a wide variety of foods, and processed cereals quickly became a defining feature of the American breakfast. In the 1920s, freezing methods, cafeterias, and fast food restaurants emerged.
Most ingredients in cooking are derived from living organisms. Vegetables, fruits, grains and nuts as well as herbs and spices come from plants, while meat, eggs, and dairy products come from animals. Mushrooms and the yeast used in baking are kinds of fungi. Cooks also use water and minerals such as salt. Cooks can also use wine or spirits.
Naturally occurring ingredients contain various amounts of molecules called proteins, carbohydrates and fats. They also contain water and minerals. Cooking involves a manipulation of the chemical properties of these molecules.
Carbohydrates include the common sugar, sucrose (table sugar), a disaccharide, and such simple sugars as glucose (made by enzymatic splitting of sucrose) and fructose (from fruit), and starches from sources such as cereal flour, rice, arrowroot and potato.
The interaction of heat and carbohydrate is complex. Long-chain sugars such as starch tend to break down into more digestible simpler sugars. If the sugars are heated so that all water of crystallisation is driven off, caramelization starts, with the sugar undergoing thermal decomposition with the formation of carbon, and other breakdown products producing caramel. Similarly, the heating of sugars and proteins causes the Maillard reaction, a basic flavor-enhancing technique.
An emulsion of starch with fat or water can, when gently heated, provide thickening to the dish being cooked. In European cooking, a mixture of butter and flour called a roux is used to thicken liquids to make stews or sauces. In Asian cooking, a similar effect is obtained from a mixture of rice or corn starch and water. These techniques rely on the properties of starches to create simpler mucilaginous saccharides during cooking, which causes the familiar thickening of sauces. This thickening will break down, however, under additional heat.
Types of fat include vegetable oils, animal products such as butter and lard, as well as fats from grains, including maize and flax oils. Fats are used in a number of ways in cooking and baking. To prepare stir fries, grilled cheese or pancakes, the pan or griddle is often coated with fat or oil. Fats are also used as an ingredient in baked goods such as cookies, cakes and pies. Fats can reach temperatures higher than the boiling point of water, and are often used to conduct high heat to other ingredients, such as in frying, deep frying or sautéing. Fats are used to add flavor to food (e.g., butter or bacon fat), prevent food from sticking to pans and create a desirable texture.
Fats are one of the three main macronutrient groups in human diet, along with carbohydrates and proteins, and the main components of common food products like milk, butter, tallow, lard, salt pork, and cooking oils. They are a major and dense source of food energy for many animals and play important structural and metabolic functions, in most living beings, including energy storage, waterproofing, and thermal insulation. The human body can produce the fat it requires from other food ingredients, except for a few essential fatty acids that must be included in the diet. Dietary fats are also the carriers of some flavor and aroma ingredients and vitamins that are not water-soluble.
Edible animal material, including muscle, offal, milk, eggs and egg whites, contains substantial amounts of protein. Almost all vegetable matter (in particular legumes and seeds) also includes proteins, although generally in smaller amounts. Mushrooms have high protein content. Any of these may be sources of essential amino acids. When proteins are heated they become denatured (unfolded) and change texture. In many cases, this causes the structure of the material to become softer or more friable – meat becomes cooked and is more friable and less flexible. In some cases, proteins can form more rigid structures, such as the coagulation of albumen in egg whites. The formation of a relatively rigid but flexible matrix from egg white provides an important component in baking cakes, and also underpins many desserts based on meringue.
Cooking often involves water, and water-based liquids. These can be added in order to immerse the substances being cooked (this is typically done with water, stock or wine). Alternatively, the foods themselves can release water. A favorite method of adding flavor to dishes is to save the liquid for use in other recipes. Liquids are so important to cooking that the name of the cooking method used is often based on how the liquid is combined with the food, as in steaming, simmering, boiling, braising and blanching. Heating liquid in an open container results in rapidly increased evaporation, which concentrates the remaining flavor and ingredients; this is a critical component of both stewing and sauce making.
Vitamins and minerals are required for normal metabolism, and what the body cannot manufacture itself must come from external sources. Vitamins come from several sources including fresh fruit and vegetables (Vitamin C), carrots, liver (Vitamin A), cereal bran, bread, liver (B vitamins), fish liver oil (Vitamin D) and fresh green vegetables (Vitamin K). Many minerals are also essential in small quantities including iron, calcium, magnesium, sodium chloride and sulfur; and in very small quantities copper, zinc and selenium. The micronutrients, minerals, and vitamins in fruit and vegetables may be destroyed or eluted by cooking. Vitamin C is especially prone to oxidation during cooking and may be completely destroyed by protracted cooking. The bioavailability of some vitamins such as thiamin, vitamin B6, niacin, folate, and carotenoids is increased with cooking by being freed from the food microstructure. Blanching or steaming vegetables is a way of minimizing vitamin and mineral loss in cooking.
There are many methods of cooking, most of which have been known since antiquity. These include baking, roasting, frying, grilling, barbecuing, smoking, boiling, steaming and braising. A more recent innovation is microwaving. Various methods use differing levels of heat and moisture and vary in cooking time. The method chosen greatly affects the result because some foods are more appropriate to some methods than others. Some major hot cooking techniques include:
As of 2021, over 2.6 billion people cook using open fires or inefficient stoves using kerosene, biomass, and coal as fuel. These cooking practices use fuels and technologies that produce high levels of household air pollution, causing 3.8 million premature deaths annually. Of these deaths, 27% are from pneumonia, 27% from ischaemic heart disease, 20% from chronic obstructive pulmonary disease, 18% from stroke, and 8% from lung cancer. Women and young children are disproportionately affected, since they spend the most time near the hearth.
Hazards while cooking can include
To prevent such injuries, there are protective measures such as cooking clothing, anti-slip shoes, and fire extinguishers.
Cooking can prevent many foodborne illnesses that would otherwise occur if raw food is consumed. When heat is used in the preparation of food, it can kill or inactivate harmful organisms, such as bacteria and viruses, as well as various parasites such as tapeworms and Toxoplasma gondii. Food poisoning and other illness from uncooked or poorly prepared food may be caused by bacteria such as pathogenic strains of Escherichia coli, Salmonella typhimurium and Campylobacter, viruses such as noroviruses, and protozoa such as Entamoeba histolytica. Bacteria, viruses and parasites may be introduced through salad, meat that is uncooked or done rare, and unboiled water.
The sterilizing effect of cooking depends on temperature, cooking time, and technique used. Some food spoilage bacteria such as Clostridium botulinum or Bacillus cereus can form spores that survive boiling, which then germinate and regrow after the food has cooled. This makes it unsafe to reheat cooked food more than once.
Cooking increases the digestibility of many foods which are inedible or poisonous when raw. For example, raw cereal grains are hard to digest, while kidney beans are toxic when raw or improperly cooked due to the presence of phytohaemagglutinin, which is inactivated by cooking for at least ten minutes at 100 °C (212 °F).
Food safety depends on the safe preparation, handling, and storage of food. Food spoilage bacteria proliferate in the "Danger zone" temperature range from 40 to 140 °F (4 to 60 °C); food therefore should not be stored in this temperature range. Washing of hands and surfaces, especially when handling different meats, and keeping raw food separate from cooked food to avoid cross-contamination, are good practices in food preparation. Foods prepared on plastic cutting boards may be less likely to harbor bacteria than wooden ones. Washing and disinfecting cutting boards, especially after use with raw meat, poultry, or seafood, reduces the risk of contamination.
Proponents of raw foodism argue that cooking food increases the risk of some of the detrimental effects on food or health. They point out that during cooking of vegetables and fruit containing vitamin C, the vitamin elutes into the cooking water and becomes degraded through oxidation. Peeling vegetables can also substantially reduce the vitamin C content, especially in the case of potatoes where most vitamin C is in the skin. However, research has shown that in the specific case of carotenoids a greater proportion is absorbed from cooked vegetables than from raw vegetables.
Sulforaphane, a glucosinolate breakdown product, is present in vegetables such as broccoli, and is mostly destroyed when the vegetable is boiled. Although there has been some basic research on how sulforaphane might exert beneficial effects in vivo, there is no high-quality evidence for its efficacy against human diseases.
The United States Department of Agriculture has studied retention data for 16 vitamins, 8 minerals, and alcohol for approximately 290 foods across various cooking methods.
In a human epidemiological analysis by Richard Doll and Richard Peto in 1981, diet was estimated to cause a large percentage of cancers. Studies suggest that around 32% of cancer deaths may be avoidable by changes to the diet. Some of these cancers may be caused by carcinogens in food generated during the cooking process, although it is often difficult to identify the specific components in diet that serve to increase cancer risk.
Several studies published since 1990 indicate that cooking meat at high temperature creates heterocyclic amines (HCAs), which are thought to increase cancer risk in humans. Researchers at the National Cancer Institute found that human subjects who ate beef rare or medium-rare had less than one third the risk of stomach cancer than those who ate beef medium-well or well-done. While avoiding meat or eating meat raw may be the only ways to avoid HCAs in meat fully, the National Cancer Institute states that cooking meat below 212 °F (100 °C) creates "negligible amounts" of HCAs. Also, microwaving meat before cooking may reduce HCAs by 90% by reducing the time needed for the meat to be cooked at high heat. Nitrosamines are found in some food, and may be produced by some cooking processes from proteins or from nitrites used as food preservatives; cured meat such as bacon has been found to be carcinogenic, with links to colon cancer. Ascorbate, which is added to cured meat, however, reduces nitrosamine formation.
Baking, grilling or broiling food, especially starchy foods, until a toasted crust is formed generates significant concentrations of acrylamide. This discovery in 2002 led to international health concerns. Subsequent research has however found that it is not likely that the acrylamides in burnt or well-cooked food cause cancer in humans; Cancer Research UK categorizes the idea that burnt food causes cancer as a "myth".
The scientific study of cooking has become known as molecular gastronomy. This is a subdiscipline of food science concerning the physical and chemical transformations that occur during cooking.
Important contributions have been made by scientists, chefs and authors such as Hervé This (chemist), Nicholas Kurti (physicist), Peter Barham (physicist), Harold McGee (author), Shirley Corriher (biochemist, author) and Robert Wolke (chemist, author). Molecular gastronomy is distinct from the application of that scientific knowledge to cooking, which is known as "molecular cooking" (for the technique) or "molecular cuisine" (for a culinary style), and is practiced by chefs such as Raymond Blanc, Philippe and Christian Conticini, Ferran Adrià, Heston Blumenthal and Pierre Gagnaire.
Chemical processes central to cooking include hydrolysis (in particular beta elimination of pectins, during the thermal treatment of plant tissues), pyrolysis, and glycation reactions wrongly named Maillard reactions.
Cooking foods with heat depends on many factors: the specific heat of an object, thermal conductivity, and (perhaps most significantly) the difference in temperature between the two objects. Thermal diffusivity is the combination of specific heat, conductivity and density that determines how long it will take for the food to reach a certain temperature.
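The quantity described in the previous paragraph can be written as α = k / (ρ·c_p), where k is thermal conductivity, ρ is density and c_p is specific heat. The short sketch below is an illustrative aside rather than part of the original article: the property values are typical textbook figures for liquid water, used only to show the order of magnitude of the resulting L²/α heating-time estimate.

```python
# Illustrative sketch (assumed values, not from the source article):
# thermal diffusivity alpha = k / (rho * c_p), plus a rough L^2 / alpha
# estimate of how long heat takes to penetrate a depth L.

def thermal_diffusivity(k: float, rho: float, c_p: float) -> float:
    """Thermal diffusivity in m^2/s from conductivity k [W/(m*K)],
    density rho [kg/m^3] and specific heat c_p [J/(kg*K)]."""
    return k / (rho * c_p)

# Typical textbook values for liquid water near room temperature.
alpha_water = thermal_diffusivity(k=0.6, rho=1000.0, c_p=4184.0)
print(f"alpha(water) ~ {alpha_water:.2e} m^2/s")              # ~1.4e-07 m^2/s

depth = 0.01  # metres, roughly 1 cm into a piece of food
print(f"t ~ depth^2/alpha ~ {depth**2 / alpha_water:.0f} s")  # ~700 s
```

On these assumed numbers, heat needs on the order of ten minutes to reach a centimetre into water-like food, which is one way to see why thick items cook so much more slowly than thin ones.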
Home cooking has traditionally been a process carried out informally in a home or around a communal fire, and can be enjoyed by all members of the family, although in many cultures women bear primary responsibility. Cooking is also often carried out outside of personal quarters, for example at restaurants or schools. Bakeries were one of the earliest forms of cooking outside the home, and bakeries in the past often offered the cooking of pots of food provided by their customers as an additional service. In the present day, factory food preparation has become common, with many "ready-to-eat" as well as "ready-to-cook" foods being prepared and cooked in factories, and home cooks using a mixture of scratch-made and factory-made foods to make a meal. The nutritional value of including more commercially prepared foods has been found to be inferior to that of home-made foods. Home-cooked meals tend to be healthier, with fewer calories and less saturated fat, cholesterol and sodium on a per-calorie basis, while providing more fiber, calcium, and iron. The ingredients are also directly sourced, so there is control over authenticity, taste, and nutritional value. The superior nutritional quality of home-cooking could therefore play a role in preventing chronic disease. Cohort studies following the elderly over 10 years show that adults who cook their own meals have significantly lower mortality, even when controlling for confounding variables.
"Home-cooking" may be associated with comfort food, and some commercially produced foods and restaurant meals are presented through advertising or packaging as having been "home-cooked", regardless of their actual origin. This trend began in the 1920s and is attributed to people in urban areas of the U.S. wanting homestyle food even though their schedules and smaller kitchens made cooking harder. | [
{
"paragraph_id": 0,
"text": "Cooking, also known as cookery or professionally as the culinary arts, is the art, science and craft of using heat to make food more palatable, digestible, nutritious, or safe. Cooking techniques and ingredients vary widely, from grilling food over an open fire to using electric stoves, to baking in various types of ovens, reflecting local conditions.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Types of cooking also depend on the skill levels and training of the cooks. Cooking is done both by people in their own dwellings and by professional cooks and chefs in restaurants and other food establishments.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Preparing food with heat or fire is an activity unique to humans. Archeological evidence of cooking fires from at least 300,000 years ago exists, but some estimate that humans started cooking up to 2 million years ago.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The expansion of agriculture, commerce, trade, and transportation between civilizations in different regions offered cooks many new ingredients. New inventions and technologies, such as the invention of pottery for holding and boiling of water, expanded cooking techniques. Some modern cooks apply advanced scientific techniques to food preparation to further enhance the flavor of the dish served.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Phylogenetic analysis suggests that early hominids may have adopted cooking 1 million to 2 million years ago. Re-analysis of burnt bone fragments and plant ashes from the Wonderwerk Cave in South Africa has provided evidence supporting control of fire by early humans by 1 million years ago. In his seminal work Catching Fire: How Cooking Made Us Human, Richard Wrangham suggested that evolution of bipedalism and a large cranial capacity meant that early Homo habilis regularly cooked food. However, unequivocal evidence in the archaeological record for the controlled use of fire begins at 400,000 BCE, long after Homo erectus. Archaeological evidence from 300,000 years ago, in the form of ancient hearths, earth ovens, burnt animal bones, and flint, are found across Europe and the Middle East. The oldest evidence (via heated fish teeth from a deep cave) of controlled use of fire to cook food by archaic humans was dated to ~780,000 years ago. Anthropologists think that widespread cooking fires began about 250,000 years ago when hearths first appeared.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Recently, the earliest hearths have been reported to be at least 790,000 years old.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Communication between the Old World and the New World in the Columbian Exchange influenced the history of cooking. The movement of foods across the Atlantic from the New World, such as potatoes, tomatoes, maize, beans, bell pepper, chili pepper, vanilla, pumpkin, cassava, avocado, peanut, pecan, cashew, pineapple, blueberry, sunflower, chocolate, gourds, and squash, had a profound effect on Old World cooking. The movement of foods across the Atlantic from the Old World, such as cattle, sheep, pigs, wheat, oats, barley, rice, apples, pears, peas, chickpeas, green beans, mustard, and carrots, similarly changed New World cooking.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In the 17th and 18th centuries, food was a classic marker of identity in Europe. In the 19th-century \"Age of Nationalism\", cuisine became a defining symbol of national identity.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The Industrial Revolution brought mass-production, mass-marketing, and standardization of food. Factories processed, preserved, canned, and packaged a wide variety of foods, and processed cereals quickly became a defining feature of the American breakfast. In the 1920s, freezing methods, cafeterias, and fast food restaurants emerged.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Most ingredients in cooking are derived from living organisms. Vegetables, fruits, grains and nuts as well as herbs and spices come from plants, while meat, eggs, and dairy products come from animals. Mushrooms and the yeast used in baking are kinds of fungi. Cooks also use water and minerals such as salt. Cooks can also use wine or spirits.",
"title": "Ingredients"
},
{
"paragraph_id": 10,
"text": "Naturally occurring ingredients contain various amounts of molecules called proteins, carbohydrates and fats. They also contain water and minerals. Cooking involves a manipulation of the chemical properties of these molecules.",
"title": "Ingredients"
},
{
"paragraph_id": 11,
"text": "Carbohydrates include the common sugar, sucrose (table sugar), a disaccharide, and such simple sugars as glucose (made by enzymatic splitting of sucrose) and fructose (from fruit), and starches from sources such as cereal flour, rice, arrowroot and potato.",
"title": "Ingredients"
},
{
"paragraph_id": 12,
"text": "The interaction of heat and carbohydrate is complex. Long-chain sugars such as starch tend to break down into more digestible simpler sugars. If the sugars are heated so that all water of crystallisation is driven off, caramelization starts, with the sugar undergoing thermal decomposition with the formation of carbon, and other breakdown products producing caramel. Similarly, the heating of sugars and proteins causes the Maillard reaction, a basic flavor-enhancing technique.",
"title": "Ingredients"
},
{
"paragraph_id": 13,
"text": "An emulsion of starch with fat or water can, when gently heated, provide thickening to the dish being cooked. In European cooking, a mixture of butter and flour called a roux is used to thicken liquids to make stews or sauces. In Asian cooking, a similar effect is obtained from a mixture of rice or corn starch and water. These techniques rely on the properties of starches to create simpler mucilaginous saccharides during cooking, which causes the familiar thickening of sauces. This thickening will break down, however, under additional heat.",
"title": "Ingredients"
},
{
"paragraph_id": 14,
"text": "Types of fat include vegetable oils, animal products such as butter and lard, as well as fats from grains, including maize and flax oils. Fats are used in a number of ways in cooking and baking. To prepare stir fries, grilled cheese or pancakes, the pan or griddle is often coated with fat or oil. Fats are also used as an ingredient in baked goods such as cookies, cakes and pies. Fats can reach temperatures higher than the boiling point of water, and are often used to conduct high heat to other ingredients, such as in frying, deep frying or sautéing. Fats are used to add flavor to food (e.g., butter or bacon fat), prevent food from sticking to pans and create a desirable texture.",
"title": "Ingredients"
},
{
"paragraph_id": 15,
"text": "Fats are one of the three main macronutrient groups in human diet, along with carbohydrates and proteins, and the main components of common food products like milk, butter, tallow, lard, salt pork, and cooking oils. They are a major and dense source of food energy for many animals and play important structural and metabolic functions, in most living beings, including energy storage, waterproofing, and thermal insulation. The human body can produce the fat it requires from other food ingredients, except for a few essential fatty acids that must be included in the diet. Dietary fats are also the carriers of some flavor and aroma ingredients and vitamins that are not water-soluble.",
"title": "Ingredients"
},
{
"paragraph_id": 16,
"text": "Edible animal material, including muscle, offal, milk, eggs and egg whites, contains substantial amounts of protein. Almost all vegetable matter (in particular legumes and seeds) also includes proteins, although generally in smaller amounts. Mushrooms have high protein content. Any of these may be sources of essential amino acids. When proteins are heated they become denatured (unfolded) and change texture. In many cases, this causes the structure of the material to become softer or more friable – meat becomes cooked and is more friable and less flexible. In some cases, proteins can form more rigid structures, such as the coagulation of albumen in egg whites. The formation of a relatively rigid but flexible matrix from egg white provides an important component in baking cakes, and also underpins many desserts based on meringue.",
"title": "Ingredients"
},
{
"paragraph_id": 17,
"text": "Cooking often involves water, and water-based liquids. These can be added in order to immerse the substances being cooked (this is typically done with water, stock or wine). Alternatively, the foods themselves can release water. A favorite method of adding flavor to dishes is to save the liquid for use in other recipes. Liquids are so important to cooking that the name of the cooking method used is often based on how the liquid is combined with the food, as in steaming, simmering, boiling, braising and blanching. Heating liquid in an open container results in rapidly increased evaporation, which concentrates the remaining flavor and ingredients; this is a critical component of both stewing and sauce making.",
"title": "Ingredients"
},
{
"paragraph_id": 18,
"text": "Vitamins and minerals are required for normal metabolism; and what the body cannot manufacture itself must come from external sources. Vitamins come from several sources including fresh fruit and vegetables (Vitamin C), carrots, liver (Vitamin A), cereal bran, bread, liver (B vitamins), fish liver oil (Vitamin D) and fresh green vegetables (Vitamin K). Many minerals are also essential in small quantities including iron, calcium, magnesium, sodium chloride and sulfur; and in very small quantities copper, zinc and selenium. The micronutrients, minerals, and vitamins in fruit and vegetables may be destroyed or eluted by cooking. Vitamin C is especially prone to oxidation during cooking and may be completely destroyed by protracted cooking. The bioavailability of some vitamins such as thiamin, vitamin B6, niacin, folate, and carotenoids are increased with cooking by being freed from the food microstructure. Blanching or steaming vegetables is a way of minimizing vitamin and mineral loss in cooking.",
"title": "Ingredients"
},
{
"paragraph_id": 19,
"text": "There are many methods of cooking, most of which have been known since antiquity. These include baking, roasting, frying, grilling, barbecuing, smoking, boiling, steaming and braising. A more recent innovation is microwaving. Various methods use differing levels of heat and moisture and vary in cooking time. The method chosen greatly affects the result because some foods are more appropriate to some methods than others. Some major hot cooking techniques include:",
"title": "Methods"
},
{
"paragraph_id": 20,
"text": "As of 2021, over 2.6 billion people cook using open fires or inefficient stoves using kerosene, biomass, and coal as fuel. These cooking practices use fuels and technologies that produce high levels of household air pollution, causing 3.8 million premature deaths annually. Of these deaths, 27% are from pneumonia, 27% from ischaemic heart disease, 20% from chronic obstructive pulmonary disease, 18% from stroke, and 8% from lung cancer. Women and young children are disproportionately affected, since they spend the most time near the hearth.",
"title": "Health and safety"
},
{
"paragraph_id": 21,
"text": "Hazards while cooking can include",
"title": "Health and safety"
},
{
"paragraph_id": 22,
"text": "To prevent those injuries there are protections such as cooking clothing, anti-slip shoes, fire extinguisher and more.",
"title": "Health and safety"
},
{
"paragraph_id": 23,
"text": "Cooking can prevent many foodborne illnesses that would otherwise occur if raw food is consumed. When heat is used in the preparation of food, it can kill or inactivate harmful organisms, such as bacteria and viruses, as well as various parasites such as tapeworms and Toxoplasma gondii. Food poisoning and other illness from uncooked or poorly prepared food may be caused by bacteria such as pathogenic strains of Escherichia coli, Salmonella typhimurium and Campylobacter, viruses such as noroviruses, and protozoa such as Entamoeba histolytica. Bacteria, viruses and parasites may be introduced through salad, meat that is uncooked or done rare, and unboiled water.",
"title": "Health and safety"
},
{
"paragraph_id": 24,
"text": "The sterilizing effect of cooking depends on temperature, cooking time, and technique used. Some food spoilage bacteria such as Clostridium botulinum or Bacillus cereus can form spores that survive boiling, which then germinate and regrow after the food has cooled. This makes it unsafe to reheat cooked food more than once.",
"title": "Health and safety"
},
{
"paragraph_id": 25,
"text": "Cooking increases the digestibility of many foods which are inedible or poisonous when raw. For example, raw cereal grains are hard to digest, while kidney beans are toxic when raw or improperly cooked due to the presence of phytohaemagglutinin, which is inactivated by cooking for at least ten minutes at 100 °C (212 °F).",
"title": "Health and safety"
},
{
"paragraph_id": 26,
"text": "Food safety depends on the safe preparation, handling, and storage of food. Food spoilage bacteria proliferate in the \"Danger zone\" temperature range from 40 to 140 °F (4 to 60 °C), food therefore should not be stored in this temperature range. Washing of hands and surfaces, especially when handling different meats, and keeping raw food separate from cooked food to avoid cross-contamination, are good practices in food preparation. Foods prepared on plastic cutting boards may be less likely to harbor bacteria than wooden ones. Washing and disinfecting cutting boards, especially after use with raw meat, poultry, or seafood, reduces the risk of contamination.",
"title": "Health and safety"
},
{
"paragraph_id": 27,
"text": "Proponents of raw foodism argue that cooking food increases the risk of some of the detrimental effects on food or health. They point out that during cooking of vegetables and fruit containing vitamin C, the vitamin elutes into the cooking water and becomes degraded through oxidation. Peeling vegetables can also substantially reduce the vitamin C content, especially in the case of potatoes where most vitamin C is in the skin. However, research has shown that in the specific case of carotenoids a greater proportion is absorbed from cooked vegetables than from raw vegetables.",
"title": "Health and safety"
},
{
"paragraph_id": 28,
"text": "Sulforaphane, a glucosinolate breakdown product, is present in vegetables such as broccoli, and is mostly destroyed when the vegetable is boiled. Although there has been some basic research on how sulforaphane might exert beneficial effects in vivo, there is no high-quality evidence for its efficacy against human diseases.",
"title": "Health and safety"
},
{
"paragraph_id": 29,
"text": "The United States Department of Agriculture has studied retention data for 16 vitamins, 8 minerals, and alcohol for approximately 290 foods across various cooking methods.",
"title": "Health and safety"
},
{
"paragraph_id": 30,
"text": "In a human epidemiological analysis by Richard Doll and Richard Peto in 1981, diet was estimated to cause a large percentage of cancers. Studies suggest that around 32% of cancer deaths may be avoidable by changes to the diet. Some of these cancers may be caused by carcinogens in food generated during the cooking process, although it is often difficult to identify the specific components in diet that serve to increase cancer risk.",
"title": "Health and safety"
},
{
"paragraph_id": 31,
"text": "Several studies published since 1990 indicate that cooking meat at high temperature creates heterocyclic amines (HCAs), which are thought to increase cancer risk in humans. Researchers at the National Cancer Institute found that human subjects who ate beef rare or medium-rare had less than one third the risk of stomach cancer than those who ate beef medium-well or well-done. While avoiding meat or eating meat raw may be the only ways to avoid HCAs in meat fully, the National Cancer Institute states that cooking meat below 212 °F (100 °C) creates \"negligible amounts\" of HCAs. Also, microwaving meat before cooking may reduce HCAs by 90% by reducing the time needed for the meat to be cooked at high heat. Nitrosamines are found in some food, and may be produced by some cooking processes from proteins or from nitrites used as food preservatives; cured meat such as bacon has been found to be carcinogenic, with links to colon cancer. Ascorbate, which is added to cured meat, however, reduces nitrosamine formation.",
"title": "Health and safety"
},
{
"paragraph_id": 32,
"text": "Baking, grilling or broiling food, especially starchy foods, until a toasted crust is formed generates significant concentrations of acrylamide. This discovery in 2002 led to international health concerns. Subsequent research has however found that it is not likely that the acrylamides in burnt or well-cooked food cause cancer in humans; Cancer Research UK categorizes the idea that burnt food causes cancer as a \"myth\".",
"title": "Health and safety"
},
{
"paragraph_id": 33,
"text": "The scientific study of cooking has become known as molecular gastronomy. This is a subdiscipline of food science concerning the physical and chemical transformations that occur during cooking.",
"title": "Scientific aspects"
},
{
"paragraph_id": 34,
"text": "Important contributions have been made by scientists, chefs and authors such as Hervé This (chemist), Nicholas Kurti (physicist), Peter Barham (physicist), Harold McGee (author), Shirley Corriher (biochemist, author), Robert Wolke (chemist, author.) It is different for the application of scientific knowledge to cooking, that is \"molecular cooking\"( (for the technique) or \"molecular cuisine\" (for a culinary style), for which chefs such as Raymond Blanc, Philippe and Christian Conticini, Ferran Adria, Heston Blumenthal, Pierre Gagnaire (chef).",
"title": "Scientific aspects"
},
{
"paragraph_id": 35,
"text": "Chemical processes central to cooking include hydrolysis (in particular beta elimination of pectins, during the thermal treatment of plant tissues), pyrolysis, and glycation reactions wrongly named Maillard reactions.",
"title": "Scientific aspects"
},
{
"paragraph_id": 36,
"text": "Cooking foods with heat depends on many factors: the specific heat of an object, thermal conductivity, and (perhaps most significantly) the difference in temperature between the two objects. Thermal diffusivity is the combination of specific heat, conductivity and density that determines how long it will take for the food to reach a certain temperature.",
"title": "Scientific aspects"
},
{
"paragraph_id": 37,
"text": "Home cooking has traditionally been a process carried out informally in a home or around a communal fire, and can be enjoyed by all members of the family, although in many cultures women bear primary responsibility. Cooking is also often carried out outside of personal quarters, for example at restaurants, or schools. Bakeries were one of the earliest forms of cooking outside the home, and bakeries in the past often offered the cooking of pots of food provided by their customers as an additional service. In the present day, factory food preparation has become common, with many \"ready-to-eat\" as well as \"ready-to-cook\" foods being prepared and cooked in factories and home cooks using a mixture of scratch made, and factory made foods together to make a meal. The nutritional value of including more commercially prepared foods has been found to be inferior to home-made foods. Home-cooked meals tend to be healthier with fewer calories, and less saturated fat, cholesterol and sodium on a per calorie basis while providing more fiber, calcium, and iron. The ingredients are also directly sourced, so there is control over authenticity, taste, and nutritional value. The superior nutritional quality of home-cooking could therefore play a role in preventing chronic disease. Cohort studies following the elderly over 10 years show that adults who cook their own meals have significantly lower mortality, even when controlling for confounding variables.",
"title": "Home-cooking and commercial cooking"
},
{
"paragraph_id": 38,
"text": "\"Home-cooking\" may be associated with comfort food, and some commercially produced foods and restaurant meals are presented through advertising or packaging as having been \"home-cooked\", regardless of their actual origin. This trend began in the 1920s and is attributed to people in urban areas of the U.S. wanting homestyle food even though their schedules and smaller kitchens made cooking harder.",
"title": "Home-cooking and commercial cooking"
}
] | Cooking, also known as cookery or professionally as the culinary arts, is the art, science and craft of using heat to make food more palatable, digestible, nutritious, or safe. Cooking techniques and ingredients vary widely, from grilling food over an open fire to using electric stoves, to baking in various types of ovens, reflecting local conditions. Types of cooking also depend on the skill levels and training of the cooks. Cooking is done both by people in their own dwellings and by professional cooks and chefs in restaurants and other food establishments. Preparing food with heat or fire is an activity unique to humans. Archeological evidence of cooking fires from at least 300,000 years ago exists, but some estimate that humans started cooking up to 2 million years ago. The expansion of agriculture, commerce, trade, and transportation between civilizations in different regions offered cooks many new ingredients. New inventions and technologies, such as the invention of pottery for holding and boiling of water, expanded cooking techniques. Some modern cooks apply advanced scientific techniques to food preparation to further enhance the flavor of the dish served. | 2001-11-09T16:33:23Z | 2023-12-26T14:09:14Z | [
"Template:Div col end",
"Template:Cite news",
"Template:Doi",
"Template:Dead link",
"Template:Prehistoric technology",
"Template:Main",
"Template:See also",
"Template:Reflist",
"Template:Clarify",
"Template:Portal",
"Template:Cite book",
"Template:Cooking Techniques",
"Template:Authority control",
"Template:Cite magazine",
"Template:Div col",
"Template:Isbn",
"Template:Cuisine",
"Template:Meals wide",
"Template:Citation needed",
"Template:Failed verification",
"Template:Unreferenced section",
"Template:Cite web",
"Template:Cite journal",
"Template:Use dmy dates",
"Template:Nowrap",
"Template:Convert",
"Template:Webarchive",
"Template:Cite EB1911",
"Template:Subject bar",
"Template:Short description",
"Template:About",
"Template:Citation"
] | https://en.wikipedia.org/wiki/Cooking |
5,360 | Card game | A card game is any game using playing cards as the primary device with which the game is played, be they traditional or game-specific. Countless card games exist, including families of related games (such as poker). A small number of card games played with traditional decks have formally standardized rules with international tournaments being held, but most are folk games whose rules may vary by region, culture, location or from circle to circle.
Traditional card games are played with a deck or pack of playing cards which are identical in size and shape. Each card has two sides, the face and the back. Normally the backs of the cards are indistinguishable. The faces of the cards may all be unique, or there can be duplicates. The composition of a deck is known to each player. In some cases several decks are shuffled together to form a single pack or shoe. Modern card games usually have bespoke decks, often with a vast number of cards, and can include number or action cards. This type of game is generally regarded as part of the board game hobby.
Games using playing cards exploit the fact that cards are individually identifiable from one side only, so that each player knows only the cards they hold and not those held by anyone else. For this reason card games are often characterized as games of chance or "imperfect information"—as distinct from games of strategy or perfect information, where the current position is fully visible to all players throughout the game. Many games that are not generally placed in the family of card games do in fact use cards for some aspect of their play.
Some games that are placed in the card game genre involve a board. The distinction is that the play in a card game chiefly depends on the use of the cards by players (the board is a guide for scorekeeping or for card placement), while board games (the principal non-card game genre to use cards) generally focus on the players' positions on the board, and use the cards for some secondary purpose.
Despite the presence of playing cards in Europe being recorded from around 1370, it is not until 1408 that the first card game is described in a document about the exploits of two card sharps; although it is evidently very simple, the game is not named. In fact the earliest games to be mentioned by name are Gleek, Ronfa and Condemnade, the latter being the game played by the aforementioned card cheats. All three are recorded during the 15th century, along with Karnöffel, first mentioned in 1426 and which is still played in several forms today, including Bruus, Knüffeln, Kaiserspiel and Styrivolt.
Since the arrival of trick-taking games in Europe in the late 14th century, there have only been two major innovations. The first was the introduction of trump cards with the power to beat all cards in other suits. Such cards were initially called trionfi and first appeared with the advent of Tarot cards in which there is a separate, permanent trump suit comprising a number of picture cards. The first known example of such cards was ordered by the Duke of Milan around 1420 and included 16 trumps with images of Greek and Roman gods. Thus games played with Tarot cards appeared very early on and spread to most parts of Europe with the notable exceptions of the British Isles, the Iberian Peninsula, and the Balkans. However, we do not know the rules of the early Tarot games; the earliest detailed descriptions in any language are those published by the Abbé de Marolles in Nevers in 1637.
The concept of trumps was sufficiently powerful that it was soon transferred to games played with far cheaper ordinary packs of cards, as opposed to expensive Tarot cards. The first of these was Triomphe, the name simply being the French equivalent of the Italian trionfi. Although not attested before 1538, its first rules were written by a Spaniard who left his native country for Milan in 1509 never to return; thus the game may date to the late 15th century.
Other games that may well date to the 15th century are Pochen – the game of Bocken or Boeckels being attested in Strasbourg in 1441 – and Thirty-One, which is first mentioned in a French translation of a 1440 sermon by the Italian Saint Bernardine, the name actually referring to two different card games: one like Pontoon and one like Commerce.
In the 16th century printed documents replace handwritten sources and card games become a popular topic with preachers, autobiographists and writers in general. A key source of the games in vogue in France and Europe at that time is François Rabelais, whose fictional character Gargantua played no less than 30 card games, many of which are recognisable. They include: Aluette, Bête, Cent, Coquimbert, Coucou, Flush or Flux, Gé (Pairs), Gleek, Lansquenet, Piquet, Post and Pair, Primero, Ronfa, Triomphe, Sequence, Speculation, Tarot and Trente-et-Un; possibly Rams, Mouche and Brandeln as well. Girolamo Cardano also provides invaluable information including the earliest rules of Trappola. Among the most popular were the games of Flusso and Primiera, which originated in Italy and spread throughout Europe, becoming known in England as Flush and Primero.
In Britain the earliest known European fishing game was recorded in 1522. Another first was Losing Loadum, noted by Florio in 1591, which is the earliest known English point-trick game. In Scotland, the game of Mawe, attested in the 1550s, evolved from a country game into one played at the royal Scottish court, becoming a favourite of James VI. The ancestor of Cribbage – a game called Noddy – is mentioned for the first time in 1589, "Noddy" being the Knave turned for trump at the start of play.
The 17th century saw an upsurge in the number of new games being reported as well as the first sets of rules, those for Piquet appearing in 1632 and Reversis in 1634. The first French games compendium, La Maison Académique, appeared in 1654 and it was followed in 1674 by Charles Cotton's The Compleat Gamester, although an earlier manuscript of games by Francis Willughby was written sometime between 1665 and 1670. Cotton records the first rules for the classic English games of Cribbage, a descendant of Noddy, and Whist, a development of English Trump or Ruff ('ruff' then meaning 'rob') in which four players were dealt 12 cards each and the dealer 'robbed' from the remaining stock of 4 cards.
Piquet was a two-player, trick-taking game that originated in France, probably in the 16th century, and was initially played with 36 cards before the pack was reduced, around 1690, to the 32 cards that give the Piquet pack its name. Reversis is a reverse game in which players avoid taking tricks and appears to be an Italian invention that came to France around 1600 and spread rapidly to other countries in Europe.
In the mid-17th century, a certain game named after Cardinal Mazarin, prime minister to King Louis XIV, became very popular at the French royal court. Called Hoc Mazarin, it had three phases, the final one of which evolved into a much simpler game called Manille that was renamed Comète on the appearance of Halley's Comet in 1682. In Comète the aim is to be first to shed all one's hand cards to sequences laid out in rows on the table. However, there are certain cards known as 'stops' or hocs: cards that end a sequence and give the one who played it the advantage of being able to start a new sequence. This concept spread to other 17th- and 18th-century games including Poque, Comète, Emprunt, Manille, Nain Jaune and Lindor, all except Emprunt being still played in some form today.
The 17th century saw the second of the two great innovations in trick-taking games: the concept of bidding. This first emerged in the Spanish game of Ombre, an evolution of Triomphe that "in its time, was the most successful card game ever invented." Ombre's origins are unclear and obfuscated by the existence of a game called Homme or Bête in France, hombre and homme being respectively Spanish and French for 'man'. In Ombre, the player who won the bidding became the "Man" and played alone against the other two. The game spread rapidly across Europe, spawning variants for different numbers of players known as Quadrille, Quintille, Médiateur and Solo. Quadrille went on to become highly fashionable in England during the 18th century and is mentioned several times, for example, in Jane Austen's Pride and Prejudice.
The first rules of any game in the German language were those for Rümpffen published in 1608 and later expanded in several subsequent editions. In addition, the first German games compendium, Palamedes Redivivus appeared in 1678, containing the rules for Hoick (Hoc), Ombre, Picquet (sic), Rümpffen and Thurnspiel.
The evolution of card games continued apace, with notable national games emerging like Briscola and Tressette (Italy), Schafkopf (Bavaria), Jass (Switzerland), Mariage, the ancestor of Austria's Schnapsen and Germany's Sixty-Six, and Tapp Tarock, the progenitor of most modern central European Tarot games. Whist spread to the continent becoming very popular in the north and west. In France, Comet appeared, a game that later evolved into Nain Jaune and the Victorian game of Pope Joan.
Card games may be classified in different ways: by their objective, by the equipment used (e.g. number of cards and type of suits), by country of origin or by mechanism (how the game is played). Parlett and McLeod predominantly group card games by mechanism, of which there are five categories: outplay, card exchange, hand comparison, layout and a miscellaneous category that includes combat and compendium games. These are described in the following sections.
Outplay games form easily the largest category: players have a hand of cards and must play them out to the table. Play ends when players have played all their cards.
Trick-taking games are the largest category of outplay games. Players typically receive an equal number of cards and a trick involves each player playing a card face up to the table – the rules of play dictating what cards may be played and who wins the trick.
There are two main types of trick-taking game with different objectives. Both are based on the play of multiple tricks, in each of which each player plays a single card from their hand, and based on the values of played cards one player wins or "takes" the trick. In plain-trick games the aim is to win a number of tricks, a specific trick or as many tricks as possible, without regard to the actual cards. In point-trick games, the number of tricks is immaterial; what counts is the value, in points, of the cards captured.
Many common Anglo-American games fall into the category of plain-trick games. The usual objective is to take the most tricks, but variations include taking all tricks, taking as few tricks (or penalty cards) as possible, or taking an exact number of tricks. Bridge, Whist and Spades are popular examples. Hearts, Black Lady and Black Maria are examples of reverse games in which the aim is to avoid certain cards. Plain-trick games may be divided into the following 11 groups:
Point-trick games are all European or of European origin and include the Tarot card games. Individual cards have specific point values and the objective is usually to amass the majority of points by taking tricks, especially those with higher value cards. There are around nine main groups:
In beating games the idea is to beat the card just played if possible, otherwise it must be picked up, either alone or together with other cards, and added to the hand. In many beating games the objective is to shed all one's cards, in which case they are also "shedding games". Well known examples include Crazy Eights and Mau Mau.
This is a small group whose ancestor is Noddy, now extinct, but which generated the far more interesting games of Costly Colours and Cribbage. Players play in turn and add the values of the cards as they go. The aim is to reach or avoid certain totals and also to score for certain combinations.
In fishing games, cards from the hand are played against cards in a layout on the table, capturing table cards if they match. Fishing games are popular in many nations, including China, where there are many diverse fishing games. Scopa is considered one of the national card games of Italy. Cassino is the only fishing game to be widely played in English-speaking countries. Zwicker has been described as a "simpler and jollier version of Cassino", played in Germany. Tablanet (tablić) is a fishing-style game popular in the Balkans.
The object of a matching (or sometimes "melding") game is to acquire particular groups of matching cards before an opponent can do so. In Rummy, this is done through drawing and discarding, and the groups are called melds. Mahjong is a very similar game played with tiles instead of cards. Non-Rummy examples of match-type games generally fall into the "fishing" genre and include the children's games Go Fish and Old Maid.
In games of the war group, also called "catch and collect games" or "accumulating games", the object is to acquire all cards in the deck. Examples include most War type games, and games involving slapping a discard pile such as Slapjack. Egyptian Ratscrew has both of these features.
Climbing games are an Oriental family in which the idea is to play a higher card, or combination of cards, than the one just played. Otherwise a player must pass, and may even choose to pass when able to beat. The sole Western example is the game of President, which is probably derived from an Asian game.
Card exchange games form another large category in which players exchange a card or cards from their hands with table cards or with other players with the aim, typically, of collecting specific cards or card combinations. Games of the rummy family are the best known.
In these games players draw a card from stock, make a move if possible or desired, and then discard a card to a discard pile. Almost all the games of this group are in the rummy family, but Golf is a non-rummy example.
As the name might suggest, players exchange hand cards with a common pool of cards on the table. Examples include Schwimmen, Kemps, James Bond and Whisky Poker. They originated in the old European games of Thirty-One and Commerce.
A very old round game played in different forms in different countries. Players are dealt just one card and may try to swap it with a neighbour to avoid having the lowest card or, sometimes, certain penalty cards. The old French game is Coucou and its later English cousin is Ranter Go Round, also called Chase the Ace and Screw Your Neighbour.
A family of such games played with special cards includes Italian Cucù, Scandinavian Gnav, Austrian Hexenspiel and German Vogelspiel.
Games involving collecting sets of cards, the best known of which is Happy Families. Highly successful is its German equivalent, Quartett, which may be played with a Skat pack, but is much more commonly played with proprietary packs.
Games involving passing cards to your neighbours. The classic game is Old Maid which may, however, be derived from German Black Peter and related to the French game of Vieux Garçon. Pig, with its variations of Donkey and Spoons, is also popular.
Most patience or card solitaire games are designed to be played by one player, but some are designed for two or more players to compete.
Patience games originated in northern Europe and were designed for a single player, hence its subsequent North American name of solitaire. Most games begin with a specific layout of cards, called a tableau, and the object is then either to construct a more elaborate final layout, or to clear the tableau and/or the draw pile or stock by moving all cards to one or more discard or foundation piles.
In competitive patiences, two or more players compete to be first to complete a patience or solitaire-like tableau. Some use a common layout; in others each player has a separate layout. Popular examples include Spite and Malice, Racing Demon or Nerts, Spit, Speed and Russian Bank.
The most common of these is Card Dominoes, also known as Fan Tan or Parliament, in which the idea is to build the four suits in sequence from a central card (the 7 in 52-card games or the Unter in 32-card packs). The winner is the first out and the loser the last left in holding cards.
Hand comparison games, also called comparing card games, are mostly gambling games that use cards. Players lay their initial stakes, are dealt cards, may or may not be able to exchange or add to them, and may or may not be able to raise their stakes, and the outcome is decided by some form of comparison of card values or combinations. The main groups are vying and banking games. A smaller mainly Oriental group are partition games in which players divide their hands before comparing.
Vying games are those in which players bet or "vie" on who has the best hand. The player with the best combination of hand cards in a "showdown", or the player able to bluff the others into folding, wins the hand. Easily the best known of the group around the world is Poker, which itself is a family of games with over 100 variants. Other examples include English Brag and the old Basque game of Mus. Most may be classified as gambling games and, while they may involve skill in terms of bluffing and memorising and assessing odds, they involve little or no card playing skill.
Poker is a family of gambling games in which players bet that the value of the hand they hold will beat all others according to the ranking system; bets are placed into a pool, called the pot, whose value changes as the game progresses. Variants largely differ on how cards are dealt and the methods by which players can improve a hand. For many reasons, including its age and its popularity among Western militaries, it is one of the most universally known card games in existence.
These are gambling games played for money or chips in which players compete, not against one another, but against a banker. They are commonly played in casinos, but many have become domesticised, played at home for sweets, matchsticks or points. In casino games, the banker will have a 'house advantage' that ensures a profit for the casino. Popular casino games include Blackjack and Baccarat, while Pontoon is a cousin of Blackjack that emerged from the trenches of the First World War to become a popular British family game.
These games do not fit into any of the foregoing categories. The only traditional games in this group are the compendium games, which date back at least 200 years, and Speculation, a 19th century trading game.
Compendium games consist of a sequence of different contracts played in succession. A common pattern is for a number of reverse deals to be played, in which the aim is to avoid certain cards, followed by a final contract which is a domino-type game. Examples include: Barbu, Herzeln, Lorum and Rosbiratschka. In other games, such as Quodlibet and Rumpel, there is a range of widely varying contracts.
A new genre not recorded before 1970, most of which use proprietary cards of the collectible card game type (see below). The earliest and best known is Magic: The Gathering.
Another broad way of classifying card games is by objective. There are four main types as well as a handful of games that have miscellaneous objectives.
In these games the objective is to capture cards or to avoid capturing them. These break down into the following:
In a shedding game, players start with a hand of cards, and the object of the game is to be the first player to discard all cards from one's hand. Common shedding games include Crazy Eights (commercialized by Mattel as Uno) and Daihinmin. Some matching-type games are also shedding-type games; some variants of Rummy such as Paskahousu, Phase 10, Rummikub, the bluffing game I Doubt It, and the children's games Musta Maija and Old Maid, fall into both categories.
In many games, the aim is to form combinations of cards: by addition, by matching sets or forming sequences. All Rummy games are based on the last two principles, although in the basic variants, the end objective is to shed cards which makes them shedding games (see above). However, meld scoring variants such as Canasta or Rommé are true combination games.
Comparing card games are those where hand values are compared to determine the winner, also known as "vying" or "showdown" games. Poker, blackjack, mus, and baccarat are examples of comparing card games. As seen, nearly all of these games are designed as gambling games.
Drinking card games are drinking games using cards, in which the object in playing the game is either to drink or to force others to drink. Many games are ordinary card games with the establishment of "drinking rules"; President, for instance, is virtually identical to Daihinmin but with additional rules governing drinking. Poker can also be played using a number of drinks as the wager. Another game often played as a drinking game is Toepen, quite popular in the Netherlands. Some card games are designed specifically to be played as drinking games.
These are card games played with a dedicated deck. Many other card games have been designed and published on a commercial or amateur basis. In a few cases, the game uses the standard 52-card deck, but the object is unique. In Eleusis, for example, players play single cards, and are told whether the play was legal or illegal, in an attempt to discover the underlying rules made up by the dealer.
Most of these games however typically use a specially made deck of cards designed specifically for the game (or variations of it). The decks are thus usually proprietary, but may be created by the game's players. Uno, Phase 10, Set, and 1000 Blank White Cards are popular dedicated-deck card games; 1000 Blank White Cards is unique in that the cards for the game are designed by the players of the game while playing it; there is no commercially available deck advertised as such.
Collectible card games (CCG) are proprietary playing card games. CCGs are games of strategy between two or more players. Each player has their own deck constructed from a very large pool of unique cards in the commercial market. The cards have different effects, costs, and art. New card sets are released periodically and sold as starter decks or booster packs. Obtaining the different cards makes the game a collectible card game, and cards are sold or traded on the secondary market. Magic: The Gathering, Pokémon, and Yu-Gi-Oh! are well-known collectible card games.
Living card games (LCGs) are similar to collectible card games (CCGs), with their most distinguishing feature being a fixed distribution method, which breaks away from the traditional collectible card game format. While new cards for CCGs are usually sold in the form of starter decks or booster packs (the latter being often randomized), LCGs thrive on a model that requires players to acquire one core set in order to play the game, which players can further customize by acquiring extra sets or expansions featuring new content in the form of cards or scenarios. No randomization is involved in the process; thus players who get the same sets or expansions will get exactly the same content. The term was popularized by Fantasy Flight Games (FFG) and mainly applies to its products; however, some other tabletop gaming companies use a very similar model.
A deck of either customised dedicated cards or a standard deck of playing cards with assigned meanings is used to simulate the actions of another activity, for example card football.
Many games, including card games, are fabricated by science fiction authors and screenwriters to distance a culture depicted in the story from present-day Western culture. They are commonly used as filler to depict background activities in an atmosphere like a bar or rec room, but sometimes the drama revolves around the play of the game. Some of these games become real card games as the holder of the intellectual property develops and markets a suitable deck and ruleset for the game, while others lack sufficient descriptions of rules, or depend on cards or other hardware that are infeasible or physically impossible.
Any specific card game imposes restrictions on the number of players. The most significant dividing lines run between one-player games and two-player games, and between two-player games and multi-player games. Card games for one player are known as solitaire or patience card games. (See list of solitaire card games.) Generally speaking, they are in many ways special and atypical, although some of them have given rise to two- or multi-player games such as Spite and Malice.
In card games for two players, usually not all cards are distributed to the players, as they would otherwise have perfect information about the game state. Two-player games have always been immensely popular and include some of the most significant card games such as piquet, bezique, sixty-six, klaberjass, gin rummy and cribbage. Many multi-player games started as two-player games that were adapted to a greater number of players. For such adaptations a number of non-obvious choices must be made beginning with the choice of a game orientation.
One way of extending a two-player game to more players is by building two teams of equal size. A common case is four players in two fixed partnerships, sitting crosswise as in whist and contract bridge. Partners sit opposite to each other and cannot see each other's hands. If communication between the partners is allowed at all, then it is usually restricted to a specific list of permitted signs and signals. 17th-century French partnership games such as triomphe were special in that partners sat next to each other and were allowed to communicate freely so long as they did not exchange cards or play out of order.
Another way of extending a two-player game to more players is as a cut-throat or individual game, in which all players play for themselves, and win or lose alone. Most such card games are round games, i.e. they can be played by any number of players starting from two or three, so long as there are enough cards for all.
For some of the most interesting games such as ombre, tarot and skat, the associations between players change from hand to hand. Ultimately players all play on their own, but for each hand, some game mechanism divides the players into two teams. Most typically these are solo games, i.e. games in which one player becomes the soloist and has to achieve some objective against the others, who form a team and win or lose all their points jointly. But in games for more than three players, there may also be a mechanism that selects two players who then have to play against the others.
The players of a card game normally form a circle around a table or other space that can hold cards. The game orientation or direction of play, which is only relevant for three or more players, can be either clockwise or counterclockwise. It is the direction in which various roles in the game proceed. (In real-time card games, there may be no need for a direction of play.) Most regions have a traditional direction of play, such as:
Europe is roughly divided into a clockwise area in the north and a counterclockwise area in the south. The boundary runs between England, Ireland, Netherlands, Germany, Austria (mostly), Slovakia, Ukraine and Russia (clockwise) and France, Switzerland, Spain, Italy, Slovenia, Balkans, Hungary, Romania, Bulgaria, Greece and Turkey (counterclockwise).
Games that originate in a region with a strong preference are often initially played in the original direction, even in regions that prefer the opposite direction. For games that have official rules and are played in tournaments, the direction of play is often prescribed in those rules.
Most games have some form of asymmetry between players. The roles of players are normally expressed in terms of the dealer, i.e. the player whose task it is to shuffle the cards and distribute them to the players. Being the dealer can be a (minor or major) advantage or disadvantage, depending on the game. Therefore, after each played hand, the deal normally passes to the next player according to the game orientation.
As it can still be an advantage or disadvantage to be the first dealer, there are some standard methods for determining who is the first dealer. A common method is by cutting, which works as follows. One player shuffles the deck and places it on the table. Each player lifts a packet of cards from the top, reveals its bottom card, and returns it to the deck. The player who reveals the highest (or lowest) card becomes dealer. In the case of a tie, the process is repeated by the tied players. For some games such as whist this process of cutting is part of the official rules, and the hierarchy of cards for the purpose of cutting (which need not be the same as that used otherwise in the game) is also specified. But in general, any method can be used, such as tossing a coin in case of a two-player game, drawing cards until one player draws an ace, or rolling dice.
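The cutting procedure lends itself to a short simulation. The following Python sketch is illustrative only: the rank ordering, the rule that the highest card deals, and the tie-handling are assumptions that vary from game to game.

```python
import random

RANKS = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]
SUITS = ["clubs", "diamonds", "hearts", "spades"]


def cut_for_dealer(players, deck):
    """Each player lifts a packet from the top, reveals its bottom card and
    returns it; the highest card wins the deal, and ties are re-cut."""
    while True:
        reveals = {}
        for player in players:
            packet = random.randint(1, len(deck) - 1)   # size of the lifted packet
            reveals[player] = deck[packet - 1]          # bottom card of that packet
        best = max(RANKS.index(rank) for rank, _suit in reveals.values())
        winners = [p for p, (rank, _suit) in reveals.items() if RANKS.index(rank) == best]
        if len(winners) == 1:
            return winners[0]
        players = winners                               # tied players cut again


deck = [(rank, suit) for suit in SUITS for rank in RANKS]
random.shuffle(deck)
print(cut_for_dealer(["Ann", "Ben", "Cem"], deck))      # hypothetical player names
```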
A hand, also called a deal, is a unit of the game that begins with the dealer shuffling and dealing the cards as described below, and ends with the players scoring and the next dealer being determined. The set of cards that each player receives and holds in his or her hands is also known as that player's hand.
The hand is over when the players have finished playing their hands. Most often this occurs when one player (or all) has no cards left. The player who sits after the dealer in the direction of play is known as eldest hand (or in two-player games as elder hand) or forehand. A game round consists of as many hands as there are players. After each hand, the deal is passed on in the direction of play, i.e. the previous eldest hand becomes the new dealer. Normally players score points after each hand. A game may consist of a fixed number of rounds. Alternatively it can be played for a fixed number of points. In this case it is over with the hand in which a player reaches the target score.
Shuffling is the process of bringing the cards of a pack into a random order. There are a large number of techniques with various advantages and disadvantages. Riffle shuffling is a method in which the deck is divided into two roughly equal-sized halves that are bent and then released, so that the cards interlace. Repeating this process several times randomizes the deck well, but the method is harder to learn than some others and may damage the cards. The overhand shuffle and the Hindu shuffle are two techniques that work by taking batches of cards from the top of the deck and reassembling them in the opposite order. They are easier to learn but must be repeated more often. A method suitable for small children consists in spreading the cards on a large surface and moving them around before picking up the deck again. This is also the most common method for shuffling tiles such as dominoes.
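For illustration, the riffle shuffle can be modelled in a few lines of Python. This is only a rough sketch of the physical procedure described above, not a prescribed algorithm: the deck is split near the middle and cards are dropped alternately from the two halves, with the chance of dropping from a half proportional to how many cards it still holds.

```python
import random


def riffle(deck):
    """One riffle shuffle: cut the deck roughly in half, then interleave the halves."""
    cut = len(deck) // 2 + random.randint(-3, 3)        # an imperfect cut near the middle
    left, right = deck[:cut], deck[cut:]
    shuffled = []
    while left or right:
        # drop the next card from a half with probability proportional to its size
        if random.random() < len(left) / (len(left) + len(right)):
            shuffled.append(left.pop(0))
        else:
            shuffled.append(right.pop(0))
    return shuffled


deck = list(range(52))
for _ in range(7):   # several riffles are needed before a 52-card pack is well mixed
    deck = riffle(deck)
print(deck)
```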
For casino games that are played for large sums it is vital that the cards be properly randomized, but for many games this is less critical, and in fact player experience can suffer when the cards are shuffled too well. The official skat rules stipulate that the cards are shuffled well, but according to a decision of the German skat court, a one-handed player should ask another player to do the shuffling, rather than use a shuffling machine, as it would shuffle the cards too well. French belote rules go so far as to prescribe that the deck never be shuffled between hands.
The dealer takes all of the cards in the pack, arranges them so that they are in a uniform stack, and shuffles them. In strict play, the dealer then offers the deck to the previous player (in the sense of the game direction) for cutting. If the deal is clockwise, this is the player to the dealer's right; if counterclockwise, it is the player to the dealer's left. The invitation to cut is made by placing the pack, face downward, on the table near the player who is to cut: who then lifts the upper portion of the pack clear of the lower portion and places it alongside. (Normally the two portions have about equal size. Strict rules often indicate that each portion must contain a certain minimum number of cards, such as three or five.) The formerly lower portion is then replaced on top of the formerly upper portion. Instead of cutting, one may also knock on the deck to indicate that one trusts the dealer to have shuffled fairly.
The actual deal (distribution of cards) is done in the direction of play, beginning with eldest hand. The dealer holds the pack, face down, in one hand, and removes cards from the top of it with his or her other hand to distribute to the players, placing them face down on the table in front of the players to whom they are dealt. The cards may be dealt one at a time, or in batches of more than one card; and either the entire pack or a determined number of cards are dealt out. The undealt cards, if any, are left face down in the middle of the table, forming the stock (also called the talon, widow, skat or kitty depending on the game and region).
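A deal of this kind is easy to express in code. The sketch below is hypothetical and parameterised rather than the rule of any particular game: the batch size, the hand size and whether any cards are left as a stock all differ from game to game.

```python
def deal(deck, num_players, hand_size, batch=1):
    """Deal `hand_size` cards to each player, `batch` cards at a time,
    starting with eldest hand; whatever remains undealt forms the stock."""
    hands = [[] for _ in range(num_players)]
    top = 0                                   # index of the current top of the pack
    while len(hands[-1]) < hand_size:
        for hand in hands:                    # eldest hand first, dealer last
            hand.extend(deck[top:top + batch])
            top += batch
    return hands, deck[top:]                  # (hands, stock)


# e.g. a whist-style deal: 52 cards, 4 players, 13 cards each, no stock
hands, stock = deal(list(range(52)), num_players=4, hand_size=13)
assert all(len(h) == 13 for h in hands) and not stock
```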
Throughout the shuffle, cut, and deal, the dealer should prevent the players from seeing the faces of any of the cards. The players should not try to see any of the faces. Should a player accidentally see a card, other than one's own, proper etiquette would be to admit this. It is also dishonest to try to see cards as they are dealt, or to take advantage of having seen a card. Should a card accidentally become exposed, (visible to all), any player can demand a redeal (all the cards are gathered up, and the shuffle, cut, and deal are repeated) or that the card be replaced randomly into the deck ("burning" it) and a replacement dealt from the top to the player who was to receive the revealed card.
When the deal is complete, all players pick up their cards, or "hand", and hold them in such a way that the faces can be seen by the holder of the cards but not the other players, or vice versa depending on the game. It is helpful to fan one's cards out so that if they have corner indices all their values can be seen at once. In most games, it is also useful to sort one's hand, rearranging the cards in a way appropriate to the game. For example, in a trick-taking game it may be easier to have all one's cards of the same suit together, whereas in a rummy game one might sort them by rank or by potential combinations.
Normally communication between partners about tactics or the cards in their hands is forbidden. However, in a small number of games communication and/or signalling is permitted and very much part of the play. Most of these games are very old and, often, have rules of play that allow any card to be played at any time. Such games include:
A new card game starts in a small way, either as someone's invention, or as a modification of an existing game. Those playing it may agree to change the rules as they wish. The rules that they agree on become the "house rules" under which they play the game. A set of house rules may be accepted as valid by a group of players wherever they play, as it may also be accepted as governing all play within a particular house, café, or club.
When a game becomes sufficiently popular, so that people often play it with strangers, there is a need for a generally accepted set of rules. This need is often met when a particular set of house rules becomes generally recognized. For example, when Whist became popular in 18th-century England, players in the Portland Club agreed on a set of house rules for use on its premises. Players in some other clubs then agreed to follow the "Portland Club" rules, rather than go to the trouble of codifying and printing their own sets of rules. The Portland Club rules eventually became generally accepted throughout England and Western cultures.
There is nothing static or "official" about this process. For the majority of games, there is no one set of universal rules by which the game is played, and the most common ruleset is no more or less than that. Many widely played card games, such as Canasta and Pinochle, have no official regulating body. The most common ruleset is often determined by the most popular distribution of rulebooks for card games. Perhaps the original compilation of popular playing card games was collected by Edmund Hoyle, a self-made authority on many popular parlor games. The U.S. Playing Card Company now owns the eponymous Hoyle brand, and publishes a series of rulebooks for various families of card games that have largely standardized the games' rules in countries and languages where the rulebooks are widely distributed. However, players are free to, and often do, invent "house rules" to supplement or even largely replace the "standard" rules.
If there is a sense in which a card game can have an official set of rules, it is when that card game has an "official" governing body. For example, the rules of tournament bridge are governed by the World Bridge Federation, and by local bodies in various countries such as the American Contract Bridge League in the U.S., and the English Bridge Union in England. The rules of skat are governed by The International Skat Players Association and, in Germany, by the Deutscher Skatverband which publishes the Skatordnung. The rules of French tarot are governed by the Fédération Française de Tarot. The rules of Schafkopf are laid down by the Schafkopfschule in Munich. Even in these cases, the rules must only be followed at games sanctioned by these governing bodies or where the tournament organisers specify them. Players in informal settings are free to implement agreed supplemental or substitute rules. For example, in Schafkopf there are numerous local variants sometimes known as "impure" Schafkopf and specified by assuming the official rules and describing the additions e.g. "with Geier and Bettel, tariff 5/10 cents".
An infraction is any action which is against the rules of the game, such as playing a card when it is not one's turn to play or the accidental exposure of a card, informally known as "bleeding."
In many official sets of rules for card games, the rules specifying the penalties for various infractions occupy more pages than the rules specifying how to play correctly. This is tedious but necessary for games that are played seriously. Players who intend to play a card game at a high level generally ensure before beginning that all agree on the penalties to be used. When playing privately, this will normally be a question of agreeing house rules. In a tournament, there will probably be a tournament director who will enforce the rules when required and arbitrate in cases of doubt.
If a player breaks the rules of a game deliberately, this is cheating. The rest of this section is therefore about accidental infractions, caused by ignorance, clumsiness, inattention, etc.
As the same game is played repeatedly among a group of players, precedents build up about how a particular infraction of the rules should be handled. For example, "Sheila just led a card when it wasn't her turn. Last week when Jo did that, we agreed ... etc." Sets of such precedents tend to become established among groups of players, and to be regarded as part of the house rules. Sets of house rules may become formalized, as described in the previous section. Therefore, for some games, there is a "proper" way of handling infractions of the rules. But for many games, without governing bodies, there is no standard way of handling infractions.
In many circumstances, there is no need for special rules dealing with what happens after an infraction. As a general principle, the person who broke a rule should not benefit from it, and the other players should not lose by it. An exception to this may be made in games with fixed partnerships, in which it may be felt that the partner(s) of the person who broke a rule should also not benefit. The penalty for an accidental infraction should be as mild as reasonable, consistent with there being a possible benefit to the person responsible.
The oldest surviving reference to a card game in world history is from 9th-century China, when the Collection of Miscellanea at Duyang, written by Tang-dynasty writer Su E, described Princess Tongchang (daughter of Emperor Yizong of Tang) playing the "leaf game" with members of the Wei clan (the family of the princess's husband) in 868. The Song dynasty statesman and historian Ouyang Xiu has noted that paper playing cards arose in connection with an earlier development in the book format from scrolls to pages.
Playing cards first appeared in Europe in the last quarter of the 14th century. The earliest European references speak of a Saracen or Moorish game called naib, and in fact an almost complete Mamluk Egyptian deck of 52 cards in a distinct oriental design has survived from around the same time, with the four suits swords, polo sticks, cups and coins and the ranks king, governor, second governor, and ten to one.
The 1430s in Italy saw the invention of the tarot deck, a full Latin-suited deck augmented by suitless cards with painted motifs that played a special role as trumps. Tarot card games are still played with (subsets of) these decks in parts of Central Europe. A full tarot deck contains 14 cards in each suit; low cards labeled 1–10, and court cards valet (jack), chevalier (cavalier/knight), dame (queen), and roi (king), plus the fool or excuse card, and 21 trump cards. In the 18th century the card images of the traditional Italian tarot decks became popular in cartomancy and evolved into "esoteric" decks used primarily for the purpose; today most tarot decks sold in North America are the occult type, and are closely associated with fortune telling. In Europe, "playing tarot" decks remain popular for games, and have evolved since the 18th century to use regional suits (spades, hearts, diamonds and clubs in France; leaves, hearts, bells and acorns in Germany) as well as other familiar aspects of the English-pattern pack such as corner card indices and "stamped" card symbols for non-court cards. Decks differ regionally based on the number of cards needed to play the games; the French tarot consists of the "full" 78 cards, while Germanic, Spanish and Italian Tarot variants remove certain values (usually low suited cards) from the deck, creating a deck with as few as 32 cards.
The French suits were introduced around 1480 and, in France, mostly replaced the earlier Latin suits of swords, clubs, cups and coins (which are still common in Spanish- and Portuguese-speaking countries as well as in some northern regions of Italy). The suit symbols, being very simple and single-color, could be stamped onto the playing cards to create a deck, thus only requiring special full-color card art for the court cards. This drastically simplifies the production of a deck of cards versus the traditional Italian deck, which used unique full-color art for each card in the deck. The French suits became popular in English playing cards in the 16th century (despite historic animosity between France and England), and from there were introduced to British colonies including North America. The rise of Western culture has led to the near-universal popularity and availability of French-suited playing cards even in areas with their own regional card art.
In Japan, a distinct 48-card hanafuda deck is popular. It is derived from 16th-century Portuguese decks, after undergoing a long evolution driven by laws enacted by the Tokugawa shogunate attempting to ban the use of playing cards.
The best-known deck internationally is the English pattern of the 52-card French deck, also called the International or Anglo-American pattern, used for such games as poker and contract bridge. It contains one card for each unique combination of thirteen ranks and the four French suits spades, hearts, diamonds, and clubs. The ranks (from highest to lowest in bridge and poker) are ace, king, queen, jack (or knave), and the numbers from ten down to two (or deuce). The trump cards and knight cards from the French playing tarot are not included.
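As a small illustration of the composition just described, the sketch below (again Python, and purely illustrative rather than anything taken from the source) generates one card per rank-suit combination, which yields exactly 52 cards, with the ranks listed from highest to lowest as used in bridge and poker.

    # Minimal sketch of the 52-card English-pattern (Anglo-American) deck:
    # thirteen ranks x four French suits = 52 cards.
    SUITS = ["spades", "hearts", "diamonds", "clubs"]
    RANKS = ["ace", "king", "queen", "jack",
             "10", "9", "8", "7", "6", "5", "4", "3", "2"]

    deck = [f"{rank} of {suit}" for suit in SUITS for rank in RANKS]
    assert len(deck) == 13 * 4 == 52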
Originally the term knave was more common than "jack"; the card had been called a jack as part of the terminology of All-Fours since the 17th century, but the word was considered vulgar. (Note the exclamation by Estella in Charles Dickens's novel Great Expectations: "He calls the knaves, Jacks, this boy!") However, because the card abbreviation for knave ("Kn") was so close to that of the king, it was very easy to confuse them, especially after suits and rankings were moved to the corners of the card in order to enable people to fan them in one hand and still see all the values. (The earliest known deck to place suits and rankings in the corner of the card is from 1693, but these cards did not become common until after 1864 when Hart reintroduced them along with the knave-to-jack change.) However, books of card games published in the third quarter of the 19th century evidently still referred to the "knave", and the term with this definition is still recognized in the United Kingdom.
In the 17th century, a French five-trick gambling game called Bête became popular and spread to Germany, where it was called La Bete, and to England, where it was named Beast. It was a derivative of Triomphe and was the first card game in history to introduce the concept of bidding.
Chinese handmade mother-of-pearl gaming counters were used in scoring and bidding of card games in the West during the approximate period of 1700–1840. The gaming counters would bear an engraving such as a coat of arms or a monogram to identify a family or individual. Many of the gaming counters also depict Chinese scenes, flowers or animals. Queen Charlotte is one prominent British individual who is known to have played with the Chinese gaming counters. Card games such as Ombre, Quadrille and Pope Joan were popular at the time and required counters for scoring. The production of counters declined after Whist, with its different scoring method, became the most popular card game in the West.
Based on the association of card games and gambling, Pope Benedict XIV banned card games on October 17, 1750. | [
{
"paragraph_id": 0,
"text": "A card game is any game using playing cards as the primary device with which the game is played, be they traditional or game-specific. Countless card games exist, including families of related games (such as poker). A small number of card games played with traditional decks have formally standardized rules with international tournaments being held, but most are folk games whose rules may vary by region, culture, location or from circle to circle.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Traditional card games are played with a deck or pack of playing cards which are identical in size and shape. Each card has two sides, the face and the back. Normally the backs of the cards are indistinguishable. The faces of the cards may all be unique, or there can be duplicates. The composition of a deck is known to each player. In some cases several decks are shuffled together to form a single pack or shoe. Modern card games usually have bespoke decks, often with a vast amount of cards, and can include number or action cards. This type of game is generally regarded as part of the board game hobby.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Games using playing cards exploit the fact that cards are individually identifiable from one side only, so that each player knows only the cards they hold and not those held by anyone else. For this reason card games are often characterized as games of chance or \"imperfect information\"—as distinct from games of strategy or perfect information, where the current position is fully visible to all players throughout the game. Many games that are not generally placed in the family of card games do in fact use cards for some aspect of their play.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Some games that are placed in the card game genre involve a board. The distinction is that the play in a card game chiefly depends on the use of the cards by players (the board is a guide for scorekeeping or for card placement), while board games (the principal non-card game genre to use cards) generally focus on the players' positions on the board, and use the cards for some secondary purpose.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Despite the presence of playing cards in Europe being recorded from around 1370, it is not until 1408 that the first card game is described in a document about the exploits of two card sharps; although it is evidently very simple, the game is not named. In fact the earliest games to be mentioned by name are Gleek, Ronfa and Condemnade, the latter being the game played by the aforementioned card cheats. All three are recorded during the 15th century, along with Karnöffel, first mentioned in 1426 and which is still played in several forms today, including Bruus, Knüffeln, Kaiserspiel and Styrivolt.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Since the arrival of trick-taking games in Europe in the late 14th century, there have only been two major innovations. The first was the introduction of trump cards with the power to beat all cards in other suits. Such cards were initially called trionfi and first appeared with the advent of Tarot cards in which there is a separate, permament trump suit comprising a number of picture cards. The first known example of such cards was ordered by the Duke of Milan around 1420 and included 16 trumps with images of Greek and Roman gods. Thus games played with Tarot cards appeared very early on and spread to most parts of Europe with the notable exceptions of the British Isles, the Iberian Peninsula, and the Balkans. However, we do not know the rules of the early Tarot games; the earliest detailed description in any language being those published by the Abbé de Marolles in Nevers in 1637.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The concept of trumps was sufficiently powerful that it was soon transferred to games played with far cheaper ordinary packs of cards, as opposed to expensive Tarot cards. The first of these was Triomphe, the name simply being the French equivalent of the Italian trionfi. Although not testified before 1538, its first rules were written by a Spaniard who left his native country for Milan in 1509 never to return; thus the game may date to the late 15th century.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Others games that may well date to the 15th century are Pochen – the game of Bocken or Boeckels being attested in Strasbourg in 1441 – and Thirty-One, which is first mentioned in a French translation of a 1440 sermon by the Italian, Saint Bernadine, the name actually referring to two different card games: one like Pontoon and one like Commerce.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In the 16th century printed documents replace handwritten sources and card games become a popular topic with preachers, autobiographists and writers in general. A key source of the games in vogue in France and Europe at that time is François Rabelais, whose fictional character Gargantua played no less than 30 card games, many of which are recognisable. They include: Aluette, Bête, Cent, Coquimbert, Coucou, Flush or Flux, Gé (Pairs), Gleek, Lansquenet, Piquet, Post and Pair, Primero, Ronfa, Triomphe, Sequence, Speculation, Tarot and Trente-et-Un; possibly Rams, Mouche and Brandeln as well. Girolamo Cardano also provides invaluable information including the earliest rules of Trappola. Among the most popular were the games of Flusso and Primiera, which originated in Italy and spread throughout Europe, becoming known in England as Flush and Primero.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In Britain the earliest known European fishing game was recorded in 1522. Another first was Losing Loadum, noted by Florio in 1591, which is the earliest known English point-trick game. In Scotland, the game of Mawe, testified in the 1550s, evolved from a country game into one played at the royal Scottish court, becoming a favourite of James VI. The ancestor of Cribbage – a game called Noddy – is mentioned for the first time in 1589, \"Noddy\" being the Knave turned for trump at the start of play.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The 17th century saw an upsurge in the number of new games being reported as well as the first sets of rules, those for Piquet appearing in 1632 and Reversis in 1634. The first French games compendium, La Maison Académique, appeared in 1654 and it was followed in 1674 by Charles Cotton's The Compleat Gamester, although an earlier manuscript of games by Francis Willughby was written sometime between 1665 and 1670. Cotton records the first rules for the classic English games of Cribbage, a descendant of Noddy, and Whist, a development of English Trump or Ruff ('ruff' then meaning 'rob') in which four players were dealt 12 cards each and the dealer 'robbed' from the remaining stock of 4 cards.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Piquet was a two-player, trick-taking game that originated in France, probably in the 16th century and was initially played with 36 cards before, around 1690, the pack reduced to the 32 cards that gives the Piquet pack its name. Reversis is a reverse game in which players avoid taking tricks and appears to be an Italian invention that came to France around 1600 and spread rapidly to other countries in Europe.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In the mid-17th century, a certain game named after Cardinal Mazarin, prime minister to King Louis XIV, became very popular at the French royal court. Called Hoc Mazarin, it had three phases, the final one of which evolved into a much simpler game called Manille that was renamed Comète on the appearance of Halley's Comet in 1682. In Comète the aim is to be first to shed all one's hand cards to sequences laid out in rows on the table. However, there are certain cards known as 'stops' or hocs: cards that end a sequence and give the one who played it the advantage of being able to start a new sequence. This concept spread to other 17th and 18th century games including Poque, Comete, Emprunt, Manille, Nain Jaune and Lindor, all except Emprunt being still played in some form today.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "It was the 17th century that saw the second of the two great innovations being introduced into trick-taking games: the concept of bidding. This first emerged in the Spanish game of Ombre, an evolution of Triomphe that \"in its time, was the most successful card game ever invented.\" Ombre's origins are unclear and obfuscated by the existence of a game called Homme or Bête in France, ombre and homme being respectively Spanish and French for 'man'. In Ombre, the player who won the bidding became the \"Man\" and played alone against the other two. The game spread rapidly across Europe, spawning variants for different numbers of players and known as Quadrille, Quintille, Médiateur and Solo. Quadrille went on to become highly fashionable in England during the 18th century and is mentioned several times, for example, in Jane Austen's Pride and Prejudice.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The first rules of any game in the German language were those for Rümpffen published in 1608 and later expanded in several subsequent editions. In addition, the first German games compendium, Palamedes Redivivus appeared in 1678, containing the rules for Hoick (Hoc), Ombre, Picquet (sic), Rümpffen and Thurnspiel.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The evolution of card games continued apace, with notable national games emerging like Briscola and Tressette (Italy), Schafkopf (Bavaria), Jass (Switzerland), Mariage, the ancestor of Austria's Schnapsen and Germany's Sixty-Six, and Tapp Tarock, the progenitor of most modern central European Tarot games. Whist spread to the continent becoming very popular in the north and west. In France, Comet appeared, a game that later evolved into Nain Jaune and the Victorian game of Pope Joan.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Card games may be classified in different ways: by their objective, by the equipment used (e.g. number of cards and type of suits), by country of origin or by mechanism (how the game is played). Parlett and McLeod predominantly group cards games by mechanism of which there are five categories: outplay, card exchange, hand comparison, layout and a miscellaneous category that includes combat and compendium games. These are described in the following sections.",
"title": "Types"
},
{
"paragraph_id": 17,
"text": "Easily the largest category of games in which players have a hand of cards and must play them out to the table. Play ends when players have played all their cards.",
"title": "Outplay games"
},
{
"paragraph_id": 18,
"text": "Trick-taking games are the largest category of outplay games. Players typically receive an equal number of cards and a trick involves each player playing a card face up to the table – the rules of play dictating what cards may be played and who wins the trick.",
"title": "Outplay games"
},
{
"paragraph_id": 19,
"text": "There are two main types of trick-taking game with different objectives. Both are based on the play of multiple tricks, in each of which each player plays a single card from their hand, and based on the values of played cards one player wins or \"takes\" the trick. In plain-trick games the aim is to win a number of tricks, a specific trick or as many tricks as possible, without regard to the actual cards. In point-trick games, the number of tricks is immaterial; what counts is the value, in points, of the cards captured.",
"title": "Outplay games"
},
{
"paragraph_id": 20,
"text": "Many common Anglo-American games fall into the category of plain-trick games. The usual objective is to take the most tricks, but variations taking all tricks, making as few tricks (or penalty cards) as possible or taking an exact number of tricks. Bridge, Whist and Spades are popular examples. Hearts, Black Lady and Black Maria are examples of reverse games in which the aim is to avoid certain cards. Plain-trick games may be divided into the following 11 groups:",
"title": "Outplay games"
},
{
"paragraph_id": 21,
"text": "",
"title": "Outplay games"
},
{
"paragraph_id": 22,
"text": "Point-trick games are all European or of European origin and include the Tarot card games. Individual cards have specific point values and the objective is usually to amass the majority of points by taking tricks, especially those with higher value cards. There are around nine main groups:",
"title": "Outplay games"
},
{
"paragraph_id": 23,
"text": "",
"title": "Outplay games"
},
{
"paragraph_id": 24,
"text": "In beating games the idea is to beat the card just played if possible, otherwise it must be picked up, either alone or together with other cards, and added to the hand. In many beating games the objective is to shed all one's cards, in which case they are also \"shedding games\". Well known examples include Crazy Eights and Mau Mau.",
"title": "Outplay games"
},
{
"paragraph_id": 25,
"text": "",
"title": "Outplay games"
},
{
"paragraph_id": 26,
"text": "This is a small group whose ancestor is Noddy, now extinct, but which generated the far more interesting games of Costly Colours and Cribbage. Players play in turn and add the values of the cards as they go. The aim is to reach or avoid certain totals and also to score for certain combinations.",
"title": "Outplay games"
},
{
"paragraph_id": 27,
"text": "In fishing games, cards from the hand are played against cards in a layout on the table, capturing table cards if they match. Fishing games are popular in many nations, including China, where there are many diverse fishing games. Scopa is considered one of the national card games of Italy. Cassino is the only fishing game to be widely played in English-speaking countries. Zwicker has been described as a \"simpler and jollier version of Cassino\", played in Germany. Tablanet (tablić) is a fishing-style game popular in Balkans.",
"title": "Outplay games"
},
{
"paragraph_id": 28,
"text": "The object of a matching (or sometimes \"melding\") game is to acquire particular groups of matching cards before an opponent can do so. In Rummy, this is done through drawing and discarding, and the groups are called melds. Mahjong is a very similar game played with tiles instead of cards. Non-Rummy examples of match-type games generally fall into the \"fishing\" genre and include the children's games Go Fish and Old Maid.",
"title": "Outplay games"
},
{
"paragraph_id": 29,
"text": "",
"title": "Outplay games"
},
{
"paragraph_id": 30,
"text": "In games of the war group, also called \"catch and collect games\" or \"accumulating games\", the object is to acquire all cards in the deck. Examples include most War type games, and games involving slapping a discard pile such as Slapjack. Egyptian Ratscrew has both of these features.",
"title": "Outplay games"
},
{
"paragraph_id": 31,
"text": "",
"title": "Outplay games"
},
{
"paragraph_id": 32,
"text": "Climbing games are an Oriental family in which the idea is to play a higher card or combination of cards that the one just played. Alternatively a player must pass or may choose to pass even if able to beat. The sole Western example is the game of President, which is probably derived from an Asian game.",
"title": "Outplay games"
},
{
"paragraph_id": 33,
"text": "",
"title": "Outplay games"
},
{
"paragraph_id": 34,
"text": "Card exchange games form another large category in which players exchange a card or cards from their hands with table cards or with other players with the aim, typically, of collecting specific cards or card combinations. Games of the rummy family are the best known.",
"title": "Card exchange games"
},
{
"paragraph_id": 35,
"text": "",
"title": "Card exchange games"
},
{
"paragraph_id": 36,
"text": "In these games players draw a card from stock, make a move if possible or desired, and then discard a card to a discard pile. Almost all the games of this group are in the rummy family, but Golf is a non-rummy example.",
"title": "Card exchange games"
},
{
"paragraph_id": 37,
"text": "",
"title": "Card exchange games"
},
{
"paragraph_id": 38,
"text": "As the name might suggest, players exchange hand cards with a common pool of cards on the table. Examples include Schwimmen, Kemps. James Bond and Whisky Poker. They originated in the old European games of Thirty-One and Commerce.",
"title": "Card exchange games"
},
{
"paragraph_id": 39,
"text": "",
"title": "Card exchange games"
},
{
"paragraph_id": 40,
"text": "A very old round game played in different forms in different countries. Players are dealt just one card and may try and swap it with a neighbour to avoid having the lowest card or, sometimes, certain penalty cards. The old French game is Coucou and its later English cousin is Ranter Go Round, also called Chase the Ace and Screw Your Neighbour.",
"title": "Card exchange games"
},
{
"paragraph_id": 41,
"text": "A family of such games played with special cards includes Italian Cucù, Scandinavian Gnav, Austrian Hexenspiel and German Vogelspiel.",
"title": "Card exchange games"
},
{
"paragraph_id": 42,
"text": "",
"title": "Card exchange games"
},
{
"paragraph_id": 43,
"text": "Games involving collecting sets of cards, the best known of which is Happy Families. Highly successful is its German equivalent, Quartett, which may be played with a Skat pack, but is much more commonly played with proprietary packs.",
"title": "Card exchange games"
},
{
"paragraph_id": 44,
"text": "",
"title": "Card exchange games"
},
{
"paragraph_id": 45,
"text": "Games involving passing cards to your neighbours. The classic game is Old Maid which may, however, be derived from German Black Peter and related to the French game of Vieux Garçon. Pig, with its variations of Donkey and Spoons, is also popular.",
"title": "Card exchange games"
},
{
"paragraph_id": 46,
"text": "Most patience or card solitaire games are designed to be played by one player, but some are designed for two or more players to compete.",
"title": "Layout games"
},
{
"paragraph_id": 47,
"text": "Patience games originated in northern Europe and were designed for a single player, hence its subsequent North American name of solitaire. Most games begin with a specific layout of cards, called a tableau, and the object is then either to construct a more elaborate final layout, or to clear the tableau and/or the draw pile or stock by moving all cards to one or more discard or foundation piles.",
"title": "Layout games"
},
{
"paragraph_id": 48,
"text": "In competitive patiences, two or more players compete to be first to complete a patience or solitaire-like tableau. Some use a common layout; in others each player has a separate layout. Popular examples include Spite and Malice, Racing Demon or Nerts, Spit, Speed and Russian Bank.",
"title": "Layout games"
},
{
"paragraph_id": 49,
"text": "The most common of these is Card Dominoes also known as Fan Tan or Parliament in which the idea is to build the four suits in sequence from a central card (the 7 in 52-card games or the Unter in 32-card packs). The winner is the first out and the loser the last left in holding cards.",
"title": "Layout games"
},
{
"paragraph_id": 50,
"text": "Hand comparison games, also called comparing card games, are mostly gambling games that use cards. Players lay their initial stakes, are dealt cards, may or may not be able to exchange or add to them, and may or may not be able to raise their stakes, and the outcome is decided by some form of comparison of card values or combinations. The main groups are vying and banking games. A smaller mainly Oriental group are partition games in which players divide their hands before comparing.",
"title": "Hand comparison games"
},
{
"paragraph_id": 51,
"text": "",
"title": "Hand comparison games"
},
{
"paragraph_id": 52,
"text": "Vying games, are those in which players bet or \"vie\" on who has the best hand. The player with the best combination of hand cards in a \"showdown\", or the player able to bluff the others into folding, wins the hand. Easily the best known of the group around the world is Poker, which itself is a family of games with over 100 variants. Other examples include English Brag and the old Basque game of Mus. Most may be classified as gambling games and, while they may involve skill in terms of bluffing and memorising and assessing odds, they involve little or no card playing skill.",
"title": "Hand comparison games"
},
{
"paragraph_id": 53,
"text": "Poker is a family of gambling games in which players bet into a pool, called the pot, the value of which changes as the game progresses that the value of the hand they carry will beat all others according to the ranking system. Variants largely differ on how cards are dealt and the methods by which players can improve a hand. For many reasons, including its age and its popularity among Western militaries, it is one of the most universally known card games in existence.",
"title": "Hand comparison games"
},
{
"paragraph_id": 54,
"text": "These are gambling games played for money or chips in which players compete, not against one another, but against a banker. They are commonly played in casinos, but many have become domesticised, played at home for sweets, matchsticks or points. In casino games, the banker will have a 'house advantage' that ensures a profit for the casino. Popular casino games include Blackjack and Baccarat, while Pontoon is a cousin of Blackjack that emerged from the trenches of the First World War to become a popular British family game.",
"title": "Hand comparison games"
},
{
"paragraph_id": 55,
"text": "These games do not fit into any of the foregoing categories. The only traditional games in this group are the compendium games, which date back at least 200 years, and Speculation, a 19th century trading game.",
"title": "Miscellaneous games"
},
{
"paragraph_id": 56,
"text": "Compendium games consist of a sequence of different contracts played in succession. A common pattern is for a number of reverse deals to be played, in which the aim is to avoid certain cards, followed by a final contract which is a domino-type game. Examples include: Barbu, Herzeln, Lorum and Rosbiratschka. In other games, such as Quodlibet and Rumpel, there is a range of widely varying contracts.",
"title": "Miscellaneous games"
},
{
"paragraph_id": 57,
"text": "A new genre not recorded before 1970, most of which use proprietary cards of the collectible card game type (see below). The earliest and best known is Magic: The Gathering.",
"title": "Miscellaneous games"
},
{
"paragraph_id": 58,
"text": "Another broad way of classifying card games is by objective. There are four main types as well as a handful of games that have miscellaneous objectives.",
"title": "Card games by objective"
},
{
"paragraph_id": 59,
"text": "In these games the objective is to capture cards or to avoid capturing them. These break down into the following:",
"title": "Card games by objective"
},
{
"paragraph_id": 60,
"text": "In a shedding game, also called an accumulating game, players start with a hand of cards, and the object of the game is to be the first player to discard all cards from one's hand. Common shedding games include Crazy Eights (commercialized by Mattel as Uno) and Daihinmin. Some matching-type games are also shedding-type games; some variants of Rummy such as Paskahousu, Phase 10, Rummikub, the bluffing game I Doubt It, and the children's games Musta Maija and Old Maid, fall into both categories.",
"title": "Card games by objective"
},
{
"paragraph_id": 61,
"text": "In many games, the aim is to form combinations of cards: by addition, by matching sets or forming sequences. All Rummy games are based on the last two principles, although in the basic variants, the end objective is to shed cards which makes them shedding games (see above). However, meld scoring variants such as Canasta or Rommé are true combination games.",
"title": "Card games by objective"
},
{
"paragraph_id": 62,
"text": "Comparing card games are those where hand values are compared to determine the winner, also known as \"vying\" or \"showdown\" games. Poker, blackjack, mus, and baccarat are examples of comparing card games. As seen, nearly all of these games are designed as gambling games.",
"title": "Card games by objective"
},
{
"paragraph_id": 63,
"text": "Drinking card games are drinking games using cards, in which the object in playing the game is either to drink or to force others to drink. Many games are ordinary card games with the establishment of \"drinking rules\"; President, for instance, is virtually identical to Daihinmin but with additional rules governing drinking. Poker can also be played using a number of drinks as the wager. Another game often played as a drinking game is Toepen, quite popular in the Netherlands. Some card games are designed specifically to be played as drinking games.",
"title": "Drinking games"
},
{
"paragraph_id": 64,
"text": "These are card games played with a dedicated deck. Many other card games have been designed and published on a commercial or amateur basis. In a few cases, the game uses the standard 52-card deck, but the object is unique. In Eleusis, for example, players play single cards, and are told whether the play was legal or illegal, in an attempt to discover the underlying rules made up by the dealer.",
"title": "Proprietary games"
},
{
"paragraph_id": 65,
"text": "Most of these games however typically use a specially made deck of cards designed specifically for the game (or variations of it). The decks are thus usually proprietary, but may be created by the game's players. Uno, Phase 10, Set, and 1000 Blank White Cards are popular dedicated-deck card games; 1000 Blank White Cards is unique in that the cards for the game are designed by the players of the game while playing it; there is no commercially available deck advertised as such.",
"title": "Proprietary games"
},
{
"paragraph_id": 66,
"text": "Collectible card games (CCG) are proprietary playing card games. CCGs are games of strategy between two or more players. Each player has their own deck constructed from a very large pool of unique cards in the commercial market. The cards have different effects, costs, and art. New card sets are released periodically and sold as starter decks or booster packs. Obtaining the different cards makes the game a collectible card game, and cards are sold or traded on the secondary market. Magic: The Gathering, Pokémon, and Yu-Gi-Oh! are well-known collectible card games.",
"title": "Proprietary games"
},
{
"paragraph_id": 67,
"text": "Living card games (LCGs) are similar to collectible card games (CCGs), with their most distinguishing feature being a fixed distribution method, which breaks away from the traditional collectible card game format. While new cards for CCGs are usually sold in the form of starter decks or booster packs (the latter being often randomized), LCGs thrive on a model that requires players to acquire one core set in order to play the game, which players can further customize by acquiring extra sets or expansions featuring new content in the form of cards or scenarios. No randomization is involved in the process, thus players that get the same sets or expansions will get the exact same content. The term was popularized by Fantasy Flight Games (FFG) and mainly applies to its products, however some tabletop gaming companies can be seen using a very similar model.",
"title": "Proprietary games"
},
{
"paragraph_id": 68,
"text": "A deck of either customised dedicated cards or a standard deck of playing cards with assigned meanings is used to simulate the actions of another activity, for example card football.",
"title": "Proprietary games"
},
{
"paragraph_id": 69,
"text": "Many games, including card games, are fabricated by science fiction authors and screenwriters to distance a culture depicted in the story from present-day Western culture. They are commonly used as filler to depict background activities in an atmosphere like a bar or rec room, but sometimes the drama revolves around the play of the game. Some of these games become real card games as the holder of the intellectual property develops and markets a suitable deck and ruleset for the game, while others lack sufficient descriptions of rules, or depend on cards or other hardware that are infeasible or physically impossible.",
"title": "Fictional card games"
},
{
"paragraph_id": 70,
"text": "Any specific card game imposes restrictions on the number of players. The most significant dividing lines run between one-player games and two-player games, and between two-player games and multi-player games. Card games for one player are known as solitaire or patience card games. (See list of solitaire card games.) Generally speaking, they are in many ways special and atypical, although some of them have given rise to two- or multi-player games such as Spite and Malice.",
"title": "Typical structure of card games"
},
{
"paragraph_id": 71,
"text": "In card games for two players, usually not all cards are distributed to the players, as they would otherwise have perfect information about the game state. Two-player games have always been immensely popular and include some of the most significant card games such as piquet, bezique, sixty-six, klaberjass, gin rummy and cribbage. Many multi-player games started as two-player games that were adapted to a greater number of players. For such adaptations a number of non-obvious choices must be made beginning with the choice of a game orientation.",
"title": "Typical structure of card games"
},
{
"paragraph_id": 72,
"text": "One way of extending a two-player game to more players is by building two teams of equal size. A common case is four players in two fixed partnerships, sitting crosswise as in whist and contract bridge. Partners sit opposite to each other and cannot see each other's hands. If communication between the partners is allowed at all, then it is usually restricted to a specific list of permitted signs and signals. 17th-century French partnership games such as triomphe were special in that partners sat next to each other and were allowed to communicate freely so long as they did not exchange cards or play out of order.",
"title": "Typical structure of card games"
},
{
"paragraph_id": 73,
"text": "Another way of extending a two-player game to more players is as a cut-throat or individual game, in which all players play for themselves, and win or lose alone. Most such card games are round games, i.e. they can be played by any number of players starting from two or three, so long as there are enough cards for all.",
"title": "Typical structure of card games"
},
{
"paragraph_id": 74,
"text": "For some of the most interesting games such as ombre, tarot and skat, the associations between players change from hand to hand. Ultimately players all play on their own, but for each hand, some game mechanism divides the players into two teams. Most typically these are solo games, i.e. games in which one player becomes the soloist and has to achieve some objective against the others, who form a team and win or lose all their points jointly. But in games for more than three players, there may also be a mechanism that selects two players who then have to play against the others.",
"title": "Typical structure of card games"
},
{
"paragraph_id": 75,
"text": "The players of a card game normally form a circle around a table or other space that can hold cards. The game orientation or direction of play, which is only relevant for three or more players, can be either clockwise or counterclockwise. It is the direction in which various roles in the game proceed. (In real-time card games, there may be no need for a direction of play.) Most regions have a traditional direction of play, such as:",
"title": "Typical structure of card games"
},
{
"paragraph_id": 76,
"text": "Europe is roughly divided into a clockwise area in the north and a counterclockwise area in the south. The boundary runs between England, Ireland, Netherlands, Germany, Austria (mostly), Slovakia, Ukraine and Russia (clockwise) and France, Switzerland, Spain, Italy, Slovenia, Balkans, Hungary, Romania, Bulgaria, Greece and Turkey (counterclockwise).",
"title": "Typical structure of card games"
},
{
"paragraph_id": 77,
"text": "Games that originate in a region with a strong preference are often initially played in the original direction, even in regions that prefer the opposite direction. For games that have official rules and are played in tournaments, the direction of play is often prescribed in those rules.",
"title": "Typical structure of card games"
},
{
"paragraph_id": 78,
"text": "Most games have some form of asymmetry between players. The roles of players are normally expressed in terms of the dealer, i.e. the player whose task it is to shuffle the cards and distribute them to the players. Being the dealer can be a (minor or major) advantage or disadvantage, depending on the game. Therefore, after each played hand, the deal normally passes to the next player according to the game orientation.",
"title": "Typical structure of card games"
},
{
"paragraph_id": 79,
"text": "As it can still be an advantage or disadvantage to be the first dealer, there are some standard methods for determining who is the first dealer. A common method is by cutting, which works as follows. One player shuffles the deck and places it on the table. Each player lifts a packet of cards from the top, reveals its bottom card, and returns it to the deck. The player who reveals the highest (or lowest) card becomes dealer. In the case of a tie, the process is repeated by the tied players. For some games such as whist this process of cutting is part of the official rules, and the hierarchy of cards for the purpose of cutting (which need not be the same as that used otherwise in the game) is also specified. But in general, any method can be used, such as tossing a coin in case of a two-player game, drawing cards until one player draws an ace, or rolling dice.",
"title": "Typical structure of card games"
},
{
"paragraph_id": 80,
"text": "A hand, also called a deal, is a unit of the game that begins with the dealer shuffling and dealing the cards as described below, and ends with the players scoring and the next dealer being determined. The set of cards that each player receives and holds in his or her hands is also known as that player's hand.",
"title": "Typical structure of card games"
},
{
"paragraph_id": 81,
"text": "The hand is over when the players have finished playing their hands. Most often this occurs when one player (or all) has no cards left. The player who sits after the dealer in the direction of play is known as eldest hand (or in two-player games as elder hand) or forehand. A game round consists of as many hands as there are players. After each hand, the deal is passed on in the direction of play, i.e. the previous eldest hand becomes the new dealer. Normally players score points after each hand. A game may consist of a fixed number of rounds. Alternatively it can be played for a fixed number of points. In this case it is over with the hand in which a player reaches the target score.",
"title": "Typical structure of card games"
},
{
"paragraph_id": 82,
"text": "Shuffling is the process of bringing the cards of a pack into a random order. There are a large number of techniques with various advantages and disadvantages. Riffle shuffling is a method in which the deck is divided into two roughly equal-sized halves that are bent and then released, so that the cards interlace. Repeating this process several times randomizes the deck well, but the method is harder to learn than some others and may damage the cards. The overhand shuffle and the Hindu shuffle are two techniques that work by taking batches of cards from the top of the deck and reassembling them in the opposite order. They are easier to learn but must be repeated more often. A method suitable for small children consists in spreading the cards on a large surface and moving them around before picking up the deck again. This is also the most common method for shuffling tiles such as dominoes.",
"title": "Typical structure of card games"
},
{
"paragraph_id": 83,
"text": "For casino games that are played for large sums it is vital that the cards be properly randomized, but for many games this is less critical, and in fact player experience can suffer when the cards are shuffled too well. The official skat rules stipulate that the cards are shuffled well, but according to a decision of the German skat court, a one-handed player should ask another player to do the shuffling, rather than use a shuffling machine, as it would shuffle the cards too well. French belote rules go so far as to prescribe that the deck never be shuffled between hands.",
"title": "Typical structure of card games"
},
{
"paragraph_id": 84,
"text": "The dealer takes all of the cards in the pack, arranges them so that they are in a uniform stack, and shuffles them. In strict play, the dealer then offers the deck to the previous player (in the sense of the game direction) for cutting. If the deal is clockwise, this is the player to the dealer's right; if counterclockwise, it is the player to the dealer's left. The invitation to cut is made by placing the pack, face downward, on the table near the player who is to cut: who then lifts the upper portion of the pack clear of the lower portion and places it alongside. (Normally the two portions have about equal size. Strict rules often indicate that each portion must contain a certain minimum number of cards, such as three or five.) The formerly lower portion is then replaced on top of the formerly upper portion. Instead of cutting, one may also knock on the deck to indicate that one trusts the dealer to have shuffled fairly.",
"title": "Typical structure of card games"
},
{
"paragraph_id": 85,
"text": "The actual deal (distribution of cards) is done in the direction of play, beginning with eldest hand. The dealer holds the pack, face down, in one hand, and removes cards from the top of it with his or her other hand to distribute to the players, placing them face down on the table in front of the players to whom they are dealt. The cards may be dealt one at a time, or in batches of more than one card; and either the entire pack or a determined number of cards are dealt out. The undealt cards, if any, are left face down in the middle of the table, forming the stock (also called the talon, widow, skat or kitty depending on the game and region).",
"title": "Typical structure of card games"
},
{
"paragraph_id": 86,
"text": "Throughout the shuffle, cut, and deal, the dealer should prevent the players from seeing the faces of any of the cards. The players should not try to see any of the faces. Should a player accidentally see a card, other than one's own, proper etiquette would be to admit this. It is also dishonest to try to see cards as they are dealt, or to take advantage of having seen a card. Should a card accidentally become exposed, (visible to all), any player can demand a redeal (all the cards are gathered up, and the shuffle, cut, and deal are repeated) or that the card be replaced randomly into the deck (\"burning\" it) and a replacement dealt from the top to the player who was to receive the revealed card.",
"title": "Typical structure of card games"
},
{
"paragraph_id": 87,
"text": "When the deal is complete, all players pick up their cards, or \"hand\", and hold them in such a way that the faces can be seen by the holder of the cards but not the other players, or vice versa depending on the game. It is helpful to fan one's cards out so that if they have corner indices all their values can be seen at once. In most games, it is also useful to sort one's hand, rearranging the cards in a way appropriate to the game. For example, in a trick-taking game it may be easier to have all one's cards of the same suit together, whereas in a rummy game one might sort them by rank or by potential combinations.",
"title": "Typical structure of card games"
},
{
"paragraph_id": 88,
"text": "Normally communication between partners about tactics or the cards in their hands is forbidden. However, in a small number of games communication and/or signalling is permitted and very much part of the play. Most of these games are very old and, often, have rules of play that allow any card to be played at any time. Such games include:",
"title": "Signalling"
},
{
"paragraph_id": 89,
"text": "A new card game starts in a small way, either as someone's invention, or as a modification of an existing game. Those playing it may agree to change the rules as they wish. The rules that they agree on become the \"house rules\" under which they play the game. A set of house rules may be accepted as valid by a group of players wherever they play, as it may also be accepted as governing all play within a particular house, café, or club.",
"title": "Rules"
},
{
"paragraph_id": 90,
"text": "When a game becomes sufficiently popular, so that people often play it with strangers, there is a need for a generally accepted set of rules. This need is often met when a particular set of house rules becomes generally recognized. For example, when Whist became popular in 18th-century England, players in the Portland Club agreed on a set of house rules for use on its premises. Players in some other clubs then agreed to follow the \"Portland Club\" rules, rather than go to the trouble of codifying and printing their own sets of rules. The Portland Club rules eventually became generally accepted throughout England and Western cultures.",
"title": "Rules"
},
{
"paragraph_id": 91,
"text": "There is nothing static or \"official\" about this process. For the majority of games, there is no one set of universal rules by which the game is played, and the most common ruleset is no more or less than that. Many widely played card games, such as Canasta and Pinochle, have no official regulating body. The most common ruleset is often determined by the most popular distribution of rulebooks for card games. Perhaps the original compilation of popular playing card games was collected by Edmund Hoyle, a self-made authority on many popular parlor games. The U.S. Playing Card Company now owns the eponymous Hoyle brand, and publishes a series of rulebooks for various families of card games that have largely standardized the games' rules in countries and languages where the rulebooks are widely distributed. However, players are free to, and often do, invent \"house rules\" to supplement or even largely replace the \"standard\" rules.",
"title": "Rules"
},
{
"paragraph_id": 92,
"text": "If there is a sense in which a card game can have an official set of rules, it is when that card game has an \"official\" governing body. For example, the rules of tournament bridge are governed by the World Bridge Federation, and by local bodies in various countries such as the American Contract Bridge League in the U.S., and the English Bridge Union in England. The rules of skat are governed by The International Skat Players Association and, in Germany, by the Deutscher Skatverband which publishes the Skatordnung. The rules of French tarot are governed by the Fédération Française de Tarot. The rules of Schafkopf are laid down by the Schafkopfschule in Munich. Even in these cases, the rules must only be followed at games sanctioned by these governing bodies or where the tournament organisers specify them. Players in informal settings are free to implement agreed supplemental or substitute rules. For example, in Schafkopf there are numerous local variants sometimes known as \"impure\" Schafkopf and specified by assuming the official rules and describing the additions e.g. \"with Geier and Bettel, tariff 5/10 cents\".",
"title": "Rules"
},
{
"paragraph_id": 93,
"text": "An infraction is any action which is against the rules of the game, such as playing a card when it is not one's turn to play or the accidental exposure of a card, informally known as \"bleeding.\"",
"title": "Rules"
},
{
"paragraph_id": 94,
"text": "In many official sets of rules for card games, the rules specifying the penalties for various infractions occupy more pages than the rules specifying how to play correctly. This is tedious but necessary for games that are played seriously. Players who intend to play a card game at a high level generally ensure before beginning that all agree on the penalties to be used. When playing privately, this will normally be a question of agreeing house rules. In a tournament, there will probably be a tournament director who will enforce the rules when required and arbitrate in cases of doubt.",
"title": "Rules"
},
{
"paragraph_id": 95,
"text": "If a player breaks the rules of a game deliberately, this is cheating. The rest of this section is therefore about accidental infractions, caused by ignorance, clumsiness, inattention, etc.",
"title": "Rules"
},
{
"paragraph_id": 96,
"text": "As the same game is played repeatedly among a group of players, precedents build up about how a particular infraction of the rules should be handled. For example, \"Sheila just led a card when it wasn't her turn. Last week when Jo did that, we agreed ... etc.\" Sets of such precedents tend to become established among groups of players, and to be regarded as part of the house rules. Sets of house rules may become formalized, as described in the previous section. Therefore, for some games, there is a \"proper\" way of handling infractions of the rules. But for many games, without governing bodies, there is no standard way of handling infractions.",
"title": "Rules"
},
{
"paragraph_id": 97,
"text": "In many circumstances, there is no need for special rules dealing with what happens after an infraction. As a general principle, the person who broke a rule should not benefit from it, and the other players should not lose by it. An exception to this may be made in games with fixed partnerships, in which it may be felt that the partner(s) of the person who broke a rule should also not benefit. The penalty for an accidental infraction should be as mild as reasonable, consistent with there being a possible benefit to the person responsible.",
"title": "Rules"
},
{
"paragraph_id": 98,
"text": "The oldest surviving reference to the card game in world history is from the 9th century China, when the Collection of Miscellanea at Duyang, written by Tang-dynasty writer Su E, described Princess Tongchang (daughter of Emperor Yizong of Tang) playing the \"leaf game\" with members of the Wei clan (the family of the princess's husband) in 868 . The Song dynasty statesman and historian Ouyang Xiu has noted that paper playing cards arose in connection to an earlier development in the book format from scrolls to pages.",
"title": "Playing cards"
},
{
"paragraph_id": 99,
"text": "Playing cards first appeared in Europe in the last quarter of the 14th century. The earliest European references speak of a Saracen or Moorish game called naib, and in fact an almost complete Mamluk Egyptian deck of 52 cards in a distinct oriental design has survived from around the same time, with the four suits swords, polo sticks, cups and coins and the ranks king, governor, second governor, and ten to one.",
"title": "Playing cards"
},
{
"paragraph_id": 100,
"text": "The 1430s in Italy saw the invention of the tarot deck, a full Latin-suited deck augmented by suitless cards with painted motifs that played a special role as trumps. Tarot card games are still played with (subsets of) these decks in parts of Central Europe. A full tarot deck contains 14 cards in each suit; low cards labeled 1–10, and court cards valet (jack), chevalier (cavalier/knight), dame (queen), and roi (king), plus the fool or excuse card, and 21 trump cards. In the 18th century the card images of the traditional Italian tarot decks became popular in cartomancy and evolved into \"esoteric\" decks used primarily for the purpose; today most tarot decks sold in North America are the occult type, and are closely associated with fortune telling. In Europe, \"playing tarot\" decks remain popular for games, and have evolved since the 18th century to use regional suits (spades, hearts, diamonds and clubs in France; leaves, hearts, bells and acorns in Germany) as well as other familiar aspects of the English-pattern pack such as corner card indices and \"stamped\" card symbols for non-court cards. Decks differ regionally based on the number of cards needed to play the games; the French tarot consists of the \"full\" 78 cards, while Germanic, Spanish and Italian Tarot variants remove certain values (usually low suited cards) from the deck, creating a deck with as few as 32 cards.",
"title": "Playing cards"
},
{
"paragraph_id": 101,
"text": "The French suits were introduced around 1480 and, in France, mostly replaced the earlier Latin suits of swords, clubs, cups and coins. (which are still common in Spanish- and Portuguese-speaking countries as well as in some northern regions of Italy) The suit symbols, being very simple and single-color, could be stamped onto the playing cards to create a deck, thus only requiring special full-color card art for the court cards. This drastically simplifies the production of a deck of cards versus the traditional Italian deck, which used unique full-color art for each card in the deck. The French suits became popular in English playing cards in the 16th century (despite historic animosity between France and England), and from there were introduced to British colonies including North America. The rise of Western culture has led to the near-universal popularity and availability of French-suited playing cards even in areas with their own regional card art.",
"title": "Playing cards"
},
{
"paragraph_id": 102,
"text": "In Japan, a distinct 48-card hanafuda deck is popular. It is derived from 16th-century Portuguese decks, after undergoing a long evolution driven by laws enacted by the Tokugawa shogunate attempting to ban the use of playing cards",
"title": "Playing cards"
},
{
"paragraph_id": 103,
"text": "The best-known deck internationally is the English pattern of the 52-card French deck, also called the International or Anglo-American pattern, used for such games as poker and contract bridge. It contains one card for each unique combination of thirteen ranks and the four French suits spades, hearts, diamonds, and clubs. The ranks (from highest to lowest in bridge and poker) are ace, king, queen, jack (or knave), and the numbers from ten down to two (or deuce). The trump cards and knight cards from the French playing tarot are not included.",
"title": "Playing cards"
},
{
"paragraph_id": 104,
"text": "Originally the term knave was more common than \"jack\"; the card had been called a jack as part of the terminology of All-Fours since the 17th century, but the word was considered vulgar. (Note the exclamation by Estella in Charles Dickens's novel Great Expectations: \"He calls the knaves, Jacks, this boy!\") However, because the card abbreviation for knave (\"Kn\") was so close to that of the king, it was very easy to confuse them, especially after suits and rankings were moved to the corners of the card in order to enable people to fan them in one hand and still see all the values. (The earliest known deck to place suits and rankings in the corner of the card is from 1693, but these cards did not become common until after 1864 when Hart reintroduced them along with the knave-to-jack change.) However, books of card games published in the third quarter of the 19th century evidently still referred to the \"knave\", and the term with this definition is still recognized in the United Kingdom.",
"title": "Playing cards"
},
{
"paragraph_id": 105,
"text": "In the 17th century, a French, five-trick, gambling game called Bête became popular and spread to Germany, where it was called La Bete and England where it was named Beast. It was a derivative of Triomphe and was the first card game in history to introduce the concept of bidding.",
"title": "Playing cards"
},
{
"paragraph_id": 106,
"text": "Chinese handmade mother-of-pearl gaming counters were used in scoring and bidding of card games in the West during the approximate period of 1700–1840. The gaming counters would bear an engraving such as a coat of arms or a monogram to identify a family or individual. Many of the gaming counters also depict Chinese scenes, flowers or animals. Queen Charlotte is one prominent British individual who is known to have played with the Chinese gaming counters. Card games such as Ombre, Quadrille and Pope Joan were popular at the time and required counters for scoring. The production of counters declined after Whist, with its different scoring method, became the most popular card game in the West.",
"title": "Playing cards"
},
{
"paragraph_id": 107,
"text": "Based on the association of card games and gambling, Pope Benedict XIV banned card games on October 17, 1750.",
"title": "Playing cards"
}
] | A card game is any game using playing cards as the primary device with which the game is played, be they traditional or game-specific. Countless card games exist, including families of related games. A small number of card games played with traditional decks have formally standardized rules with international tournaments being held, but most are folk games whose rules may vary by region, culture, location or from circle to circle. Traditional card games are played with a deck or pack of playing cards which are identical in size and shape. Each card has two sides, the face and the back. Normally the backs of the cards are indistinguishable. The faces of the cards may all be unique, or there can be duplicates. The composition of a deck is known to each player. In some cases several decks are shuffled together to form a single pack or shoe. Modern card games usually have bespoke decks, often with a vast amount of cards, and can include number or action cards. This type of game is generally regarded as part of the board game hobby. Games using playing cards exploit the fact that cards are individually identifiable from one side only, so that each player knows only the cards they hold and not those held by anyone else. For this reason card games are often characterized as games of chance or "imperfect information"—as distinct from games of strategy or perfect information, where the current position is fully visible to all players throughout the game. Many games that are not generally placed in the family of card games do in fact use cards for some aspect of their play. Some games that are placed in the card game genre involve a board. The distinction is that the play in a card game chiefly depends on the use of the cards by players, while board games generally focus on the players' positions on the board, and use the cards for some secondary purpose. | 2001-11-03T02:46:05Z | 2023-12-30T19:33:54Z | [
"Template:Lang",
"Template:Reflist",
"Template:Cite book",
"Template:Short description",
"Template:Use mdy dates",
"Template:Rp",
"Template:Frac",
"Template:Citation",
"Template:Trick-taking card games",
"Template:Cite web",
"Template:Use American English",
"Template:Tabletop games by type",
"Template:ISSN",
"Template:Unreferenced section",
"Template:Cite encyclopedia",
"Template:Cite journal",
"Template:Citation needed",
"Template:Commons category",
"Template:Other uses",
"Template:Anchor",
"Template:Redirect",
"Template:Portal",
"Template:ISBN",
"Template:Non trick-taking card games",
"Template:Authority control",
"Template:See also",
"Template:Main",
"Template:Specify"
] | https://en.wikipedia.org/wiki/Card_game |
5,361 | Cross-stitch | Cross-stitch is a form of sewing and a popular form of counted-thread embroidery in which X-shaped stitches in a tiled, raster-like pattern are used to form a picture. The stitcher counts the threads on a piece of evenweave fabric (such as linen) in each direction so that the stitches are of uniform size and appearance. This form of cross-stitch is also called counted cross-stitch in order to distinguish it from other forms of cross-stitch. Sometimes cross-stitch is done on designs printed on the fabric (stamped cross-stitch); the stitcher simply stitches over the printed pattern. Cross-stitch is often executed on easily countable fabric called aida cloth whose weave creates a plainly visible grid of squares with holes for the needle at each corner.
Fabrics used in cross-stitch include linen, aida cloth, and mixed-content fabrics called 'evenweave' such as jobelan. All cross-stitch fabrics are technically "evenweave" as the term refers to the fact that the fabric is woven to make sure that there are the same number of threads per inch in both the warp and the weft (i.e. vertically and horizontally). Fabrics are categorized by threads per inch (referred to as 'count'), which can range from 11 to 40 count.
Counted cross-stitch projects are worked from a gridded pattern called a chart and can be used on any count fabric; the count of the fabric and the number of threads per stitch determine the size of the finished stitching. For example, if a given design is stitched on a 28 count cross-stitch fabric with each cross worked over two threads, the finished stitching size is the same as it would be on a 14 count aida cloth fabric with each cross worked over one square. These methods are referred to as "2 over 2" (2 embroidery threads used to stitch over 2 fabric threads) and "1 over 1" (1 embroidery thread used to stitch over 1 fabric thread or square), respectively. There are different methods of stitching a pattern, including the cross-country method where one colour is stitched at a time, or the parking method where one block of fabric is stitched at a time and the end of the thread is "parked" at the next point the same colour occurs in the pattern.
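To make the count arithmetic concrete, here is a minimal Python sketch of the calculation described above. It is only an illustration; the function name and the 140-stitch chart size are hypothetical and not taken from any stitching software.

def finished_size_inches(design_width, design_height, fabric_count, threads_per_stitch=1):
    # Stitches per inch = fabric threads (or squares) per inch divided by threads spanned per cross.
    stitches_per_inch = fabric_count / threads_per_stitch
    return design_width / stitches_per_inch, design_height / stitches_per_inch

# A 140 x 140 stitch chart finishes at 10 x 10 inches in both of the cases above:
print(finished_size_inches(140, 140, fabric_count=28, threads_per_stitch=2))  # (10.0, 10.0)
print(finished_size_inches(140, 140, fabric_count=14, threads_per_stitch=1))  # (10.0, 10.0)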
Cross-stitch has been found all over the world since the Middle Ages. Many folk museums show examples of clothing decorated with cross-stitch, especially from continental Europe and Asia.
The cross-stitch sampler is called that because it was generally stitched by a young girl to learn how to stitch and to record alphabet and other patterns to be used in her household sewing. These samples of her stitching could be referred back to over the years. Often, motifs and initials were stitched on household items to identify their owner, or simply to decorate the otherwise-plain cloth. The earliest known cross stitch sampler made in the United States is currently housed at Pilgrim Hall in Plymouth, Massachusetts. The sampler was created by Loara Standish, daughter of Captain Myles Standish and pioneer of the Leviathan stitch, circa 1653.
Traditionally, cross-stitch was used to embellish items like household linens, tablecloths, dishcloths, and doilies (only a small portion of which would actually be embroidered, such as a border). Although there are many cross-stitchers who still employ it in this fashion, it is now increasingly popular to work the pattern on pieces of fabric and hang them on the wall for decoration. Cross-stitch is also often used to make greeting cards, pillow tops, or as inserts for box tops, coasters and trivets.
Multicoloured, shaded, painting-like patterns as we know them today are a fairly modern development, deriving from similar shaded patterns of Berlin wool work of the mid-nineteenth century. Besides designs created expressly for cross-stitch, there are software programs that convert a photograph or a fine art image into a chart suitable for stitching. One example of this is in the cross-stitched reproduction of the Sistine Chapel charted and stitched by Joanna Lopianowski-Roberts.
There are many cross-stitching "guilds" and groups across the United States and Europe which offer classes, collaborate on large projects, stitch for charity, and provide other ways for local cross-stitchers to get to know one another. Individually owned local needlework shops (LNS) often have stitching nights at their shops, or host weekend stitching retreats.
Today, cotton floss is the most common embroidery thread. It is a thread made of mercerized cotton, composed of six strands that are only loosely twisted together and easily separable. While there are other manufacturers, the two most-commonly used (and oldest) brands are DMC and Anchor, both of which have been manufacturing embroidery floss since the 1800s.
Other materials used are pearl (or perle) cotton, Danish flower thread, silk and Rayon. Different wool threads, metallic threads or other novelty threads are also used, sometimes for the whole work, but often for accents and embellishments. Hand-dyed cross-stitch floss is created just as the name implies—it is dyed by hand. Because of this, there are variations in the amount of color throughout the thread. Some variations can be subtle, while some can be a huge contrast. Some also have more than one color per thread.
Cross-stitch is widely used in traditional Palestinian dressmaking.
The cross-stitch can be executed partially such as in quarter-, half-, and three-quarter-stitches. A single straight stitch, done in the form of backstitching, is often used as an outline, to add detail or definition.
There are many stitches which are related structurally to cross-stitch. The best known are Italian cross-stitch (as seen in Assisi embroidery), long-armed cross-stitch, and Montenegrin stitch. Italian cross-stitch and Montenegrin stitch are reversible, meaning the work looks the same on both sides. These styles have a slightly different look than ordinary cross-stitch. These more difficult stitches are rarely used in mainstream embroidery, but they are still used to recreate historical pieces of embroidery or by the creative and adventurous stitcher. The double cross-stitch, also known as a Leviathan stitch or Smyrna cross-stitch, combines a cross-stitch with an upright cross-stitch.
Berlin wool work and similar petit point stitchery resemble the heavily shaded, opulent styles of cross-stitch, and sometimes also use charted patterns on paper.
Cross-stitch is often combined with other popular forms of embroidery, such as Hardanger embroidery or blackwork embroidery. Cross-stitch may also be combined with other work, such as canvaswork or drawn thread work. Beadwork and other embellishments such as paillettes, charms, small buttons and specialty threads of various kinds may also be used. Cross-stitch can often be used in needlepoint.
Cross-stitch has become increasingly popular with the younger generation of Europe in recent years. Retailers such as John Lewis experienced a 17% rise in sales of haberdashery products between 2009 and 2010. Hobbycraft, a chain of stores selling craft supplies, also enjoyed an 11% increase in sales over the year to February 22, 2009.
Knitting and cross-stitching have become more popular hobbies for a younger market, in contrast to their traditional reputation as hobbies for retirees. Sewing and craft groups such as Stitch and Bitch London have resurrected the idea of the traditional craft club. At Clothes Show Live 2010 there was a new area called "Sknitch" promoting modern sewing, knitting and embroidery.
In a departure from the traditional designs associated with cross-stitch, there is a current trend for more postmodern or tongue-in-cheek designs featuring retro images or contemporary sayings. It is linked to a concept known as 'subversive cross-stitch', which involves more risque designs, often fusing the traditional sampler style with sayings designed to shock or be incongruous with the old-fashioned image of cross-stitch.
Stitching designs on other materials can be accomplished by using waste canvas. This is a temporary gridded canvas similar to regular canvas used for embroidery that is held together by a water-soluble glue, which is removed after completion of stitch design. Other crafters have taken to cross-stitching on all manner of gridded objects as well including old kitchen strainers or chain-link fences.
While cross stitch is traditionally a women's craft, it is growing in popularity among men.
In the 21st century, an emphasis on feminist design has emerged within cross-stitch communities. Some cross-stitchers have commented on the way that the practice of embroidery makes them feel connected to the women who practised it before them. There is a push for all embroidery, including cross-stitch, to be respected as a significant art form.
The development of computer technology has also affected such a seemingly conservative craft as cross-stitch. With the help of computer visualization algorithms, it is now possible to create embroidery designs using a photograph or any other picture. Visualisation uses a drawing on a graphical grid, representing colors and / or symbols, which gives the user an indication of the possible use of colors, the position of those colors, and the type of stitch used, such as full cross or quarter stitch.
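As a rough illustration of the chart-generation step described here, the following Python sketch maps a small grid of RGB pixels to the nearest colour in a tiny placeholder palette and prints a symbol chart. The palette, symbols, and function names are assumptions for the example, not the output of any particular charting program.

# Map each cell of an already-downscaled image to the nearest colour in a small
# placeholder floss palette and emit one chart symbol per cell.
PALETTE = {"#": (0, 0, 0), ".": (255, 255, 255), "x": (200, 30, 30), "o": (30, 60, 200)}

def nearest_symbol(rgb):
    # Squared Euclidean distance in RGB space is enough for a rough chart.
    return min(PALETTE, key=lambda s: sum((a - b) ** 2 for a, b in zip(rgb, PALETTE[s])))

def to_chart(pixel_grid):
    return ["".join(nearest_symbol(px) for px in row) for row in pixel_grid]

sample = [[(250, 250, 250), (210, 40, 40)], [(10, 10, 10), (40, 70, 190)]]
print("\n".join(to_chart(sample)))  # prints ".x" on one line and "#o" on the next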
An increasingly popular activity for cross-stitchers is to watch and make YouTube videos detailing their hobby. Flosstubers, as they are known, typically cover WIPs (Works in Progress), FOs (Finished Objects), and Haul (new patterns, thread, and fabric, as well as cross-stitching accessories, such as needle minders). Other accessories include but are not limited to: Floss organizers, thread conditioner, pin cushions, aida cloth or plastic canvas, and embroidery needles. | [
{
"paragraph_id": 0,
"text": "Cross-stitch is a form of sewing and a popular form of counted-thread embroidery in which X-shaped stitches in a tiled, raster-like pattern are used to form a picture. The stitcher counts the threads on a piece of evenweave fabric (such as linen) in each direction so that the stitches are of uniform size and appearance. This form of cross-stitch is also called counted cross-stitch in order to distinguish it from other forms of cross-stitch. Sometimes cross-stitch is done on designs printed on the fabric (stamped cross-stitch); the stitcher simply stitches over the printed pattern. Cross-stitch is often executed on easily countable fabric called aida cloth whose weave creates a plainly visible grid of squares with holes for the needle at each corner.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Fabrics used in cross-stitch include linen, aida cloth, and mixed-content fabrics called 'evenweave' such as jobelan. All cross-stitch fabrics are technically \"evenweave\" as the term refers to the fact that the fabric is woven to make sure that there are the same number of threads per inch in both the warp and the weft (i.e. vertically and horizontally). Fabrics are categorized by threads per inch (referred to as 'count'), which can range from 11 to 40 count.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Counted cross-stitch projects are worked from a gridded pattern called a chart and can be used on any count fabric; the count of the fabric and the number of threads per stitch determine the size of the finished stitching. For example, if a given design is stitched on a 28 count cross-stitch fabric with each cross worked over two threads, the finished stitching size is the same as it would be on a 14 count aida cloth fabric with each cross worked over one square. These methods are referred to as \"2 over 2\" (2 embroidery threads used to stitch over 2 fabric threads) and \"1 over 1\" (1 embroidery thread used to stitch over 1 fabric thread or square), respectively. There are different methods of stitching a pattern, including the cross-country method where one colour is stitched at a time, or the parking method where one block of fabric is stitched at a time and the end of the thread is \"parked\" at the next point the same colour occurs in the pattern.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Cross-stitch can be found all over the world since the middle ages. Many folk museums show examples of clothing decorated with cross-stitch, especially from continental Europe and Asia.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The cross-stitch sampler is called that because it was generally stitched by a young girl to learn how to stitch and to record alphabet and other patterns to be used in her household sewing. These samples of her stitching could be referred back to over the years. Often, motifs and initials were stitched on household items to identify their owner, or simply to decorate the otherwise-plain cloth. The earliest known cross stitch sampler made in the United States is currently housed at Pilgrim Hall in Plymouth, Massachusetts. The sampler was created by Loara Standish, daughter of Captain Myles Standish and pioneer of the Leviathan stitch, circa 1653.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Traditionally, cross-stitch was used to embellish items like household linens, tablecloths, dishcloths, and doilies (only a small portion of which would actually be embroidered, such as a border). Although there are many cross-stitchers who still employ it in this fashion, it is now increasingly popular to work the pattern on pieces of fabric and hang them on the wall for decoration. Cross-stitch is also often used to make greeting cards, pillow tops, or as inserts for box tops, coasters and trivets.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Multicoloured, shaded, painting-like patterns as we know them today are a fairly modern development, deriving from similar shaded patterns of Berlin wool work of the mid-nineteenth century. Besides designs created expressly for cross-stitch, there are software programs that convert a photograph or a fine art image into a chart suitable for stitching. One example of this is in the cross-stitched reproduction of the Sistine Chapel charted and stitched by Joanna Lopianowski-Roberts.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "There are many cross-stitching \"guilds\" and groups across the United States and Europe which offer classes, collaborate on large projects, stitch for charity, and provide other ways for local cross-stitchers to get to know one another. Individually owned local needlework shops (LNS) often have stitching nights at their shops, or host weekend stitching retreats.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Today, cotton floss is the most common embroidery thread. It is a thread made of mercerized cotton, composed of six strands that are only loosely twisted together and easily separable. While there are other manufacturers, the two most-commonly used (and oldest) brands are DMC and Anchor, both of which have been manufacturing embroidery floss since the 1800s.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Other materials used are pearl (or perle) cotton, Danish flower thread, silk and Rayon. Different wool threads, metallic threads or other novelty threads are also used, sometimes for the whole work, but often for accents and embellishments. Hand-dyed cross-stitch floss is created just as the name implies—it is dyed by hand. Because of this, there are variations in the amount of color throughout the thread. Some variations can be subtle, while some can be a huge contrast. Some also have more than one color per thread.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Cross-stitch is widely used in traditional Palestinian dressmaking.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The cross-stitch can be executed partially such as in quarter-, half-, and three-quarter-stitches. A single straight stitch, done in the form of backstitching, is often used as an outline, to add detail or definition.",
"title": "Related stitches and forms of embroidery"
},
{
"paragraph_id": 12,
"text": "There are many stitches which are related structurally to cross-stitch. The best known are Italian cross-stitch (as seen in Assisi embroidery), long-armed cross-stitch, and Montenegrin stitch. Italian cross-stitch and Montenegrin stitch are reversible, meaning the work looks the same on both sides. These styles have a slightly different look than ordinary cross-stitch. These more difficult stitches are rarely used in mainstream embroidery, but they are still used to recreate historical pieces of embroidery or by the creative and adventurous stitcher. The double cross-stitch, also known as a Leviathan stitch or Smyrna cross-stitch, combines a cross-stitch with an upright cross-stitch.",
"title": "Related stitches and forms of embroidery"
},
{
"paragraph_id": 13,
"text": "Berlin wool work and similar petit point stitchery resembles the heavily shaded, opulent styles of cross-stitch, and sometimes also used charted patterns on paper.",
"title": "Related stitches and forms of embroidery"
},
{
"paragraph_id": 14,
"text": "Cross-stitch is often combined with other popular forms of embroidery, such as Hardanger embroidery or blackwork embroidery. Cross-stitch may also be combined with other work, such as canvaswork or drawn thread work. Beadwork and other embellishments such as paillettes, charms, small buttons and specialty threads of various kinds may also be used. Cross stitch can often used in needlepoint.",
"title": "Related stitches and forms of embroidery"
},
{
"paragraph_id": 15,
"text": "Cross-stitch has become increasingly popular with the younger generation of Europe in recent years. Retailers such as John Lewis experienced a 17% rise in sales of haberdashery products between 2009 and 2010. Hobbycraft, a chain of stores selling craft supplies, also enjoyed an 11% increase in sales over the year to February 22, 2009.",
"title": "Recent trends for cross stitch"
},
{
"paragraph_id": 16,
"text": "Knitting and cross-stitching have become more popular hobbies for a younger market, in contrast to its traditional reputation as a hobby for retirees. Sewing and craft groups such as Stitch and Bitch London have resurrected the idea of the traditional craft club. At Clothes Show Live 2010 there was a new area called \"Sknitch\" promoting modern sewing, knitting and embroidery.",
"title": "Recent trends for cross stitch"
},
{
"paragraph_id": 17,
"text": "In a departure from the traditional designs associated with cross-stitch, there is a current trend for more postmodern or tongue-in-cheek designs featuring retro images or contemporary sayings. It is linked to a concept known as 'subversive cross-stitch', which involves more risque designs, often fusing the traditional sampler style with sayings designed to shock or be incongruous with the old-fashioned image of cross-stitch.",
"title": "Recent trends for cross stitch"
},
{
"paragraph_id": 18,
"text": "Stitching designs on other materials can be accomplished by using waste canvas. This is a temporary gridded canvas similar to regular canvas used for embroidery that is held together by a water-soluble glue, which is removed after completion of stitch design. Other crafters have taken to cross-stitching on all manner of gridded objects as well including old kitchen strainers or chain-link fences.",
"title": "Recent trends for cross stitch"
},
{
"paragraph_id": 19,
"text": "While cross stitch is traditionally a women's craft, it is growing in popularity among men.",
"title": "Recent trends for cross stitch"
},
{
"paragraph_id": 20,
"text": "In the 21st century, an emphasis on feminist design has emerged within cross-stitch communities. Some cross-stitchers have commented on the way that the practice of embroidery makes them feel connected to the women who practised it before them. There is a push for all embroidery, including cross-stitch, to be respected as a significant art form.",
"title": "Cross-stitch and feminism"
},
{
"paragraph_id": 21,
"text": "The development of computer technology has also affected such a seemingly conservative craft as cross-stitch. With the help of computer visualization algorithms, it is now possible to create embroidery designs using a photograph or any other picture. Visualisation uses a drawing on a graphical grid, representing colors and / or symbols, which gives the user an indication of the possible use of colors, the position of those colors, and the type of stitch used, such as full cross or quarter stitch.",
"title": "Cross-stitch and computers"
},
{
"paragraph_id": 22,
"text": "An increasingly popular activity for cross-stitchers is to watch and make YouTube videos detailing their hobby. Flosstubers, as they are known, typically cover WIPs (Works in Progress), FOs (Finished Objects), and Haul (new patterns, thread, and fabric, as well as cross-stitching accessories, such as needle minders). Other accessories include but are not limited to: Floss organizers, thread conditioner, pin cushions, aida cloth or plastic canvas, and embroidery needles.",
"title": "Flosstube"
}
] | Cross-stitch is a form of sewing and a popular form of counted-thread embroidery in which X-shaped stitches in a tiled, raster-like pattern are used to form a picture. The stitcher counts the threads on a piece of evenweave fabric in each direction so that the stitches are of uniform size and appearance. This form of cross-stitch is also called counted cross-stitch in order to distinguish it from other forms of cross-stitch. Sometimes cross-stitch is done on designs printed on the fabric; the stitcher simply stitches over the printed pattern. Cross-stitch is often executed on easily countable fabric called aida cloth whose weave creates a plainly visible grid of squares with holes for the needle at each corner. Fabrics used in cross-stitch include linen, aida cloth, and mixed-content fabrics called 'evenweave' such as jobelan. All cross-stitch fabrics are technically "evenweave" as the term refers to the fact that the fabric is woven to make sure that there are the same number of threads per inch in both the warp and the weft. Fabrics are categorized by threads per inch, which can range from 11 to 40 count. Counted cross-stitch projects are worked from a gridded pattern called a chart and can be used on any count fabric; the count of the fabric and the number of threads per stitch determine the size of the finished stitching. For example, if a given design is stitched on a 28 count cross-stitch fabric with each cross worked over two threads, the finished stitching size is the same as it would be on a 14 count aida cloth fabric with each cross worked over one square. These methods are referred to as "2 over 2" and "1 over 1", respectively. There are different methods of stitching a pattern, including the cross-country method where one colour is stitched at a time, or the parking method where one block of fabric is stitched at a time and the end of the thread is "parked" at the next point the same colour occurs in the pattern. | 2001-03-30T21:20:59Z | 2023-12-29T18:29:39Z | [
"Template:Reflist",
"Template:Commons category",
"Template:Sewing",
"Template:Cite news",
"Template:Citation",
"Template:Authority control",
"Template:Short description",
"Template:For",
"Template:Portal",
"Template:Cite web",
"Template:Decorative arts",
"Template:Main",
"Template:Citation needed",
"Template:Cite book",
"Template:Cbignore",
"Template:ISBN",
"Template:Prone to spam",
"Template:Embroidery"
] | https://en.wikipedia.org/wiki/Cross-stitch |
5,362 | Casino game | Games available in most casinos are commonly called casino games. In a casino game, the players gamble cash or casino chips on various possible random outcomes or combinations of outcomes. Casino games are also available in online casinos, where permitted by law. Casino games can also be played outside of casinos for entertainment purposes, like in parties or in school competitions, on machines that simulate gambling.
There are three general categories of casino games: gaming machines, table games, and random number games. Gaming machines, such as slot machines and pachinko, are usually played by one player at a time and do not require the involvement of casino employees. Table games, such as blackjack or craps, involve one or more players who are competing against the house (the casino itself) rather than each other. Table games are usually conducted by casino employees known as croupiers or dealers. Random number games are based on the selection of random numbers, either from a computerized random number generator or from other gaming equipment. Random number games may be played at a table or through the purchase of paper tickets or cards, such as keno or bingo.
Some casino games combine multiple of the above aspects; for example, roulette is a table game conducted by a dealer, that involves random numbers. Casinos may also offer other types of gaming, such as hosting poker games or tournaments where players compete against each other.
Games commonly found at casinos include table games, gaming machines and random number games.
In the United States, 'table game' is the term used for games of chance such as blackjack, craps, roulette, and baccarat that are played against the casino and operated by one or more live croupiers, as opposed to those played on a mechanical device like a slot machine or against other players instead of the casino, such as standard poker.
Table games are popularly played in casinos and involve some form of legal gambling, but they are also played privately under varying house rules. The term has significance in that some jurisdictions permit casinos to have only slots and no table games. In some states, this law has resulted in casinos employing electronic table games, such as roulette, blackjack, and craps.
Table games found in casinos include:
Gaming machines found in casinos include:
Random numbers games found in casinos include:
Casino games typically provide a predictable long-term advantage to the casino, or "house", while offering the players the possibility of a short-term gain that in some cases can be large. Some casino games have a skill element, where the players' decisions have an impact on the results. Players possessing sufficient skills to eliminate the inherent long-term disadvantage (the house edge or vigorish) in a casino game are referred to as advantage players.
The players' disadvantage is a result of the casino not paying winning wagers according to the game's "true odds", which are the payouts that would be expected considering the odds of a wager either winning or losing. For example, if a game is played by wagering on the number that would result from the roll of one die, the true odds would be 6 times the amount wagered since there is a 1 in 6 chance of any single number appearing, assuming that the player gets the original amount wagered back. However, the casino may only pay 4 times the amount wagered for a winning wager.
The house edge, or vigorish, is defined as the casino profit expressed as a percentage of the player's original bet. (In games such as blackjack or Spanish 21, the final bet may be several times the original bet, if the player doubles and splits.)
In American roulette, there are two "zeroes" (0, 00) and 36 non-zero numbers (18 red and 18 black). This leads to a higher house edge compared to European roulette. The chances of a player, who bets 1 unit on red, winning are 18/38 and his chances of losing 1 unit are 20/38. The player's expected value is EV = (18/38 × 1) + (20/38 × (−1)) = 18/38 − 20/38 = −2/38 = −5.26%. Therefore, the house edge is 5.26%. After 10 spins, betting 1 unit per spin, the average house profit will be 10 × 1 × 5.26% = 0.53 units. European roulette wheels have only one "zero" and therefore the house advantage (ignoring the en prison rule) is equal to 1/37 = 2.7%.
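The roulette arithmetic above can be checked with a short Python sketch. It only reproduces the stated expected-value calculation for an even-money bet; the helper name is ours and nothing here models a full casino game.

def house_edge(win_slots, total_slots):
    # Player EV per 1-unit even-money bet: +1 with probability win_slots/total_slots, otherwise -1.
    p_win = win_slots / total_slots
    player_ev = p_win * 1 + (1 - p_win) * -1
    return -player_ev  # the house edge is the negated player expectation

print(round(house_edge(18, 38) * 100, 2))  # American wheel (0 and 00): 5.26
print(round(house_edge(18, 37) * 100, 2))  # European wheel (single 0): 2.7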
The house edge of casino games varies greatly with the game, with some games having an edge as low as 0.3%. Keno can have house edges of up to 25%, slot machines having up to 15%.
The calculation of the roulette house edge is a trivial exercise; for other games, this is not usually the case. Combinatorial analysis and/or computer simulation is necessary to complete the task.
In games that have a skill element, such as blackjack or Spanish 21, the house edge is defined as the house advantage from optimal play (without the use of advanced techniques such as card counting), on the first hand of the shoe (the container that holds the cards). The set of optimal plays for all possible hands is known as "basic strategy" and is highly dependent on the specific rules and even the number of decks used.
Traditionally, the majority of casinos have refused to reveal the house edge information for their slots games, and due to the unknown number of symbols and weightings of the reels, in most cases, it is much more difficult to calculate the house edge than in other casino games. However, due to some online properties revealing this information and some independent research conducted by Michael Shackleford in the offline sector, this pattern is slowly changing.
In games where players are not competing against the house, such as poker, the casino usually earns money via a commission, known as a "rake".
The luck factor in a casino game is quantified using standard deviations (SD). The standard deviation of a simple game like roulette can be calculated using the binomial distribution. In the binomial distribution, SD = √(npq), where n = number of rounds played, p = probability of winning, and q = probability of losing. The binomial distribution assumes a result of 1 unit for a win, and 0 units for a loss, rather than −1 units for a loss, which doubles the range of possible outcomes. Furthermore, if we flat bet at 10 units per round instead of 1 unit, the range of possible outcomes increases 10 fold.
For example, after 10 rounds at 1 unit per round, the standard deviation will be 2 × 1 × √(10 × 18/38 × 20/38) = 3.16 units. After 10 rounds, the expected loss will be 10 × 1 × 5.26% = 0.53. As you can see, standard deviation is many times the magnitude of the expected loss.
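A brief Python sketch of the two quantities just computed, assuming flat 1-unit bets on red on an American wheel; the helper functions are illustrative, not standard gaming-mathematics library calls.

import math

P_WIN, P_LOSE = 18 / 38, 20 / 38  # red/black bet on an American wheel

def expected_loss(rounds, bet=1):
    return rounds * bet * (P_LOSE - P_WIN)          # 5.26% of total turnover

def standard_deviation(rounds, bet=1):
    # The factor of 2 rescales the binomial {0, 1} outcomes to the {-1, +1} win/loss units.
    return 2 * bet * math.sqrt(rounds * P_WIN * P_LOSE)

print(round(expected_loss(10), 2), round(standard_deviation(10), 2))  # 0.53 3.16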
The standard deviation for pai gow poker is the lowest out of all common casino games. Many casino games, particularly slot machines, have extremely high standard deviations. The bigger the potential payouts, the more the standard deviation may increase.
As the number of rounds increases, eventually, the expected loss will exceed the standard deviation, many times over. From the formula, we can see that the standard deviation is proportional to the square root of the number of rounds played, while the expected loss is proportional to the number of rounds played. As the number of rounds increases, the expected loss increases at a much faster rate. This is why it is impossible for a gambler to win in the long term. It is the high ratio of short-term standard deviation to expected loss that fools gamblers into thinking that they can win.
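Under the same assumptions (flat 1-unit bets on red on an American wheel), a small self-contained sketch shows roughly where the expected loss first overtakes one standard deviation:

import math

p, q = 18 / 38, 20 / 38
rounds = 1
while rounds * (q - p) <= 2 * math.sqrt(rounds * p * q):
    rounds += 1
print(rounds)  # roughly 360 rounds; beyond this point the edge dominates the luck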
It is important for a casino to know both the house edge and variance for all of their games. The house edge tells them what kind of profit they will make as a percentage of turnover, and the variance tells them how much they need in the way of cash reserves. The mathematicians and computer programmers that do this kind of work are called gaming mathematicians and gaming analysts. Casinos do not have in-house expertise in this field, so they outsource their requirements to experts in the gaming analysis field. | [
{
"paragraph_id": 0,
"text": "Games available in most casinos are commonly called casino games. In a casino game, the players gamble cash or casino chips on various possible random outcomes or combinations of outcomes. Casino games are also available in online casinos, where permitted by law. Casino games can also be played outside of casinos for entertainment purposes, like in parties or in school competitions, on machines that simulate gambling.",
"title": ""
},
{
"paragraph_id": 1,
"text": "There are three general categories of casino games: gaming machines, table games, and random number games. Gaming machines, such as slot machines and pachinko, are usually played by one player at a time and do not require the involvement of casino employees. Tables games, such as blackjack or craps, involve one or more players who are competing against the house (the casino itself) rather than each other. Table games are usually conducted by casino employees known as croupiers or dealers. Random number games are based on the selection of random numbers, either from a computerized random number generator or from other gaming equipment. Random number games may be played at a table or through the purchase of paper tickets or cards, such as keno or bingo.",
"title": "Categories"
},
{
"paragraph_id": 2,
"text": "Some casino games combine multiple of the above aspects; for example, roulette is a table game conducted by a dealer, that involves random numbers. Casinos may also offer other types of gaming, such as hosting poker games or tournaments where players compete against each other.",
"title": "Categories"
},
{
"paragraph_id": 3,
"text": "Games commonly found at casinos include table games, gaming machines and random number games.",
"title": "Common casino games"
},
{
"paragraph_id": 4,
"text": "In the United States, 'table game' is the term used for games of chance such as blackjack, craps, roulette, and baccarat that are played against the casino and operated by one or more live croupiers, as opposed to those played on a mechanical device like a slot machine or against other players instead of the casino, such as standard poker.",
"title": "Common casino games"
},
{
"paragraph_id": 5,
"text": "Table games are popularly played in casinos and involve some form of legal gambling, but they are also played privately under varying house rules. The term has significance in that some jurisdictions permit casinos to have only slots and no table games. In some states, this law has resulted in casinos employing electronic table games, such as roulette, blackjack, and craps.",
"title": "Common casino games"
},
{
"paragraph_id": 6,
"text": "Table games found in casinos include:",
"title": "Common casino games"
},
{
"paragraph_id": 7,
"text": "Gaming machines found in casinos include:",
"title": "Common casino games"
},
{
"paragraph_id": 8,
"text": "Random numbers games found in casinos include:",
"title": "Common casino games"
},
{
"paragraph_id": 9,
"text": "",
"title": "House advantage"
},
{
"paragraph_id": 10,
"text": "Casino games typically provide a predictable long-term advantage to the casino, or \"house\", while offering the players the possibility of a short-term gain that in some cases can be large. Some casino games have a skill element, where the players' decisions have an impact on the results. Players possessing sufficient skills to eliminate the inherent long-term disadvantage (the house edge or vigorish) in a casino game are referred to as advantage players.",
"title": "House advantage"
},
{
"paragraph_id": 11,
"text": "The players' disadvantage is a result of the casino not paying winning wagers according to the game's \"true odds\", which are the payouts that would be expected considering the odds of a wager either winning or losing. For example, if a game is played by wagering on the number that would result from the roll of one die, the true odds would be 6 times the amount wagered since there is a 1 in 6 chance of any single number appearing, assuming that the player gets the original amount wagered back. However, the casino may only pay 4 times the amount wagered for a winning wager.",
"title": "House advantage"
},
{
"paragraph_id": 12,
"text": "The house edge, or vigorish, is defined as the casino profit expressed as a percentage of the player's original bet. (In games such as blackjack or Spanish 21, the final bet may be several times the original bet, if the player doubles and splits.)",
"title": "House advantage"
},
{
"paragraph_id": 13,
"text": "In American roulette, there are two \"zeroes\" (0, 00) and 36 non-zero numbers (18 red and 18 black). This leads to a higher house edge compared to European roulette. The chances of a player, who bets 1 unit on red, winning are 18/38 and his chances of losing 1 unit are 20/38. The player's expected value is EV = (18/38 × 1) + (20/38 × (−1)) = 18/38 − 20/38 = −2/38 = −5.26%. Therefore, the house edge is 5.26%. After 10 spins, betting 1 unit per spin, the average house profit will be 10 × 1 × 5.26% = 0.53 units. European roulette wheels have only one \"zero\" and therefore the house advantage (ignoring the en prison rule) is equal to 1/37 = 2.7%.",
"title": "House advantage"
},
{
"paragraph_id": 14,
"text": "The house edge of casino games varies greatly with the game, with some games having an edge as low as 0.3%. Keno can have house edges of up to 25%, slot machines having up to 15%.",
"title": "House advantage"
},
{
"paragraph_id": 15,
"text": "The calculation of the roulette house edge is a trivial exercise; for other games, this is not usually the case. Combinatorial analysis and/or computer simulation is necessary to complete the task.",
"title": "House advantage"
},
{
"paragraph_id": 16,
"text": "In games that have a skill element, such as blackjack or Spanish 21, the house edge is defined as the house advantage from optimal play (without the use of advanced techniques such as card counting), on the first hand of the shoe (the container that holds the cards). The set of optimal plays for all possible hands is known as \"basic strategy\" and is highly dependent on the specific rules and even the number of decks used.",
"title": "House advantage"
},
{
"paragraph_id": 17,
"text": "Traditionally, the majority of casinos have refused to reveal the house edge information for their slots games, and due to the unknown number of symbols and weightings of the reels, in most cases, it is much more difficult to calculate the house edge than in other casino games. However, due to some online properties revealing this information and some independent research conducted by Michael Shackleford in the offline sector, this pattern is slowly changing.",
"title": "House advantage"
},
{
"paragraph_id": 18,
"text": "In games where players are not competing against the house, such as poker, the casino usually earns money via a commission, known as a \"rake\".",
"title": "House advantage"
},
{
"paragraph_id": 19,
"text": "The luck factor in a casino game is quantified using standard deviations (SD). The standard deviation of a simple game like roulette can be calculated using the binomial distribution. In the binomial distribution, SD = n p q {\\displaystyle {\\sqrt {npq}}} , where n = number of rounds played, p = probability of winning, and q = probability of losing. The binomial distribution assumes a result of 1 unit for a win, and 0 units for a loss, rather than −1 units for a loss, which doubles the range of possible outcomes. Furthermore, if we flat bet at 10 units per round instead of 1 unit, the range of possible outcomes increases 10 fold.",
"title": "House advantage"
},
{
"paragraph_id": 20,
"text": "For example, after 10 rounds at 1 unit per round, the standard deviation will be 2 × 1 × 10 ∗ 18 / 38 ∗ 20 / 38 {\\displaystyle {\\sqrt {10*18/38*20/38}}} = 3.16 units. After 10 rounds, the expected loss will be 10 × 1 × 5.26% = 0.53. As you can see, standard deviation is many times the magnitude of the expected loss.",
"title": "House advantage"
},
{
"paragraph_id": 21,
"text": "The standard deviation for pai gow poker is the lowest out of all common casino games. Many casino games, particularly slot machines, have extremely high standard deviations. The bigger size of the potential payouts, the more the standard deviation may increase.",
"title": "House advantage"
},
{
"paragraph_id": 22,
"text": "As the number of rounds increases, eventually, the expected loss will exceed the standard deviation, many times over. From the formula, we can see that the standard deviation is proportional to the square root of the number of rounds played, while the expected loss is proportional to the number of rounds played. As the number of rounds increases, the expected loss increases at a much faster rate. This is why it is impossible for a gambler to win in the long term. It is the high ratio of short-term standard deviation to expected loss that fools gamblers into thinking that they can win.",
"title": "House advantage"
},
{
"paragraph_id": 23,
"text": "It is important for a casino to know both the house edge and variance for all of their games. The house edge tells them what kind of profit they will make as a percentage of turnover, and the variance tells them how much they need in the way of cash reserves. The mathematicians and computer programmers that do this kind of work are called gaming mathematicians and gaming analysts. Casinos do not have in-house expertise in this field, so they outsource their requirements to experts in the gaming analysis field.",
"title": "House advantage"
}
] | Games available in most casinos are commonly called casino games. In a casino game, the players gamble cash or casino chips on various possible random outcomes or combinations of outcomes. Casino games are also available in online casinos, where permitted by law. Casino games can also be played outside of casinos for entertainment purposes, like in parties or in school competitions, on machines that simulate gambling. | 2001-10-14T21:45:30Z | 2023-12-11T12:23:41Z | [
"Template:Anchor",
"Template:More citations needed",
"Template:Distinguish",
"Template:Redirect",
"Template:Reflist",
"Template:Cite web",
"Template:Cite book",
"Template:Cite news",
"Template:Cite journal",
"Template:Gambling",
"Template:Short description"
] | https://en.wikipedia.org/wiki/Casino_game |
5,363 | Video game | A video game or computer game is an electronic game that involves interaction with a user interface or input device (such as a joystick, controller, keyboard, or motion sensing device) to generate visual feedback from a display device, most commonly shown in a video format on a television set, computer monitor, flat-panel display or touchscreen on handheld devices, or a virtual reality headset. Most modern video games are audiovisual, with audio complement delivered through speakers or headphones, and sometimes also with other types of sensory feedback (e.g., haptic technology that provides tactile sensations), and some video games also allow microphone and webcam inputs for in-game chatting and livestreaming.
Video games are typically categorized according to their hardware platform, which traditionally includes arcade video games, console games, and computer (PC) games; the latter also encompasses LAN games, online games, and browser games. More recently, the video game industry has expanded onto mobile gaming through mobile devices (such as smartphones and tablet computers), virtual and augmented reality systems, and remote cloud gaming. Video games are also classified into a wide range of genres based on their style of gameplay and target audience.
The first video game prototypes in the 1950s and 1960s were simple extensions of electronic games using video-like output from large, room-sized mainframe computers. The first consumer video game was the arcade video game Computer Space in 1971. In 1972 came the iconic hit game Pong and the first home console, the Magnavox Odyssey. The industry grew quickly during the "golden age" of arcade video games from the late 1970s to early 1980s but suffered from the crash of the North American video game market in 1983 due to loss of publishing control and saturation of the market. Following the crash, the industry matured, was dominated by Japanese companies such as Nintendo, Sega, and Sony, and established practices and methods around the development and distribution of video games to prevent a similar crash in the future, many of which continue to be followed. In the 2000s, the core industry centered on "AAA" games, leaving little room for riskier experimental games. Coupled with the availability of the Internet and digital distribution, this gave room for independent video game development (or "indie games") to gain prominence into the 2010s. Since then, the commercial importance of the video game industry has been increasing. The emerging Asian markets and proliferation of smartphone games in particular are altering player demographics towards casual gaming and increasing monetization by incorporating games as a service.
Today, video game development requires numerous interdisciplinary skills, vision, teamwork, and liaisons between different parties, including developers, publishers, distributors, retailers, hardware manufacturers, and other marketers, to successfully bring a game to its consumers. As of 2020, the global video game market had estimated annual revenues of US$159 billion across hardware, software, and services, which is three times the size of the global music industry and four times that of the film industry in 2019, making it a formidable heavyweight across the modern entertainment industry. The video game market is also a major influence behind the electronics industry, where personal computer component, console, and peripheral sales, as well as consumer demands for better game performance, have been powerful driving factors for hardware design and innovation.
Early video games use interactive electronic devices with various display formats. The earliest example is from 1947: a "cathode-ray tube amusement device" was filed for a patent on 25 January 1947, by Thomas T. Goldsmith Jr. and Estle Ray Mann, and issued on 14 December 1948, as U.S. Patent 2455992. Inspired by radar display technology, it consists of an analog device allowing a user to control the parabolic arc of a dot on the screen to simulate a missile being fired at targets, which are paper drawings fixed to the screen. Other early examples include Christopher Strachey's draughts game; the Nimrod computer at the 1951 Festival of Britain; OXO, a tic-tac-toe computer game by Alexander S. Douglas for the EDSAC in 1952; Tennis for Two, an electronic interactive game engineered by William Higinbotham in 1958; and Spacewar!, written by Massachusetts Institute of Technology students Martin Graetz, Steve Russell, and Wayne Wiitanen on a DEC PDP-1 computer in 1961. Each game has a different means of display: NIMROD has a panel of lights to play the game of Nim, OXO has a graphical display to play tic-tac-toe, Tennis for Two has an oscilloscope to display a side view of a tennis court, and Spacewar! has the DEC PDP-1's vector display to have two spaceships battle each other.
These preliminary inventions paved the way for the origins of video games today. Ralph H. Baer, while working at Sanders Associates in 1966, devised a control system to play a rudimentary game of table tennis on a television screen. With the company's approval, Baer built the prototype "Brown Box". Sanders patented Baer's inventions and licensed them to Magnavox, which commercialized it as the first home video game console, the Magnavox Odyssey, released in 1972. Separately, Nolan Bushnell and Ted Dabney, inspired by seeing Spacewar! running at Stanford University, devised a similar version running in a smaller coin-operated arcade cabinet using a less expensive computer. This was released as Computer Space, the first arcade video game, in 1971. Bushnell and Dabney went on to form Atari, Inc., and with Allan Alcorn, created their second arcade game in 1972, the hit ping pong-style Pong, which was directly inspired by the table tennis game on the Odyssey. Sanders and Magnavox sued Atari for infringement of Baer's patents, but Atari settled out of court, paying for perpetual rights to the patents. Following their agreement, Atari made a home version of Pong, which was released by Christmas 1975. The success of the Odyssey and Pong, both as an arcade game and home machine, launched the video game industry. Both Baer and Bushnell have been titled "Father of Video Games" for their contributions.
The term "video game" was developed to distinguish this class of electronic games that were played on some type of video display rather than on a teletype printer, audio speaker or similar device. This also distinguished from many handheld electronic games like Merlin which commonly used LED lights for indicators but did not use these in combination for imaging purposes.
"Computer game" may also be used as a descriptor, as all these types of games essentially require the use of a computer processor, and in some cases, it is used interchangeably with "video game". Particularly in the United Kingdom and Western Europe, this is common due to the historic relevance of domestically produced microcomputers. Other terms used include digital game, for example by the Australian Bureau of Statistics. However, the term "computer game" can also be used to more specifically refer to games played primarily on personal computers or other type of flexible hardware systems (also known as a PC game), as a way distinguish them from console games, arcade games or mobile games. Other terms such as "television game" or "telegame" had been used in the 1970s and early 1980s, particularly for the home gaming consoles that rely on connection to a television set. In Japan, where consoles like the Odyssey were first imported and then made within the country by the large television manufacturers such as Toshiba and Sharp Corporation, such games are known as "TV games", or TV geemu or terebi geemu. "Electronic game" may also be used to refer to video games, but this also incorporates devices like early handheld electronic games that lack any video output. and the term "TV game" is still commonly used into the 21st century.
The first appearance of the term "video game" emerged around 1973. The Oxford English Dictionary cited a 10 November 1973 BusinessWeek article as the first printed use of the term. Though Bushnell believed the term came from a vending magazine review of Computer Space in 1971, a review of the major vending magazines Vending Times and Cashbox showed that the term came much earlier, appearing first around March 1973 in these magazines in mass usage including by the arcade game manufacturers. As analyzed by video game historian Keith Smith, the sudden appearance suggested that the term had been proposed and readily adopted by those involved. This appeared to trace to Ed Adlum, who ran Cashbox's coin-operated section until 1972 and then later founded RePlay Magazine, covering the coin-op amusement field, in 1975. In a September 1982 issue of RePlay, Adlum is credited with first naming these games as "video games": "RePlay's Eddie Adlum worked at 'Cash Box' when 'TV games' first came out. The personalities in those days were Bushnell, his sales manager Pat Karns and a handful of other 'TV game' manufacturers like Henry Leyser and the McEwan brothers. It seemed awkward to call their products 'TV games', so borrowing a word from Billboard's description of movie jukeboxes, Adlum started to refer to this new breed of amusement machine as 'video games.' The phrase stuck." Adlum explained in 1985 that up until the early 1970s, amusement arcades typically had non-video arcade games such as pinball machines and electro-mechanical games. With the arrival of video games in arcades during the early 1970s, there was initially some confusion in the arcade industry over what term should be used to describe the new games. He "wrestled with descriptions of this type of game," alternating between "TV game" and "television game" but "finally woke up one day" and said, "what the hell... video game!"
For many years, the traveling Videotopia exhibit served as the closest representation of such a vital resource. In addition to collecting home video game consoles, the Electronics Conservancy organization set out to locate and restore 400 antique arcade cabinets after realizing that the majority of these games had been destroyed, and it feared the loss of their historical significance. Video games have increasingly been used to present history in ways that help audiences understand its methods and terminology. Researchers have looked at how historical representations affect how the public perceives the past, and digital humanists encourage historians to use video games as primary materials. Over time, the understanding of what a video game really means has itself evolved. Whether played on a monitor, a television, or a handheld device, video games can be displayed in many ways for users to enjoy. People have drawn comparisons between flow-state-engaged video gamers and pupils in conventional school settings. In traditional, teacher-led classrooms, students have little say in what they learn, are passive consumers of the information selected by teachers, are required to follow the pace and skill level of the group (group teaching), and receive brief, imprecise, normative feedback on their work. As video games continue to develop better graphics and new genres, they create new terminology as the unfamiliar becomes familiar. Each year, new consoles are released to compete against rival brands with similar features, steering consumers toward one purchase or another. Companies now rely on games that only their specific console can play to persuade consumers to buy their hardware, whereas when video games first began there was little to no variety. In 1989, a console war was underway between Nintendo, one of the biggest names in gaming, and its rival Sega, whose Master System failed to compete, allowing the Nintendo Entertainment System to become one of the most widely sold consoles in the world. Technology continued to advance as the computer came to be used in people's homes for more than just office work and daily tasks. Games were implemented on computers and have grown steadily since then, including computer-controlled opponents to play against. Early games like tic-tac-toe, solitaire, and Tennis for Two brought gaming to systems that were not designed specifically for play.
While many games readily fall into a clear, well-understood definition of video games, new genres and innovations in game development have raised the question of which essential factors of a video game separate the medium from other forms of entertainment.
Interactive films, introduced in the 1980s with games like Dragon's Lair, featured full-motion video played off a form of external media but offered only limited user interaction. This required a means to distinguish such games from more traditional board games that happen to also use external media, such as the Clue VCR Mystery Game, which required players to watch VCR clips between turns. To distinguish between these two, video games are considered to require some interactivity that affects the visual display.
Most video games tend to feature some type of victory or winning condition, such as a scoring mechanism or a final boss fight. The introduction of walking simulators (adventure games that allow for exploration but lack any objectives) like Gone Home, and empathy games (video games that tend to focus on emotion) like That Dragon, Cancer, brought the idea of games without any such winning condition, raising the question of whether these were actually games. These are still commonly justified as video games as they provide a game world that the player can interact with by some means.
The lack of any industry definition for a video game by 2021 was an issue during the case Epic Games v. Apple which dealt with video games offered on Apple's iOS App Store. Among concerns raised were games like Fortnite Creative and Roblox which created metaverses of interactive experiences, and whether the larger game and the individual experiences themselves were games or not in relation to fees that Apple charged for the App Store. Judge Yvonne Gonzalez Rogers, recognizing that there was not yet an industry-standard definition for a video game, established for her ruling that "At a bare minimum, videogames appear to require some level of interactivity or involvement between the player and the medium" compared to passive entertainment like film, music, and television, and "videogames are also generally graphically rendered or animated, as opposed to being recorded live or via motion capture as in films or television". Rogers still concluded that what is a video game "appears highly eclectic and diverse".
The gameplay experience varies radically between video games, but many common elements exist. Most games will launch into a title screen and give the player a chance to review options such as the number of players before starting a game. Most games are divided into levels which the player must work the avatar through, scoring points, collecting power-ups to boost the avatar's innate attributes, all while either using special attacks to defeat enemies or moves to avoid them. This information is relayed to the player through a type of on-screen user interface such as a heads-up display atop the rendering of the game itself. Taking damage will deplete the avatar's health, and if that falls to zero or if the avatar otherwise falls into an impossible-to-escape location, the player will lose one of their lives. Should they lose all their lives without gaining an extra life or "1-UP", then the player will reach the "game over" screen. Many levels as well as the game's finale end with a type of boss character the player must defeat to continue on. In some games, intermediate points between levels will offer save points where the player can create a saved game on storage media to restart the game should they lose all their lives or need to stop the game and restart at a later time. These may also take the form of a password that can be written down and re-entered at the title screen.
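To make these shared mechanics concrete, the sketch below models health, lives, extra lives, and the game-over condition as a tiny state object; the class name, default values, and behavior are illustrative assumptions rather than a description of any particular game.

```python
# Illustrative sketch of common gameplay bookkeeping: health, lives,
# extra lives ("1-UPs"), and the game-over condition. All values are
# hypothetical defaults chosen for the example.

class PlayerState:
    def __init__(self, lives=3, max_health=100):
        self.lives = lives
        self.max_health = max_health
        self.health = max_health
        self.score = 0

    def take_damage(self, amount):
        """Deplete health; running out of health costs one life."""
        self.health -= amount
        if self.health <= 0:
            self.lose_life()

    def lose_life(self):
        """Also used when the avatar falls somewhere inescapable."""
        self.lives -= 1
        self.health = self.max_health  # respawn with full health

    def collect_one_up(self):
        self.lives += 1

    @property
    def game_over(self):
        return self.lives <= 0


player = PlayerState()
player.take_damage(120)                # one hit larger than max health
print(player.lives, player.game_over)  # 2 False
```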
Product flaws include software bugs, which can manifest as glitches that may be exploited by the player; this is often the foundation of speedrunning a video game. These bugs, along with cheat codes, Easter eggs, and other hidden secrets that were intentionally added to the game, can also be exploited. On some consoles, cheat cartridges allow players to execute these cheat codes, and user-developed trainers allow similar bypassing for computer software games. Either of these might make the game easier, give the player additional power-ups, or change the appearance of the game.
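A cheat code of the kind described here is typically just a specific input sequence the game watches for; the following sketch shows one way such a check might be implemented, with the button sequence and the unlocked effect both invented for the example.

```python
# Hypothetical cheat-code detector: watches the stream of button
# presses for a fixed sequence and unlocks an effect when it appears.

CHEAT_SEQUENCE = ["up", "up", "down", "down", "left", "right", "a"]

def make_cheat_listener(sequence, on_unlock):
    progress = 0
    def on_button(button):
        nonlocal progress
        if button == sequence[progress]:
            progress += 1
            if progress == len(sequence):
                progress = 0
                on_unlock()
        else:
            # restart matching (the wrong press may begin a new attempt)
            progress = 1 if button == sequence[0] else 0
    return on_button

unlocked = []
listener = make_cheat_listener(CHEAT_SEQUENCE,
                               lambda: unlocked.append("extra power-ups"))
for button in ["up", "up", "down", "down", "left", "right", "a"]:
    listener(button)
print(unlocked)  # ['extra power-ups']
```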
To distinguish from electronic games, a video game is generally considered to require a platform, the hardware which contains computing elements, to process player interaction from some type of input device and display the results on a video output display.
Video games require a platform, a specific combination of electronic components or computer hardware and associated software, to operate. The term system is also commonly used. Games are typically designed to be played on one or a limited number of platforms, and exclusivity to a platform is used as a competitive edge in the video game market. However, games may be developed for platforms other than those intended, which are described as ports or conversions. These may also be remasters – where most of the original game's source code is reused, and art assets, models, and game levels are updated for modern systems – and remakes, where, in addition to asset improvements, the original game is significantly reworked, possibly from scratch.
The list below is not exhaustive and excludes other electronic devices capable of playing video games such as PDAs and graphing calculators.
Early arcade games, home consoles, and handheld games were dedicated hardware units with the game's logic built into the electronic componentry of the hardware. Since then, most video game platforms are considered programmable, having means to read and play multiple games distributed on different types of media or formats. Physical formats include ROM cartridges, magnetic storage including magnetic-tape data storage and floppy discs, optical media formats including CD-ROM and DVDs, and flash memory cards. Furthermore, digital distribution over the Internet or other communication methods, as well as cloud gaming, alleviates the need for any physical media. In some cases, the media serves as the direct read-only memory for the game, or it may be in the form of installation media that is used to write the main assets to the player's platform's local storage for faster loading periods and later updates.
Games can be extended with new content and software patches through either expansion packs which are typically available as physical media, or as downloadable content nominally available via digital distribution. These can be offered freely or can be used to monetize a game following its initial release. Several games offer players the ability to create user-generated content to share with others to play. Other games, mostly those on personal computers, can be extended with user-created modifications or mods that alter or add onto the game; these are often unofficial, developed by players through reverse engineering of the game, but other games provide official support for modding.
Video games can use several types of input devices to translate human actions to a game. Most common are game controllers like gamepads and joysticks for most consoles, and as accessories for personal computer systems alongside keyboard and mouse controls. Common controls on the most recent controllers include face buttons, shoulder triggers, analog sticks, and directional pads ("d-pads"). Consoles typically include standard controllers which are shipped or bundled with the console itself, while peripheral controllers are available as a separate purchase from the console manufacturer or third-party vendors. Similar control sets are built into handheld consoles and onto arcade cabinets. Newer technology improvements have incorporated additional technology into the controller or the game platform, such as touchscreens and motion detection sensors that give more options for how the player interacts with the game. Specialized controllers may be used for certain genres of games, including racing wheels, light guns and dance pads. Digital cameras and motion detection can capture movements of the player as input into the game, which can, in some cases, effectively eliminate the need for a controller, and on other systems, such as virtual reality, are used to enhance immersion into the game.
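Since the same game may accept input from a gamepad, keyboard, or touchscreen, developers commonly translate raw device events into abstract game actions; the mapping layer below is a rough sketch, with the device names, buttons, and actions all chosen purely for illustration.

```python
# Rough sketch of an input-mapping layer: raw device events are
# translated into abstract game actions, so gameplay code never needs
# to know which device produced them. Names are illustrative only.

BINDINGS = {
    ("keyboard", "space"): "jump",
    ("gamepad", "button_a"): "jump",
    ("keyboard", "left_arrow"): "move_left",
    ("gamepad", "dpad_left"): "move_left",
    ("touchscreen", "tap"): "jump",
}

def translate(device, raw_input):
    """Return the abstract action bound to a raw input, if any."""
    return BINDINGS.get((device, raw_input))

events = [("gamepad", "button_a"), ("keyboard", "left_arrow"), ("gamepad", "dpad_up")]
print([translate(device, raw) for device, raw in events])
# ['jump', 'move_left', None]  -- unbound inputs are simply ignored
```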
By definition, all video games are intended to output graphics to an external video display, such as cathode-ray tube televisions, newer liquid-crystal display (LCD) televisions and built-in screens, projectors or computer monitors, depending on the type of platform the game is played on. Features such as color depth, refresh rate, frame rate, and screen resolution are a combination of the limitations of the game platform and display device and the program efficiency of the game itself. The game's output can range from fixed displays using LED or LCD elements and text-based games to two-dimensional and three-dimensional graphics and augmented reality displays.
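The relationship between refresh rate, resolution, and color depth can be made concrete with some back-of-the-envelope arithmetic; the figures in this sketch are generic examples rather than the limits of any specific platform.

```python
# Back-of-the-envelope arithmetic relating display characteristics to
# the per-frame work a game must do. All values are generic examples.

refresh_rate_hz = 60                      # target frames per second
frame_budget_ms = 1000 / refresh_rate_hz  # time available per frame
print(f"Frame budget at {refresh_rate_hz} Hz: {frame_budget_ms:.1f} ms")  # ~16.7 ms

width, height = 1920, 1080                # screen resolution in pixels
bytes_per_pixel = 4                       # 32-bit color depth (RGBA)
framebuffer_mb = width * height * bytes_per_pixel / 1024 ** 2
print(f"One {width}x{height} framebuffer: {framebuffer_mb:.1f} MiB")      # ~7.9 MiB
```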
The game's graphics are often accompanied by sound produced by internal speakers on the game platform or external speakers attached to the platform, as directed by the game's programming. This often will include sound effects tied to the player's actions to provide audio feedback, as well as background music for the game.
Some platforms support additional feedback mechanics to the player that a game can take advantage of. This is most commonly haptic technology built into the game controller, such as causing the controller to shake in the player's hands to simulate an earthquake occurring in the game.
Video games are frequently classified by a number of factors related to how one plays them.
A video game, like most other forms of media, may be categorized into genres. However, unlike film or television which use visual or narrative elements, video games are generally categorized into genres based on their gameplay interaction, since this is the primary means by which one interacts with a video game. The narrative setting does not impact gameplay; a shooter game is still a shooter game, regardless of whether it takes place in a fantasy world or in outer space. An exception is the horror game genre, used for games that are based on narrative elements of horror fiction, the supernatural, and psychological horror.
Genre names are normally self-describing in terms of the type of gameplay, such as action game, role playing game, or shoot 'em up, though some genres have derivations from influential works that have defined that genre, such as roguelikes from Rogue, Grand Theft Auto clones from Grand Theft Auto III, and battle royale games from the film Battle Royale. The names may shift over time as players, developers and the media come up with new terms; for example, first-person shooters were originally called "Doom clones" based on the 1993 game. A hierarchy of game genres exists, with top-level genres like "shooter game" and "action game" that broadly capture the game's main gameplay style, and several subgenres of specific implementation, such as, within the shooter game genre, the first-person shooter and third-person shooter. Some cross-genre types also exist that fall under multiple top-level genres, such as the action-adventure game.
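This genre hierarchy maps naturally onto a small tree-like data structure; the sketch below includes only a handful of the genres mentioned above and is not meant as an exhaustive or authoritative taxonomy.

```python
# Minimal sketch of a genre hierarchy: top-level genres map to lists of
# subgenres. Only a few illustrative examples are included.

GENRES = {
    "shooter game": ["first-person shooter", "third-person shooter", "shoot 'em up"],
    "action game": ["platform game", "fighting game"],
    "role-playing game": ["action RPG", "roguelike"],
}

def top_level_of(subgenre):
    """Find the top-level genre a subgenre is listed under, if any."""
    for genre, subgenres in GENRES.items():
        if subgenre in subgenres:
            return genre
    return None

print(top_level_of("first-person shooter"))  # shooter game
```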
A video game's mode describes how many players can use the game at the same time. This is primarily distinguished by single-player video games and multiplayer video games. Within the latter category, multiplayer games can be played in a variety of ways, including locally at the same device, on separate devices connected through a local network such as LAN parties, or online via separate Internet connections. Most multiplayer games are based on competitive gameplay, but many offer cooperative and team-based options as well as asymmetric gameplay. Online games use server structures that can also enable massively multiplayer online games (MMOs) to support hundreds of players at the same time.
A small number of video games are zero-player games, in which the player has very limited interaction with the game itself. These are most commonly simulation games where the player may establish a starting state and then let the game proceed on its own, watching the results as a passive observer, such as with many computerized simulations of Conway's Game of Life.
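Conway's Game of Life is a convenient illustration of the zero-player idea: the player only chooses the starting cells, and a fixed rule then drives every subsequent generation. The following is a compact sketch of one update step.

```python
from collections import Counter

# Zero-player example: one update step of Conway's Game of Life.
# The player only chooses the initial live cells; the rules do the rest.

def step(live_cells):
    """Advance the set of live (x, y) cells by one generation."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A "blinker": three cells in a row oscillate with period two.
generation = {(1, 0), (1, 1), (1, 2)}
for _ in range(2):
    generation = step(generation)
print(sorted(generation))  # [(1, 0), (1, 1), (1, 2)] -- back to the start
```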
Most video games are intended for entertainment purposes. Different game types include:
Video games can be subject to national and international content rating requirements. As with film content ratings, video game ratings typically identify the target age group that the national or regional ratings board believes is appropriate for the player, ranging from all-ages, to teenager-or-older, to mature, to the infrequent adult-only games. Most content review is based on the level of violence, both in the type of violence and how graphically it may be represented, and on sexual content, but other themes such as drug and alcohol use and gambling that can influence children may also be identified. A primary identifier based on a minimum age is used by nearly all systems, along with additional descriptors to identify specific content that players and parents should be aware of.
The regulations vary from country to country but generally are voluntary systems upheld by vendor practices, with penalties and fines issued by the ratings body on the video game publisher for misuse of the ratings. The major content rating systems include:
Additionally, the major content rating system providers have worked to create the International Age Rating Coalition (IARC), a means to streamline and align the content ratings systems between different regions, so that a publisher would only need to complete the content ratings review for one provider and use the IARC process to affirm the content rating for all other regions.
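The IARC idea of completing one review and carrying the result to other regions can be pictured as a simple lookup from a content profile to regional ratings; the correspondences in this sketch are rough, illustrative approximations and not the official IARC mapping tables.

```python
# Illustrative sketch of the IARC idea: a single age-based content
# profile is translated into several regional rating systems. These
# correspondences are rough approximations for illustration only.

REGIONAL_EQUIVALENTS = {
    "all ages": {"ESRB": "E", "PEGI": "3",  "USK": "0"},
    "teen":     {"ESRB": "T", "PEGI": "12", "USK": "12"},
    "mature":   {"ESRB": "M", "PEGI": "18", "USK": "18"},
}

def ratings_for(profile):
    """Return the regional ratings suggested for a content profile."""
    return REGIONAL_EQUIVALENTS.get(profile, {})

print(ratings_for("teen"))  # {'ESRB': 'T', 'PEGI': '12', 'USK': '12'}
```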
Certain nations have even more restrictive rules related to political or ideological content. Within Germany, until 2018, the Unterhaltungssoftware Selbstkontrolle (Entertainment Software Self-Regulation) would refuse to classify, and thus effectively prohibit the sale of, any game depicting Nazi imagery, often requiring developers to replace such imagery with fictional substitutes. This ruling was relaxed in 2018 to allow such imagery under the "social adequacy" exemption that already applied to other works of art. China's video game segment is mostly isolated from the rest of the world due to the government's censorship, and all games published there must adhere to strict government review, disallowing content such as smearing the image of the Chinese Communist Party. Foreign games published in China often require modification by developers and publishers to meet these requirements.
Video game development and authorship, much like any other form of entertainment, is frequently a cross-disciplinary field. Video game developers, as employees within this industry are commonly referred to, primarily include programmers and graphic designers. Over the years this has expanded to include almost every type of skill that one might see prevalent in the creation of any movie or television program, including sound designers, musicians, and other technicians; as well as skills that are specific to video games, such as the game designer. All of these are managed by producers.
In the early days of the industry, it was more common for a single person to manage all of the roles needed to create a video game. As platforms have become more complex and powerful in the type of material they can present, larger teams have been needed to generate all of the art, programming, cinematography, and more. This is not to say that the age of the "one-man shop" is gone, as this is still sometimes found in the casual gaming and handheld markets, where smaller games are prevalent due to technical limitations such as limited RAM or lack of dedicated 3D graphics rendering capabilities on the target platform (e.g., some PDAs).
Video games are programmed like any other piece of computer software. Prior to the mid-1970s, arcade and home consoles were programmed by assembling discrete electro-mechanical components on circuit boards, which limited games to relatively simple logic. By 1975, low-cost microprocessors were available at volume to be used for video game hardware, which allowed game developers to program more detailed games, widening the scope of what was possible. Ongoing improvements in computer hardware technology have expanded what is possible to create in video games, coupled with the convergence of common hardware between console, computer, and arcade platforms, which has simplified the development process. Today, game developers have a number of commercial and open-source tools available to make games, many of which work across multiple platforms to support portability, or they may still opt to create their own for more specialized features and direct control of the game. Many games are built around a game engine that handles the bulk of the game's logic, gameplay, and rendering. These engines can be augmented with specialized engines for specific features, such as a physics engine that simulates the physics of objects in real-time. A variety of middleware exists to help developers access other features, such as playback of videos within games, network-oriented code for games that communicate via online services, matchmaking for online games, and similar features. These features can be used from a developer's programming language of choice, or they may opt to use game development kits that minimize the amount of direct programming they have to do but can also limit the amount of customization they can add to a game. Like all software, video games usually undergo quality testing before release to assure there are no bugs or glitches in the product, though frequently developers will release patches and updates.
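At the core of the engine-driven structure described above sits a game loop that repeatedly gathers input, updates the game state, and renders a frame; the bare-bones sketch below uses placeholder subsystems in place of a real engine's input, physics, and rendering components.

```python
import time

# Bare-bones sketch of the update/render loop at the heart of most game
# engines. The subsystems here are placeholders; a real engine would
# delegate to dedicated input, physics, and rendering components.

TARGET_FPS = 60
FRAME_TIME = 1.0 / TARGET_FPS  # frame budget in seconds (~16.7 ms)

def gather_input():
    return []                  # placeholder: poll controllers/keyboard

def update(state, inputs, dt):
    state["elapsed"] += dt     # placeholder: run gameplay logic/physics
    return state

def render(state):
    pass                       # placeholder: draw the current frame

def run(max_frames=3):
    state = {"elapsed": 0.0}
    last = time.perf_counter()
    for _ in range(max_frames):            # a real loop runs until quit
        now = time.perf_counter()
        dt, last = now - last, now
        state = update(state, gather_input(), dt)
        render(state)
        # sleep away whatever remains of this frame's time budget
        time.sleep(max(0.0, FRAME_TIME - (time.perf_counter() - now)))
    return state

print(run())  # e.g. {'elapsed': ...} after three simulated frames
```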
With the growth of the size of development teams in the industry, the problem of cost has increased. Development studios need the best talent, while publishers reduce costs to maintain profitability on their investment. Typically, a video game console development team ranges from 5 to 50 people, and some exceed 100. In May 2009, Assassin's Creed II was reported to have a development staff of 450. The growth of team size combined with greater pressure to get completed projects into the market to begin recouping production costs has led to a greater occurrence of missed deadlines, rushed games and the release of unfinished products.
While amateur and hobbyist game programming had existed since the late 1970s with the introduction of home computers, a newer trend since the mid-2000s is indie game development. Indie games are made by small teams outside any direct publisher control, their games being smaller in scope than those from the larger "AAA" game studios, and often experimental in gameplay and art style. Indie game development is aided by the wider availability of digital distribution, including the newer mobile gaming market, and readily available, low-cost development tools for these platforms.
Although departments of computer science have been studying the technical aspects of video games for years, theories that examine games as an artistic medium are a relatively recent development in the humanities. The two most visible schools in this emerging field are ludology and narratology. Narrativists approach video games in the context of what Janet Murray calls "Cyberdrama". That is to say, their major concern is with video games as a storytelling medium, one that arises out of interactive fiction. Murray puts video games in the context of the Holodeck, a fictional piece of technology from Star Trek, arguing for the video game as a medium in which the player is allowed to become another person, and to act out in another world. This image of video games received early widespread popular support, and forms the basis of films such as Tron, eXistenZ and The Last Starfighter.
Ludologists break sharply and radically from this idea. They argue that a video game is first and foremost a game, which must be understood in terms of its rules, interface, and the concept of play that it deploys. Espen J. Aarseth argues that, although games certainly have plots, characters, and aspects of traditional narratives, these aspects are incidental to gameplay. For example, Aarseth is critical of the widespread attention that narrativists have given to the heroine of the game Tomb Raider, saying that "the dimensions of Lara Croft's body, already analyzed to death by film theorists, are irrelevant to me as a player, because a different-looking body would not make me play differently... When I play, I don't even see her body, but see through it and past it." Simply put, ludologists reject traditional theories of art because they claim that the artistic and socially relevant qualities of a video game are primarily determined by the underlying set of rules, demands, and expectations imposed on the player.
While many games rely on emergent principles, video games commonly present simulated story worlds where emergent behavior occurs within the context of the game. The term "emergent narrative" has been used to describe how, in a simulated environment, storyline can be created simply by "what happens to the player." However, emergent behavior is not limited to sophisticated games. In general, any place where event-driven instructions occur for AI in a game, emergent behavior will exist. For instance, take a racing game in which cars are programmed to avoid crashing, and they encounter an obstacle on the track: the cars might then maneuver to avoid the obstacle, causing the cars behind them to slow or maneuver to accommodate the cars in front of them and the obstacle. The programmer never wrote code to specifically create a traffic jam, yet one now exists in the game.
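The traffic-jam example can be reproduced with a single local rule; in the sketch below, each simulated car only knows to stop short of whatever is directly ahead of it, yet a queue emerges behind the obstacle even though no code explicitly creates a "jam". All numbers are arbitrary illustration values.

```python
# Emergent-behavior sketch: each car on a one-lane track follows one
# local rule -- never move closer than a safe gap to whatever is ahead.
# No code creates a traffic jam, yet a queue forms behind the obstacle.

SAFE_GAP = 5.0
OBSTACLE_POSITION = 100.0

def step(cars, dt=1.0):
    """Advance (position, speed) pairs, ordered from front to back."""
    new_cars = []
    for i, (pos, speed) in enumerate(cars):
        ahead = OBSTACLE_POSITION if i == 0 else new_cars[i - 1][0]
        target = min(pos + speed * dt, ahead - SAFE_GAP)  # brake if needed
        new_cars.append((max(pos, target), speed))        # never reverse
    return new_cars

cars = [(90.0, 10.0), (80.0, 10.0), (70.0, 10.0)]
for _ in range(10):
    cars = step(cars)
print(cars)  # cars end up queued 5 units apart behind the obstacle
```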
Most commonly, video games are protected by copyright, though both patents and trademarks have been used as well.
Though local copyright regulations vary in the degree of protection, video games qualify as copyrighted audiovisual works, and enjoy cross-country protection under the Berne Convention. This typically only applies to the underlying code, as well as to the artistic aspects of the game such as its writing, art assets, and music. Gameplay itself is generally not considered copyrightable; in the United States among other countries, video games are considered to fall into the idea–expression distinction in that it is how the game is presented and expressed to the player that can be copyrighted, but not the underlying principles of the game.
Because gameplay is normally ineligible for copyright, gameplay ideas in popular games are often replicated and built upon in other games. At times, this repurposing of gameplay can be seen as beneficial and a fundamental part of how the industry has grown by building on the ideas of others. For example, Doom (1993) and Grand Theft Auto III (2001) introduced gameplay that created popular new game genres, the first-person shooter and the Grand Theft Auto clone, respectively, in the few years after their release. However, at times, and more frequently at the onset of the industry, developers would intentionally create video game clones of successful games and game hardware with few changes, which led to the flooding of the arcade and dedicated home console markets around 1978. Cloning is also a major issue in countries that do not have strong intellectual property protection laws, such as China. The lax oversight by China's government and the difficulty for foreign companies to take Chinese entities to court has enabled China to support a large grey market of cloned hardware and software systems. The industry remains challenged to distinguish between creating new games based on refinements of past successful games to create a new type of gameplay, and intentionally creating a clone of a game that may simply swap out art assets.
The early history of the video game industry, following the first game hardware releases and through 1983, had little structure. Video games quickly took off during the golden age of arcade video games from the late 1970s to early 1980s, but the newfound industry was mainly composed of game developers with little business experience. This led to numerous companies forming simply to create clones of popular games to try to capitalize on the market. Due to loss of publishing control and oversaturation of the market, the North American home video game market crashed in 1983, dropping from revenues of around $3 billion in 1983 to $100 million by 1985. Many of the North American companies created in the prior years closed down. Japan's growing game industry was briefly shocked by this crash but had sufficient longevity to withstand the short-term effects, and Nintendo helped to revitalize the industry with the release of the Nintendo Entertainment System in North America in 1985. Along with it, Nintendo established a number of core industrial practices to prevent unlicensed game development and control game distribution on their platform, methods that continue to be used by console manufacturers today.
The industry remained more conservative following the 1983 crash, forming around the concept of publisher-developer dichotomies, and by the 2000s, leading to the industry centralizing around low-risk, triple-A games and studios with large development budgets of $10 million or more. The advent of the Internet brought digital distribution as a viable means to distribute games and contributed to the growth of riskier, more experimental independent game development as an alternative to triple-A games in the late 2000s, which has continued to grow as a significant portion of the video game industry.
Video games have a large network effect that draw on many different sectors that tie into the larger video game industry. While video game developers are a significant portion of the industry, other key participants in the market include:
The industry itself grew out of both the United States and Japan in the 1970s and 1980s before drawing larger worldwide contributions. Today, the video game industry is predominantly led by major companies in North America (primarily the United States and Canada), Europe, and East Asia, including Japan, South Korea, and China. Hardware production remains an area dominated by Asian companies either directly involved in hardware design or part of the production process, but digital distribution and the indie game development of the late 2000s have allowed game developers to flourish nearly anywhere and diversify the field.
According to the market research firm Newzoo, the global video game industry drew estimated revenues of over $159 billion in 2020. Mobile games accounted for the bulk of this, with a 48% share of the market, followed by console games at 28% and personal computer games at 23%.
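For a quick sense of scale, the cited shares translate into rough dollar amounts as follows (the percentages sum to 99% because of rounding in the source figures).

```python
# Rough dollar breakdown of the 2020 figures cited above; shares sum to
# 99% because of rounding in the source.

total_billion = 159
shares = {"mobile": 0.48, "console": 0.28, "PC": 0.23}

for segment, share in shares.items():
    print(f"{segment}: ~${total_billion * share:.0f} billion")
# mobile: ~$76 billion, console: ~$45 billion, PC: ~$37 billion
```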
Sales of different types of games vary widely between countries due to local preferences. Japanese consumers tend to purchase far more handheld games than console games, and especially more than PC games, with a strong preference for games catering to local tastes. Another key difference is that, though having declined in the West, arcade games remain an important sector of the Japanese gaming industry. In South Korea, computer games are generally preferred over console games, especially MMORPG games and real-time strategy games. Computer games are also popular in China.
Video game culture is a worldwide new media subculture formed around video games and game playing. As computer and video games have increased in popularity over time, they have had a significant influence on popular culture. Video game culture has also evolved over time hand in hand with internet culture as well as the increasing popularity of mobile games. Many people who play video games identify as gamers, which can mean anything from someone who enjoys games to someone who is passionate about it. As video games become more social with multiplayer and online capability, gamers find themselves in growing social networks. Gaming can both be entertainment as well as competition, as a new trend known as electronic sports is becoming more widely accepted. In the 2010s, video games and discussions of video game trends and topics can be seen in social media, politics, television, film and music. The COVID-19 pandemic during 2020–2021 gave further visibility to video games as a pastime to enjoy with friends and family online as a means of social distancing.
Since the mid-2000s there has been debate over whether video games qualify as art, primarily on the grounds that the form's interactivity interferes with the artistic intent of the work and that games are designed for commercial appeal. A significant debate on the matter came after film critic Roger Ebert published an essay, "Video Games can never be art", which challenged the industry to prove him and other critics wrong. The view that video games were an art form was cemented in 2011 when the U.S. Supreme Court ruled in the landmark case Brown v. Entertainment Merchants Association that video games were a protected form of speech with artistic merit. Since then, video game developers have come to use the form more for artistic expression, including the development of art games, and the cultural heritage of video games as works of art, beyond their technical capabilities, has been featured in major museum exhibits, including The Art of Video Games at the Smithsonian American Art Museum, which toured other museums from 2012 to 2016.
Video games will inspire sequels and other video games within the same franchise, but they have also influenced works outside of the video game medium. Numerous television shows (both animated and live-action), films, comics and novels have been created based on existing video game franchises. Because video games are an interactive medium, there has been trouble in converting them to these passive forms of media, and typically such works have been critically panned or treated as children's media. For example, until 2019, no video game film had ever received a "Fresh" rating on Rotten Tomatoes, but the releases of Detective Pikachu (2019) and Sonic the Hedgehog (2020), both receiving "Fresh" ratings, showed signs of the film industry having found an approach to adapt video games for the large screen. That said, some early video game-based films have been highly successful at the box office, such as 1995's Mortal Kombat and 2001's Lara Croft: Tomb Raider.
Since the 2000s, there has also been a growing appreciation of video game music, which ranges from chiptunes composed for limited sound-output devices on early computers and consoles to fully scored compositions for most modern games. Such music has frequently served as a platform for covers and remixes, and concerts featuring video game soundtracks performed by bands or orchestras, such as Video Games Live, have also become popular. Video games also frequently incorporate licensed music, particularly in the area of rhythm games, furthering the depth to which video games and music can work together.
Further, video games can serve as a virtual environment under the full control of a producer to create new works. With the capability to render 3D actors and settings in real-time, a new type of work, machinima (short for "machine cinema"), grew out of using video game engines to craft narratives. As video game engines gain higher fidelity, they have also become part of the tools used in more traditional filmmaking. Unreal Engine has been used as a backbone by Industrial Light & Magic for their StageCraft technology for shows like The Mandalorian.
Separately, video games are also frequently used as part of the promotion and marketing for other media, such as for films, anime, and comics. However, these licensed games in the 1990s and 2000s often had a reputation for poor quality, developed without any input from the intellectual property rights owners, and several of them are considered among lists of games with notably negative reception, such as Superman 64. More recently, with these licensed games being developed by triple-A studios or through studios directly connected to the licensed property owner, there has been a significant improvement in the quality of these games, with an early trendsetting example of Batman: Arkham Asylum.
Besides their entertainment value, appropriately-designed video games have been seen to provide value in education across several ages and comprehension levels. Learning principles found in video games have been identified as possible techniques with which to reform the U.S. education system. It has been noticed that gamers adopt an attitude while playing that is of such high concentration, they do not realize they are learning, and that if the same attitude could be adopted at school, education would enjoy significant benefits. Students are found to be "learning by doing" while playing video games while fostering creative thinking.
Video games are also believed to be beneficial to the mind and body. It has been shown that action video game players have better hand–eye coordination and visuo-motor skills, such as their resistance to distraction, their sensitivity to information in the peripheral vision and their ability to count briefly presented objects, than nonplayers. Researchers found that such enhanced abilities could be acquired by training with action games, involving challenges that switch attention between different locations, but not with games requiring concentration on single objects. A 2018 systematic review found evidence that video gaming training had positive effects on cognitive and emotional skills in the adult population, especially with young adults. A 2019 systematic review also added support for the claim that video games are beneficial to the brain, although the beneficial effects of video gaming on the brain differed by video games types.
Organisers of video gaming events, such as the organisers of the D-Lux video game festival in Dumfries, Scotland, have emphasised the positive aspects video games can have on mental health. Organisers, mental health workers and mental health nurses at the event emphasised the relationships and friendships that can be built around video games and how playing games can help people learn about others as a precursor to discussing the person's mental health. A study in 2020 from Oxford University also suggested that playing video games can be a benefit to a person's mental health. The report of 3,274 gamers, all over the age of 18, focused on the games Animal Crossing: New Horizons and Plants vs Zombies: Battle for Neighborville and used actual play-time data. The report found that those that played more games tended to report greater "wellbeing". Also in 2020, computer science professor Regan Mandryk of the University of Saskatchewan said her research also showed that video games can have health benefits such as reducing stress and improving mental health. The university's research studied all age groups – "from pre-literate children through to older adults living in long term care homes" – with a main focus on 18 to 55-year-olds.
A study of gamers' attitudes towards gaming, reported in 2018, found that millennials use video games as a key strategy for coping with stress. In the study of 1,000 gamers, 55% said that gaming "helps them to unwind and relieve stress ... and half said they see the value in gaming as a method of escapism to help them deal with daily work pressures".
Video games have caused controversy since the 1970s. Parents and children's advocates regularly raise concerns that violent video games can influence young players into performing those violent acts in real life, and events such as the Columbine High School massacre in 1999, in which some claimed the perpetrators specifically alluded to using video games to plot out their attack, raised further fears. Medical experts and mental health professionals have also raised concerns that video games may be addictive, and the World Health Organization has included "gaming disorder" in the 11th revision of its International Statistical Classification of Diseases. Other health experts, including the American Psychiatric Association, have stated that there is insufficient evidence that video games can create violent tendencies or lead to addictive behavior, though they agree that video games typically use a compulsion loop in their core design that can trigger dopamine release, helping reinforce the desire to continue playing and potentially leading to violent or addictive behavior. Even with case law establishing that video games qualify as a protected art form, there has been pressure on the video game industry to keep its products in check to avoid excessive violence, particularly in games aimed at younger children. The potentially addictive behavior around games, coupled with the increased use of post-sale monetization of video games, has also raised concern among parents, advocates, and government officials about gambling tendencies that may come from video games, such as controversy around the use of loot boxes in many high-profile games.
Numerous other controversies around video games and their industry have arisen over the years; the more notable incidents include the 1993 United States Congressional hearings on violent games like Mortal Kombat, which led to the formation of the ESRB ratings system; numerous legal actions taken by attorney Jack Thompson over violent games such as Grand Theft Auto III and Manhunt from 2003 to 2007; the outrage over the "No Russian" level from Call of Duty: Modern Warfare 2 in 2009, which allowed the player to shoot a number of innocent non-player characters at an airport; and the Gamergate harassment campaign in 2014 that highlighted misogyny from a portion of the player demographic. The industry as a whole has also dealt with issues related to gender, racial, and LGBTQ+ discrimination and mischaracterization of these minority groups in video games. A further issue in the industry is related to working conditions, as development studios and publishers frequently use "crunch time", required extended working hours, in the weeks and months ahead of a game's release to ensure on-time delivery.
Players of video games often maintain collections of games. More recently there has been interest in retrogaming, focusing on games from the first decades. Games in retail packaging in good shape have become collector's items for the early days of the industry, with some rare releases having gone for over US$100,000 as of 2020. Separately, there is also concern about the preservation of video games, as both game media and the hardware to play them degrade over time. Further, many of the game developers and publishers from the first decades no longer exist, so records of their games have disappeared. Archivists and preservationists have worked within the scope of copyright law to save these games as part of the cultural history of the industry.
There are many video game museums around the world, including the National Videogame Museum in Frisco, Texas, which serves as the largest museum wholly dedicated to the display and preservation of the industry's most important artifacts. Europe hosts video game museums such as the Computer Games Museum in Berlin and the Museum of Soviet Arcade Machines in Moscow and Saint-Petersburg. The Museum of Art and Digital Entertainment in Oakland, California is a dedicated video game museum focusing on playable exhibits of console and computer games. The Video Game Museum of Rome is also dedicated to preserving video games and their history. The International Center for the History of Electronic Games at The Strong in Rochester, New York contains one of the largest collections of electronic games and game-related historical materials in the world, including a 5,000-square-foot (460 m) exhibit which allows guests to play their way through the history of video games. The Smithsonian Institution in Washington, DC has three video games on permanent display: Pac-Man, Dragon's Lair, and Pong.
The Museum of Modern Art has added a total of 20 video games and one video game console to its permanent Architecture and Design Collection since 2012. In 2012, the Smithsonian American Art Museum ran an exhibition on "The Art of Video Games". However, the reviews of the exhibit were mixed, including questioning whether video games belong in an art museum.
{
"paragraph_id": 0,
"text": "A video game or computer game is an electronic game that involves interaction with a user interface or input device (such as a joystick, controller, keyboard, or motion sensing device) to generate visual feedback from a display device, most commonly shown in a video format on a television set, computer monitor, flat-panel display or touchscreen on handheld devices, or a virtual reality headset. Most modern video games are audiovisual, with audio complement delivered through speakers or headphones, and sometimes also with other types of sensory feedback (e.g., haptic technology that provides tactile sensations), and some video games also allow microphone and webcam inputs for in-game chatting and livestreaming.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Video games are typically categorized according to their hardware platform, which traditionally includes arcade video games, console games, and computer (PC) games; the latter also encompasses LAN games, online games, and browser games. More recently, the video game industry has expanded onto mobile gaming through mobile devices (such as smartphones and tablet computers), virtual and augmented reality systems, and remote cloud gaming. Video games are also classified into a wide range of genres based on their style of gameplay and target audience.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The first video game prototypes in the 1950s and 1960s were simple extensions of electronic games using video-like output from large, room-sized mainframe computers. The first consumer video game was the arcade video game Computer Space in 1971. In 1972 came the iconic hit game Pong and the first home console, the Magnavox Odyssey. The industry grew quickly during the \"golden age\" of arcade video games from the late 1970s to early 1980s but suffered from the crash of the North American video game market in 1983 due to loss of publishing control and saturation of the market. Following the crash, the industry matured, was dominated by Japanese companies such as Nintendo, Sega, and Sony, and established practices and methods around the development and distribution of video games to prevent a similar crash in the future, many of which continue to be followed. In the 2000s, the core industry centered on \"AAA\" games, leaving little room for riskier experimental games. Coupled with the availability of the Internet and digital distribution, this gave room for independent video game development (or \"indie games\") to gain prominence into the 2010s. Since then, the commercial importance of the video game industry has been increasing. The emerging Asian markets and proliferation of smartphone games in particular are altering player demographics towards casual gaming and increasing monetization by incorporating games as a service.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Today, video game development requires numerous interdisciplinary skills, vision, teamwork, and liaisons between different parties, including developers, publishers, distributors, retailers, hardware manufacturers, and other marketers, to successfully bring a game to its consumers. As of 2020, the global video game market had estimated annual revenues of US$159 billion across hardware, software, and services, which is three times the size of the global music industry and four times that of the film industry in 2019, making it a formidable heavyweight across the modern entertainment industry. The video game market is also a major influence behind the electronics industry, where personal computer component, console, and peripheral sales, as well as consumer demands for better game performance, have been powerful driving factors for hardware design and innovation.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Early video games use interactive electronic devices with various display formats. The earliest example is from 1947—a \"cathode-ray tube amusement device\" was filed for a patent on 25 January 1947, by Thomas T. Goldsmith Jr. and Estle Ray Mann, and issued on 14 December 1948, as U.S. Patent 2455992. Inspired by radar display technology, it consists of an analog device allowing a user to control the parabolic arc of a dot on the screen to simulate a missile being fired at targets, which are paper drawings fixed to the screen. Other early examples include Christopher Strachey's draughts game, the Nimrod computer at the 1951 Festival of Britain; OXO, a tic-tac-toe computer game by Alexander S. Douglas for the EDSAC in 1952; Tennis for Two, an electronic interactive game engineered by William Higinbotham in 1958; and Spacewar!, written by Massachusetts Institute of Technology students Martin Graetz, Steve Russell, and Wayne Wiitanen's on a DEC PDP-1 computer in 1961. Each game has different means of display: NIMROD has a panel of lights to play the game of Nim, OXO has a graphical display to play tic-tac-toe, Tennis for Two has an oscilloscope to display a side view of a tennis court, and Spacewar! has the DEC PDP-1's vector display to have two spaceships battle each other.",
"title": "Origins"
},
{
"paragraph_id": 5,
"text": "These preliminary inventions paved the way for the origins of video games today. Ralph H. Baer, while working at Sanders Associates in 1966, devised a control system to play a rudimentary game of table tennis on a television screen. With the company's approval, Baer built the prototype \"Brown Box\". Sanders patented Baer's inventions and licensed them to Magnavox, which commercialized it as the first home video game console, the Magnavox Odyssey, released in 1972. Separately, Nolan Bushnell and Ted Dabney, inspired by seeing Spacewar! running at Stanford University, devised a similar version running in a smaller coin-operated arcade cabinet using a less expensive computer. This was released as Computer Space, the first arcade video game, in 1971. Bushnell and Dabney went on to form Atari, Inc., and with Allan Alcorn, created their second arcade game in 1972, the hit ping pong-style Pong, which was directly inspired by the table tennis game on the Odyssey. Sanders and Magnavox sued Atari for infringement of Baer's patents, but Atari settled out of court, paying for perpetual rights to the patents. Following their agreement, Atari made a home version of Pong, which was released by Christmas 1975. The success of the Odyssey and Pong, both as an arcade game and home machine, launched the video game industry. Both Baer and Bushnell have been titled \"Father of Video Games\" for their contributions.",
"title": "Origins"
},
{
"paragraph_id": 6,
"text": "The term \"video game\" was developed to distinguish this class of electronic games that were played on some type of video display rather than on a teletype printer, audio speaker or similar device. This also distinguished from many handheld electronic games like Merlin which commonly used LED lights for indicators but did not use these in combination for imaging purposes.",
"title": "Terminology"
},
{
"paragraph_id": 7,
"text": "\"Computer game\" may also be used as a descriptor, as all these types of games essentially require the use of a computer processor, and in some cases, it is used interchangeably with \"video game\". Particularly in the United Kingdom and Western Europe, this is common due to the historic relevance of domestically produced microcomputers. Other terms used include digital game, for example by the Australian Bureau of Statistics. However, the term \"computer game\" can also be used to more specifically refer to games played primarily on personal computers or other type of flexible hardware systems (also known as a PC game), as a way distinguish them from console games, arcade games or mobile games. Other terms such as \"television game\" or \"telegame\" had been used in the 1970s and early 1980s, particularly for the home gaming consoles that rely on connection to a television set. In Japan, where consoles like the Odyssey were first imported and then made within the country by the large television manufacturers such as Toshiba and Sharp Corporation, such games are known as \"TV games\", or TV geemu or terebi geemu. \"Electronic game\" may also be used to refer to video games, but this also incorporates devices like early handheld electronic games that lack any video output. and the term \"TV game\" is still commonly used into the 21st century.",
"title": "Terminology"
},
{
"paragraph_id": 8,
"text": "The first appearance of the term \"video game\" emerged around 1973. The Oxford English Dictionary cited a 10 November 1973 BusinessWeek article as the first printed use of the term. Though Bushnell believed the term came from a vending magazine review of Computer Space in 1971, a review of the major vending magazines Vending Times and Cashbox showed that the term came much earlier, appearing first around March 1973 in these magazines in mass usage including by the arcade game manufacturers. As analyzed by video game historian Keith Smith, the sudden appearance suggested that the term had been proposed and readily adopted by those involved. This appeared to trace to Ed Adlum, who ran Cashbox's coin-operated section until 1972 and then later founded RePlay Magazine, covering the coin-op amusement field, in 1975. In a September 1982 issue of RePlay, Adlum is credited with first naming these games as \"video games\": \"RePlay's Eddie Adlum worked at 'Cash Box' when 'TV games' first came out. The personalities in those days were Bushnell, his sales manager Pat Karns and a handful of other 'TV game' manufacturers like Henry Leyser and the McEwan brothers. It seemed awkward to call their products 'TV games', so borrowing a word from Billboard's description of movie jukeboxes, Adlum started to refer to this new breed of amusement machine as 'video games.' The phrase stuck.\" Adlum explained in 1985 that up until the early 1970s, amusement arcades typically had non-video arcade games such as pinball machines and electro-mechanical games. With the arrival of video games in arcades during the early 1970s, there was initially some confusion in the arcade industry over what term should be used to describe the new games. He \"wrestled with descriptions of this type of game,\" alternating between \"TV game\" and \"television game\" but \"finally woke up one day\" and said, \"what the hell... video game!\"",
"title": "Terminology"
},
{
"paragraph_id": 9,
"text": "For many years, the traveling Videotopia exhibit served as the closest representation of such a vital resource. In addition to collecting home video game consoles, the Electronics Conservancy organization set out to locate and restore 400 antique arcade cabinets after realizing that the majority of these games had been destroyed and feared the loss of their historical significance. Video games have significantly began to be seen in the real-world as a purpose to present history in a way of understanding the methodology and terms that are being compared. Researchers have looked at how historical representations affect how the public perceives the past, and digital humanists encourage historians to use video games as primary materials. Video games, considering their past and age, have over time progressed as what a video game really means. Whether played through a monitor, TV, or a hand-held device, there are many ways that video games are being displayed for users to enjoy. People have drawn comparisons between flow-state-engaged video gamers and pupils in conventional school settings. In traditional, teacher-led classrooms, students have little say in what they learn, are passive consumers of the information selected by teachers, are required to follow the pace and skill level of the group (group teaching), and receive brief, imprecise, normative feedback on their work. Video games, as they continue to develop into better graphic definition and genre's, create new terminology when something unknown tends to become known. Yearly, consoles are being created to compete against other brands with similar functioning features that tends to lead the consumer into which they'd like to purchase. Now, companies have moved towards games only the specific console can play to grasp the consumer into purchasing their product compared to when video games first began, there was little to no variety. In 1989, a console war begun with Nintendo, one of the biggest in gaming was up against target, Sega with their brand new Master System which, failed to compete, allowing the Nintendo Emulator System to be one of the most consumed product in the world. More technology continued to be created, as the computer began to be used in people's houses for more than just office and daily use. Games began being implemented into computers and have progressively grown since then with coded robots to play against you. Early games like tic-tac-toe, solitaire, and Tennis for Two were great ways to bring new gaming to another system rather than one specifically meant for gaming.",
"title": "Terminology"
},
{
"paragraph_id": 10,
"text": "While many games readily fall into a clear, well-understood definition of video games, new genres and innovations in game development have raised the question of what are the essential factors of a video game that separate the medium from other forms of entertainment.",
"title": "Terminology"
},
{
"paragraph_id": 11,
"text": "The introduction of interactive films in the 1980s with games like Dragon's Lair, featured games with full motion video played off a form of media but only limited user interaction. This had required a means to distinguish these games from more traditional board games that happen to also use external media, such as the Clue VCR Mystery Game which required players to watch VCR clips between turns. To distinguish between these two, video games are considered to require some interactivity that affects the visual display.",
"title": "Terminology"
},
{
"paragraph_id": 12,
"text": "Most video games tend to feature some type of victory or winning conditions, such as a scoring mechanism or a final boss fight. The introduction of walking simulators (adventure games that allow for exploration but lack any objectives) like Gone Home, and empathy games (video games that tend to focus on emotion) like That Dragon, Cancer brought the idea of games that did not have any such type of winning condition and raising the question of whether these were actually games. These are still commonly justified as video games as they provide a game world that the player can interact with by some means.",
"title": "Terminology"
},
{
"paragraph_id": 13,
"text": "The lack of any industry definition for a video game by 2021 was an issue during the case Epic Games v. Apple which dealt with video games offered on Apple's iOS App Store. Among concerns raised were games like Fortnite Creative and Roblox which created metaverses of interactive experiences, and whether the larger game and the individual experiences themselves were games or not in relation to fees that Apple charged for the App Store. Judge Yvonne Gonzalez Rogers, recognizing that there was yet an industry standard definition for a video game, established for her ruling that \"At a bare minimum, videogames appear to require some level of interactivity or involvement between the player and the medium\" compared to passive entertainment like film, music, and television, and \"videogames are also generally graphically rendered or animated, as opposed to being recorded live or via motion capture as in films or television\". Rogers still concluded that what is a video game \"appears highly eclectic and diverse\".",
"title": "Terminology"
},
{
"paragraph_id": 14,
"text": "The gameplay experience varies radically between video games, but many common elements exist. Most games will launch into a title screen and give the player a chance to review options such as the number of players before starting a game. Most games are divided into levels which the player must work the avatar through, scoring points, collecting power-ups to boost the avatar's innate attributes, all while either using special attacks to defeat enemies or moves to avoid them. This information is relayed to the player through a type of on-screen user interface such as a heads-up display atop the rendering of the game itself. Taking damage will deplete their avatar's health, and if that falls to zero or if the avatar otherwise falls into an impossible-to-escape location, the player will lose one of their lives. Should they lose all their lives without gaining an extra life or \"1-UP\", then the player will reach the \"game over\" screen. Many levels as well as the game's finale end with a type of boss character the player must defeat to continue on. In some games, intermediate points between levels will offer save points where the player can create a saved game on storage media to restart the game should they lose all their lives or need to stop the game and restart at a later time. These also may be in the form of a passage that can be written down and reentered at the title screen.",
"title": "Terminology"
},
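The level/health/lives flow described in the gameplay paragraph above can be made concrete with a minimal sketch. Everything here (the class name, three starting lives, 100 health, the damage values) is an illustrative assumption rather than any particular game's implementation.

```python
# Minimal sketch of the health/lives/"game over" flow described above.
# All names and numbers are illustrative assumptions.

class PlayerState:
    def __init__(self, lives=3, max_health=100):
        self.lives = lives
        self.max_health = max_health
        self.health = max_health
        self.score = 0

    def take_damage(self, amount):
        """Deplete health; losing all health costs one life and respawns the avatar."""
        self.health -= amount
        if self.health <= 0:
            self.lives -= 1
            self.health = self.max_health

    @property
    def game_over(self):
        return self.lives <= 0


player = PlayerState()
for hit in (40, 40, 40):               # three hits of 40 damage
    player.take_damage(hit)
print(player.lives, player.game_over)  # 2 False: one life lost, the game continues
```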
{
"paragraph_id": 15,
"text": "Product flaws include software bugs which can manifest as glitches which may be exploited by the player; this is often the foundation of speedrunning a video game. These bugs, along with cheat codes, Easter eggs, and other hidden secrets that were intentionally added to the game can also be exploited. On some consoles, cheat cartridges allow players to execute these cheat codes, and user-developed trainers allow similar bypassing for computer software games. Both of which might make the game easier, give the player additional power-ups, or change the appearance of the game.",
"title": "Terminology"
},
{
"paragraph_id": 16,
"text": "To distinguish from electronic games, a video game is generally considered to require a platform, the hardware which contains computing elements, to process player interaction from some type of input device and displays the results to a video output display.",
"title": "Components"
},
{
"paragraph_id": 17,
"text": "Video games require a platform, a specific combination of electronic components or computer hardware and associated software, to operate. The term system is also commonly used. Games are typically designed to be played on one or a limited number of platforms, and exclusivity to a platform is used as a competitive edge in the video game market. However, games may be developed for alternative platforms than intended, which are described as ports or conversions. These also may be remasters - where most of the original game's source code is reused and art assets, models, and game levels are updated for modern systems – and remakes, where in addition to asset improvements, significant reworking of the original game and possibly from scratch is performed.",
"title": "Components"
},
{
"paragraph_id": 18,
"text": "The list below is not exhaustive and excludes other electronic devices capable of playing video games such as PDAs and graphing calculators.",
"title": "Components"
},
{
"paragraph_id": 19,
"text": "Early arcade games, home consoles, and handheld games were dedicated hardware units with the game's logic built into the electronic componentry of the hardware. Since then, most video game platforms are considered programmable, having means to read and play multiple games distributed on different types of media or formats. Physical formats include ROM cartridges, magnetic storage including magnetic-tape data storage and floppy discs, optical media formats including CD-ROM and DVDs, and flash memory cards. Furthermore digital distribution over the Internet or other communication methods as well as cloud gaming alleviate the need for any physical media. In some cases, the media serves as the direct read-only memory for the game, or it may be the form of installation media that is used to write the main assets to the player's platform's local storage for faster loading periods and later updates.",
"title": "Components"
},
{
"paragraph_id": 20,
"text": "Games can be extended with new content and software patches through either expansion packs which are typically available as physical media, or as downloadable content nominally available via digital distribution. These can be offered freely or can be used to monetize a game following its initial release. Several games offer players the ability to create user-generated content to share with others to play. Other games, mostly those on personal computers, can be extended with user-created modifications or mods that alter or add onto the game; these often are unofficial and were developed by players from reverse engineering of the game, but other games provide official support for modding the game.",
"title": "Components"
},
{
"paragraph_id": 21,
"text": "Video game can use several types of input devices to translate human actions to a game. Most common are the use of game controllers like gamepads and joysticks for most consoles, and as accessories for personal computer systems along keyboard and mouse controls. Common controls on the most recent controllers include face buttons, shoulder triggers, analog sticks, and directional pads (\"d-pads\"). Consoles typically include standard controllers which are shipped or bundled with the console itself, while peripheral controllers are available as a separate purchase from the console manufacturer or third-party vendors. Similar control sets are built into handheld consoles and onto arcade cabinets. Newer technology improvements have incorporated additional technology into the controller or the game platform, such as touchscreens and motion detection sensors that give more options for how the player interacts with the game. Specialized controllers may be used for certain genres of games, including racing wheels, light guns and dance pads. Digital cameras and motion detection can capture movements of the player as input into the game, which can, in some cases, effectively eliminate the control, and on other systems such as virtual reality, are used to enhance immersion into the game.",
"title": "Components"
},
{
"paragraph_id": 22,
"text": "By definition, all video games are intended to output graphics to an external video display, such as cathode-ray tube televisions, newer liquid-crystal display (LCD) televisions and built-in screens, projectors or computer monitors, depending on the type of platform the game is played on. Features such as color depth, refresh rate, frame rate, and screen resolution are a combination of the limitations of the game platform and display device and the program efficiency of the game itself. The game's output can range from fixed displays using LED or LCD elements, text-based games, two-dimensional and three-dimensional graphics, and augmented reality displays.",
"title": "Components"
},
{
"paragraph_id": 23,
"text": "The game's graphics are often accompanied by sound produced by internal speakers on the game platform or external speakers attached to the platform, as directed by the game's programming. This often will include sound effects tied to the player's actions to provide audio feedback, as well as background music for the game.",
"title": "Components"
},
{
"paragraph_id": 24,
"text": "Some platforms support additional feedback mechanics to the player that a game can take advantage of. This is most commonly haptic technology built into the game controller, such as causing the controller to shake in the player's hands to simulate a shaking earthquake occurring in game.",
"title": "Components"
},
{
"paragraph_id": 25,
"text": "Video games are frequently classified by a number of factors related to how one plays them.",
"title": "Classifications"
},
{
"paragraph_id": 26,
"text": "A video game, like most other forms of media, may be categorized into genres. However, unlike film or television which use visual or narrative elements, video games are generally categorized into genres based on their gameplay interaction, since this is the primary means which one interacts with a video game. The narrative setting does not impact gameplay; a shooter game is still a shooter game, regardless of whether it takes place in a fantasy world or in outer space. An exception is the horror game genre, used for games that are based on narrative elements of horror fiction, the supernatural, and psychological horror.",
"title": "Classifications"
},
{
"paragraph_id": 27,
"text": "Genre names are normally self-describing in terms of the type of gameplay, such as action game, role playing game, or shoot 'em up, though some genres have derivations from influential works that have defined that genre, such as roguelikes from Rogue, Grand Theft Auto clones from Grand Theft Auto III, and battle royale games from the film Battle Royale. The names may shift over time as players, developers and the media come up with new terms; for example, first-person shooters were originally called \"Doom clones\" based on the 1993 game. A hierarchy of game genres exist, with top-level genres like \"shooter game\" and \"action game\" that broadly capture the game's main gameplay style, and several subgenres of specific implementation, such as within the shooter game first-person shooter and third-person shooter. Some cross-genre types also exist that fall until multiple top-level genres such as action-adventure game.",
"title": "Classifications"
},
{
"paragraph_id": 28,
"text": "A video game's mode describes how many players can use the game at the same type. This is primarily distinguished by single-player video games and multiplayer video games. Within the latter category, multiplayer games can be played in a variety of ways, including locally at the same device, on separate devices connected through a local network such as LAN parties, or online via separate Internet connections. Most multiplayer games are based on competitive gameplay, but many offer cooperative and team-based options as well as asymmetric gameplay. Online games use server structures that can also enable massively multiplayer online games (MMOs) to support hundreds of players at the same time.",
"title": "Classifications"
},
{
"paragraph_id": 29,
"text": "A small number of video games are zero-player games, in which the player has very limited interaction with the game itself. These are most commonly simulation games where the player may establish a starting state and then let the game proceed on its own, watching the results as a passive observer, such as with many computerized simulations of Conway's Game of Life.",
"title": "Classifications"
},
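As a concrete illustration of the zero-player idea above, the sketch below runs a few generations of Conway's Game of Life: the only interaction is choosing the starting pattern, after which the simulation proceeds entirely on its own. The sparse-set representation and the glider pattern are illustrative choices, not tied to any specific implementation.

```python
# Tiny Conway's Game of Life run: the "player" only picks the starting cells.
from collections import Counter

def step(live_cells):
    """Apply Conway's rules to a set of live (x, y) cells and return the next set."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "glider" chosen as the starting state; from here the game runs itself.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # after 4 generations the glider has shifted one cell diagonally
```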
{
"paragraph_id": 30,
"text": "Most video games are intended for entertainment purposes. Different game types include:",
"title": "Classifications"
},
{
"paragraph_id": 31,
"text": "Video games can be subject to national and international content rating requirements. Like with film content ratings, video game ratings typing identify the target age group that the national or regional ratings board believes is appropriate for the player, ranging from all-ages, to a teenager-or-older, to mature, to the infrequent adult-only games. Most content review is based on the level of violence, both in the type of violence and how graphic it may be represented, and sexual content, but other themes such as drug and alcohol use and gambling that can influence children may also be identified. A primary identifier based on a minimum age is used by nearly all systems, along with additional descriptors to identify specific content that players and parents should be aware of.",
"title": "Classifications"
},
{
"paragraph_id": 32,
"text": "The regulations vary from country to country but generally are voluntary systems upheld by vendor practices, with penalty and fines issued by the ratings body on the video game publisher for misuse of the ratings. Among the major content rating systems include:",
"title": "Classifications"
},
{
"paragraph_id": 33,
"text": "Additionally, the major content system provides have worked to create the International Age Rating Coalition (IARC), a means to streamline and align the content ratings system between different region, so that a publisher would only need to complete the content ratings review for one provider, and use the IARC transition to affirm the content rating for all other regions.",
"title": "Classifications"
},
{
"paragraph_id": 34,
"text": "Certain nations have even more restrictive rules related to political or ideological content. Within Germany, until 2018, the Unterhaltungssoftware Selbstkontrolle (Entertainment Software Self-Regulation) would refuse to classify, and thus allow sale, of any game depicting Nazi imagery, and thus often requiring developers to replace such imagery with fictional ones. This ruling was relaxed in 2018 to allow for such imagery for \"social adequacy\" purposes that applied to other works of art. China's video game segment is mostly isolated from the rest of the world due to the government's censorship, and all games published there must adhere to strict government review, disallowing content such as smearing the image of the Chinese Communist Party. Foreign games published in China often require modification by developers and publishers to meet these requirements.",
"title": "Classifications"
},
{
"paragraph_id": 35,
"text": "Video game development and authorship, much like any other form of entertainment, is frequently a cross-disciplinary field. Video game developers, as employees within this industry are commonly referred, primarily include programmers and graphic designers. Over the years this has expanded to include almost every type of skill that one might see prevalent in the creation of any movie or television program, including sound designers, musicians, and other technicians; as well as skills that are specific to video games, such as the game designer. All of these are managed by producers.",
"title": "Development"
},
{
"paragraph_id": 36,
"text": "In the early days of the industry, it was more common for a single person to manage all of the roles needed to create a video game. As platforms have become more complex and powerful in the type of material they can present, larger teams have been needed to generate all of the art, programming, cinematography, and more. This is not to say that the age of the \"one-man shop\" is gone, as this is still sometimes found in the casual gaming and handheld markets, where smaller games are prevalent due to technical limitations such as limited RAM or lack of dedicated 3D graphics rendering capabilities on the target platform (e.g., some PDAs).",
"title": "Development"
},
{
"paragraph_id": 37,
"text": "Video games are programmed like any other piece of computer software. Prior to the mid-1970s, arcade and home consoles were programmed by assembling discrete electro-mechanical components on circuit boards, which limited games to relatively simple logic. By 1975, low-cost microprocessors were available at volume to be used for video game hardware, which allowed game developers to program more detailed games, widening the scope of what was possible. Ongoing improvements in computer hardware technology has expanded what has become possible to create in video games, coupled with convergence of common hardware between console, computer, and arcade platforms to simplify the development process. Today, game developers have a number of commercial and open source tools available for use to make games, often which are across multiple platforms to support portability, or may still opt to create their own for more specialized features and direct control of the game. Today, many games are built around a game engine that handles the bulk of the game's logic, gameplay, and rendering. These engines can be augmented with specialized engines for specific features, such as a physics engine that simulates the physics of objects in real-time. A variety of middleware exists to help developers to access other features, such as for playback of videos within games, network-oriented code for games that communicate via online services, matchmaking for online games, and similar features. These features can be used from a developers' programming language of choice, or they may opt to also use game development kits that minimize the amount of direct programming they have to do but can also limit the amount of customization they can add into a game. Like all software, video games usually undergo quality testing before release to assure there are no bugs or glitches in the product, though frequently developers will release patches and updates.",
"title": "Development"
},
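The split described above, where a game engine drives the main update/render loop while game-specific logic plugs into it, can be sketched in a few lines. The MiniEngine class and callback names below are hypothetical and for illustration only; real engines are far more elaborate and typically decouple rendering from the fixed-timestep simulation.

```python
# Illustrative sketch of an engine-driven game loop; not any real engine's API.
import time

class MiniEngine:
    def __init__(self, update, render, timestep=1 / 60):
        self.update = update        # game-specific simulation logic (physics, AI, ...)
        self.render = render        # game-specific drawing
        self.timestep = timestep    # fixed simulation step in seconds

    def run(self, duration):
        elapsed = 0.0
        while elapsed < duration:
            self.update(self.timestep)   # advance the simulation one step
            self.render()                # present the current state
            time.sleep(self.timestep)
            elapsed += self.timestep

# "Game" code: a ball falling under gravity; the engine knows nothing about balls.
state = {"y": 10.0, "vy": 0.0}

def update(dt):
    state["vy"] -= 9.8 * dt
    state["y"] = max(0.0, state["y"] + state["vy"] * dt)

def render():
    print(f"ball height: {state['y']:.2f}")

MiniEngine(update, render).run(duration=0.1)  # run a handful of frames
```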
{
"paragraph_id": 38,
"text": "With the growth of the size of development teams in the industry, the problem of cost has increased. Development studios need the best talent, while publishers reduce costs to maintain profitability on their investment. Typically, a video game console development team ranges from 5 to 50 people, and some exceed 100. In May 2009, Assassin's Creed II was reported to have a development staff of 450. The growth of team size combined with greater pressure to get completed projects into the market to begin recouping production costs has led to a greater occurrence of missed deadlines, rushed games and the release of unfinished products.",
"title": "Development"
},
{
"paragraph_id": 39,
"text": "While amateur and hobbyist game programming had existed since the late 1970s with the introduction of home computers, a newer trend since the mid-2000s is indie game development. Indie games are made by small teams outside any direct publisher control, their games being smaller in scope than those from the larger \"AAA\" game studios, and are often experiment in gameplay and art style. Indie game development are aided by larger availability of digital distribution, including the newer mobile gaming marker, and readily-available and low-cost development tools for these platforms.",
"title": "Development"
},
{
"paragraph_id": 40,
"text": "Although departments of computer science have been studying the technical aspects of video games for years, theories that examine games as an artistic medium are a relatively recent development in the humanities. The two most visible schools in this emerging field are ludology and narratology. Narrativists approach video games in the context of what Janet Murray calls \"Cyberdrama\". That is to say, their major concern is with video games as a storytelling medium, one that arises out of interactive fiction. Murray puts video games in the context of the Holodeck, a fictional piece of technology from Star Trek, arguing for the video game as a medium in which the player is allowed to become another person, and to act out in another world. This image of video games received early widespread popular support, and forms the basis of films such as Tron, eXistenZ and The Last Starfighter.",
"title": "Development"
},
{
"paragraph_id": 41,
"text": "Ludologists break sharply and radically from this idea. They argue that a video game is first and foremost a game, which must be understood in terms of its rules, interface, and the concept of play that it deploys. Espen J. Aarseth argues that, although games certainly have plots, characters, and aspects of traditional narratives, these aspects are incidental to gameplay. For example, Aarseth is critical of the widespread attention that narrativists have given to the heroine of the game Tomb Raider, saying that \"the dimensions of Lara Croft's body, already analyzed to death by film theorists, are irrelevant to me as a player, because a different-looking body would not make me play differently... When I play, I don't even see her body, but see through it and past it.\" Simply put, ludologists reject traditional theories of art because they claim that the artistic and socially relevant qualities of a video game are primarily determined by the underlying set of rules, demands, and expectations imposed on the player.",
"title": "Development"
},
{
"paragraph_id": 42,
"text": "While many games rely on emergent principles, video games commonly present simulated story worlds where emergent behavior occurs within the context of the game. The term \"emergent narrative\" has been used to describe how, in a simulated environment, storyline can be created simply by \"what happens to the player.\" However, emergent behavior is not limited to sophisticated games. In general, any place where event-driven instructions occur for AI in a game, emergent behavior will exist. For instance, take a racing game in which cars are programmed to avoid crashing, and they encounter an obstacle in the track: the cars might then maneuver to avoid the obstacle causing the cars behind them to slow or maneuver to accommodate the cars in front of them and the obstacle. The programmer never wrote code to specifically create a traffic jam, yet one now exists in the game.",
"title": "Development"
},
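The racing example above can be reduced to a toy simulation: each car follows one local rule ('advance one cell unless the next cell is occupied or blocked'), and a queue emerges behind the obstacle even though no code describes a traffic jam. The track length, car positions, and obstacle cell are all illustrative assumptions.

```python
# Emergent traffic jam from a single local avoidance rule.
TRACK_LENGTH = 20
OBSTACLE = 12                 # cell permanently blocked by the obstacle
cars = [0, 2, 4, 6]           # starting positions along a single lane

def tick(positions):
    """Advance each car one cell unless the next cell is occupied or blocked."""
    occupied = set(positions) | {OBSTACLE}
    moved = []
    for pos in sorted(positions, reverse=True):   # cars at the front move first
        nxt = pos + 1
        if nxt < TRACK_LENGTH and nxt not in occupied:
            occupied.discard(pos)
            occupied.add(nxt)
            moved.append(nxt)
        else:
            moved.append(pos)
    return sorted(moved)

for _ in range(15):
    cars = tick(cars)
print(cars)  # [8, 9, 10, 11]: the cars have piled up bumper-to-bumper behind cell 12
```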
{
"paragraph_id": 43,
"text": "Most commonly, video games are protected by copyright, though both patents and trademarks have been used as well.",
"title": "Development"
},
{
"paragraph_id": 44,
"text": "Though local copyright regulations vary to the degree of protection, video games qualify as copyrighted visual-audio works, and enjoy cross-country protection under the Berne Convention. This typically only applies to the underlying code, as well as to the artistic aspects of the game such as its writing, art assets, and music. Gameplay itself is generally not considered copyrightable; in the United States among other countries, video games are considered to fall into the idea–expression distinction in that it is how the game is presented and expressed to the player that can be copyrighted, but not the underlying principles of the game.",
"title": "Development"
},
{
"paragraph_id": 45,
"text": "Because gameplay is normally ineligible for copyright, gameplay ideas in popular games are often replicated and built upon in other games. At times, this repurposing of gameplay can be seen as beneficial and a fundamental part of how the industry has grown by building on the ideas of others. For example Doom (1993) and Grand Theft Auto III (2001) introduced gameplay that created popular new game genres, the first-person shooter and the Grand Theft Auto clone, respectively, in the few years after their release. However, at times and more frequently at the onset of the industry, developers would intentionally create video game clones of successful games and game hardware with few changes, which led to the flooded arcade and dedicated home console market around 1978. Cloning is also a major issue with countries that do not have strong intellectual property protection laws, such as within China. The lax oversight by China's government and the difficulty for foreign companies to take Chinese entities to court had enabled China to support a large grey market of cloned hardware and software systems. The industry remains challenged to distinguish between creating new games based on refinements of past successful games to create a new type of gameplay, and intentionally creating a clone of a game that may simply swap out art assets.",
"title": "Development"
},
{
"paragraph_id": 46,
"text": "The early history of the video game industry, following the first game hardware releases and through 1983, had little structure. Video games quickly took off during the golden age of arcade video games from the late 1970s to early 1980s, but the newfound industry was mainly composed of game developers with little business experience. This led to numerous companies forming simply to create clones of popular games to try to capitalize on the market. Due to loss of publishing control and oversaturation of the market, the North American home video game market crashed in 1983, dropping from revenues of around $3 billion in 1983 to $100 million by 1985. Many of the North American companies created in the prior years closed down. Japan's growing game industry was briefly shocked by this crash but had sufficient longevity to withstand the short-term effects, and Nintendo helped to revitalize the industry with the release of the Nintendo Entertainment System in North America in 1985. Along with it, Nintendo established a number of core industrial practices to prevent unlicensed game development and control game distribution on their platform, methods that continue to be used by console manufacturers today.",
"title": "Industry"
},
{
"paragraph_id": 47,
"text": "The industry remained more conservative following the 1983 crash, forming around the concept of publisher-developer dichotomies, and by the 2000s, leading to the industry centralizing around low-risk, triple-A games and studios with large development budgets of at least $10 million or more. The advent of the Internet brought digital distribution as a viable means to distribute games, and contributed to the growth of more riskier, experimental independent game development as an alternative to triple-A games in the late 2000s and which has continued to grow as a significant portion of the video game industry.",
"title": "Industry"
},
{
"paragraph_id": 48,
"text": "Video games have a large network effect that draw on many different sectors that tie into the larger video game industry. While video game developers are a significant portion of the industry, other key participants in the market include:",
"title": "Industry"
},
{
"paragraph_id": 49,
"text": "The industry itself grew out from both the United States and Japan in the 1970s and 1980s before having a larger worldwide contribution. Today, the video game industry is predominantly led by major companies in North America (primarily the United States and Canada), Europe, and southeast Asia including Japan, South Korea, and China. Hardware production remains an area dominated by Asian companies either directly involved in hardware design or part of the production process, but digital distribution and indie game development of the late 2000s has allowed game developers to flourish nearly anywhere and diversify the field.",
"title": "Industry"
},
{
"paragraph_id": 50,
"text": "According to the market research firm Newzoo, the global video game industry drew estimated revenues of over $159 billion in 2020. Mobile games accounted for the bulk of this, with a 48% share of the market, followed by console games at 28% and personal computer games at 23%.",
"title": "Industry"
},
{
"paragraph_id": 51,
"text": "Sales of different types of games vary widely between countries due to local preferences. Japanese consumers tend to purchase much more handheld games than console games and especially PC games, with a strong preference for games catering to local tastes. Another key difference is that, though having declined in the West, arcade games remain an important sector of the Japanese gaming industry. In South Korea, computer games are generally preferred over console games, especially MMORPG games and real-time strategy games. Computer games are also popular in China.",
"title": "Industry"
},
{
"paragraph_id": 52,
"text": "Video game culture is a worldwide new media subculture formed around video games and game playing. As computer and video games have increased in popularity over time, they have had a significant influence on popular culture. Video game culture has also evolved over time hand in hand with internet culture as well as the increasing popularity of mobile games. Many people who play video games identify as gamers, which can mean anything from someone who enjoys games to someone who is passionate about it. As video games become more social with multiplayer and online capability, gamers find themselves in growing social networks. Gaming can both be entertainment as well as competition, as a new trend known as electronic sports is becoming more widely accepted. In the 2010s, video games and discussions of video game trends and topics can be seen in social media, politics, television, film and music. The COVID-19 pandemic during 2020–2021 gave further visibility to video games as a pastime to enjoy with friends and family online as a means of social distancing.",
"title": "Effects on society"
},
{
"paragraph_id": 53,
"text": "Since the mid-2000s there has been debate whether video games qualify as art, primarily as the form's interactivity interfered with the artistic intent of the work and that they are designed for commercial appeal. A significant debate on the matter came after film critic Roger Ebert published an essay \"Video Games can never be art\", which challenged the industry to prove him and other critics wrong. The view that video games were an art form was cemented in 2011 when the U.S. Supreme Court ruled in the landmark case Brown v. Entertainment Merchants Association that video games were a protected form of speech with artistic merit. Since then, video game developers have come to use the form more for artistic expression, including the development of art games, and the cultural heritage of video games as works of arts, beyond their technical capabilities, have been part of major museum exhibits, including The Art of Video Games at the Smithsonian American Art Museum and toured at other museums from 2012 to 2016.",
"title": "Effects on society"
},
{
"paragraph_id": 54,
"text": "Video games will inspire sequels and other video games within the same franchise, but also have influenced works outside of the video game medium. Numerous television shows (both animated and live-action), films, comics and novels have been created based on existing video game franchises. Because video games are an interactive medium there has been trouble in converting them to these passive forms of media, and typically such works have been critically panned or treated as children's media. For example, until 2019, no video game film had ever been received a \"Fresh\" rating on Rotten Tomatoes, but the releases of Detective Pikachu (2019) and Sonic the Hedgehog (2020), both receiving \"Fresh\" ratings, shows signs of the film industry having found an approach to adapt video games for the large screen. That said, some early video game-based films have been highly successful at the box office, such as 1995's Mortal Kombat and 2001's Lara Croft: Tomb Raider.",
"title": "Effects on society"
},
{
"paragraph_id": 55,
"text": "More recently since the 2000s, there has also become a larger appreciation of video game music, which ranges from chiptunes composed for limited sound-output devices on early computers and consoles, to fully-scored compositions for most modern games. Such music has frequently served as a platform for covers and remixes, and concerts featuring video game soundtracks performed by bands or orchestras, such as Video Games Live, have also become popular. Video games also frequently incorporate licensed music, particularly in the area of rhythm games, furthering the depth of which video games and music can work together.",
"title": "Effects on society"
},
{
"paragraph_id": 56,
"text": "Further, video games can serve as a virtual environment under full control of a producer to create new works. With the capability to render 3D actors and settings in real-time, a new type of work machinima (short for \"machine cinema\") grew out from using video game engines to craft narratives. As video game engines gain higher fidelity, they have also become part of the tools used in more traditional filmmaking. Unreal Engine has been used as a backbone by Industrial Light & Magic for their StageCraft technology for shows like The Mandalorian.",
"title": "Effects on society"
},
{
"paragraph_id": 57,
"text": "Separately, video games are also frequently used as part of the promotion and marketing for other media, such as for films, anime, and comics. However, these licensed games in the 1990s and 2000s often had a reputation for poor quality, developed without any input from the intellectual property rights owners, and several of them are considered among lists of games with notably negative reception, such as Superman 64. More recently, with these licensed games being developed by triple-A studios or through studios directly connected to the licensed property owner, there has been a significant improvement in the quality of these games, with an early trendsetting example of Batman: Arkham Asylum.",
"title": "Effects on society"
},
{
"paragraph_id": 58,
"text": "Besides their entertainment value, appropriately-designed video games have been seen to provide value in education across several ages and comprehension levels. Learning principles found in video games have been identified as possible techniques with which to reform the U.S. education system. It has been noticed that gamers adopt an attitude while playing that is of such high concentration, they do not realize they are learning, and that if the same attitude could be adopted at school, education would enjoy significant benefits. Students are found to be \"learning by doing\" while playing video games while fostering creative thinking.",
"title": "Effects on society"
},
{
"paragraph_id": 59,
"text": "Video games are also believed to be beneficial to the mind and body. It has been shown that action video game players have better hand–eye coordination and visuo-motor skills, such as their resistance to distraction, their sensitivity to information in the peripheral vision and their ability to count briefly presented objects, than nonplayers. Researchers found that such enhanced abilities could be acquired by training with action games, involving challenges that switch attention between different locations, but not with games requiring concentration on single objects. A 2018 systematic review found evidence that video gaming training had positive effects on cognitive and emotional skills in the adult population, especially with young adults. A 2019 systematic review also added support for the claim that video games are beneficial to the brain, although the beneficial effects of video gaming on the brain differed by video games types.",
"title": "Effects on society"
},
{
"paragraph_id": 60,
"text": "Organisers of video gaming events, such as the organisers of the D-Lux video game festival in Dumfries, Scotland, have emphasised the positive aspects video games can have on mental health. Organisers, mental health workers and mental health nurses at the event emphasised the relationships and friendships that can be built around video games and how playing games can help people learn about others as a precursor to discussing the person's mental health. A study in 2020 from Oxford University also suggested that playing video games can be a benefit to a person's mental health. The report of 3,274 gamers, all over the age of 18, focused on the games Animal Crossing: New Horizons and Plants vs Zombies: Battle for Neighborville and used actual play-time data. The report found that those that played more games tended to report greater \"wellbeing\". Also in 2020, computer science professor Regan Mandryk of the University of Saskatchewan said her research also showed that video games can have health benefits such as reducing stress and improving mental health. The university's research studied all age groups – \"from pre-literate children through to older adults living in long term care homes\" – with a main focus on 18 to 55-year-olds.",
"title": "Effects on society"
},
{
"paragraph_id": 61,
"text": "A study of gamers attitudes towards gaming which was reported about in 2018 found that millennials use video games as a key strategy for coping with stress. In the study of 1,000 gamers, 55% said that it \"helps them to unwind and relieve stress ... and half said they see the value in gaming as a method of escapism to help them deal with daily work pressures\".",
"title": "Effects on society"
},
{
"paragraph_id": 62,
"text": "Video games have caused controversy since the 1970s. Parents and children's advocates regularly raise concerns that violent video games can influence young players into performing those violent acts in real life, and events such as the Columbine High School massacre in 1999 in which some claimed the perpetrators specifically alluded to using video games to plot out their attack, raised further fears. Medical experts and mental health professionals have also raised concerned that video games may be addictive, and the World Health Organization has included \"gaming disorder\" in the 11th revision of its International Statistical Classification of Diseases. Other health experts, including the American Psychiatric Association, have stated that there is insufficient evidence that video games can create violent tendencies or lead to addictive behavior, though agree that video games typically use a compulsion loop in their core design that can create dopamine that can help reinforce the desire to continue to play through that compulsion loop and potentially lead into violent or addictive behavior. Even with case law establishing that video games qualify as a protected art form, there has been pressure on the video game industry to keep their products in check to avoid over-excessive violence particularly for games aimed at younger children. The potential addictive behavior around games, coupled with increased used of post-sale monetization of video games, has also raised concern among parents, advocates, and government officials about gambling tendencies that may come from video games, such as controversy around the use of loot boxes in many high-profile games.",
"title": "Effects on society"
},
{
"paragraph_id": 63,
"text": "Numerous other controversies around video games and its industry have arisen over the years, among the more notable incidents include the 1993 United States Congressional hearings on violent games like Mortal Kombat which lead to the formation of the ESRB ratings system, numerous legal actions taken by attorney Jack Thompson over violent games such as Grand Theft Auto III and Manhunt from 2003 to 2007, the outrage over the \"No Russian\" level from Call of Duty: Modern Warfare 2 in 2009 which allowed the player to shoot a number of innocent non-player characters at an airport, and the Gamergate harassment campaign in 2014 that highlighted misogyny from a portion of the player demographic. The industry as a whole has also dealt with issues related to gender, racial, and LGBTQ+ discrimination and mischaracterization of these minority groups in video games. A further issue in the industry is related to working conditions, as development studios and publishers frequently use \"crunch time\", required extended working hours, in the weeks and months ahead of a game's release to assure on-time delivery.",
"title": "Effects on society"
},
{
"paragraph_id": 64,
"text": "Players of video games often maintain collections of games. More recently there has been interest in retrogaming, focusing on games from the first decades. Games in retail packaging in good shape have become collectors items for the early days of the industry, with some rare publications having gone for over US$100,000 as of 2020. Separately, there is also concern about the preservation of video games, as both game media and the hardware to play them degrade over time. Further, many of the game developers and publishers from the first decades no longer exist, so records of their games have disappeared. Archivists and preservations have worked within the scope of copyright law to save these games as part of the cultural history of the industry.",
"title": "Collecting and preservation"
},
{
"paragraph_id": 65,
"text": "There are many video game museums around the world, including the National Videogame Museum in Frisco, Texas, which serves as the largest museum wholly dedicated to the display and preservation of the industry's most important artifacts. Europe hosts video game museums such as the Computer Games Museum in Berlin and the Museum of Soviet Arcade Machines in Moscow and Saint-Petersburg. The Museum of Art and Digital Entertainment in Oakland, California is a dedicated video game museum focusing on playable exhibits of console and computer games. The Video Game Museum of Rome is also dedicated to preserving video games and their history. The International Center for the History of Electronic Games at The Strong in Rochester, New York contains one of the largest collections of electronic games and game-related historical materials in the world, including a 5,000-square-foot (460 m) exhibit which allows guests to play their way through the history of video games. The Smithsonian Institution in Washington, DC has three video games on permanent display: Pac-Man, Dragon's Lair, and Pong.",
"title": "Collecting and preservation"
},
{
"paragraph_id": 66,
"text": "The Museum of Modern Art has added a total of 20 video games and one video game console to its permanent Architecture and Design Collection since 2012. In 2012, the Smithsonian American Art Museum ran an exhibition on \"The Art of Video Games\". However, the reviews of the exhibit were mixed, including questioning whether video games belong in an art museum.",
"title": "Collecting and preservation"
}
] | A video game or computer game is an electronic game that involves interaction with a user interface or input device to generate visual feedback from a display device, most commonly shown in a video format on a television set, computer monitor, flat-panel display or touchscreen on handheld devices, or a virtual reality headset. Most modern video games are audiovisual, with audio complement delivered through speakers or headphones, and sometimes also with other types of sensory feedback, and some video games also allow microphone and webcam inputs for in-game chatting and livestreaming. Video games are typically categorized according to their hardware platform, which traditionally includes arcade video games, console games, and computer (PC) games; the latter also encompasses LAN games, online games, and browser games. More recently, the video game industry has expanded onto mobile gaming through mobile devices, virtual and augmented reality systems, and remote cloud gaming. Video games are also classified into a wide range of genres based on their style of gameplay and target audience. The first video game prototypes in the 1950s and 1960s were simple extensions of electronic games using video-like output from large, room-sized mainframe computers. The first consumer video game was the arcade video game Computer Space in 1971. In 1972 came the iconic hit game Pong and the first home console, the Magnavox Odyssey. The industry grew quickly during the "golden age" of arcade video games from the late 1970s to early 1980s but suffered from the crash of the North American video game market in 1983 due to loss of publishing control and saturation of the market. Following the crash, the industry matured, was dominated by Japanese companies such as Nintendo, Sega, and Sony, and established practices and methods around the development and distribution of video games to prevent a similar crash in the future, many of which continue to be followed. In the 2000s, the core industry centered on "AAA" games, leaving little room for riskier experimental games. Coupled with the availability of the Internet and digital distribution, this gave room for independent video game development to gain prominence into the 2010s. Since then, the commercial importance of the video game industry has been increasing. The emerging Asian markets and proliferation of smartphone games in particular are altering player demographics towards casual gaming and increasing monetization by incorporating games as a service. Today, video game development requires numerous interdisciplinary skills, vision, teamwork, and liaisons between different parties, including developers, publishers, distributors, retailers, hardware manufacturers, and other marketers, to successfully bring a game to its consumers. As of 2020, the global video game market had estimated annual revenues of US$159 billion across hardware, software, and services, which is three times the size of the global music industry and four times that of the film industry in 2019, making it a formidable heavyweight across the modern entertainment industry. The video game market is also a major influence behind the electronics industry, where personal computer component, console, and peripheral sales, as well as consumer demands for better game performance, have been powerful driving factors for hardware design and innovation. | 2001-03-31T07:44:09Z | 2023-12-28T16:02:34Z | [
"Template:US Patent",
"Template:Webarchive",
"Template:ISBN",
"Template:Pp-vandalism",
"Template:Multiple images",
"Template:Citation needed",
"Template:More citations needed section",
"Template:Cite web",
"Template:Types of games",
"Template:Dubious",
"Template:Cbignore",
"Template:Prone to spam",
"Template:More citations needed",
"Template:Tone",
"Template:Convert",
"Template:Short description",
"Template:As of",
"Template:USD",
"Template:Further",
"Template:See also",
"Template:Clear",
"Template:Use dmy dates",
"Template:'",
"Template:Cite news",
"Template:Cite report",
"Template:Redirect",
"Template:Dead link",
"Template:Cite encyclopedia",
"Template:Cite conference",
"Template:Video Games",
"Template:Main",
"Template:Sfn",
"Template:Refend",
"Template:Video game gameplay",
"Template:Authority control",
"Template:Efn",
"Template:Reflist",
"Template:Cite journal",
"Template:Refbegin",
"Template:Years in Video Gaming",
"Template:Library resources box",
"Template:VideoGameGenre",
"Template:Portal",
"Template:Notelist",
"Template:Cite book",
"Template:Cite magazine",
"Template:Sister project links"
] | https://en.wikipedia.org/wiki/Video_game |
5,367 | Cambrian | The Cambrian Period ( /ˈkæmbri.ən, ˈkeɪm-/ KAM-bree-ən, KAYM-; sometimes symbolized Ꞓ) is the first geological period of the Paleozoic Era, and of the Phanerozoic Eon. The Cambrian lasted 53.4 million years from the end of the preceding Ediacaran Period 538.8 million years ago (mya) to the beginning of the Ordovician Period 485.4 mya. Its subdivisions, and its base, are somewhat in flux.
The period was established as "Cambrian series" by Adam Sedgwick, who named it after Cambria, the Latin name for 'Cymru' (Wales), where Britain's Cambrian rocks are best exposed. Sedgwick identified the layer as part of his task, along with Roderick Murchison, to subdivide the large "Transition Series", although the two geologists disagreed for a while on the appropriate categorization.
The Cambrian is unique in its unusually high proportion of lagerstätte sedimentary deposits, sites of exceptional preservation where "soft" parts of organisms are preserved as well as their more resistant shells. As a result, scientific understanding of the Cambrian biology surpasses that of some later periods.
The Cambrian marked a profound change in life on Earth: prior to the Cambrian, the majority of living organisms on the whole were small, unicellular and simple (Ediacaran fauna and earlier Tonian Huainan biota being notable exceptions). Complex, multicellular organisms gradually became more common in the millions of years immediately preceding the Cambrian, but it was not until this period that mineralized – hence readily fossilized – organisms became common.
The rapid diversification of lifeforms in the Cambrian, known as the Cambrian explosion, produced the first representatives of most modern animal phyla. Phylogenetic analysis has supported the view that before the Cambrian radiation, in the Cryogenian or Tonian, animals (metazoans) evolved monophyletically from a single common ancestor: flagellated colonial protists similar to modern choanoflagellates. Although diverse life forms prospered in the oceans, the land is thought to have been comparatively barren – with nothing more complex than a microbial soil crust and a few molluscs and arthropods (albeit not terrestrial) that emerged to browse on the microbial biofilm.
By the end of the Cambrian, myriapods, arachnids, and hexapods started adapting to the land, along with the first plants. Most of the continents were probably dry and rocky due to a lack of vegetation. Shallow seas flanked the margins of several continents created during the breakup of the supercontinent Pannotia. The seas were relatively warm, and polar ice was absent for much of the period.
The Cambrian Period followed the Ediacaran Period and was followed by the Ordovician Period.
The base of the Cambrian lies atop a complex assemblage of trace fossils known as the Treptichnus pedum assemblage. The use of Treptichnus pedum, a reference ichnofossil to mark the lower boundary of the Cambrian, is problematic because very similar trace fossils belonging to the Treptichnids group are found well below T. pedum in Namibia, Spain, Newfoundland, and possibly in the western US. The stratigraphic range of T. pedum overlaps the range of the Ediacaran fossils in Namibia, and probably in Spain.
The Cambrian is divided into four epochs (series) and ten ages (stages). Currently only three series and six stages are named and have a GSSP (an internationally agreed-upon stratigraphic reference point).
Because the international stratigraphic subdivision is not yet complete, many local subdivisions are still widely used. In some of these subdivisions the Cambrian is divided into three epochs with locally differing names – the Early Cambrian (Caerfai or Waucoban, 538.8 ± 0.2 to 509 ± 1.9 mya), Middle Cambrian (St Davids or Albertan, 509 ± 0.2 to 497 ± 1.9 mya) and Late Cambrian (497 ± 0.2 to 485.4 ± 1.9 mya; also known as Merioneth or Croixan). Trilobite zones allow biostratigraphic correlation in the Cambrian. Rocks of these epochs are referred to as belonging to the Lower, Middle, or Upper Cambrian.
Each of the local series is divided into several stages. The Cambrian is divided into several regional faunal stages of which the Russian-Kazakhian system is most used in international parlance:
*Most Russian paleontologists define the lower boundary of the Cambrian at the base of the Tommotian Stage, characterized by diversification and global distribution of organisms with mineral skeletons and the appearance of the first Archaeocyath bioherms.
The International Commission on Stratigraphy lists the Cambrian Period as beginning at 538.8 million years ago and ending at 485.4 million years ago.
The lower boundary of the Cambrian was originally held to represent the first appearance of complex life, represented by trilobites. The recognition of small shelly fossils before the first trilobites, and Ediacara biota substantially earlier, led to calls for a more precisely defined base to the Cambrian Period.
Despite the long recognition of its distinction from younger Ordovician rocks and older Precambrian rocks, it was not until 1994 that the Cambrian system/period was internationally ratified. After decades of careful consideration, a continuous sedimentary sequence at Fortune Head, Newfoundland was settled upon as a formal base of the Cambrian Period, which was to be correlated worldwide by the earliest appearance of Treptichnus pedum. Discovery of this fossil a few metres below the GSSP led to the refinement of this statement, and it is the T. pedum ichnofossil assemblage that is now formally used to correlate the base of the Cambrian.
This formal designation allowed radiometric dates to be obtained from samples across the globe that corresponded to the base of the Cambrian. Early dates of 570 million years ago quickly gained favour, though the methods used to obtain this number are now considered to be unsuitable and inaccurate. More precise modern radiometric dating yields a date of 538.8 ± 0.2 million years ago. The ash horizon in Oman from which this date was recovered corresponds to a marked fall in the abundance of carbon-13 that correlates to equivalent excursions elsewhere in the world, and to the disappearance of distinctive Ediacaran fossils (Namacalathus, Cloudina). Nevertheless, there are arguments that the dated horizon in Oman does not correspond to the Ediacaran-Cambrian boundary, but represents a facies change from marine to evaporite-dominated strata – which would mean that dates from other sections, ranging from 544 or 542 Ma, are more suitable.
Plate reconstructions suggest a global supercontinent, Pannotia, was in the process of breaking up early in the Cambrian, with Laurentia (North America), Baltica, and Siberia having separated from the main supercontinent of Gondwana to form isolated land masses. Most continental land was clustered in the Southern Hemisphere at this time, but was drifting north. Large, high-velocity rotational movement of Gondwana appears to have occurred in the Early Cambrian.
With a lack of sea ice – the great glaciers of the Marinoan Snowball Earth were long melted – the sea level was high, which led to large areas of the continents being flooded in warm, shallow seas ideal for sea life. The sea levels fluctuated somewhat, suggesting there were "ice ages", associated with pulses of expansion and contraction of a south polar ice cap.
In Baltoscandia a Lower Cambrian transgression transformed large swathes of the Sub-Cambrian peneplain into an epicontinental sea.
Glaciers likely existed during the earliest Cambrian at high and possibly even at middle palaeolatitudes, possibly due to the ancient continent of Gondwana covering the South Pole and cutting off polar ocean currents. Middle Terreneuvian deposits, corresponding to the boundary between the Fortunian and Stage 2, show evidence of glaciation. However, other authors believe these very early, pretrilobitic glacial deposits may not even be of Cambrian age at all but instead date back to the Neoproterozoic, an era characterised by numerous severe icehouse periods.
The beginning of Stage 3 was relatively cool, with the period between 521 and 517 Ma being known as the Cambrian Arthropod Radiation Cool Event (CARCE). The Earth was generally very warm during Stage 4; its climate was comparable to the hot greenhouse of the Late Cretaceous and Early Palaeogene, as evidenced by a maximum in continental weathering rates over the last 900 million years and the presence of tropical, lateritic palaeosols at high palaeolatitudes during this time.
The Archaecyathid Extinction Warm Event (AEWE), lasting from 511 to 510.5 Ma, was particularly warm. Another warm event, the Redlichiid-Olenid Extinction Warm Event, occurred at the beginning of the Wuliuan. It became even warmer towards the end of the period, and sea levels rose dramatically. This warming trend continued into the Early Ordovician, the start of which was characterised by an extremely hot global climate.
The Cambrian flora was little different from the Ediacaran. The principal taxa were the marine macroalgae Fuxianospira, Sinocylindra, and Marpolia. No calcareous macroalgae are known from the period.
No land plant (embryophyte) fossils are known from the Cambrian. However, biofilms and microbial mats were well developed on Cambrian tidal flats and beaches 500 mya, and microbes forming microbial Earth ecosystems, comparable with modern soil crust of desert regions, contributing to soil formation. Although molecular clock estimates suggest terrestrial plants may have first emerged during the Middle or Late Cambrian, the consequent large-scale removal of the greenhouse gas CO2 from the atmosphere through sequestration did not begin until the Ordovician.
The Cambrian explosion was a period of rapid multicellular growth. Most animal life during the Cambrian was aquatic. Trilobites were once assumed to be the dominant life form at that time, but this has proven to be incorrect. Arthropods were by far the most dominant animals in the ocean, but trilobites were only a minor part of the total arthropod diversity. What made them so apparently abundant was their heavy armor reinforced by calcium carbonate (CaCO3), which fossilized far more easily than the fragile chitinous exoskeletons of other arthropods, leaving numerous preserved remains.
The period marked a steep change in the diversity and composition of Earth's biosphere. The Ediacaran biota suffered a mass extinction at the start of the Cambrian Period, which corresponded with an increase in the abundance and complexity of burrowing behaviour. This behaviour had a profound and irreversible effect on the substrate which transformed the seabed ecosystems. Before the Cambrian, the sea floor was covered by microbial mats. By the end of the Cambrian, burrowing animals had destroyed the mats in many areas through bioturbation. As a consequence, many of those organisms that were dependent on the mats became extinct, while the other species adapted to the changed environment that now offered new ecological niches. Around the same time there was a seemingly rapid appearance of representatives of all the mineralized phyla, including the Bryozoa, which were once thought to have only appeared in the Lower Ordovician. However, many of those phyla were represented only by stem-group forms; and since mineralized phyla generally have a benthic origin, they may not be a good proxy for (more abundant) non-mineralized phyla.
While the early Cambrian showed such diversification that it has been named the Cambrian Explosion, this changed later in the period, when there occurred a sharp drop in biodiversity. About 515 million years ago, the number of species going extinct exceeded the number of new species appearing. Five million years later, the number of genera had dropped from an earlier peak of about 600 to just 450. Also, the speciation rate in many groups was reduced to between a fifth and a third of previous levels. 500 million years ago, oxygen levels fell dramatically in the oceans, leading to hypoxia, while the level of poisonous hydrogen sulfide simultaneously increased, causing another extinction. The later half of the Cambrian was surprisingly barren and showed evidence of several rapid extinction events; the stromatolites, which had been replaced by reef-building sponges known as Archaeocyatha, returned once more as the archaeocyathids became extinct. This declining trend did not change until the Great Ordovician Biodiversification Event.
Some Cambrian organisms ventured onto land, producing the trace fossils Protichnites and Climactichnites. Fossil evidence suggests that euthycarcinoids, an extinct group of arthropods, produced at least some of the Protichnites. Fossils of the track-maker of Climactichnites have not been found; however, fossil trackways and resting traces suggest a large, slug-like mollusc.
In contrast to later periods, the Cambrian fauna was somewhat restricted; free-floating organisms were rare, with the majority living on or close to the sea floor; and mineralizing animals were rarer than in future periods, in part due to the unfavourable ocean chemistry.
Many modes of preservation are unique to the Cambrian, and some preserve soft body parts, resulting in an abundance of Lagerstätten. These include Sirius Passet, the Sinsk Algal Lens, the Maotianshan Shales, the Emu Bay Shale, and the Burgess Shale.
The United States Federal Geographic Data Committee uses a "barred capital C" ⟨Ꞓ⟩ character to represent the Cambrian Period. The Unicode character is U+A792 Ꞓ LATIN CAPITAL LETTER C WITH BAR. | [
{
"paragraph_id": 0,
"text": "The Cambrian Period ( /ˈkæmbri.ən, ˈkeɪm-/ KAM-bree-ən, KAYM-; sometimes symbolized Ꞓ) is the first geological period of the Paleozoic Era, and of the Phanerozoic Eon. The Cambrian lasted 53.4 million years from the end of the preceding Ediacaran Period 538.8 million years ago (mya) to the beginning of the Ordovician Period 485.4 mya. Its subdivisions, and its base, are somewhat in flux.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The period was established as \"Cambrian series\" by Adam Sedgwick, who named it after Cambria, the Latin name for 'Cymru' (Wales), where Britain's Cambrian rocks are best exposed. Sedgwick identified the layer as part of his task, along with Roderick Murchison, to subdivide the large \"Transition Series\", although the two geologists disagreed for a while on the appropriate categorization.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Cambrian is unique in its unusually high proportion of lagerstätte sedimentary deposits, sites of exceptional preservation where \"soft\" parts of organisms are preserved as well as their more resistant shells. As a result, scientific understanding of the Cambrian biology surpasses that of some later periods.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The Cambrian marked a profound change in life on Earth: prior to the Cambrian, the majority of living organisms on the whole were small, unicellular and simple (Ediacaran fauna and earlier Tonian Huainan biota being notable exceptions). Complex, multicellular organisms gradually became more common in the millions of years immediately preceding the Cambrian, but it was not until this period that mineralized – hence readily fossilized – organisms became common.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The rapid diversification of lifeforms in the Cambrian, known as the Cambrian explosion, produced the first representatives of most modern animal phyla. Phylogenetic analysis has supported the view that before the Cambrian radiation, in the Cryogenian or Tonian, animals (metazoans) evolved monophyletically from a single common ancestor: flagellated colonial protists similar to modern choanoflagellates. Although diverse life forms prospered in the oceans, the land is thought to have been comparatively barren – with nothing more complex than a microbial soil crust and a few molluscs and arthropods (albeit not terrestrial) that emerged to browse on the microbial biofilm.",
"title": ""
},
{
"paragraph_id": 5,
"text": "By the end of the Cambrian, myriapods, arachnids, and hexapods started adapting to the land, along with the first plants. Most of the continents were probably dry and rocky due to a lack of vegetation. Shallow seas flanked the margins of several continents created during the breakup of the supercontinent Pannotia. The seas were relatively warm, and polar ice was absent for much of the period.",
"title": ""
},
{
"paragraph_id": 6,
"text": "The Cambrian Period followed the Ediacaran Period and was followed by the Ordovician Period.",
"title": "Stratigraphy"
},
{
"paragraph_id": 7,
"text": "The base of the Cambrian lies atop a complex assemblage of trace fossils known as the Treptichnus pedum assemblage. The use of Treptichnus pedum, a reference ichnofossil to mark the lower boundary of the Cambrian, is problematic because very similar trace fossils belonging to the Treptichnids group are found well below T. pedum in Namibia, Spain, Newfoundland, and possibly in the western US. The stratigraphic range of T. pedum overlaps the range of the Ediacaran fossils in Namibia, and probably in Spain.",
"title": "Stratigraphy"
},
{
"paragraph_id": 8,
"text": "The Cambrian is divided into four epochs (series) and ten ages (stages). Currently only three series and six stages are named and have a GSSP (an internationally agreed-upon stratigraphic reference point).",
"title": "Stratigraphy"
},
{
"paragraph_id": 9,
"text": "Because the international stratigraphic subdivision is not yet complete, many local subdivisions are still widely used. In some of these subdivisions the Cambrian is divided into three epochs with locally differing names – the Early Cambrian (Caerfai or Waucoban, 538.8 ± 0.2 to 509 ± 1.9 mya), Middle Cambrian (St Davids or Albertan, 509 ± 0.2 to 497 ± 1.9 mya) and Late Cambrian (497 ± 0.2 to 485.4 ± 1.9 mya; also known as Merioneth or Croixan). Trilobite zones allow biostratigraphic correlation in the Cambrian. Rocks of these epochs are referred to as belonging to the Lower, Middle, or Upper Cambrian.",
"title": "Stratigraphy"
},
{
"paragraph_id": 10,
"text": "Each of the local series is divided into several stages. The Cambrian is divided into several regional faunal stages of which the Russian-Kazakhian system is most used in international parlance:",
"title": "Stratigraphy"
},
{
"paragraph_id": 11,
"text": "*Most Russian paleontologists define the lower boundary of the Cambrian at the base of the Tommotian Stage, characterized by diversification and global distribution of organisms with mineral skeletons and the appearance of the first Archaeocyath bioherms.",
"title": "Stratigraphy"
},
{
"paragraph_id": 12,
"text": "The International Commission on Stratigraphy lists the Cambrian Period as beginning at 538.8 million years ago and ending at 485.4 million years ago.",
"title": "Stratigraphy"
},
{
"paragraph_id": 13,
"text": "The lower boundary of the Cambrian was originally held to represent the first appearance of complex life, represented by trilobites. The recognition of small shelly fossils before the first trilobites, and Ediacara biota substantially earlier, led to calls for a more precisely defined base to the Cambrian Period.",
"title": "Stratigraphy"
},
{
"paragraph_id": 14,
"text": "Despite the long recognition of its distinction from younger Ordovician rocks and older Precambrian rocks, it was not until 1994 that the Cambrian system/period was internationally ratified. After decades of careful consideration, a continuous sedimentary sequence at Fortune Head, Newfoundland was settled upon as a formal base of the Cambrian Period, which was to be correlated worldwide by the earliest appearance of Treptichnus pedum. Discovery of this fossil a few metres below the GSSP led to the refinement of this statement, and it is the T. pedum ichnofossil assemblage that is now formally used to correlate the base of the Cambrian.",
"title": "Stratigraphy"
},
{
"paragraph_id": 15,
"text": "This formal designation allowed radiometric dates to be obtained from samples across the globe that corresponded to the base of the Cambrian. Early dates of 570 million years ago quickly gained favour, though the methods used to obtain this number are now considered to be unsuitable and inaccurate. A more precise date using modern radiometric dating yield a date of 538.8 ± 0.2 million years ago. The ash horizon in Oman from which this date was recovered corresponds to a marked fall in the abundance of carbon-13 that correlates to equivalent excursions elsewhere in the world, and to the disappearance of distinctive Ediacaran fossils (Namacalathus, Cloudina). Nevertheless, there are arguments that the dated horizon in Oman does not correspond to the Ediacaran-Cambrian boundary, but represents a facies change from marine to evaporite-dominated strata – which would mean that dates from other sections, ranging from 544 or 542 Ma, are more suitable.",
"title": "Stratigraphy"
},
{
"paragraph_id": 16,
"text": "Plate reconstructions suggest a global supercontinent, Pannotia, was in the process of breaking up early in the Cambrian, with Laurentia (North America), Baltica, and Siberia having separated from the main supercontinent of Gondwana to form isolated land masses. Most continental land was clustered in the Southern Hemisphere at this time, but was drifting north. Large, high-velocity rotational movement of Gondwana appears to have occurred in the Early Cambrian.",
"title": "Paleogeography"
},
{
"paragraph_id": 17,
"text": "With a lack of sea ice – the great glaciers of the Marinoan Snowball Earth were long melted – the sea level was high, which led to large areas of the continents being flooded in warm, shallow seas ideal for sea life. The sea levels fluctuated somewhat, suggesting there were \"ice ages\", associated with pulses of expansion and contraction of a south polar ice cap.",
"title": "Paleogeography"
},
{
"paragraph_id": 18,
"text": "In Baltoscandia a Lower Cambrian transgression transformed large swathes of the Sub-Cambrian peneplain into an epicontinental sea.",
"title": "Paleogeography"
},
{
"paragraph_id": 19,
"text": "Glaciers likely existed during the earliest Cambrian at high and possibly even at middle palaeolatitudes, possibly due to the ancient continent of Gondwana covering the South Pole and cutting off polar ocean currents. Middle Terreneuvian deposits, corresponding to the boundary between the Fortunian and Stage 2, show evidence of glaciation. However, other authors believe these very early, pretrilobitic glacial deposits may not even be of Cambrian age at all but instead date back to the Neoproterozoic, an era characterised by numerous severe icehouse periods.",
"title": "Climate"
},
{
"paragraph_id": 20,
"text": "The beginning of Stage 3 was relatively cool, with the period between 521 and 517 Ma being known as the Cambrian Arthropod Radiation Cool Event (CARCE). The Earth was generally very warm during Stage 4; its climate was comparable to the hot greenhouse of the Late Cretaceous and Early Palaeogene, as evidenced by a maximum in continental weathering rates over the last 900 million years and the presence of tropical, lateritic palaeosols at high palaeolatitudes during this time.",
"title": "Climate"
},
{
"paragraph_id": 21,
"text": "The Archaecyathid Extinction Warm Event (AEWE), lasting from 511 to 510.5 Ma, was particularly warm. Another warm event, the Redlichiid-Olenid Extinction Warm Event, occurred at the beginning of the Wuliuan. It became even warmer towards the end of the period, and sea levels rose dramatically. This warming trend continued into the Early Ordovician, the start of which was characterised by an extremely hot global climate.",
"title": "Climate"
},
{
"paragraph_id": 22,
"text": "The Cambrian flora was little different from the Ediacaran. The principal taxa were the marine macroalgae Fuxianospira, Sinocylindra, and Marpolia. No calcareous macroalgae are known from the period.",
"title": "Flora"
},
{
"paragraph_id": 23,
"text": "No land plant (embryophyte) fossils are known from the Cambrian. However, biofilms and microbial mats were well developed on Cambrian tidal flats and beaches 500 mya, and microbes forming microbial Earth ecosystems, comparable with modern soil crust of desert regions, contributing to soil formation. Although molecular clock estimates suggest terrestrial plants may have first emerged during the Middle or Late Cambrian, the consequent large-scale removal of the greenhouse gas CO2 from the atmosphere through sequestration did not begin until the Ordovician.",
"title": "Flora"
},
{
"paragraph_id": 24,
"text": "The Cambrian explosion was a period of rapid multicellular growth. Most animal life during the Cambrian was aquatic. Trilobites were once assumed to be the dominant life form at that time, but this has proven to be incorrect. Arthropods were by far the most dominant animals in the ocean, but trilobites were only a minor part of the total arthropod diversity. What made them so apparently abundant was their heavy armor reinforced by calcium carbonate (CaCO3), which fossilized far more easily than the fragile chitinous exoskeletons of other arthropods, leaving numerous preserved remains.",
"title": "Oceanic life"
},
{
"paragraph_id": 25,
"text": "The period marked a steep change in the diversity and composition of Earth's biosphere. The Ediacaran biota suffered a mass extinction at the start of the Cambrian Period, which corresponded with an increase in the abundance and complexity of burrowing behaviour. This behaviour had a profound and irreversible effect on the substrate which transformed the seabed ecosystems. Before the Cambrian, the sea floor was covered by microbial mats. By the end of the Cambrian, burrowing animals had destroyed the mats in many areas through bioturbation. As a consequence, many of those organisms that were dependent on the mats became extinct, while the other species adapted to the changed environment that now offered new ecological niches. Around the same time there was a seemingly rapid appearance of representatives of all the mineralized phyla, including the Bryozoa, which were once thought to have only appeared in the Lower Ordovician. However, many of those phyla were represented only by stem-group forms; and since mineralized phyla generally have a benthic origin, they may not be a good proxy for (more abundant) non-mineralized phyla.",
"title": "Oceanic life"
},
{
"paragraph_id": 26,
"text": "While the early Cambrian showed such diversification that it has been named the Cambrian Explosion, this changed later in the period, when there occurred a sharp drop in biodiversity. About 515 million years ago, the number of species going extinct exceeded the number of new species appearing. Five million years later, the number of genera had dropped from an earlier peak of about 600 to just 450. Also, the speciation rate in many groups was reduced to between a fifth and a third of previous levels. 500 million years ago, oxygen levels fell dramatically in the oceans, leading to hypoxia, while the level of poisonous hydrogen sulfide simultaneously increased, causing another extinction. The later half of Cambrian was surprisingly barren and showed evidence of several rapid extinction events; the stromatolites which had been replaced by reef building sponges known as Archaeocyatha, returned once more as the archaeocyathids became extinct. This declining trend did not change until the Great Ordovician Biodiversification Event.",
"title": "Oceanic life"
},
{
"paragraph_id": 27,
"text": "Some Cambrian organisms ventured onto land, producing the trace fossils Protichnites and Climactichnites. Fossil evidence suggests that euthycarcinoids, an extinct group of arthropods, produced at least some of the Protichnites. Fossils of the track-maker of Climactichnites have not been found; however, fossil trackways and resting traces suggest a large, slug-like mollusc.",
"title": "Oceanic life"
},
{
"paragraph_id": 28,
"text": "In contrast to later periods, the Cambrian fauna was somewhat restricted; free-floating organisms were rare, with the majority living on or close to the sea floor; and mineralizing animals were rarer than in future periods, in part due to the unfavourable ocean chemistry.",
"title": "Oceanic life"
},
{
"paragraph_id": 29,
"text": "Many modes of preservation are unique to the Cambrian, and some preserve soft body parts, resulting in an abundance of Lagerstätten. These include Sirius Passet, the Sinsk Algal Lens, the Maotianshan Shales, the Emu Bay Shale, and the Burgess Shale,.",
"title": "Oceanic life"
},
{
"paragraph_id": 30,
"text": "The United States Federal Geographic Data Committee uses a \"barred capital C\" ⟨Ꞓ⟩ character to represent the Cambrian Period. The Unicode character is U+A792 Ꞓ LATIN CAPITAL LETTER C WITH BAR.",
"title": "Symbol"
}
] | The Cambrian Period is the first geological period of the Paleozoic Era, and of the Phanerozoic Eon. The Cambrian lasted 53.4 million years from the end of the preceding Ediacaran Period 538.8 million years ago (mya) to the beginning of the Ordovician Period 485.4 mya. Its subdivisions, and its base, are somewhat in flux. The period was established as "Cambrian series" by Adam Sedgwick, who named it after Cambria, the Latin name for 'Cymru' (Wales), where Britain's Cambrian rocks are best exposed. Sedgwick identified the layer as part of his task, along with Roderick Murchison, to subdivide the large "Transition Series", although the two geologists disagreed for a while on the appropriate categorization. The Cambrian is unique in its unusually high proportion of lagerstätte sedimentary deposits, sites of exceptional preservation where "soft" parts of organisms are preserved as well as their more resistant shells. As a result, scientific understanding of the Cambrian biology surpasses that of some later periods. The Cambrian marked a profound change in life on Earth: prior to the Cambrian, the majority of living organisms on the whole were small, unicellular and simple. Complex, multicellular organisms gradually became more common in the millions of years immediately preceding the Cambrian, but it was not until this period that mineralized – hence readily fossilized – organisms became common. The rapid diversification of lifeforms in the Cambrian, known as the Cambrian explosion, produced the first representatives of most modern animal phyla. Phylogenetic analysis has supported the view that before the Cambrian radiation, in the Cryogenian or Tonian, animals (metazoans) evolved monophyletically from a single common ancestor: flagellated colonial protists similar to modern choanoflagellates.
Although diverse life forms prospered in the oceans, the land is thought to have been comparatively barren – with nothing more complex than a microbial soil crust and a few molluscs and arthropods that emerged to browse on the microbial biofilm. By the end of the Cambrian, myriapods, arachnids, and hexapods started adapting to the land, along with the first plants. Most of the continents were probably dry and rocky due to a lack of vegetation. Shallow seas flanked the margins of several continents created during the breakup of the supercontinent Pannotia. The seas were relatively warm, and polar ice was absent for much of the period. | 2001-09-12T16:00:24Z | 2023-12-30T04:20:01Z | [
"Template:Lang",
"Template:When",
"Template:Sfnm",
"Template:Angbr",
"Template:Wikisource portal",
"Template:Hatgrp",
"Template:IPAc-en",
"Template:Geological history",
"Template:Redirect",
"Template:Respell",
"Template:Period span",
"Template:Life timeline",
"Template:Unichar",
"Template:Cite journal",
"Template:Cambrian footer",
"Template:Infobox geologic timespan",
"Template:CEXNAV",
"Template:Cambrian preservational modes",
"Template:Cite web",
"Template:Cite EB1911",
"Template:Authority control",
"Template:Sfn",
"Template:Reflist",
"Template:Cite book",
"Template:Period end",
"Template:Anchor",
"Template:Main",
"Template:Webarchive",
"Template:Citation",
"Template:Use dmy dates",
"Template:Ma",
"Template:In Our Time",
"Template:Short description",
"Template:Further",
"Template:Commons category"
] | https://en.wikipedia.org/wiki/Cambrian |
5,370 | Theory of categories | In ontology, the theory of categories concerns itself with the categories of being: the highest genera or kinds of entities according to Amie Thomasson. To investigate the categories of being, or simply categories, is to determine the most fundamental and the broadest classes of entities. A distinction between such categories, in making the categories or applying them, is called an ontological distinction. Various systems of categories have been proposed; they often include categories for substances, properties, relations, states of affairs or events. A representative question within the theory of categories might be, for example, "Are universals prior to particulars?"
The process of abstraction required to discover the number and names of the categories of being has been undertaken by many philosophers since Aristotle and involves the careful inspection of each concept to ensure that there is no higher category or categories under which that concept could be subsumed. The scholars of the twelfth and thirteenth centuries developed Aristotle's ideas. For example, Gilbert of Poitiers divides Aristotle's ten categories into two sets, primary and secondary, according to whether they inhere in the subject or not:
Furthermore, following Porphyry’s likening of the classificatory hierarchy to a tree, they concluded that the major classes could be subdivided to form subclasses; for example, Substance could be divided into Genus and Species, and Quality could be subdivided into Property and Accident, depending on whether the property was necessary or contingent. An alternative line of development was taken by Plotinus in the third century, who by a process of abstraction reduced Aristotle's list of ten categories to five: Substance, Relation, Quantity, Motion and Quality. Plotinus further suggested that the latter three categories of his list, namely Quantity, Motion and Quality, correspond to three different kinds of relation and that these three categories could therefore be subsumed under the category of Relation. This was to lead to the supposition that there were only two categories at the top of the hierarchical tree, namely Substance and Relation. Many supposed that relations only exist in the mind. Substance and Relation, then, correspond closely to Matter and Mind; this is expressed most clearly in the dualism of René Descartes.
The Stoics held that all beings (ὄντα)—though not all things (τινά)—are material. Besides the existing beings they admitted four incorporeals (asomata): time, place, void, and sayable. They were held to be just 'subsisting' while such a status was denied to universals. Thus, they accepted Anaxagoras's idea (as did Aristotle) that if an object is hot, it is because some part of a universal heat body had entered the object. But, unlike Aristotle, they extended the idea to cover all accidents. Thus, if an object is red, it would be because some part of a universal red body had entered the object.
They held that there were four categories:
The Stoics held that our own actions, thoughts, and reactions are within our control. The opening paragraph of the Enchiridion states the categories as: "Some things in the world are up to us, while others are not. Up to us are our faculties of judgment, motivation, desire, and aversion. In short, whatever is our own doing." These delineate the sphere of what is up to us, or within our power. A simple example of the Stoic categories in use is provided by Jacques Brunschwig:
I am a certain lump of matter, and thereby a substance, an existent something (and thus far that is all); I am a man, and this individual man that I am, and thereby qualified by a common quality and a peculiar one; I am sitting or standing, disposed in a certain way; I am the father of my children, the fellow citizen of my fellow citizens, disposed in a certain way in relation to something else.
One of Aristotle’s early interests lay in the classification of the natural world, how for example the genus "animal" could be first divided into "two-footed animal" and then into "wingless, two-footed animal". He realised that the distinctions were being made according to the qualities the animal possesses, the quantity of its parts and the kind of motion that it exhibits. To fully complete the proposition "this animal is ..." Aristotle stated in his work on the Categories that there were ten kinds of predicate where ...
"... each signifies either substance or quantity or quality or relation or where or when or being-in-a-position or having or acting or being acted upon".
He realised that predicates could be simple or complex. The simple kinds consist of a subject and a predicate linked together by the "categorical" or inherent type of relation. For Aristotle, the more complex kinds were limited to propositions where the predicate is compounded of two of the above categories, for example "this is a horse running". More complex kinds of proposition were only discovered after Aristotle by the Stoic Chrysippus, who developed the "hypothetical" and "disjunctive" types of syllogism; these were terms which were to be developed through the Middle Ages and were to reappear in Kant's system of categories.
Category came into use with Aristotle's essay Categories, in which he discussed univocal and equivocal terms, predication, and ten categories:
Plotinus in writing his Enneads around AD 250 recorded that "philosophy at a very early age investigated the number and character of the existents ... some found ten, others less .... to some the genera were the first principles, to others only a generic classification of existents". He realised that some categories were reducible to others saying "why are not Beauty, Goodness and the virtues, Knowledge and Intelligence included among the primary genera?" He concluded that such transcendental categories and even the categories of Aristotle were in some way posterior to the three Eleatic categories first recorded in Plato's dialogue Parmenides and which comprised the following three coupled terms:
Plotinus called these "the hearth of reality" deriving from them not only the three categories of Quantity, Motion and Quality but also what came to be known as "the three moments of the Neoplatonic world process":
Plotinus likened the three to the centre, the radii and the circumference of a circle, and clearly thought that the principles underlying the categories were the first principles of creation. "From a single root all being multiplies". Similar ideas were to be introduced into Early Christian thought by, for example, Gregory of Nazianzus who summed it up saying "Therefore, Unity, having from all eternity arrived by motion at duality, came to rest in trinity".
Kant and Hegel accused the Aristotelian table of categories of being 'rhapsodic', derived arbitrarily and in bulk from experience, without any systematic necessity.
The early modern dualism described above, of Mind and Matter or Subject and Relation, as reflected in the writings of Descartes, underwent a substantial revision in the late 18th century. The first objections to this stance were formulated in the eighteenth century by Immanuel Kant, who realised that we can say nothing about Substance except through the relation of the subject to other things.
For example: In the sentence "This is a house" the substantive subject "house" only gains meaning in relation to human use patterns or to other similar houses. The category of Substance disappears from Kant's tables, and under the heading of Relation, Kant lists inter alia the three relationship types of Disjunction, Causality and Inherence. The three older concepts of Quantity, Motion and Quality, as Peirce discovered, could be subsumed under these three broader headings in that Quantity relates to the subject through the relation of Disjunction; Motion relates to the subject through the relation of Causality; and Quality relates to the subject through the relation of Inherence. Sets of three continued to play an important part in the nineteenth century development of the categories, most notably in G.W.F. Hegel's extensive tabulation of categories, and in C.S. Peirce's categories set out in his work on the logic of relations. One of Peirce's contributions was to call the three primary categories Firstness, Secondness and Thirdness which both emphasises their general nature, and avoids the confusion of having the same name for both the category itself and for a concept within that category.
In a separate development, and building on the notion of primary and secondary categories introduced by the Scholastics, Kant introduced the idea that secondary or "derivative" categories could be derived from the primary categories through the combination of one primary category with another. This would result in the formation of three secondary categories: the first, "Community" was an example that Kant gave of such a derivative category; the second, "Modality", introduced by Kant, was a term which Hegel, in developing Kant's dialectical method, showed could also be seen as a derivative category; and the third, "Spirit" or "Will" were terms that Hegel and Schopenhauer were developing separately for use in their own systems. Karl Jaspers in the twentieth century, in his development of existential categories, brought the three together, allowing for differences in terminology, as Substantiality, Communication and Will. This pattern of three primary and three secondary categories was used most notably in the nineteenth century by Peter Mark Roget to form the six headings of his Thesaurus of English Words and Phrases. The headings used were the three objective categories of Abstract Relation, Space (including Motion) and Matter and the three subjective categories of Intellect, Feeling and Volition, and he found that under these six headings all the words of the English language, and hence any possible predicate, could be assembled.
In the Critique of Pure Reason (1781), Immanuel Kant argued that the categories are part of our own mental structure and consist of a set of a priori concepts through which we interpret the world around us. These concepts correspond to twelve logical functions of the understanding which we use to make judgements and there are therefore two tables given in the Critique, one of the Judgements and a corresponding one for the Categories. To give an example, the logical function behind our reasoning from ground to consequence (based on the Hypothetical relation) underlies our understanding of the world in terms of cause and effect (the Causal relation). In each table the number twelve arises from, firstly, an initial division into two: the Mathematical and the Dynamical; a second division of each of these headings into a further two: Quantity and Quality, and Relation and Modality respectively; and, thirdly, each of these then divides into a further three subheadings as follows.
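The twelve subheadings themselves are not reproduced in this text, but the arithmetic of the division just described can be written out explicitly (a minimal illustration of the structure, not Kant's own notation): the two headings, Mathematical and Dynamical, each split into two further headings (Quantity and Quality; Relation and Modality), and each of the resulting four headings splits into three subheadings, giving

\[ 2 \times 2 \times 3 = 12 \]

judgement forms in the one table and, correspondingly, twelve categories in the other.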
Criticism of Kant's system followed, firstly, by Arthur Schopenhauer, who amongst other things was unhappy with the term "Community", and declared that the tables "do open violence to truth, treating it as nature was treated by old-fashioned gardeners", and secondly, by W.T.Stace who in his book The Philosophy of Hegel suggested that in order to make Kant's structure completely symmetrical a third category would need to be added to the Mathematical and the Dynamical. This, he said, Hegel was to do with his category of concept.
G.W.F. Hegel in his Science of Logic (1812) attempted to provide a more comprehensive system of categories than Kant and developed a structure that was almost entirely triadic. So important were the categories to Hegel that he claimed the first principle of the world, which he called the "absolute", is "a system of categories ... the categories must be the reason of which the world is a consequent".
Using his own logical method of sublation, later called the Hegelian dialectic, reasoning from the abstract through the negative to the concrete, he arrived at a hierarchy of some 270 categories, as explained by W. T. Stace. The three very highest categories were "logic", "nature" and "spirit". The three highest categories of "logic", however, he called "being", "essence", and "notion" which he explained as follows:
Schopenhauer's category that corresponded with "notion" was that of "idea", which in his Four-Fold Root of Sufficient Reason he complemented with the category of the "will". The title of his major work was The World as Will and Idea. The two other complementary categories, reflecting one of Hegel's initial divisions, were those of Being and Becoming. At around the same time, Goethe was developing his colour theories in the Farbenlehre of 1810, and introduced similar principles of combination and complementation, symbolising, for Goethe, "the primordial relations which belong both to nature and vision". Hegel in his Science of Logic accordingly asks us to see his system not as a tree but as a circle.
In the twentieth century the primacy of the division between the subjective and the objective, or between mind and matter, was disputed by, among others, Bertrand Russell and Gilbert Ryle. Philosophy began to move away from the metaphysics of categorisation towards the linguistic problem of trying to differentiate between, and define, the words being used. Ludwig Wittgenstein’s conclusion was that there were no clear definitions which we can give to words and categories but only a "halo" or "corona" of related meanings radiating around each term. Gilbert Ryle thought the problem could be seen in terms of dealing with "a galaxy of ideas" rather than a single idea, and suggested that category mistakes are made when a concept (e.g. "university"), understood as falling under one category (e.g. abstract idea), is used as though it falls under another (e.g. physical object). With regard to the visual analogies being used, Peirce and Lewis, just like Plotinus earlier, likened the terms of propositions to points, and the relations between the terms to lines. Peirce, taking this further, talked of univalent, bivalent and trivalent relations linking predicates to their subject and it is just the number and types of relation linking subject and predicate that determine the category into which a predicate might fall. Primary categories contain concepts where there is one dominant kind of relation to the subject. Secondary categories contain concepts where there are two dominant kinds of relation. Examples of the latter were given by Heidegger in his two propositions "the house is on the creek" where the two dominant relations are spatial location (Disjunction) and cultural association (Inherence), and "the house is eighteenth century" where the two relations are temporal location (Causality) and cultural quality (Inherence). A third example may be inferred from Kant in the proposition "the house is impressive or sublime" where the two relations are spatial or mathematical disposition (Disjunction) and dynamic or motive power (Causality). Both Peirce and Wittgenstein introduced the analogy of colour theory in order to illustrate the shades of meanings of words. Primary categories, like primary colours, are analytical representing the furthest we can go in terms of analysis and abstraction and include Quantity, Motion and Quality. Secondary categories, like secondary colours, are synthetic and include concepts such as Substance, Community and Spirit.
Apart from these, the categorial schemes of Alfred North Whitehead and his Process Philosophy, alongside Nicolai Hartmann and his Critical Realism, remain among the most detailed and advanced systems in categorial research in metaphysics.
Charles Sanders Peirce, who had read Kant and Hegel closely, and who also had some knowledge of Aristotle, proposed a system of merely three phenomenological categories: Firstness, Secondness, and Thirdness, which he repeatedly invoked in his subsequent writings. Like Hegel, C.S. Peirce attempted to develop a system of categories from a single indisputable principle, in Peirce's case the notion that in the first instance he could only be aware of his own ideas. "It seems that the true categories of consciousness are first, feeling ... second, a sense of resistance ... and third, synthetic consciousness, or thought". Elsewhere he called the three primary categories: Quality, Reaction and Meaning, and even Firstness, Secondness and Thirdness, saying, "perhaps it is not right to call these categories conceptions, they are so intangible that they are rather tones or tints upon conceptions":
Although Peirce's three categories correspond to the three concepts of relation given in Kant's tables, the sequence is now reversed and follows that given by Hegel, and indeed before Hegel of the three moments of the world-process given by Plotinus. Later, Peirce gave a mathematical reason for there being three categories in that although monadic, dyadic and triadic nodes are irreducible, every node of a higher valency is reducible to a "compound of triadic relations". Ferdinand de Saussure, who was developing "semiology" in France just as Peirce was developing "semiotics" in the US, likened each term of a proposition to "the centre of a constellation, the point where other coordinate terms, the sum of which is indefinite, converge".
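A minimal sketch of the kind of reduction Peirce had in mind (an illustrative construction, not Peirce's own notation): a relation of four or more places can be compounded from triadic relations by letting them share an auxiliary "bonding" element, for example

\[ R(a,b,c,d) \iff \exists e \, \big( S(a,b,e) \wedge T(e,c,d) \big), \]

whereas, on Peirce's view, no analogous construction yields a genuine triadic relation from dyadic ones alone.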
Edmund Husserl (1962, 2000) wrote extensively about categorial systems as part of his phenomenology.
For Gilbert Ryle (1949), a category (in particular a "category mistake") is an important semantic concept, but one having only loose affinities to an ontological category.
Contemporary systems of categories have been proposed by John G. Bennett (The Dramatic Universe, 4 vols., 1956–65), Wilfrid Sellars (1974), Reinhardt Grossmann (1983, 1992), Johansson (1989), Hoffman and Rosenkrantz (1994), Roderick Chisholm (1996), the ontologist Barry Smith (2003), and Jonathan Lowe (2006).
{
"paragraph_id": 0,
"text": "In ontology, the theory of categories concerns itself with the categories of being: the highest genera or kinds of entities according to Amie Thomasson. To investigate the categories of being, or simply categories, is to determine the most fundamental and the broadest classes of entities. A distinction between such categories, in making the categories or applying them, is called an ontological distinction. Various systems of categories have been proposed, they often include categories for substances, properties, relations, states of affairs or events. A representative question within the theory of categories might articulate itself, for example, in a query like, \"Are universals prior to particulars?\"",
"title": ""
},
{
"paragraph_id": 1,
"text": "The process of abstraction required to discover the number and names of the categories of being has been undertaken by many philosophers since Aristotle and involves the careful inspection of each concept to ensure that there is no higher category or categories under which that concept could be subsumed. The scholars of the twelfth and thirteenth centuries developed Aristotle's ideas. For example, Gilbert of Poitiers divides Aristotle's ten categories into two sets, primary and secondary, according to whether they inhere in the subject or not:",
"title": "Early development"
},
{
"paragraph_id": 2,
"text": "Furthermore, following Porphyry’s likening of the classificatory hierarchy to a tree, they concluded that the major classes could be subdivided to form subclasses, for example, Substance could be divided into Genus and Species, and Quality could be subdivided into Property and Accident, depending on whether the property was necessary or contingent. An alternative line of development was taken by Plotinus in the second century who by a process of abstraction reduced Aristotle's list of ten categories to five: Substance, Relation, Quantity, Motion and Quality. Plotinus further suggested that the latter three categories of his list, namely Quantity, Motion and Quality correspond to three different kinds of relation and that these three categories could therefore be subsumed under the category of Relation. This was to lead to the supposition that there were only two categories at the top of the hierarchical tree, namely Substance and Relation. Many supposed that relations only exist in the mind. Substance and Relation, then, are closely commutative with Matter and Mind--this is expressed most clearly in the dualism of René Descartes.",
"title": "Early development"
},
{
"paragraph_id": 3,
"text": "The Stoics held that all beings (ὄντα)—though not all things (τινά)—are material. Besides the existing beings they admitted four incorporeals (asomata): time, place, void, and sayable. They were held to be just 'subsisting' while such a status was denied to universals. Thus, they accepted Anaxagoras's idea (as did Aristotle) that if an object is hot, it is because some part of a universal heat body had entered the object. But, unlike Aristotle, they extended the idea to cover all accidents. Thus, if an object is red, it would be because some part of a universal red body had entered the object.",
"title": "Early development"
},
{
"paragraph_id": 4,
"text": "They held that there were four categories:",
"title": "Early development"
},
{
"paragraph_id": 5,
"text": "The Stoics outlined that our own actions, thoughts, and reactions are within our control. The opening paragraph of the Enchiridion states the categories as: \"Some things in the world are up to us, while others are not. Up to us are our faculties of judgment, motivation, desire, and aversion. In short, whatever is our own doing.\" These suggest a space that is up to us or within our power. A simple example of the Stoic categories in use is provided by Jacques Brunschwig:",
"title": "Early development"
},
{
"paragraph_id": 6,
"text": "I am a certain lump of matter, and thereby a substance, an existent something (and thus far that is all); I am a man, and this individual man that I am, and thereby qualified by a common quality and a peculiar one; I am sitting or standing, disposed in a certain way; I am the father of my children, the fellow citizen of my fellow citizens, disposed in a certain way in relation to something else.",
"title": "Early development"
},
{
"paragraph_id": 7,
"text": "One of Aristotle’s early interests lay in the classification of the natural world, how for example the genus \"animal\" could be first divided into \"two-footed animal\" and then into \"wingless, two-footed animal\". He realised that the distinctions were being made according to the qualities the animal possesses, the quantity of its parts and the kind of motion that it exhibits. To fully complete the proposition \"this animal is ...\" Aristotle stated in his work on the Categories that there were ten kinds of predicate where ...",
"title": "Early development"
},
{
"paragraph_id": 8,
"text": "\"... each signifies either substance or quantity or quality or relation or where or when or being-in-a-position or having or acting or being acted upon\".",
"title": "Early development"
},
{
"paragraph_id": 9,
"text": "He realised that predicates could be simple or complex. The simple kinds consist of a subject and a predicate linked together by the \"categorical\" or inherent type of relation. For Aristotle the more complex kinds were limited to propositions where the predicate is compounded of two of the above categories for example \"this is a horse running\". More complex kinds of proposition were only discovered after Aristotle by the Stoic, Chrysippus, who developed the \"hypothetical\" and \"disjunctive\" types of syllogism and these were terms which were to be developed through the Middle Ages and were to reappear in Kant's system of categories.",
"title": "Early development"
},
{
"paragraph_id": 10,
"text": "Category came into use with Aristotle's essay Categories, in which he discussed univocal and equivocal terms, predication, and ten categories:",
"title": "Early development"
},
{
"paragraph_id": 11,
"text": "Plotinus in writing his Enneads around AD 250 recorded that \"philosophy at a very early age investigated the number and character of the existents ... some found ten, others less .... to some the genera were the first principles, to others only a generic classification of existents\". He realised that some categories were reducible to others saying \"why are not Beauty, Goodness and the virtues, Knowledge and Intelligence included among the primary genera?\" He concluded that such transcendental categories and even the categories of Aristotle were in some way posterior to the three Eleatic categories first recorded in Plato's dialogue Parmenides and which comprised the following three coupled terms:",
"title": "Early development"
},
{
"paragraph_id": 12,
"text": "Plotinus called these \"the hearth of reality\" deriving from them not only the three categories of Quantity, Motion and Quality but also what came to be known as \"the three moments of the Neoplatonic world process\":",
"title": "Early development"
},
{
"paragraph_id": 13,
"text": "Plotinus likened the three to the centre, the radii and the circumference of a circle, and clearly thought that the principles underlying the categories were the first principles of creation. \"From a single root all being multiplies\". Similar ideas were to be introduced into Early Christian thought by, for example, Gregory of Nazianzus who summed it up saying \"Therefore, Unity, having from all eternity arrived by motion at duality, came to rest in trinity\".",
"title": "Early development"
},
{
"paragraph_id": 14,
"text": "Kant and Hegel accused the Aristotelian table of categories of being 'rhapsodic', derived arbitrarily and in bulk from experience, without any systematic necessity.",
"title": "Modern development"
},
{
"paragraph_id": 15,
"text": "The early modern dualism, which has been described above, of Mind and Matter or Subject and Relation, as reflected in the writings of Descartes underwent a substantial revision in the late 18th century. The first objections to this stance were formulated in the eighteenth century by Immanuel Kant who realised that we can say nothing about Substance except through the relation of the subject to other things.",
"title": "Modern development"
},
{
"paragraph_id": 16,
"text": "For example: In the sentence \"This is a house\" the substantive subject \"house\" only gains meaning in relation to human use patterns or to other similar houses. The category of Substance disappears from Kant's tables, and under the heading of Relation, Kant lists inter alia the three relationship types of Disjunction, Causality and Inherence. The three older concepts of Quantity, Motion and Quality, as Peirce discovered, could be subsumed under these three broader headings in that Quantity relates to the subject through the relation of Disjunction; Motion relates to the subject through the relation of Causality; and Quality relates to the subject through the relation of Inherence. Sets of three continued to play an important part in the nineteenth century development of the categories, most notably in G.W.F. Hegel's extensive tabulation of categories, and in C.S. Peirce's categories set out in his work on the logic of relations. One of Peirce's contributions was to call the three primary categories Firstness, Secondness and Thirdness which both emphasises their general nature, and avoids the confusion of having the same name for both the category itself and for a concept within that category.",
"title": "Modern development"
},
{
"paragraph_id": 17,
"text": "In a separate development, and building on the notion of primary and secondary categories introduced by the Scholastics, Kant introduced the idea that secondary or \"derivative\" categories could be derived from the primary categories through the combination of one primary category with another. This would result in the formation of three secondary categories: the first, \"Community\" was an example that Kant gave of such a derivative category; the second, \"Modality\", introduced by Kant, was a term which Hegel, in developing Kant's dialectical method, showed could also be seen as a derivative category; and the third, \"Spirit\" or \"Will\" were terms that Hegel and Schopenhauer were developing separately for use in their own systems. Karl Jaspers in the twentieth century, in his development of existential categories, brought the three together, allowing for differences in terminology, as Substantiality, Communication and Will. This pattern of three primary and three secondary categories was used most notably in the nineteenth century by Peter Mark Roget to form the six headings of his Thesaurus of English Words and Phrases. The headings used were the three objective categories of Abstract Relation, Space (including Motion) and Matter and the three subjective categories of Intellect, Feeling and Volition, and he found that under these six headings all the words of the English language, and hence any possible predicate, could be assembled.",
"title": "Modern development"
},
{
"paragraph_id": 18,
"text": "In the Critique of Pure Reason (1781), Immanuel Kant argued that the categories are part of our own mental structure and consist of a set of a priori concepts through which we interpret the world around us. These concepts correspond to twelve logical functions of the understanding which we use to make judgements and there are therefore two tables given in the Critique, one of the Judgements and a corresponding one for the Categories. To give an example, the logical function behind our reasoning from ground to consequence (based on the Hypothetical relation) underlies our understanding of the world in terms of cause and effect (the Causal relation). In each table the number twelve arises from, firstly, an initial division into two: the Mathematical and the Dynamical; a second division of each of these headings into a further two: Quantity and Quality, and Relation and Modality respectively; and, thirdly, each of these then divides into a further three subheadings as follows.",
"title": "Modern development"
},
{
"paragraph_id": 19,
"text": "Criticism of Kant's system followed, firstly, by Arthur Schopenhauer, who amongst other things was unhappy with the term \"Community\", and declared that the tables \"do open violence to truth, treating it as nature was treated by old-fashioned gardeners\", and secondly, by W.T.Stace who in his book The Philosophy of Hegel suggested that in order to make Kant's structure completely symmetrical a third category would need to be added to the Mathematical and the Dynamical. This, he said, Hegel was to do with his category of concept.",
"title": "Modern development"
},
{
"paragraph_id": 20,
"text": "G.W.F. Hegel in his Science of Logic (1812) attempted to provide a more comprehensive system of categories than Kant and developed a structure that was almost entirely triadic. So important were the categories to Hegel that he claimed the first principle of the world, which he called the \"absolute\", is \"a system of categories ... the categories must be the reason of which the world is a consequent\".",
"title": "Modern development"
},
{
"paragraph_id": 21,
"text": "Using his own logical method of sublation, later called the Hegelian dialectic, reasoning from the abstract through the negative to the concrete, he arrived at a hierarchy of some 270 categories, as explained by W. T. Stace. The three very highest categories were \"logic\", \"nature\" and \"spirit\". The three highest categories of \"logic\", however, he called \"being\", \"essence\", and \"notion\" which he explained as follows:",
"title": "Modern development"
},
{
"paragraph_id": 22,
"text": "Schopenhauer's category that corresponded with \"notion\" was that of \"idea\", which in his Four-Fold Root of Sufficient Reason he complemented with the category of the \"will\". The title of his major work was The World as Will and Idea. The two other complementary categories, reflecting one of Hegel's initial divisions, were those of Being and Becoming. At around the same time, Goethe was developing his colour theories in the Farbenlehre of 1810, and introduced similar principles of combination and complementation, symbolising, for Goethe, \"the primordial relations which belong both to nature and vision\". Hegel in his Science of Logic accordingly asks us to see his system not as a tree but as a circle.",
"title": "Modern development"
},
{
"paragraph_id": 23,
"text": "In the twentieth century the primacy of the division between the subjective and the objective, or between mind and matter, was disputed by, among others, Bertrand Russell and Gilbert Ryle. Philosophy began to move away from the metaphysics of categorisation towards the linguistic problem of trying to differentiate between, and define, the words being used. Ludwig Wittgenstein’s conclusion was that there were no clear definitions which we can give to words and categories but only a \"halo\" or \"corona\" of related meanings radiating around each term. Gilbert Ryle thought the problem could be seen in terms of dealing with \"a galaxy of ideas\" rather than a single idea, and suggested that category mistakes are made when a concept (e.g. \"university\"), understood as falling under one category (e.g. abstract idea), is used as though it falls under another (e.g. physical object). With regard to the visual analogies being used, Peirce and Lewis, just like Plotinus earlier, likened the terms of propositions to points, and the relations between the terms to lines. Peirce, taking this further, talked of univalent, bivalent and trivalent relations linking predicates to their subject and it is just the number and types of relation linking subject and predicate that determine the category into which a predicate might fall. Primary categories contain concepts where there is one dominant kind of relation to the subject. Secondary categories contain concepts where there are two dominant kinds of relation. Examples of the latter were given by Heidegger in his two propositions \"the house is on the creek\" where the two dominant relations are spatial location (Disjunction) and cultural association (Inherence), and \"the house is eighteenth century\" where the two relations are temporal location (Causality) and cultural quality (Inherence). A third example may be inferred from Kant in the proposition \"the house is impressive or sublime\" where the two relations are spatial or mathematical disposition (Disjunction) and dynamic or motive power (Causality). Both Peirce and Wittgenstein introduced the analogy of colour theory in order to illustrate the shades of meanings of words. Primary categories, like primary colours, are analytical representing the furthest we can go in terms of analysis and abstraction and include Quantity, Motion and Quality. Secondary categories, like secondary colours, are synthetic and include concepts such as Substance, Community and Spirit.",
"title": "Twentieth-century development"
},
{
"paragraph_id": 24,
"text": "Apart from these, the categorial scheme of Alfred North Whitehead and his Process Philosophy, alongside Nicolai Hartmann and his Critical Realism, remain one of the most detailed and advanced systems in categorial research in metaphysics.",
"title": "Twentieth-century development"
},
{
"paragraph_id": 25,
"text": "Charles Sanders Peirce, who had read Kant and Hegel closely, and who also had some knowledge of Aristotle, proposed a system of merely three phenomenological categories: Firstness, Secondness, and Thirdness, which he repeatedly invoked in his subsequent writings. Like Hegel, C.S. Peirce attempted to develop a system of categories from a single indisputable principle, in Peirce's case the notion that in the first instance he could only be aware of his own ideas. \"It seems that the true categories of consciousness are first, feeling ... second, a sense of resistance ... and third, synthetic consciousness, or thought\". Elsewhere he called the three primary categories: Quality, Reaction and Meaning, and even Firstness, Secondness and Thirdness, saying, \"perhaps it is not right to call these categories conceptions, they are so intangible that they are rather tones or tints upon conceptions\":",
"title": "Twentieth-century development"
},
{
"paragraph_id": 26,
"text": "Although Peirce's three categories correspond to the three concepts of relation given in Kant's tables, the sequence is now reversed and follows that given by Hegel, and indeed before Hegel of the three moments of the world-process given by Plotinus. Later, Peirce gave a mathematical reason for there being three categories in that although monadic, dyadic and triadic nodes are irreducible, every node of a higher valency is reducible to a \"compound of triadic relations\". Ferdinand de Saussure, who was developing \"semiology\" in France just as Peirce was developing \"semiotics\" in the US, likened each term of a proposition to \"the centre of a constellation, the point where other coordinate terms, the sum of which is indefinite, converge\".",
"title": "Twentieth-century development"
},
{
"paragraph_id": 27,
"text": "Edmund Husserl (1962, 2000) wrote extensively about categorial systems as part of his phenomenology.",
"title": "Twentieth-century development"
},
{
"paragraph_id": 28,
"text": "For Gilbert Ryle (1949), a category (in particular a \"category mistake\") is an important semantic concept, but one having only loose affinities to an ontological category.",
"title": "Twentieth-century development"
},
{
"paragraph_id": 29,
"text": "Contemporary systems of categories have been proposed by John G. Bennett (The Dramatic Universe, 4 vols., 1956–65), Wilfrid Sellars (1974), Reinhardt Grossmann (1983, 1992), Johansson (1989), Hoffman and Rosenkrantz (1994), Roderick Chisholm (1996), Barry Smith (ontologist) (2003), and Jonathan Lowe (2006).",
"title": "Twentieth-century development"
}
] | In ontology, the theory of categories concerns itself with the categories of being: the highest genera or kinds of entities according to Amie Thomasson. To investigate the categories of being, or simply categories, is to determine the most fundamental and the broadest classes of entities. A distinction between such categories, in making the categories or applying them, is called an ontological distinction. Various systems of categories have been proposed; they often include categories for substances, properties, relations, states of affairs or events. A representative question within the theory of categories is, for example, "Are universals prior to particulars?" | 2001-04-02T21:37:34Z | 2023-12-30T20:32:50Z | [
"Template:Div col",
"Template:Citation",
"Template:Distinguish",
"Template:Main",
"Template:Col-end",
"Template:Omission",
"Template:Col-break",
"Template:Reflist",
"Template:Div col end",
"Template:Cite web",
"Template:Cite book",
"Template:EB1911 poster",
"Template:Short description",
"Template:Excerpt",
"Template:Col-begin",
"Template:Lang",
"Template:Metaphysics",
"Template:Authority control",
"Template:Cite journal",
"Template:Webarchive",
"Template:Cite SEP"
] | https://en.wikipedia.org/wiki/Theory_of_categories |
5,371 | Concrete | Concrete is a composite material composed of aggregate bonded together with a fluid cement that cures over time. Concrete is the second-most-used substance in the world after water, and is the most widely used building material. Its usage worldwide, ton for ton, is twice that of steel, wood, plastics, and aluminium combined.
When aggregate is mixed with dry Portland cement and water, the mixture forms a fluid slurry that is easily poured and molded into shape. The cement reacts with the water through a process called concrete hydration that hardens it over several hours to form a hard matrix that binds the materials together into a durable stone-like material that has many uses. This time allows concrete to not only be cast in forms, but also to have a variety of tooled processes performed. The hydration process is exothermic, which means ambient temperature plays a significant role in how long it takes concrete to set. Often, additives (such as pozzolans or superplasticizers) are included in the mixture to improve the physical properties of the wet mix, delay or accelerate the curing time, or otherwise change the finished material. Most concrete is poured with reinforcing materials (such as steel rebar) embedded to provide tensile strength, yielding reinforced concrete.
In the past, lime-based cement binders, such as lime putty, were often used, sometimes together with other hydraulic (water-resistant) cements, such as calcium aluminate cement, or with Portland cement to form Portland cement concrete (named for its visual resemblance to Portland stone). Many other non-cementitious types of concrete exist with other methods of binding aggregate together, including asphalt concrete with a bitumen binder, which is frequently used for road surfaces, and polymer concretes that use polymers as a binder. Concrete is distinct from mortar. Whereas concrete is itself a building material, mortar is a bonding agent that typically holds bricks, tiles and other masonry units together. Grout is another material associated with concrete and cement. It does not contain coarse aggregates and is usually either pourable or thixotropic, and is used to fill gaps between masonry components or coarse aggregate which has already been put in place. Some methods of concrete manufacture and repair involve pumping grout into the gaps to make up a solid mass in situ.
The word concrete comes from the Latin word "concretus" (meaning compact or condensed), the perfect passive participle of "concrescere", from "con-" (together) and "crescere" (to grow).
Concrete floors were found in the royal palace of Tiryns, Greece, which dates roughly to 1400-1200 BC. Lime mortars were used in Greece, such as in Crete and Cyprus, in 800 BC. The Assyrian Jerwan Aqueduct (688 BC) made use of waterproof concrete. Concrete was used for construction in many ancient structures.
Mayan concrete at the ruins of Uxmal (850-925 A.D.) is referenced in Incidents of Travel in the Yucatán by John L. Stephens. "The roof is flat and had been covered with cement". "The floors were cement, in some places hard, but, by long exposure, broken, and now crumbling under the feet." "But throughout the wall was solid, and consisting of large stones imbedded in mortar, almost as hard as rock."
Small-scale production of concrete-like materials was pioneered by the Nabatean traders who occupied and controlled a series of oases and developed a small empire in the regions of southern Syria and northern Jordan from the 4th century BC. They discovered the advantages of hydraulic lime, with some self-cementing properties, by 700 BC. They built kilns to supply mortar for the construction of rubble masonry houses, concrete floors, and underground waterproof cisterns. They kept the cisterns secret as these enabled the Nabataeans to thrive in the desert. Some of these structures survive to this day.
In the Ancient Egyptian and later Roman eras, builders discovered that adding volcanic ash to lime allowed the mix to set underwater. They discovered the pozzolanic reaction.
The Romans used concrete extensively from 300 BC to 476 AD. During the Roman Empire, Roman concrete (or opus caementicium) was made from quicklime, pozzolana and an aggregate of pumice. Its widespread use in many Roman structures, a key event in the history of architecture termed the Roman architectural revolution, freed Roman construction from the restrictions of stone and brick materials. It enabled revolutionary new designs in terms of both structural complexity and dimension. The Colosseum in Rome was built largely of concrete, and the Pantheon has the world's largest unreinforced concrete dome.
Concrete, as the Romans knew it, was a new and revolutionary material. Laid in the shape of arches, vaults and domes, it quickly hardened into a rigid mass, free from many of the internal thrusts and strains that troubled the builders of similar structures in stone or brick.
Modern tests show that opus caementicium had as much compressive strength as modern Portland-cement concrete (ca. 200 kg/cm² [20 MPa; 2,800 psi]). However, due to the absence of reinforcement, its tensile strength was far lower than modern reinforced concrete, and its mode of application also differed:
Modern structural concrete differs from Roman concrete in two important details. First, its mix consistency is fluid and homogeneous, allowing it to be poured into forms rather than requiring hand-layering together with the placement of aggregate, which, in Roman practice, often consisted of rubble. Second, integral reinforcing steel gives modern concrete assemblies great strength in tension, whereas Roman concrete could depend only upon the strength of the concrete bonding to resist tension.
The long-term durability of Roman concrete structures has been found to be due to its use of pyroclastic (volcanic) rock and ash, whereby the crystallization of strätlingite (a specific and complex calcium aluminosilicate hydrate) and the coalescence of this and similar calcium–aluminium-silicate–hydrate cementing binders helped give the concrete a greater degree of fracture resistance even in seismically active environments. Roman concrete is significantly more resistant to erosion by seawater than modern concrete; it used pyroclastic materials which react with seawater to form Al-tobermorite crystals over time.
The widespread use of concrete in many Roman structures ensured that many survive to the present day. The Baths of Caracalla in Rome are just one example. Many Roman aqueducts and bridges, such as the magnificent Pont du Gard in southern France, have masonry cladding on a concrete core, as does the dome of the Pantheon.
After the Roman Empire, the use of burned lime and pozzolana was greatly reduced. Low kiln temperatures in the burning of lime, lack of pozzolana, and poor mixing all contributed to a decline in the quality of concrete and mortar. From the 11th century, the increased use of stone in church and castle construction led to an increased demand for mortar. Quality began to improve in the 12th century through better grinding and sieving. Medieval lime mortars and concretes were non-hydraulic and were used for binding masonry, "hearting" (binding rubble masonry cores) and foundations. Bartholomaeus Anglicus in his De proprietatibus rerum (1240) describes the making of mortar. In an English translation from 1397, it reads "lyme ... is a stone brent; by medlynge thereof with sonde and water sement is made". From the 14th century, the quality of mortar was again excellent, but only from the 17th century was pozzolana commonly added.
The Canal du Midi was built using concrete in 1670.
Perhaps the greatest step forward in the modern use of concrete was Smeaton's Tower, built by British engineer John Smeaton in Devon, England, between 1756 and 1759. This third Eddystone Lighthouse pioneered the use of hydraulic lime in concrete, using pebbles and powdered brick as aggregate.
A method for producing Portland cement was developed in England and patented by Joseph Aspdin in 1824. Aspdin chose the name for its similarity to Portland stone, which was quarried on the Isle of Portland in Dorset, England. His son William continued developments into the 1840s, earning him recognition for the development of "modern" Portland cement.
Reinforced concrete was invented in 1849 by Joseph Monier, and the first reinforced concrete house was built by François Coignet in 1853. The first reinforced concrete bridge was designed and built by Joseph Monier in 1875.
Prestressed concrete and post-tensioned concrete were pioneered by Eugène Freyssinet, a French structural and civil engineer. Concrete components or structures are compressed by tendon cables during, or after, their fabrication in order to strengthen them against tensile forces developing when put in service. Freyssinet patented the technique on 2 October 1928.
Concrete is an artificial composite material, comprising a matrix of cementitious binder (typically Portland cement paste or asphalt) and a dispersed phase or "filler" of aggregate (typically a rocky material, loose stones, and sand). The binder "glues" the filler together to form a synthetic conglomerate. Many types of concrete are available, determined by the formulations of binders and the types of aggregate used to suit the application of the engineered material. These variables determine strength and density, as well as chemical and thermal resistance of the finished product.
Construction aggregates consist of large chunks of material in a concrete mix, generally a coarse gravel or crushed rocks such as limestone, or granite, along with finer materials such as sand.
Cement paste, most commonly made of Portland cement, is the most prevalent kind of concrete binder. For cementitious binders, water is mixed with the dry cement powder and aggregate, which produces a semi-liquid slurry (paste) that can be shaped, typically by pouring it into a form. The concrete solidifies and hardens through a chemical process called hydration. The water reacts with the cement, which bonds the other components together, creating a robust, stone-like material. Other cementitious materials, such as fly ash and slag cement, are sometimes added—either pre-blended with the cement or directly as a concrete component—and become a part of the binder for the aggregate. Fly ash and slag can enhance some properties of concrete such as fresh properties and durability. Alternatively, other materials can also be used as a concrete binder: the most prevalent substitute is asphalt, which is used as the binder in asphalt concrete.
Admixtures are added to modify the cure rate or properties of the material. Mineral admixtures use recycled materials as concrete ingredients. Conspicuous materials include fly ash, a by-product of coal-fired power plants; ground granulated blast furnace slag, a by-product of steelmaking; and silica fume, a by-product of industrial electric arc furnaces.
Structures employing Portland cement concrete usually include steel reinforcement because this type of concrete can be formulated with high compressive strength, but always has lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension, typically steel rebar.
The mix design depends on the type of structure being built, how the concrete is mixed and delivered, and how it is placed to form the structure.
Portland cement is the most common type of cement in general usage. It is a basic ingredient of concrete, mortar, and many plasters. British masonry worker Joseph Aspdin patented Portland cement in 1824. It was named because of the similarity of its color to Portland limestone, quarried from the English Isle of Portland and used extensively in London architecture. It consists of a mixture of calcium silicates (alite, belite), aluminates and ferrites—compounds which combine calcium, silicon, aluminium and iron in forms which will react with water. Portland cement and similar materials are made by heating limestone (a source of calcium) with clay or shale (a source of silicon, aluminium and iron) and grinding this product (called clinker) with a source of sulfate (most commonly gypsum).
In modern cement kilns, many advanced features are used to lower the fuel consumption per ton of clinker produced. Cement kilns are extremely large, complex, and inherently dusty industrial installations, and have emissions which must be controlled. Of the various ingredients used to produce a given quantity of concrete, the cement is the most energetically expensive. Even complex and efficient kilns require 3.3 to 3.6 gigajoules of energy to produce a ton of clinker and then grind it into cement. Many kilns can be fueled with difficult-to-dispose-of wastes, the most common being used tires. The extremely high temperatures and long periods of time at those temperatures allows cement kilns to efficiently and completely burn even difficult-to-use fuels.
Combining water with a cementitious material forms a cement paste by the process of hydration. The cement paste glues the aggregate together, fills voids within it, and makes it flow more freely.
As stated by Abrams' law, a lower water-to-cement ratio yields a stronger, more durable concrete, whereas more water gives a freer-flowing concrete with a higher slump. Impure mixing water can cause problems during setting or lead to premature failure of the structure.
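As a rough illustration of Abrams' law, the sketch below computes an indicative 28-day compressive strength from the water-to-cement ratio. The constants A and B are empirical and mix-specific; the values used here are illustrative placeholders, not figures from this article.

```python
# Illustrative sketch of Abrams' law: strength falls as the water/cement ratio rises.
# A and B are empirical constants fitted from trial batches; the values below are
# placeholders for illustration, not design values.

def abrams_strength(w_c_ratio: float, A: float = 96.5, B: float = 8.2) -> float:
    """Indicative 28-day compressive strength (MPa) for a given water/cement ratio by mass."""
    return A / (B ** w_c_ratio)

for w_c in (0.40, 0.50, 0.60):
    print(f"w/c = {w_c:.2f} -> ~{abrams_strength(w_c):.1f} MPa")
```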
Portland cement consists of five major compounds of calcium silicates and aluminates, ranging from 5 to 50% by weight, which all undergo hydration to contribute to the final material's strength. Thus, the hydration of cement involves many reactions, often occurring at the same time. As the reactions proceed, the products of the cement hydration process gradually bond together the individual sand and gravel particles and other components of the concrete to form a solid mass.
Due to the nature of the chemical bonds created in these reactions and the final characteristics of the hardened cement paste formed, the process of cement hydration is considered irreversible.
Fine and coarse aggregates make up the bulk of a concrete mixture. Sand, natural gravel, and crushed stone are used mainly for this purpose. Recycled aggregates (from construction, demolition, and excavation waste) are increasingly used as partial replacements for natural aggregates, while a number of manufactured aggregates, including air-cooled blast furnace slag and bottom ash are also permitted.
The size distribution of the aggregate determines how much binder is required. Aggregate that is all of a similar size leaves the largest gaps, whereas adding aggregate with smaller particles tends to fill these gaps. The binder must fill the gaps between the aggregate as well as paste the surfaces of the aggregate together, and is typically the most expensive component. Thus, variation in sizes of the aggregate reduces the cost of concrete. The aggregate is nearly always stronger than the binder, so its use does not negatively affect the strength of the concrete.
Redistribution of aggregates after compaction often creates non-homogeneity due to the influence of vibration. This can lead to strength gradients.
Decorative stones such as quartzite, small river stones or crushed glass are sometimes added to the surface of concrete for a decorative "exposed aggregate" finish, popular among landscape designers.
Admixtures are materials in the form of powder or fluids that are added to the concrete to give it certain characteristics not obtainable with plain concrete mixes. Admixtures are defined as additions "made as the concrete mix is being prepared". The most common admixtures are retarders and accelerators. In normal use, admixture dosages are less than 5% by mass of cement and are added to the concrete at the time of batching/mixing. (See § Production below.) The common types of admixtures are as follows:
Mineral admixtures are very fine-grained inorganic materials with pozzolanic or latent hydraulic properties that are added to the concrete mix to improve the properties of the concrete, or used as a replacement for Portland cement (blended cements). Products which incorporate limestone, fly ash, blast furnace slag, and other useful materials with pozzolanic properties into the mix are being tested and used. These developments are of growing relevance for minimizing the impacts of cement use, which is responsible for roughly 5 to 10% of global greenhouse gas emissions. The use of alternative materials can also lower costs, improve concrete properties, and recycle wastes; the last point is relevant to the circular economy of the construction industry, whose growing demand has ever greater impacts on raw material extraction, waste generation and landfill practices.
Concrete production is the process of mixing together the various ingredients—water, aggregate, cement, and any additives—to produce concrete. Concrete production is time-sensitive. Once the ingredients are mixed, workers must put the concrete in place before it hardens. In modern usage, most concrete production takes place in a large type of industrial facility called a concrete plant, or often a batch plant. The usual method of placement is casting in formwork, which holds the mix in shape until it has set enough to hold its shape unaided.
In general usage, concrete plants come in two main types, ready mix plants and central mix plants. A ready-mix plant mixes all the ingredients except water, while a central mix plant mixes all the ingredients including water. A central-mix plant offers more accurate control of the concrete quality through better measurements of the amount of water added, but must be placed closer to the work site where the concrete will be used, since hydration begins at the plant.
A concrete plant consists of large storage hoppers for various reactive ingredients like cement, storage for bulk ingredients like aggregate and water, mechanisms for the addition of various additives and amendments, machinery to accurately weigh, move, and mix some or all of those ingredients, and facilities to dispense the mixed concrete, often to a concrete mixer truck.
Modern concrete is usually prepared as a viscous fluid, so that it may be poured into forms, which are containers erected in the field to give the concrete its desired shape. Concrete formwork can be prepared in several ways, such as slip forming and steel plate construction. Alternatively, concrete can be mixed into dryer, non-fluid forms and used in factory settings to manufacture precast concrete products.
A wide variety of equipment is used for processing concrete, from hand tools to heavy industrial machinery. Whichever equipment builders use, however, the objective is to produce the desired building material; ingredients must be properly mixed, placed, shaped, and retained within time constraints. Any interruption in pouring the concrete can cause the initially placed material to begin to set before the next batch is added on top. This creates a horizontal plane of weakness called a cold joint between the two batches. Once the mix is where it should be, the curing process must be controlled to ensure that the concrete attains the desired attributes. During concrete preparation, various technical details may affect the quality and nature of the product.
Design mix ratios are decided by an engineer after analyzing the properties of the specific ingredients being used. Instead of using a 'nominal mix' such as 1 part cement, 2 parts sand, and 4 parts aggregate, a civil engineer will custom-design a concrete mix to exactly meet the requirements of the site and conditions, setting material ratios and often designing an admixture package to fine-tune the properties or increase the performance envelope of the mix. Design-mix concrete can have very broad specifications that cannot be met with more basic nominal mixes, but the involvement of the engineer often increases the cost of the concrete mix.
Concrete mixes are primarily divided into nominal mix, standard mix and design mix.
Nominal mix ratios are given by volume as Cement : Sand : Aggregate. Nominal mixes are a simple, fast way of getting a basic idea of the properties of the finished concrete without having to perform testing in advance.
Various governing bodies (such as British Standards) define nominal mix ratios into a number of grades, usually ranging from lower compressive strength to higher compressive strength. The grades usually indicate the 28-day cube strength.
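As a sketch of how quantities for a nominal mix such as 1:2:4 are often estimated by volume, the snippet below converts a required placed volume into approximate material quantities. The 1.54 dry-volume factor and the cement bulk density are rule-of-thumb assumptions, not values taken from this article.

```python
# Rough batching estimate for a nominal mix specified by volume (cement : sand : aggregate).
# The 1.54 dry-volume factor and the 1440 kg/m3 cement bulk density are rule-of-thumb
# assumptions used for illustration only.

def nominal_mix_quantities(wet_volume_m3: float, ratio=(1, 2, 4)):
    dry_volume = wet_volume_m3 * 1.54              # allow for voids lost on compaction
    total_parts = sum(ratio)
    cement_m3, sand_m3, agg_m3 = (dry_volume * p / total_parts for p in ratio)
    return {
        "cement_bags_50kg": round(cement_m3 * 1440 / 50, 1),
        "sand_m3": round(sand_m3, 2),
        "coarse_aggregate_m3": round(agg_m3, 2),
    }

print(nominal_mix_quantities(1.0))   # quantities for one cubic metre of placed 1:2:4 concrete
```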
Thorough mixing is essential to produce uniform, high-quality concrete.
Separate paste mixing has shown that the mixing of cement and water into a paste before combining these materials with aggregates can increase the compressive strength of the resulting concrete. The paste is generally mixed in a high-speed, shear-type mixer at a w/c (water to cement ratio) of 0.30 to 0.45 by mass. The cement paste premix may include admixtures such as accelerators or retarders, superplasticizers, pigments, or silica fume. The premixed paste is then blended with aggregates and any remaining batch water and final mixing is completed in conventional concrete mixing equipment.
Workability is the ability of a fresh (plastic) concrete mix to fill the form/mold properly with the desired work (pouring, pumping, spreading, tamping, vibration) and without reducing the concrete's quality. Workability depends on water content, aggregate (shape and size distribution), cementitious content and age (level of hydration) and can be modified by adding chemical admixtures, like superplasticizer. Raising the water content or adding chemical admixtures increases concrete workability. Excessive water leads to increased bleeding or segregation of aggregates (when the cement and aggregates start to separate), with the resulting concrete having reduced quality. Changes in gradation can also affect workability of the concrete, although a wide range of gradation can be used for various applications. An undesirable gradation can mean using a large aggregate that is too large for the size of the formwork, or which has too few smaller aggregate grades to serve to fill the gaps between the larger grades, or using too little or too much sand for the same reason, or using too little water, or too much cement, or even using jagged crushed stone instead of smoother round aggregate such as pebbles. Any combination of these factors and others may result in a mix which is too harsh, i.e., which does not flow or spread out smoothly, is difficult to get into the formwork, and which is difficult to surface finish.
Workability can be measured by the concrete slump test, a simple measure of the plasticity of a fresh batch of concrete following the ASTM C 143 or EN 12350-2 test standards. Slump is normally measured by filling an "Abrams cone" with a sample from a fresh batch of concrete. The cone is placed with the wide end down onto a level, non-absorptive surface. It is then filled in three layers of equal volume, with each layer being tamped with a steel rod to consolidate the layer. When the cone is carefully lifted off, the enclosed material slumps a certain amount, owing to gravity. A relatively dry sample slumps very little, having a slump value of one or two inches (25 or 50 mm) out of one foot (300 mm). A relatively wet concrete sample may slump as much as eight inches. Workability can also be measured by the flow table test.
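As an illustration, the helper below maps a measured slump to a consistence class. The class boundaries follow the commonly cited EN 206 slump classes (S1–S5); they are included here as an assumption and should be checked against the governing standard for any real specification.

```python
# Map a measured slump (mm) to a consistence class.
# Boundaries follow commonly cited EN 206 slump classes (S1..S5) and are illustrative.

def slump_class(slump_mm: float) -> str:
    if 10 <= slump_mm <= 40:
        return "S1 (stiff, low workability)"
    if 50 <= slump_mm <= 90:
        return "S2"
    if 100 <= slump_mm <= 150:
        return "S3"
    if 160 <= slump_mm <= 210:
        return "S4"
    if slump_mm >= 220:
        return "S5 (very fluid)"
    return "between classes / out of range"

print(slump_class(75))   # a sample with 75 mm slump falls in class S2
```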
Slump can be increased by addition of chemical admixtures such as plasticizer or superplasticizer without changing the water-cement ratio. Some other admixtures, especially air-entraining admixture, can increase the slump of a mix.
High-flow concrete, like self-consolidating concrete, is tested by other flow-measuring methods. One of these methods includes placing the cone on the narrow end and observing how the mix flows through the cone while it is gradually lifted.
After mixing, concrete is a fluid and can be pumped to the location where needed.
Concrete must be kept moist during curing in order to achieve optimal strength and durability. During curing hydration occurs, allowing calcium-silicate hydrate (C-S-H) to form. Over 90% of a mix's final strength is typically reached within four weeks, with the remaining 10% achieved over years or even decades. The conversion of calcium hydroxide in the concrete into calcium carbonate from absorption of CO2 over several decades further strengthens the concrete and makes it more resistant to damage. This carbonation reaction, however, lowers the pH of the cement pore solution and can corrode the reinforcement bars.
Hydration and hardening of concrete during the first three days is critical. Abnormally fast drying and shrinkage due to factors such as evaporation from wind during placement may lead to increased tensile stresses at a time when the concrete has not yet gained sufficient strength, resulting in greater shrinkage cracking. The early strength of the concrete can be increased if it is kept damp during the curing process. Minimizing stress prior to curing minimizes cracking. High-early-strength concrete is designed to hydrate faster, often by increased use of cement, which increases shrinkage and cracking. The strength of concrete continues to increase for up to three years, depending on the cross-sectional dimensions of the elements and the service conditions of the structure. The addition of short polymer fibers can reduce shrinkage-induced stresses during curing and increase early and ultimate compressive strength.
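To illustrate how strength gain with age is often modelled, the sketch below uses a maturity-type relation of the form given in Eurocode 2, in which strength approaches its long-term value asymptotically after 28 days. The 28-day mean strength and the cement-class coefficient s are assumed inputs chosen for illustration.

```python
import math

# Maturity-type strength development of the form used in Eurocode 2:
#   f_cm(t) = exp(s * (1 - sqrt(28 / t))) * f_cm_28
# s depends on cement class (roughly 0.20 rapid-hardening, 0.25 normal, 0.38 slow-hardening).
# The 28-day mean strength (38 MPa) and s below are assumed values for illustration.

def strength_at_age(t_days: float, f_cm_28: float = 38.0, s: float = 0.25) -> float:
    return math.exp(s * (1.0 - math.sqrt(28.0 / t_days))) * f_cm_28

for t in (3, 7, 28, 90, 365):
    print(f"day {t:>3}: ~{strength_at_age(t):.1f} MPa")
```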
Properly curing concrete leads to increased strength and lower permeability and avoids cracking where the surface dries out prematurely. Care must also be taken to avoid freezing or overheating due to the exothermic setting of cement. Improper curing can cause scaling, reduced strength, poor abrasion resistance and cracking.
During the curing period, concrete is ideally maintained at controlled temperature and humidity. To ensure full hydration during curing, concrete slabs are often sprayed with "curing compounds" that create a water-retaining film over the concrete. Typical films are made of wax or related hydrophobic compounds. After the concrete is sufficiently cured, the film is allowed to abrade from the concrete through normal use.
Traditional conditions for curing involve spraying or ponding the concrete surface with water. The adjacent picture shows one of many ways to achieve this, ponding—submerging setting concrete in water and wrapping in plastic to prevent dehydration. Additional common curing methods include wet burlap and plastic sheeting covering the fresh concrete.
For higher-strength applications, accelerated curing techniques may be applied to the concrete. A common technique involves heating the poured concrete with steam, which serves to both keep it damp and raise the temperature so that the hydration process proceeds more quickly and more thoroughly.
Asphalt concrete (commonly called asphalt, blacktop, or pavement in North America, and tarmac, bitumen macadam, or rolled asphalt in the United Kingdom and the Republic of Ireland) is a composite material commonly used to surface roads, parking lots, airports, as well as the core of embankment dams. Asphalt mixtures have been used in pavement construction since the beginning of the twentieth century. It consists of mineral aggregate bound together with asphalt, laid in layers, and compacted. The process was refined and enhanced by Belgian inventor and U.S. immigrant Edward De Smedt.
The terms asphalt (or asphaltic) concrete, bituminous asphalt concrete, and bituminous mixture are typically used only in engineering and construction documents, which define concrete as any composite material composed of mineral aggregate adhered with a binder. The abbreviation, AC, is sometimes used for asphalt concrete but can also denote asphalt content or asphalt cement, referring to the liquid asphalt portion of the composite material.
Graphene enhanced concretes are standard designs of concrete mixes, except that during the cement-mixing or production process, a small amount of chemically engineered graphene (typically < 0.5% by weight) is added. These enhanced graphene concretes are designed around the concrete application.
Bacteria such as Bacillus pasteurii, Bacillus pseudofirmus, Bacillus cohnii, Sporosarcina pasteurii, and Arthrobacter crystallopoietes increase the compression strength of concrete through their biomass. However, some forms of bacteria can also be concrete-destroying. Bacillus sp. CT-5 can reduce corrosion of reinforcement in reinforced concrete by up to four times. Sporosarcina pasteurii reduces water and chloride permeability. B. pasteurii increases resistance to acid. Bacillus pasteurii and B. sphaericus can induce calcium carbonate precipitation in the surface of cracks, adding compression strength.
Nanoconcrete (also spelled "nano concrete" or "nano-concrete") is a class of materials that contains Portland cement particles that are no greater than 100 μm and particles of silica no greater than 500 μm, which fill voids that would otherwise occur in normal concrete, thereby substantially increasing the material's strength. It is widely used in foot and highway bridges where high flexural and compressive strength are indicated.
Pervious concrete is a mix of specially graded coarse aggregate, cement, water, and little-to-no fine aggregates. This concrete is also known as "no-fines" or porous concrete. Mixing the ingredients in a carefully controlled process creates a paste that coats and bonds the aggregate particles. The hardened concrete contains interconnected air voids totaling approximately 15 to 25 percent. Water runs through the voids in the pavement to the soil underneath. Air entrainment admixtures are often used in freeze-thaw climates to minimize the possibility of frost damage. Pervious concrete also permits rainwater to filter through roads and parking lots, to recharge aquifers, instead of contributing to runoff and flooding.
Polymer concretes are mixtures of aggregate and any of various polymers and may be reinforced. The polymer binder is costlier than lime-based cements, but polymer concretes nevertheless have advantages; they have significant tensile strength even without reinforcement, and they are largely impervious to water. Polymer concretes are frequently used for repair work and for the construction of applications such as drains.
Volcanic concrete substitutes volcanic rock for the limestone that is burned to form clinker. It consumes a similar amount of energy, but does not directly emit carbon as a byproduct. Volcanic rock and ash are used as supplementary cementitious materials in concrete to improve the resistance to sulfate, chloride and alkali–silica reaction due to pore refinement. They are also generally cost-effective in comparison to other aggregates, suitable for semi-lightweight and lightweight concretes, and good for thermal and acoustic insulation.
Pyroclastic materials, such as pumice, scoria, and ashes are formed from cooling magma during explosive volcanic eruptions. They are used as supplementary cementitious materials (SCM) or as aggregates for cements and concretes. They have been extensively used since ancient times to produce materials for building applications. For example, pumice and other volcanic glasses were added as a natural pozzolanic material for mortars and plasters during the construction of the Villa San Marco in the Roman period (89 BC – 79 AD), which remains one of the best-preserved otium villae of the Bay of Naples in Italy.
Waste light concrete is a form of polymer-modified concrete. The specific polymer admixture allows the replacement of all the traditional aggregates (gravel, sand, stone) by any mixture of solid waste materials in the grain size of 3–10 mm to form a low-compressive-strength (3–20 N/mm²) product for road and building construction. One cubic meter of waste light concrete contains 1.1–1.3 m³ of shredded waste and no other aggregates.
Sulfur concrete is a special concrete that uses sulfur as a binder and does not require cement or water.
Concrete has relatively high compressive strength, but much lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension (often steel). The elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. Concrete has a very low coefficient of thermal expansion and shrinks as it matures. All concrete structures crack to some extent, due to shrinkage and tension. Concrete that is subjected to long-duration forces is prone to creep.
Tests can be performed to ensure that the properties of concrete correspond to specifications for the application.
The ingredients affect the strength of the material. Concrete strength values are usually specified as the lower-bound compressive strength of either a cylindrical or cubic specimen as determined by standard test procedures.
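Because specified strengths are lower-bound (characteristic) values, they are commonly estimated from a set of test results by assuming an approximately normal distribution and taking the 5% fractile. The sketch below illustrates this calculation; the test values are invented for illustration.

```python
import statistics

# Estimate a characteristic (5%-fractile) compressive strength from cylinder test results,
# assuming the results are approximately normally distributed:
#   f_ck ~ mean - 1.645 * standard deviation
# The test values below are invented for illustration.

tests_mpa = [31.2, 29.8, 33.5, 30.9, 32.1, 28.7, 31.8, 30.4]

mean = statistics.mean(tests_mpa)
std = statistics.stdev(tests_mpa)          # sample standard deviation
f_ck = mean - 1.645 * std

print(f"mean = {mean:.1f} MPa, std = {std:.2f} MPa, characteristic strength ~ {f_ck:.1f} MPa")
```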
The strength of concrete is dictated by its function. Very low-strength—14 MPa (2,000 psi) or less—concrete may be used when the concrete must be lightweight. Lightweight concrete is often achieved by adding air, foams, or lightweight aggregates, with the side effect that the strength is reduced. For most routine uses, 20 to 32 MPa (2,900 to 4,600 psi) concrete is often used. 40 MPa (5,800 psi) concrete is readily commercially available as a more durable, although more expensive, option. Higher-strength concrete is often used for larger civil projects. Strengths above 40 MPa (5,800 psi) are often used for specific building elements. For example, the lower floor columns of high-rise concrete buildings may use concrete of 80 MPa (11,600 psi) or more, to keep the size of the columns small. Bridges may use long beams of high-strength concrete to lower the number of spans required. Occasionally, other structural needs may require high-strength concrete. If a structure must be very rigid, concrete of very high strength may be specified, even much stronger than is required to bear the service loads. Strengths as high as 130 MPa (18,900 psi) have been used commercially for these reasons.
The cement produced for making concrete accounts for about 8% of worldwide CO2 emissions per year (compared to, e.g., global aviation at 1.9%). The two largest sources of CO2 are produced by the cement manufacturing process, arising from (1) the decarbonation reaction of limestone in the cement kiln (T ≈ 950 °C), and (2) from the combustion of fossil fuel to reach the sintering temperature (T ≈ 1450 °C) of cement clinker in the kiln. The energy required for extracting, crushing, and mixing the raw materials (construction aggregates used in the concrete production, and also limestone and clay feeding the cement kiln) is lower. Energy requirement for transportation of ready-mix concrete is also lower because it is produced nearby the construction site from local resources, typically manufactured within 100 kilometers of the job site. The overall embodied energy of concrete at roughly 1 to 1.5 megajoules per kilogram is therefore lower than for many structural and construction materials.
Once in place, concrete offers a great energy efficiency over the lifetime of a building. Concrete walls leak air far less than those made of wood frames. Air leakage accounts for a large percentage of energy loss from a home. The thermal mass properties of concrete increase the efficiency of both residential and commercial buildings. By storing and releasing the energy needed for heating or cooling, concrete's thermal mass delivers year-round benefits by reducing temperature swings inside and minimizing heating and cooling costs. While insulation reduces energy loss through the building envelope, thermal mass uses walls to store and release energy. Modern concrete wall systems use both external insulation and thermal mass to create an energy-efficient building. Insulating concrete forms (ICFs) are hollow blocks or panels made of either insulating foam or rastra that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.
Concrete buildings are more resistant to fire than those constructed using steel frames, since concrete has lower heat conductivity than steel and can thus last longer under the same fire conditions. Concrete is sometimes used as a fire protection for steel frames, for the same effect as above. Concrete as a fire shield, for example Fondu fyre, can also be used in extreme environments like a missile launch pad.
Options for non-combustible construction include floors, ceilings and roofs made of cast-in-place and hollow-core precast concrete. For walls, concrete masonry technology and Insulating Concrete Forms (ICFs) are additional options. ICFs are hollow blocks or panels made of fireproof insulating foam that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.
Concrete also provides good resistance against externally applied forces such as high winds, hurricanes, and tornadoes owing to its lateral stiffness, which results in minimal horizontal movement. However, this stiffness can work against certain types of concrete structures, particularly where a relatively higher flexing structure is required to resist more extreme forces.
As discussed above, concrete is very strong in compression, but weak in tension. Larger earthquakes can generate very large shear loads on structures. These shear loads subject the structure to both tensile and compressional loads. Concrete structures without reinforcement, like other unreinforced masonry structures, can fail during severe earthquake shaking. Unreinforced masonry structures constitute one of the largest earthquake risks globally. These risks can be reduced through seismic retrofitting of at-risk buildings (e.g. school buildings in Istanbul, Turkey).
Concrete is one of the most durable building materials. It provides superior fire resistance compared with wooden construction and gains strength over time. Structures made of concrete can have a long service life. Concrete is used more than any other artificial material in the world. As of 2006, about 7.5 billion cubic meters of concrete are made each year, more than one cubic meter for every person on Earth.
The use of reinforcement, in the form of iron, was introduced in the 1850s by French industrialist François Coignet, and it was not until the 1880s that German civil engineer G. A. Wayss used steel as reinforcement. Concrete is a relatively brittle material that is strong under compression but much weaker in tension. Plain, unreinforced concrete is unsuitable for many structures as it is relatively poor at withstanding stresses induced by vibrations, wind loading, and so on. Hence, to increase its overall strength, steel rods, wires, mesh or cables can be embedded in concrete before it is set. This reinforcement, often known as rebar, resists tensile forces.
Reinforced concrete (RC) is a versatile composite and one of the most widely used materials in modern construction. It is made up of different constituent materials with very different properties that complement each other. In the case of reinforced concrete, the component materials are almost always concrete and steel. These two materials form a strong bond together and are able to resist a variety of applied forces, effectively acting as a single structural element.
Reinforced concrete can be precast or cast-in-place (in situ) concrete, and is used in a wide range of applications such as slab, wall, beam, column, foundation, and frame construction. Reinforcement is generally placed in areas of the concrete that are likely to be subject to tension, such as the lower portion of beams. Usually, there is a minimum of 50 mm cover, both above and below the steel reinforcement, to resist spalling and corrosion which can lead to structural instability. Other types of non-steel reinforcement, such as fibre-reinforced concretes, are used for specialized applications, predominantly as a means of controlling cracking.
Precast concrete is concrete which is cast in one place for use elsewhere and is a mobile material. The largest part of precast production is carried out in the works of specialist suppliers, although in some instances, due to economic and geographical factors, scale of product or difficulty of access, the elements are cast on or adjacent to the construction site. Precasting offers considerable advantages because it is carried out in a controlled environment, protected from the elements, but the downside of this is the contribution to greenhouse gas emission from transportation to the construction site.
Advantages to be achieved by employing precast concrete:
Due to cement's exothermic chemical reaction while setting up, large concrete structures such as dams, navigation locks, large mat foundations, and large breakwaters generate excessive heat during hydration and associated expansion. To mitigate these effects, post-cooling is commonly applied during construction. An early example at Hoover Dam used a network of pipes between vertical concrete placements to circulate cooling water during the curing process to avoid damaging overheating. Similar systems are still used; depending on volume of the pour, the concrete mix used, and ambient air temperature, the cooling process may last for many months after the concrete is placed. Various methods also are used to pre-cool the concrete mix in mass concrete structures.
Another approach to mass concrete structures that minimizes cement's thermal by-product is the use of roller-compacted concrete, which uses a dry mix which has a much lower cooling requirement than conventional wet placement. It is deposited in thick layers as a semi-dry material then roller compacted into a dense, strong mass.
Raw concrete surfaces tend to be porous and have a relatively uninteresting appearance. Many finishes can be applied to improve the appearance and preserve the surface against staining, water penetration, and freezing.
Examples of improved appearance include stamped concrete where the wet concrete has a pattern impressed on the surface, to give a paved, cobbled or brick-like effect, and may be accompanied with coloration. Another popular effect for flooring and table tops is polished concrete where the concrete is polished optically flat with diamond abrasives and sealed with polymers or other sealants.
Other finishes can be achieved with chiseling, or more conventional techniques such as painting or covering it with other materials.
The proper treatment of the surface of concrete, and therefore its characteristics, is an important stage in the construction and renovation of architectural structures.
Prestressed concrete is a form of reinforced concrete that builds in compressive stresses during construction to oppose tensile stresses experienced in use. This can greatly reduce the weight of beams or slabs, by better distributing the stresses in the structure to make optimal use of the reinforcement. For example, a horizontal beam tends to sag. Prestressed reinforcement along the bottom of the beam counteracts this. In pre-tensioned concrete, the prestressing is achieved by using steel or polymer tendons or bars that are subjected to a tensile force prior to casting, or for post-tensioned concrete, after casting.
There are two different systems being used:
More than 55,000 miles (89,000 km) of highways in the United States are paved with concrete. Reinforced concrete, prestressed concrete and precast concrete are the most widely used functional extensions of concrete in modern construction. For more information see Brutalist architecture.
Once mixed, concrete is typically transported to the place where it is intended to become a structural item. Various methods of transportation and placement are used depending on the distances involved, the quantity needed, and other details of the application. Large amounts are often transported by truck, poured free under gravity or through a tremie, or pumped through a pipe. Smaller amounts may be carried in a skip (a metal container which can be tilted or opened to release the contents, usually transported by crane or hoist), or wheelbarrow, or carried in toggle bags for manual placement underwater.
Extreme weather conditions (extreme heat or cold; windy conditions, and humidity variations) can significantly alter the quality of concrete. Many precautions are observed in cold weather placement. Low temperatures significantly slow the chemical reactions involved in hydration of cement, thus affecting the strength development. Preventing freezing is the most important precaution, as formation of ice crystals can cause damage to the crystalline structure of the hydrated cement paste. If the surface of the concrete pour is insulated from the outside temperatures, the heat of hydration will prevent freezing.
The American Concrete Institute (ACI) definition of cold weather placement, ACI 306, is:
In Canada, where temperatures tend to be much lower during the cold season, the following criteria are used by CSA A23.1:
The minimum strength before exposing concrete to extreme cold is 500 psi (3.4 MPa). CSA A23.1 specifies a compressive strength of 7.0 MPa as the threshold for safe exposure to freezing.
Concrete may be placed and cured underwater. Care must be taken in the placement method to prevent washing out the cement. Underwater placement methods include the tremie, pumping, skip placement, manual placement using toggle bags, and bagwork.
Grouted aggregate is an alternative method of forming a concrete mass underwater, where the forms are filled with coarse aggregate and the voids then completely filled with pumped grout.
Concrete roads are more fuel efficient to drive on, more reflective and last significantly longer than other paving surfaces, yet have a much smaller market share than other paving solutions. Modern paving methods and design practices have changed the economics of concrete paving, so that a well-designed and placed concrete pavement will be less expensive on initial costs and significantly less expensive over the life cycle. Another major benefit is that pervious concrete can be used, which eliminates the need to place storm drains near the road and reduces the need for slightly sloped roadways to channel rainwater runoff. Eliminating the need to discard rainwater through drains also means that less electricity is needed (more pumping is otherwise required in the water-distribution system), and rainwater is not polluted by mixing with contaminated runoff; instead, it is immediately absorbed by the ground.
The manufacture and use of concrete produce a wide range of environmental, economic and social impacts.
A major component of concrete is cement, a fine powder used mainly to bind sand and coarser aggregates together in concrete. Although a variety of cement types exist, the most common is "Portland cement", which is produced by mixing clinker with smaller quantities of other additives such as gypsum and ground limestone. The production of clinker, the main constituent of cement, is responsible for the bulk of the sector's greenhouse gas emissions, including both energy intensity and process emissions.
The cement industry is one of the three primary producers of carbon dioxide, a major greenhouse gas – the other two being energy production and transportation industries. On average, every tonne of cement produced releases one tonne of CO2 into the atmosphere. Pioneer cement manufacturers have claimed to reach lower carbon intensities, with 590 kg of CO2eq per tonne of cement produced. The emissions are due to combustion and calcination processes, which roughly account for 40% and 60% of the greenhouse gases, respectively. Considering that cement is only a fraction of the constituents of concrete, it is estimated that a tonne of concrete is responsible for emitting about 100–200 kg of CO2. Every year more than 10 billion tonnes of concrete are used worldwide. In the coming years, large quantities of concrete will continue to be used, and the mitigation of CO2 emissions from the sector will be even more critical.
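As a rough consistency check on the 100–200 kg figure, the sketch below estimates CO2 per tonne of concrete from an assumed cement content and the per-tonne cement emission factors quoted above. The cement-content range is an assumption introduced for illustration.

```python
# Back-of-the-envelope estimate of CO2 per tonne of concrete.
# Assumed cement content: roughly 12-16% of concrete mass (illustrative assumption).
# Emission factors: ~1.0 t CO2/t cement on average, ~0.59 t CO2/t for low-carbon producers
# (both figures quoted in the text above).

for cement_fraction in (0.12, 0.16):
    for ef_cement in (0.59, 1.0):                      # t CO2 per t cement
        co2_kg = cement_fraction * ef_cement * 1000.0  # kg CO2 per t concrete
        print(f"cement {cement_fraction:.0%}, factor {ef_cement} t/t -> ~{co2_kg:.0f} kg CO2 per tonne of concrete")
```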
Concrete is used to create hard surfaces that contribute to surface runoff, which can cause heavy soil erosion, water pollution, and flooding, but conversely can be used to divert, dam, and control flooding. Concrete dust released by building demolition and natural disasters can be a major source of dangerous air pollution. Concrete is a contributor to the urban heat island effect, though less so than asphalt.
Reducing the cement clinker content might have positive effects on the environmental life-cycle assessment of concrete. Some research work on reducing the cement clinker content in concrete has already been carried out. However, there exist different research strategies. Often replacement of some clinker for large amounts of slag or fly ash was investigated based on conventional concrete technology. This could lead to a waste of scarce raw materials such as slag and fly ash. The aim of other research activities is the efficient use of cement and reactive materials like slag and fly ash in concrete based on a modified mix design approach.
An environmental investigation found that the embodied carbon of a precast concrete facade can be reduced by 50% when using the presented fiber reinforced high performance concrete in place of typical reinforced concrete cladding.
Studies have been conducted about commercialization of low-carbon concretes. Life cycle assessment (LCA) of low-carbon concrete was investigated according to the ground granulated blast-furnace slag (GGBS) and fly ash (FA) replacement ratios. Global warming potential (GWP) of GGBS decreased by 1.1 kg CO2 eq/m³, while FA decreased by 17.3 kg CO2 eq/m³ when the mineral admixture replacement ratio was increased by 10%. This study also compared the compressive strength properties of binary blended low-carbon concrete according to the replacement ratios, and the applicable range of mixing proportions was derived.
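Read literally, the reported reductions imply a simple linear relation between replacement ratio and GWP. The toy sketch below applies those per-10% figures to an assumed baseline GWP of 300 kg CO2-eq/m³, which is an illustrative assumption rather than a value from the cited study.

```python
# Toy linear extrapolation of the reductions reported above:
# GWP drops by ~1.1 kg CO2-eq/m3 per 10% GGBS replacement and by ~17.3 kg CO2-eq/m3
# per 10% fly-ash (FA) replacement. The 300 kg CO2-eq/m3 baseline is assumed.

BASELINE_GWP = 300.0                       # kg CO2-eq per m3 of concrete (assumed)
REDUCTION_PER_10_PERCENT = {"GGBS": 1.1, "FA": 17.3}

def blended_gwp(admixture: str, replacement_percent: float) -> float:
    return BASELINE_GWP - REDUCTION_PER_10_PERCENT[admixture] * (replacement_percent / 10.0)

print(blended_gwp("FA", 30))     # ~248.1 kg CO2-eq/m3 at 30% fly ash, under these assumptions
print(blended_gwp("GGBS", 30))   # ~296.7 kg CO2-eq/m3 at 30% GGBS
```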
Researchers at University of Auckland are working on utilizing biochar in concrete applications to reduce carbon emissions during concrete production and to improve strength.
High-performance building materials will be particularly important for enhancing resilience, including for flood defenses and critical-infrastructure protection. Risks to infrastructure and cities posed by extreme weather events are especially serious for those places exposed to flood and hurricane damage, but also where residents need protection from extreme summer temperatures. Traditional concrete can come under strain when exposed to humidity and higher concentrations of atmospheric CO2. While concrete is likely to remain important in applications where the environment is challenging, novel, smarter and more adaptable materials are also needed.
Grinding of concrete can produce hazardous dust. Exposure to cement dust can lead to issues such as silicosis, kidney disease, skin irritation and similar effects. The National Institute for Occupational Safety and Health (NIOSH) in the United States recommends attaching local exhaust ventilation shrouds to electric concrete grinders to control the spread of this dust. In addition, the Occupational Safety and Health Administration (OSHA) has placed more stringent regulations on companies whose workers regularly come into contact with silica dust. An updated silica rule, which OSHA put into effect 23 September 2017 for construction companies, restricted the amount of breathable crystalline silica workers could legally come into contact with to 50 micrograms per cubic meter of air per 8-hour workday. That same rule went into effect 23 June 2018 for general industry, hydraulic fracturing and maritime. That deadline was extended to 23 June 2021 for engineering controls in the hydraulic fracturing industry. Companies which fail to meet the tightened safety regulations can face financial charges and extensive penalties. The presence of some substances in concrete, including useful and unwanted additives, can cause health concerns due to toxicity and radioactivity. Fresh concrete (before curing is complete) is highly alkaline and must be handled with proper protective equipment.
Concrete is an excellent material with which to make long-lasting and energy-efficient buildings. However, even with good design, human needs change and potential waste will be generated.
Concrete can be damaged by many processes, such as the expansion of corrosion products of the steel reinforcement bars, freezing of trapped water, fire or radiant heat, aggregate expansion, sea water effects, bacterial corrosion, leaching, erosion by fast-flowing water, physical damage and chemical damage (from carbonation, chlorides, sulfates and distilled water). The micro fungi Aspergillus, Alternaria and Cladosporium were able to grow on samples of concrete used as a radioactive waste barrier in the Chernobyl reactor, leaching aluminium, iron, calcium, and silicon.
Concrete may be considered waste according to the European Commission decision 2014/955/EU for the List of Waste, under chapter 17 (construction and demolition wastes, including excavated soil from contaminated sites), sub-chapter 17 01 (concrete, bricks, tiles and ceramics), with the codes 17 01 01 (concrete), 17 01 06* (mixtures of, or separate fractions of, concrete, bricks, tiles and ceramics containing hazardous substances) and 17 01 07 (mixtures of, or separate fractions of, concrete, bricks, tiles and ceramics other than those mentioned in 17 01 06). It is estimated that in 2018 the European Union generated 371,910 thousand tons of mineral waste from construction and demolition, and close to 4% of this quantity is considered hazardous. Germany, France and the United Kingdom were the top three generators, with 86,412, 68,976 and 68,732 thousand tons of construction waste, respectively.
Currently, there are no end-of-waste criteria for concrete materials in the EU. However, different sectors have been proposing alternatives for concrete waste, repurposing it as a secondary raw material in various applications, including concrete manufacturing itself.
Reuse of blocks in original form, or by cutting into smaller blocks, has even less environmental impact; however, only a limited market currently exists. Improved building designs that allow for slab reuse and building transformation without demolition could increase this use. Hollow core concrete slabs are easy to dismantle and the span is normally constant, making them good for reuse.
Other cases of re-use are possible with pre-cast concrete pieces: through selective demolition, such pieces can be disassembled and collected for further use on other building sites. Studies show that back-building and remounting plans for building units (i.e., re-use of pre-fabricated concrete) offer a form of construction that protects resources and saves energy. Long-lived, durable, energy-intensive building materials such as concrete in particular can be kept in the life cycle longer through recycling. Prefabricated construction is a prerequisite for buildings that can later be taken apart. When applied optimally in the building carcass, cost savings are estimated at 26%, a lucrative complement to new building methods. However, this depends on several conditions being met. The viability of this alternative has to be studied, as the logistics of transporting heavy pieces of concrete can affect the operation financially and also increase the carbon footprint of the project. Also, ever-changing regulations on new buildings worldwide may require higher quality standards for construction elements and inhibit the use of old elements that may be classified as obsolete.
Concrete recycling is an increasingly common method for disposing of concrete structures. Concrete debris was once routinely shipped to landfills for disposal, but recycling is increasing due to improved environmental awareness, governmental laws and economic benefits.
Contrary to general belief, concrete recovery is achievable – concrete can be crushed and reused as aggregate in new projects.
Recycling or recovering concrete reduces natural resource exploitation and associated transportation costs, and reduces the amount of waste sent to landfill. However, it has little impact on reducing greenhouse gas emissions, as most emissions occur when cement is made, and cement alone cannot be recycled. At present, most recovered concrete is used for road sub-base and civil engineering projects. From a sustainability viewpoint, these relatively low-grade uses currently provide the optimal outcome.
The recycling process can be done in situ, with mobile plants, or in specific recycling units. The input material can be returned concrete which is fresh (wet) from ready-mix trucks, production waste at a pre-cast production facility, or waste from construction and demolition. The most significant source is demolition waste, preferably pre-sorted from selective demolition processes.
By far the most common method for recycling dry and hardened concrete involves crushing. Mobile sorters and crushers are often installed on construction sites to allow on-site processing. In other situations, specific processing sites are established, which are usually able to produce higher quality aggregate. Screens are used to achieve desired particle size, and remove dirt, foreign particles and fine material from the coarse aggregate.
Chlorides and sulfates are undesired contaminants originating from soil and weathering, and can provoke corrosion problems in aluminium and steel structures. The final product, recycled concrete aggregate (RCA), has distinctive properties: an angular shape, a rougher surface, lower specific gravity (about 20% lower), higher water absorption, and a pH greater than 11 – this elevated pH increases the risk of alkali reactions.
The lower density of RCA usually increases project efficiency and improves job cost – recycled concrete aggregates yield more volume by weight (up to 15%). The physical properties of coarse aggregates made from crushed demolition concrete make them the preferred material for applications such as road base and sub-base, because recycled aggregates often have better compaction properties and require less cement for sub-base uses. Furthermore, they are generally cheaper to obtain than virgin material.
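The "more volume by weight" figure follows from simple density arithmetic. The sketch below, using assumed illustrative values rather than measured data, shows how a given reduction in aggregate bulk density translates into extra volume per tonne.

```python
# Minimal sketch of the weight-to-volume arithmetic behind the figures above.
# extra_volume() converts a fractional reduction in aggregate bulk density into
# the fractional extra volume obtained per tonne. The inputs are assumed
# illustrative values, not measured data.

def extra_volume(density_reduction: float) -> float:
    """Fractional extra volume per tonne for a given fractional density reduction."""
    return 1.0 / (1.0 - density_reduction) - 1.0

# A bulk density roughly 13% lower than natural aggregate yields about 15% more
# volume per tonne, consistent with the "up to 15%" figure quoted above.
print(f"{extra_volume(0.13):.0%}")   # ~15%

# The ~20% lower particle specific gravity quoted above would, by itself,
# correspond to roughly 25% more volume per unit mass.
print(f"{extra_volume(0.20):.0%}")   # ~25%
```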
The main commercial applications of the final recycled concrete aggregate are:
The applications developed for RCA so far are not exhaustive, and many more uses are to be developed as regulations, institutions and norms find ways to accommodate construction and demolition waste as secondary raw materials in a safe and economic way. However, considering the purpose of having a circularity of resources in the concrete life cycle, the only application of RCA that could be considered as recycling of concrete is the replacement of natural aggregates on concrete mixes. All the other applications would fall under the category of downcycling. It is estimated that even near complete recovery of concrete from construction and demolition waste will only supply about 20% of total aggregate needs in the developed world.
The path towards circularity goes beyond concrete technology itself, depending on multilateral advances in the cement industry, research and development of alternative materials, building design and management, and demolition as well as conscious use of spaces in urban areas to reduce consumption.
The world record for the largest concrete pour in a single project is the Three Gorges Dam in Hubei Province, China by the Three Gorges Corporation. The amount of concrete used in the construction of the dam is estimated at 16 million cubic meters over 17 years. The previous record was 12.3 million cubic meters held by Itaipu hydropower station in Brazil.
The world record for concrete pumping was set on 7 August 2009 during the construction of the Parbati Hydroelectric Project, near the village of Suind, Himachal Pradesh, India, when the concrete mix was pumped through a vertical height of 715 m (2,346 ft).
On 6 January 2019, the Polavaram dam works in Andhra Pradesh entered the Guinness World Records by pouring 32,100 cubic metres of concrete in 24 hours. The world record for the largest continuously poured concrete raft was achieved in August 2007 in Abu Dhabi by contracting firm Al Habtoor-CCC Joint Venture, with concrete supplied by Unibeton Ready Mix. The pour, part of the foundation for Abu Dhabi's Landmark Tower, was 16,000 cubic meters of concrete placed within a two-day period. The previous record, 13,200 cubic meters poured in 54 hours despite a severe tropical storm that required the site to be covered with tarpaulins so work could continue, was achieved in 1992 by a joint Japanese and South Korean consortium of Hazama Corporation and Samsung C&T Corporation for the construction of the Petronas Towers in Kuala Lumpur, Malaysia.
The world record for the largest continuously poured concrete floor was completed on 8 November 1997, in Louisville, Kentucky, by design-build firm EXXCEL Project Management. The monolithic placement consisted of 225,000 square feet (20,900 m²) of concrete placed in 30 hours, finished to a flatness tolerance of FF 54.60 and a levelness tolerance of FL 43.83. This surpassed the previous record by 50% in total volume and 7.5% in total area.
The record for the largest continuously placed underwater concrete pour was completed on 18 October 2010, in New Orleans, Louisiana, by contractor C. J. Mahan Construction Company, LLC of Grove City, Ohio. The placement consisted of 10,251 cubic yards of concrete placed in 58.5 hours using two concrete pumps and two dedicated concrete batch plants. Upon curing, this placement allows the 50,180-square-foot (4,662 m²) cofferdam to be dewatered approximately 26 feet (7.9 m) below sea level to allow the construction of the Inner Harbor Navigation Canal Sill & Monolith Project to be completed in the dry. | [
{
"paragraph_id": 0,
"text": "Concrete is a composite material composed of aggregate bonded together with a fluid cement that cures over time. Concrete is the second-most-used substance in the world after water, and is the most widely used building material. Its usage worldwide, ton for ton, is twice that of steel, wood, plastics, and aluminium combined.",
"title": ""
},
{
"paragraph_id": 1,
"text": "When aggregate is mixed with dry Portland cement and water, the mixture forms a fluid slurry that is easily poured and molded into shape. The cement reacts with the water through a process called concrete hydration that hardens it over several hours to form a hard matrix that binds the materials together into a durable stone-like material that has many uses. This time allows concrete to not only be cast in forms, but also to have a variety of tooled processes performed. The hydration process is exothermic, which means ambient temperature plays a significant role in how long it takes concrete to set. Often, additives (such as pozzolans or superplasticizers) are included in the mixture to improve the physical properties of the wet mix, delay or accelerate the curing time, or otherwise change the finished material. Most concrete is poured with reinforcing materials (such as steel rebar) embedded to provide tensile strength, yielding reinforced concrete.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In the past, lime based cement binders, such as lime putty, were often used but sometimes with other hydraulic cements, (water resistant) such as a calcium aluminate cement or with Portland cement to form Portland cement concrete (named for its visual resemblance to Portland stone). Many other non-cementitious types of concrete exist with other methods of binding aggregate together, including asphalt concrete with a bitumen binder, which is frequently used for road surfaces, and polymer concretes that use polymers as a binder. Concrete is distinct from mortar. Whereas concrete is itself a building material, mortar is a bonding agent that typically holds bricks, tiles and other masonry units together. Grout is another material associated with concrete and cement. It does not contain coarse aggregates and is usually either pourable or thixotropic, and is used to fill gaps between masonry components or coarse aggregate which has already been put in place. Some methods of concrete manufacture and repair involve pumping grout into the gaps to make up a solid mass in situ.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The word concrete comes from the Latin word \"concretus\" (meaning compact or condensed), the perfect passive participle of \"concrescere\", from \"con-\" (together) and \"crescere\" (to grow).",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "Concrete floors were found in the royal palace of Tiryns, Greece, which dates roughly to 1400-1200 BC. Lime mortars were used in Greece, such as in Crete and Cyprus, in 800 BC. The Assyrian Jerwan Aqueduct (688 BC) made use of waterproof concrete. Concrete was used for construction in many ancient structures.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Mayan concrete at the ruins of Uxmal (850-925 A.D.) is referenced in Incidents of Travel in the Yucatán by John L. Stephens. \"The roof is flat and had been covered with cement\". \"The floors were cement, in some places hard, but, by long exposure, broken, and now crumbling under the feet.\" \"But throughout the wall was solid, and consisting of large stones imbedded in mortar, almost as hard as rock.\"",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Small-scale production of concrete-like materials was pioneered by the Nabatean traders who occupied and controlled a series of oases and developed a small empire in the regions of southern Syria and northern Jordan from the 4th century BC. They discovered the advantages of hydraulic lime, with some self-cementing properties, by 700 BC. They built kilns to supply mortar for the construction of rubble masonry houses, concrete floors, and underground waterproof cisterns. They kept the cisterns secret as these enabled the Nabataeans to thrive in the desert. Some of these structures survive to this day.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In the Ancient Egyptian and later Roman eras, builders discovered that adding volcanic ash to lime allowed the mix to set underwater. They discovered the pozzolanic reaction.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The Romans used concrete extensively from 300 BC to 476 AD. During the Roman Empire, Roman concrete (or opus caementicium) was made from quicklime, pozzolana and an aggregate of pumice. Its widespread use in many Roman structures, a key event in the history of architecture termed the Roman architectural revolution, freed Roman construction from the restrictions of stone and brick materials. It enabled revolutionary new designs in terms of both structural complexity and dimension. The Colosseum in Rome was built largely of concrete, and the Pantheon has the world's largest unreinforced concrete dome.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Concrete, as the Romans knew it, was a new and revolutionary material. Laid in the shape of arches, vaults and domes, it quickly hardened into a rigid mass, free from many of the internal thrusts and strains that troubled the builders of similar structures in stone or brick.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Modern tests show that opus caementicium had as much compressive strength as modern Portland-cement concrete (ca. 200 kg/cm [20 MPa; 2,800 psi]). However, due to the absence of reinforcement, its tensile strength was far lower than modern reinforced concrete, and its mode of application also differed:",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Modern structural concrete differs from Roman concrete in two important details. First, its mix consistency is fluid and homogeneous, allowing it to be poured into forms rather than requiring hand-layering together with the placement of aggregate, which, in Roman practice, often consisted of rubble. Second, integral reinforcing steel gives modern concrete assemblies great strength in tension, whereas Roman concrete could depend only upon the strength of the concrete bonding to resist tension.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The long-term durability of Roman concrete structures has been found to be due to its use of pyroclastic (volcanic) rock and ash, whereby the crystallization of strätlingite (a specific and complex calcium aluminosilicate hydrate) and the coalescence of this and similar calcium–aluminium-silicate–hydrate cementing binders helped give the concrete a greater degree of fracture resistance even in seismically active environments. Roman concrete is significantly more resistant to erosion by seawater than modern concrete; it used pyroclastic materials which react with seawater to form Al-tobermorite crystals over time.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The widespread use of concrete in many Roman structures ensured that many survive to the present day. The Baths of Caracalla in Rome are just one example. Many Roman aqueducts and bridges, such as the magnificent Pont du Gard in southern France, have masonry cladding on a concrete core, as does the dome of the Pantheon.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "After the Roman Empire, the use of burned lime and pozzolana was greatly reduced. Low kiln temperatures in the burning of lime, lack of pozzolana, and poor mixing all contributed to a decline in the quality of concrete and mortar. From the 11th century, the increased use of stone in church and castle construction led to an increased demand for mortar. Quality began to improve in the 12th century through better grinding and sieving. Medieval lime mortars and concretes were non-hydraulic and were used for binding masonry, \"hearting\" (binding rubble masonry cores) and foundations. Bartholomaeus Anglicus in his De proprietatibus rerum (1240) describes the making of mortar. In an English translation from 1397, it reads \"lyme ... is a stone brent; by medlynge thereof with sonde and water sement is made\". From the 14th century, the quality of mortar was again excellent, but only from the 17th century was pozzolana commonly added.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The Canal du Midi was built using concrete in 1670.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Perhaps the greatest step forward in the modern use of concrete was Smeaton's Tower, built by British engineer John Smeaton in Devon, England, between 1756 and 1759. This third Eddystone Lighthouse pioneered the use of hydraulic lime in concrete, using pebbles and powdered brick as aggregate.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "A method for producing Portland cement was developed in England and patented by Joseph Aspdin in 1824. Aspdin chose the name for its similarity to Portland stone, which was quarried on the Isle of Portland in Dorset, England. His son William continued developments into the 1840s, earning him recognition for the development of \"modern\" Portland cement.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Reinforced concrete was invented in 1849 by Joseph Monier. and the first reinforced concrete house was built by François Coignet in 1853. The first concrete reinforced bridge was designed and built by Joseph Monier in 1875.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Prestressed concrete and post-tensioned concrete were pioneered by Eugène Freyssinet, a French structural and civil engineer. Concrete components or structures are compressed by tendon cables during, or after, their fabrication in order to strengthen them against tensile forces developing when put in service. Freyssinet patented the technique on 2 October 1928.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Concrete is an artificial composite material, comprising a matrix of cementitious binder (typically Portland cement paste or asphalt) and a dispersed phase or \"filler\" of aggregate (typically a rocky material, loose stones, and sand). The binder \"glues\" the filler together to form a synthetic conglomerate. Many types of concrete are available, determined by the formulations of binders and the types of aggregate used to suit the application of the engineered material. These variables determine strength and density, as well as chemical and thermal resistance of the finished product.",
"title": "Composition"
},
{
"paragraph_id": 21,
"text": "Construction aggregates consist of large chunks of material in a concrete mix, generally a coarse gravel or crushed rocks such as limestone, or granite, along with finer materials such as sand.",
"title": "Composition"
},
{
"paragraph_id": 22,
"text": "Cement paste, most commonly made of Portland cement, is the most prevalent kind of concrete binder. For cementitious binders, water is mixed with the dry cement powder and aggregate, which produces a semi-liquid slurry (paste) that can be shaped, typically by pouring it into a form. The concrete solidifies and hardens through a chemical process called hydration. The water reacts with the cement, which bonds the other components together, creating a robust, stone-like material. Other cementitious materials, such as fly ash and slag cement, are sometimes added—either pre-blended with the cement or directly as a concrete component—and become a part of the binder for the aggregate. Fly ash and slag can enhance some properties of concrete such as fresh properties and durability. Alternatively, other materials can also be used as a concrete binder: the most prevalent substitute is asphalt, which is used as the binder in asphalt concrete.",
"title": "Composition"
},
{
"paragraph_id": 23,
"text": "Admixtures are added to modify the cure rate or properties of the material. Mineral admixtures use recycled materials as concrete ingredients. Conspicuous materials include fly ash, a by-product of coal-fired power plants; ground granulated blast furnace slag, a by-product of steelmaking; and silica fume, a by-product of industrial electric arc furnaces.",
"title": "Composition"
},
{
"paragraph_id": 24,
"text": "Structures employing Portland cement concrete usually include steel reinforcement because this type of concrete can be formulated with high compressive strength, but always has lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension, typically steel rebar.",
"title": "Composition"
},
{
"paragraph_id": 25,
"text": "The mix design depends on the type of structure being built, how the concrete is mixed and delivered, and how it is placed to form the structure.",
"title": "Composition"
},
{
"paragraph_id": 26,
"text": "Portland cement is the most common type of cement in general usage. It is a basic ingredient of concrete, mortar, and many plasters. British masonry worker Joseph Aspdin patented Portland cement in 1824. It was named because of the similarity of its color to Portland limestone, quarried from the English Isle of Portland and used extensively in London architecture. It consists of a mixture of calcium silicates (alite, belite), aluminates and ferrites—compounds which combine calcium, silicon, aluminium and iron in forms which will react with water. Portland cement and similar materials are made by heating limestone (a source of calcium) with clay or shale (a source of silicon, aluminium and iron) and grinding this product (called clinker) with a source of sulfate (most commonly gypsum).",
"title": "Composition"
},
{
"paragraph_id": 27,
"text": "In modern cement kilns, many advanced features are used to lower the fuel consumption per ton of clinker produced. Cement kilns are extremely large, complex, and inherently dusty industrial installations, and have emissions which must be controlled. Of the various ingredients used to produce a given quantity of concrete, the cement is the most energetically expensive. Even complex and efficient kilns require 3.3 to 3.6 gigajoules of energy to produce a ton of clinker and then grind it into cement. Many kilns can be fueled with difficult-to-dispose-of wastes, the most common being used tires. The extremely high temperatures and long periods of time at those temperatures allows cement kilns to efficiently and completely burn even difficult-to-use fuels.",
"title": "Composition"
},
{
"paragraph_id": 28,
"text": "Combining water with a cementitious material forms a cement paste by the process of hydration. The cement paste glues the aggregate together, fills voids within it, and makes it flow more freely.",
"title": "Composition"
},
{
"paragraph_id": 29,
"text": "As stated by Abrams' law, a lower water-to-cement ratio yields a stronger, more durable concrete, whereas more water gives a freer-flowing concrete with a higher slump. Impure water used to make concrete can cause problems when setting or in causing premature failure of the structure.",
"title": "Composition"
},
{
"paragraph_id": 30,
"text": "Portland cement consists of five major compounds of calcium silicates and aluminates ranging from 5 to 50% in weight, which all undergo hydration to contribute to final material's strength. Thus, the hydration of cement involves many reactions, often occurring at the same time. As the reactions proceed, the products of the cement hydration process gradually bond together the individual sand and gravel particles and other components of the concrete to form a solid mass.",
"title": "Composition"
},
{
"paragraph_id": 31,
"text": "Due to the nature of the chemical bonds created in these reactions and the final characteristics of the hardened cement paste formed, the process of cement hydration is considered irreversible.",
"title": "Composition"
},
{
"paragraph_id": 32,
"text": "Fine and coarse aggregates make up the bulk of a concrete mixture. Sand, natural gravel, and crushed stone are used mainly for this purpose. Recycled aggregates (from construction, demolition, and excavation waste) are increasingly used as partial replacements for natural aggregates, while a number of manufactured aggregates, including air-cooled blast furnace slag and bottom ash are also permitted.",
"title": "Composition"
},
{
"paragraph_id": 33,
"text": "The size distribution of the aggregate determines how much binder is required. Aggregate with a very even size distribution has the biggest gaps whereas adding aggregate with smaller particles tends to fill these gaps. The binder must fill the gaps between the aggregate as well as paste the surfaces of the aggregate together, and is typically the most expensive component. Thus, variation in sizes of the aggregate reduces the cost of concrete. The aggregate is nearly always stronger than the binder, so its use does not negatively affect the strength of the concrete.",
"title": "Composition"
},
{
"paragraph_id": 34,
"text": "Redistribution of aggregates after compaction often creates non-homogeneity due to the influence of vibration. This can lead to strength gradients.",
"title": "Composition"
},
{
"paragraph_id": 35,
"text": "Decorative stones such as quartzite, small river stones or crushed glass are sometimes added to the surface of concrete for a decorative \"exposed aggregate\" finish, popular among landscape designers.",
"title": "Composition"
},
{
"paragraph_id": 36,
"text": "Admixtures are materials in the form of powder or fluids that are added to the concrete to give it certain characteristics not obtainable with plain concrete mixes. Admixtures are defined as additions \"made as the concrete mix is being prepared\". The most common admixtures are retarders and accelerators. In normal use, admixture dosages are less than 5% by mass of cement and are added to the concrete at the time of batching/mixing. (See § Production below.) The common types of admixtures are as follows:",
"title": "Composition"
},
{
"paragraph_id": 37,
"text": "Inorganic materials that have pozzolanic or latent hydraulic properties, these very fine-grained materials are added to the concrete mix to improve the properties of concrete (mineral admixtures), or as a replacement for Portland cement (blended cements). Products which incorporate limestone, fly ash, blast furnace slag, and other useful materials with pozzolanic properties into the mix, are being tested and used. These developments are ever growing in relevance to minimize the impacts caused by cement use, notorious for being one of the largest producers (at about 5 to 10%) of global greenhouse gas emissions. The use of alternative materials also is capable of lowering costs, improving concrete properties, and recycling wastes, the latest being relevant for circular economy aspects of the construction industry, whose demand is ever growing with greater impacts on raw material extraction, waste generation and landfill practices.",
"title": "Composition"
},
{
"paragraph_id": 38,
"text": "Concrete production is the process of mixing together the various ingredients—water, aggregate, cement, and any additives—to produce concrete. Concrete production is time-sensitive. Once the ingredients are mixed, workers must put the concrete in place before it hardens. In modern usage, most concrete production takes place in a large type of industrial facility called a concrete plant, or often a batch plant. The usual method of placement is casting in formwork, which holds the mix in shape until it has set enough to hold its shape unaided.",
"title": "Production"
},
{
"paragraph_id": 39,
"text": "In general usage, concrete plants come in two main types, ready mix plants and central mix plants. A ready-mix plant mixes all the ingredients except water, while a central mix plant mixes all the ingredients including water. A central-mix plant offers more accurate control of the concrete quality through better measurements of the amount of water added, but must be placed closer to the work site where the concrete will be used, since hydration begins at the plant.",
"title": "Production"
},
{
"paragraph_id": 40,
"text": "A concrete plant consists of large storage hoppers for various reactive ingredients like cement, storage for bulk ingredients like aggregate and water, mechanisms for the addition of various additives and amendments, machinery to accurately weigh, move, and mix some or all of those ingredients, and facilities to dispense the mixed concrete, often to a concrete mixer truck.",
"title": "Production"
},
{
"paragraph_id": 41,
"text": "Modern concrete is usually prepared as a viscous fluid, so that it may be poured into forms, which are containers erected in the field to give the concrete its desired shape. Concrete formwork can be prepared in several ways, such as slip forming and steel plate construction. Alternatively, concrete can be mixed into dryer, non-fluid forms and used in factory settings to manufacture precast concrete products.",
"title": "Production"
},
{
"paragraph_id": 42,
"text": "A wide variety of equipment is used for processing concrete, from hand tools to heavy industrial machinery. Whichever equipment builders use, however, the objective is to produce the desired building material; ingredients must be properly mixed, placed, shaped, and retained within time constraints. Any interruption in pouring the concrete can cause the initially placed material to begin to set before the next batch is added on top. This creates a horizontal plane of weakness called a cold joint between the two batches. Once the mix is where it should be, the curing process must be controlled to ensure that the concrete attains the desired attributes. During concrete preparation, various technical details may affect the quality and nature of the product.",
"title": "Production"
},
{
"paragraph_id": 43,
"text": "Design mix ratios are decided by an engineer after analyzing the properties of the specific ingredients being used. Instead of using a 'nominal mix' of 1 part cement, 2 parts sand, and 4 parts aggregate (the second example from above), a civil engineer will custom-design a concrete mix to exactly meet the requirements of the site and conditions, setting material ratios and often designing an admixture package to fine-tune the properties or increase the performance envelope of the mix. Design-mix concrete can have very broad specifications that cannot be met with more basic nominal mixes, but the involvement of the engineer often increases the cost of the concrete mix.",
"title": "Production"
},
{
"paragraph_id": 44,
"text": "Concrete Mixes are primarily divided into nominal mix, standard mix and design mix.",
"title": "Production"
},
{
"paragraph_id": 45,
"text": "Nominal mix ratios are given in volume of Cement : Sand : Aggregate {\\displaystyle {\\text{Cement : Sand : Aggregate}}} . Nominal mixes are a simple, fast way of getting a basic idea of the properties of the finished concrete without having to perform testing in advance.",
"title": "Production"
},
{
"paragraph_id": 46,
"text": "Various governing bodies (such as British Standards) define nominal mix ratios into a number of grades, usually ranging from lower compressive strength to higher compressive strength. The grades usually indicate the 28-day cube strength.",
"title": "Production"
},
{
"paragraph_id": 47,
"text": "Thorough mixing is essential to produce uniform, high-quality concrete.",
"title": "Production"
},
{
"paragraph_id": 48,
"text": "Separate paste mixing has shown that the mixing of cement and water into a paste before combining these materials with aggregates can increase the compressive strength of the resulting concrete. The paste is generally mixed in a high-speed, shear-type mixer at a w/c (water to cement ratio) of 0.30 to 0.45 by mass. The cement paste premix may include admixtures such as accelerators or retarders, superplasticizers, pigments, or silica fume. The premixed paste is then blended with aggregates and any remaining batch water and final mixing is completed in conventional concrete mixing equipment.",
"title": "Production"
},
{
"paragraph_id": 49,
"text": "Workability is the ability of a fresh (plastic) concrete mix to fill the form/mold properly with the desired work (pouring, pumping, spreading, tamping, vibration) and without reducing the concrete's quality. Workability depends on water content, aggregate (shape and size distribution), cementitious content and age (level of hydration) and can be modified by adding chemical admixtures, like superplasticizer. Raising the water content or adding chemical admixtures increases concrete workability. Excessive water leads to increased bleeding or segregation of aggregates (when the cement and aggregates start to separate), with the resulting concrete having reduced quality. Changes in gradation can also affect workability of the concrete, although a wide range of gradation can be used for various applications. An undesirable gradation can mean using a large aggregate that is too large for the size of the formwork, or which has too few smaller aggregate grades to serve to fill the gaps between the larger grades, or using too little or too much sand for the same reason, or using too little water, or too much cement, or even using jagged crushed stone instead of smoother round aggregate such as pebbles. Any combination of these factors and others may result in a mix which is too harsh, i.e., which does not flow or spread out smoothly, is difficult to get into the formwork, and which is difficult to surface finish.",
"title": "Production"
},
{
"paragraph_id": 50,
"text": "Workability can be measured by the concrete slump test, a simple measure of the plasticity of a fresh batch of concrete following the ASTM C 143 or EN 12350-2 test standards. Slump is normally measured by filling an \"Abrams cone\" with a sample from a fresh batch of concrete. The cone is placed with the wide end down onto a level, non-absorptive surface. It is then filled in three layers of equal volume, with each layer being tamped with a steel rod to consolidate the layer. When the cone is carefully lifted off, the enclosed material slumps a certain amount, owing to gravity. A relatively dry sample slumps very little, having a slump value of one or two inches (25 or 50 mm) out of one foot (300 mm). A relatively wet concrete sample may slump as much as eight inches. Workability can also be measured by the flow table test.",
"title": "Production"
},
{
"paragraph_id": 51,
"text": "Slump can be increased by addition of chemical admixtures such as plasticizer or superplasticizer without changing the water-cement ratio. Some other admixtures, especially air-entraining admixture, can increase the slump of a mix.",
"title": "Production"
},
{
"paragraph_id": 52,
"text": "High-flow concrete, like self-consolidating concrete, is tested by other flow-measuring methods. One of these methods includes placing the cone on the narrow end and observing how the mix flows through the cone while it is gradually lifted.",
"title": "Production"
},
{
"paragraph_id": 53,
"text": "After mixing, concrete is a fluid and can be pumped to the location where needed.",
"title": "Production"
},
{
"paragraph_id": 54,
"text": "Concrete must be kept moist during curing in order to achieve optimal strength and durability. During curing hydration occurs, allowing calcium-silicate hydrate (C-S-H) to form. Over 90% of a mix's final strength is typically reached within four weeks, with the remaining 10% achieved over years or even decades. The conversion of calcium hydroxide in the concrete into calcium carbonate from absorption of CO2 over several decades further strengthens the concrete and makes it more resistant to damage. This carbonation reaction, however, lowers the pH of the cement pore solution and can corrode the reinforcement bars.",
"title": "Production"
},
{
"paragraph_id": 55,
"text": "Hydration and hardening of concrete during the first three days is critical. Abnormally fast drying and shrinkage due to factors such as evaporation from wind during placement may lead to increased tensile stresses at a time when it has not yet gained sufficient strength, resulting in greater shrinkage cracking. The early strength of the concrete can be increased if it is kept damp during the curing process. Minimizing stress prior to curing minimizes cracking. High-early-strength concrete is designed to hydrate faster, often by increased use of cement that increases shrinkage and cracking. The strength of concrete changes (increases) for up to three years. It depends on cross-section dimension of elements and conditions of structure exploitation. Addition of short-cut polymer fibers can improve (reduce) shrinkage-induced stresses during curing and increase early and ultimate compression strength.",
"title": "Production"
},
{
"paragraph_id": 56,
"text": "Properly curing concrete leads to increased strength and lower permeability and avoids cracking where the surface dries out prematurely. Care must also be taken to avoid freezing or overheating due to the exothermic setting of cement. Improper curing can cause scaling, reduced strength, poor abrasion resistance and cracking.",
"title": "Production"
},
{
"paragraph_id": 57,
"text": "During the curing period, concrete is ideally maintained at controlled temperature and humidity. To ensure full hydration during curing, concrete slabs are often sprayed with \"curing compounds\" that create a water-retaining film over the concrete. Typical films are made of wax or related hydrophobic compounds. After the concrete is sufficiently cured, the film is allowed to abrade from the concrete through normal use.",
"title": "Production"
},
{
"paragraph_id": 58,
"text": "Traditional conditions for curing involve spraying or ponding the concrete surface with water. The adjacent picture shows one of many ways to achieve this, ponding—submerging setting concrete in water and wrapping in plastic to prevent dehydration. Additional common curing methods include wet burlap and plastic sheeting covering the fresh concrete.",
"title": "Production"
},
{
"paragraph_id": 59,
"text": "For higher-strength applications, accelerated curing techniques may be applied to the concrete. A common technique involves heating the poured concrete with steam, which serves to both keep it damp and raise the temperature so that the hydration process proceeds more quickly and more thoroughly.",
"title": "Production"
},
{
"paragraph_id": 60,
"text": "Asphalt concrete (commonly called asphalt, blacktop, or pavement in North America, and tarmac, bitumen macadam, or rolled asphalt in the United Kingdom and the Republic of Ireland) is a composite material commonly used to surface roads, parking lots, airports, as well as the core of embankment dams. Asphalt mixtures have been used in pavement construction since the beginning of the twentieth century. It consists of mineral aggregate bound together with asphalt, laid in layers, and compacted. The process was refined and enhanced by Belgian inventor and U.S. immigrant Edward De Smedt.",
"title": "Alternative types"
},
{
"paragraph_id": 61,
"text": "The terms asphalt (or asphaltic) concrete, bituminous asphalt concrete, and bituminous mixture are typically used only in engineering and construction documents, which define concrete as any composite material composed of mineral aggregate adhered with a binder. The abbreviation, AC, is sometimes used for asphalt concrete but can also denote asphalt content or asphalt cement, referring to the liquid asphalt portion of the composite material.",
"title": "Alternative types"
},
{
"paragraph_id": 62,
"text": "Graphene enhanced concretes are standard designs of concrete mixes, except that during the cement-mixing or production process, a small amount of chemically engineered graphene (typically < 0.5% by weight) is added. These enhanced graphene concretes are designed around the concrete application.",
"title": "Alternative types"
},
{
"paragraph_id": 63,
"text": "Bacteria such as Bacillus pasteurii, Bacillus pseudofirmus, Bacillus cohnii, Sporosarcina pasteuri, and Arthrobacter crystallopoietes increase the compression strength of concrete through their biomass. However some forms of bacteria can also be concrete-destroying. Bacillus sp. CT-5. can reduce corrosion of reinforcement in reinforced concrete by up to four times. Sporosarcina pasteurii reduces water and chloride permeability. B. pasteurii increases resistance to acid. Bacillus pasteurii and B. sphaericuscan induce calcium carbonate precipitation in the surface of cracks, adding compression strength.",
"title": "Alternative types"
},
{
"paragraph_id": 64,
"text": "Nanoconcrete (also spelled \"nano concrete\"' or \"nano-concrete\") is a class of materials that contains Portland cement particles that are no greater than 100 μm and particles of silica no greater than 500 μm, which fill voids that would otherwise occur in normal concrete, thereby substantially increasing the material's strength. It is widely used in foot and highway bridges where high flexural and compressive strength are indicated.",
"title": "Alternative types"
},
{
"paragraph_id": 65,
"text": "Pervious concrete is a mix of specially graded coarse aggregate, cement, water, and little-to-no fine aggregates. This concrete is also known as \"no-fines\" or porous concrete. Mixing the ingredients in a carefully controlled process creates a paste that coats and bonds the aggregate particles. The hardened concrete contains interconnected air voids totaling approximately 15 to 25 percent. Water runs through the voids in the pavement to the soil underneath. Air entrainment admixtures are often used in freeze-thaw climates to minimize the possibility of frost damage. Pervious concrete also permits rainwater to filter through roads and parking lots, to recharge aquifers, instead of contributing to runoff and flooding.",
"title": "Alternative types"
},
{
"paragraph_id": 66,
"text": "Polymer concretes are mixtures of aggregate and any of various polymers and may be reinforced. The cement is costlier than lime-based cements, but polymer concretes nevertheless have advantages; they have significant tensile strength even without reinforcement, and they are largely impervious to water. Polymer concretes are frequently used for the repair and construction of other applications, such as drains.",
"title": "Alternative types"
},
{
"paragraph_id": 67,
"text": "Volcanic concrete substitutes volcanic rock for the limestone that is burned to form clinker. It consumes a similar amount of energy, but does not directly emit carbon as a byproduct. Volcanic rock/ash are used as supplementary cementitious materials in concrete to improve the resistance to sulfate, chloride and alkali silica reaction due to pore refinement. Also, they are generally cost effective in comparison to other aggregates, good for semi and light weight concretes, and good for thermal and acoustic insulation.",
"title": "Alternative types"
},
{
"paragraph_id": 68,
"text": "Pyroclastic materials, such as pumice, scoria, and ashes are formed from cooling magma during explosive volcanic eruptions. They are used as supplementary cementitious materials (SCM) or as aggregates for cements and concretes. They have been extensively used since ancient times to produce materials for building applications. For example, pumice and other volcanic glasses were added as a natural pozzolanic material for mortars and plasters during the construction of the Villa San Marco in the Roman period (89 BC – 79 AD), which remain one of the best-preserved otium villae of the Bay of Naples in Italy.",
"title": "Alternative types"
},
{
"paragraph_id": 69,
"text": "Waste light is form of polymer modified concrete. The specific polymer admixture allows the replacement of all the traditional aggregates (gravel, sand, stone) by any mixture of solid waste materials in the grain size of 3–10 mm to form a low-compressive-strength (3–20 N/mm) product for road and building construction. One cubic meter of waste light concrete contains 1.1–1.3 m of shredded waste and no other aggregates.",
"title": "Alternative types"
},
{
"paragraph_id": 70,
"text": "Sulfur concrete is a special concrete that uses sulfur as a binder and does not require cement or water.",
"title": "Alternative types"
},
{
"paragraph_id": 71,
"text": "Concrete has relatively high compressive strength, but much lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension (often steel). The elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. Concrete has a very low coefficient of thermal expansion and shrinks as it matures. All concrete structures crack to some extent, due to shrinkage and tension. Concrete that is subjected to long-duration forces is prone to creep.",
"title": "Properties"
},
{
"paragraph_id": 72,
"text": "Tests can be performed to ensure that the properties of concrete correspond to specifications for the application.",
"title": "Properties"
},
{
"paragraph_id": 73,
"text": "The ingredients affect the strengths of the material. Concrete strength values are usually specified as the lower-bound compressive strength of either a cylindrical or cubic specimen as determined by standard test procedures.",
"title": "Properties"
},
{
"paragraph_id": 74,
"text": "The strengths of concrete is dictated by its function. Very low-strength—14 MPa (2,000 psi) or less—concrete may be used when the concrete must be lightweight. Lightweight concrete is often achieved by adding air, foams, or lightweight aggregates, with the side effect that the strength is reduced. For most routine uses, 20 to 32 MPa (2,900 to 4,600 psi) concrete is often used. 40 MPa (5,800 psi) concrete is readily commercially available as a more durable, although more expensive, option. Higher-strength concrete is often used for larger civil projects. Strengths above 40 MPa (5,800 psi) are often used for specific building elements. For example, the lower floor columns of high-rise concrete buildings may use concrete of 80 MPa (11,600 psi) or more, to keep the size of the columns small. Bridges may use long beams of high-strength concrete to lower the number of spans required. Occasionally, other structural needs may require high-strength concrete. If a structure must be very rigid, concrete of very high strength may be specified, even much stronger than is required to bear the service loads. Strengths as high as 130 MPa (18,900 psi) have been used commercially for these reasons.",
"title": "Properties"
},
{
"paragraph_id": 75,
"text": "The cement produced for making concrete accounts for about 8% of worldwide CO2 emissions per year (compared to, e.g., global aviation at 1.9%). The two largest sources of CO2 are produced by the cement manufacturing process, arising from (1) the decarbonation reaction of limestone in the cement kiln (T ≈ 950 °C), and (2) from the combustion of fossil fuel to reach the sintering temperature (T ≈ 1450 °C) of cement clinker in the kiln. The energy required for extracting, crushing, and mixing the raw materials (construction aggregates used in the concrete production, and also limestone and clay feeding the cement kiln) is lower. Energy requirement for transportation of ready-mix concrete is also lower because it is produced nearby the construction site from local resources, typically manufactured within 100 kilometers of the job site. The overall embodied energy of concrete at roughly 1 to 1.5 megajoules per kilogram is therefore lower than for many structural and construction materials.",
"title": "Properties"
},
{
"paragraph_id": 76,
"text": "Once in place, concrete offers a great energy efficiency over the lifetime of a building. Concrete walls leak air far less than those made of wood frames. Air leakage accounts for a large percentage of energy loss from a home. The thermal mass properties of concrete increase the efficiency of both residential and commercial buildings. By storing and releasing the energy needed for heating or cooling, concrete's thermal mass delivers year-round benefits by reducing temperature swings inside and minimizing heating and cooling costs. While insulation reduces energy loss through the building envelope, thermal mass uses walls to store and release energy. Modern concrete wall systems use both external insulation and thermal mass to create an energy-efficient building. Insulating concrete forms (ICFs) are hollow blocks or panels made of either insulating foam or rastra that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.",
"title": "Properties"
},
{
"paragraph_id": 77,
"text": "Concrete buildings are more resistant to fire than those constructed using steel frames, since concrete has lower heat conductivity than steel and can thus last longer under the same fire conditions. Concrete is sometimes used as a fire protection for steel frames, for the same effect as above. Concrete as a fire shield, for example Fondu fyre, can also be used in extreme environments like a missile launch pad.",
"title": "Properties"
},
{
"paragraph_id": 78,
"text": "Options for non-combustible construction include floors, ceilings and roofs made of cast-in-place and hollow-core precast concrete. For walls, concrete masonry technology and Insulating Concrete Forms (ICFs) are additional options. ICFs are hollow blocks or panels made of fireproof insulating foam that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.",
"title": "Properties"
},
{
"paragraph_id": 79,
"text": "Concrete also provides good resistance against externally applied forces such as high winds, hurricanes, and tornadoes owing to its lateral stiffness, which results in minimal horizontal movement. However, this stiffness can work against certain types of concrete structures, particularly where a relatively higher flexing structure is required to resist more extreme forces.",
"title": "Properties"
},
{
"paragraph_id": 80,
"text": "As discussed above, concrete is very strong in compression, but weak in tension. Larger earthquakes can generate very large shear loads on structures. These shear loads subject the structure to both tensile and compressional loads. Concrete structures without reinforcement, like other unreinforced masonry structures, can fail during severe earthquake shaking. Unreinforced masonry structures constitute one of the largest earthquake risks globally. These risks can be reduced through seismic retrofitting of at-risk buildings, (e.g. school buildings in Istanbul, Turkey).",
"title": "Properties"
},
{
"paragraph_id": 81,
"text": "Concrete is one of the most durable building materials. It provides superior fire resistance compared with wooden construction and gains strength over time. Structures made of concrete can have a long service life. Concrete is used more than any other artificial material in the world. As of 2006, about 7.5 billion cubic meters of concrete are made each year, more than one cubic meter for every person on Earth.",
"title": "Construction with concrete"
},
{
"paragraph_id": 82,
"text": "The use of reinforcement, in the form of iron was introduced in the 1850s by French industrialist François Coignet, and it was not until the 1880s that German civil engineer G. A. Wayss used steel as reinforcement. Concrete is a relatively brittle material that is strong under compression but less in tension. Plain, unreinforced concrete is unsuitable for many structures as it is relatively poor at withstanding stresses induced by vibrations, wind loading, and so on. Hence, to increase its overall strength, steel rods, wires, mesh or cables can be embedded in concrete before it is set. This reinforcement, often known as rebar, resists tensile forces.",
"title": "Construction with concrete"
},
{
"paragraph_id": 83,
"text": "Reinforced concrete (RC) is a versatile composite and one of the most widely used materials in modern construction. It is made up of different constituent materials with very different properties that complement each other. In the case of reinforced concrete, the component materials are almost always concrete and steel. These two materials form a strong bond together and are able to resist a variety of applied forces, effectively acting as a single structural element.",
"title": "Construction with concrete"
},
{
"paragraph_id": 84,
"text": "Reinforced concrete can be precast or cast-in-place (in situ) concrete, and is used in a wide range of applications such as; slab, wall, beam, column, foundation, and frame construction. Reinforcement is generally placed in areas of the concrete that are likely to be subject to tension, such as the lower portion of beams. Usually, there is a minimum of 50 mm cover, both above and below the steel reinforcement, to resist spalling and corrosion which can lead to structural instability. Other types of non-steel reinforcement, such as Fibre-reinforced concretes are used for specialized applications, predominately as a means of controlling cracking.",
"title": "Construction with concrete"
},
{
"paragraph_id": 85,
"text": "Precast concrete is concrete which is cast in one place for use elsewhere and is a mobile material. The largest part of precast production is carried out in the works of specialist suppliers, although in some instances, due to economic and geographical factors, scale of product or difficulty of access, the elements are cast on or adjacent to the construction site. Precasting offers considerable advantages because it is carried out in a controlled environment, protected from the elements, but the downside of this is the contribution to greenhouse gas emission from transportation to the construction site.",
"title": "Construction with concrete"
},
{
"paragraph_id": 86,
"text": "Advantages to be achieved by employing precast concrete:",
"title": "Construction with concrete"
},
{
"paragraph_id": 87,
"text": "Due to cement's exothermic chemical reaction while setting up, large concrete structures such as dams, navigation locks, large mat foundations, and large breakwaters generate excessive heat during hydration and associated expansion. To mitigate these effects, post-cooling is commonly applied during construction. An early example at Hoover Dam used a network of pipes between vertical concrete placements to circulate cooling water during the curing process to avoid damaging overheating. Similar systems are still used; depending on volume of the pour, the concrete mix used, and ambient air temperature, the cooling process may last for many months after the concrete is placed. Various methods also are used to pre-cool the concrete mix in mass concrete structures.",
"title": "Construction with concrete"
},
{
"paragraph_id": 88,
"text": "Another approach to mass concrete structures that minimizes cement's thermal by-product is the use of roller-compacted concrete, which uses a dry mix which has a much lower cooling requirement than conventional wet placement. It is deposited in thick layers as a semi-dry material then roller compacted into a dense, strong mass.",
"title": "Construction with concrete"
},
{
"paragraph_id": 89,
"text": "Raw concrete surfaces tend to be porous and have a relatively uninteresting appearance. Many finishes can be applied to improve the appearance and preserve the surface against staining, water penetration, and freezing.",
"title": "Construction with concrete"
},
{
"paragraph_id": 90,
"text": "Examples of improved appearance include stamped concrete where the wet concrete has a pattern impressed on the surface, to give a paved, cobbled or brick-like effect, and may be accompanied with coloration. Another popular effect for flooring and table tops is polished concrete where the concrete is polished optically flat with diamond abrasives and sealed with polymers or other sealants.",
"title": "Construction with concrete"
},
{
"paragraph_id": 91,
"text": "Other finishes can be achieved with chiseling, or more conventional techniques such as painting or covering it with other materials.",
"title": "Construction with concrete"
},
{
"paragraph_id": 92,
"text": "The proper treatment of the surface of concrete, and therefore its characteristics, is an important stage in the construction and renovation of architectural structures.",
"title": "Construction with concrete"
},
{
"paragraph_id": 93,
"text": "Prestressed concrete is a form of reinforced concrete that builds in compressive stresses during construction to oppose tensile stresses experienced in use. This can greatly reduce the weight of beams or slabs, by better distributing the stresses in the structure to make optimal use of the reinforcement. For example, a horizontal beam tends to sag. Prestressed reinforcement along the bottom of the beam counteracts this. In pre-tensioned concrete, the prestressing is achieved by using steel or polymer tendons or bars that are subjected to a tensile force prior to casting, or for post-tensioned concrete, after casting.",
"title": "Construction with concrete"
},
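A standard textbook relation (not a formula given above) may help show how prestressing offsets tension. For a simply supported beam with prestressing force P applied at eccentricity e below the centroid, cross-sectional area A, second moment of area I, and applied bending moment M, the stress at the bottom fibre a distance y below the centroid is the superposition

\sigma_{\text{bottom}} = -\frac{P}{A} - \frac{P\,e\,y}{I} + \frac{M\,y}{I}

with compression taken as negative. The first two terms are the compressive reserve built in by the tendons; the bottom fibre stays free of tensile cracking as long as the bending term does not exceed that reserve plus the concrete's small tensile capacity.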
{
"paragraph_id": 94,
"text": "There are two different systems being used:",
"title": "Construction with concrete"
},
{
"paragraph_id": 95,
"text": "More than 55,000 miles (89,000 km) of highways in the United States are paved with this material. Reinforced concrete, prestressed concrete and precast concrete are the most widely used types of concrete functional extensions in modern days. For more information see Brutalist architecture.",
"title": "Construction with concrete"
},
{
"paragraph_id": 96,
"text": "Once mixed, concrete is typically transported to the place where it is intended to become a structural item. Various methods of transportation and placement are used depending on the distances involve, quantity needed, and other details of application. Large amounts are often transported by truck, poured free under gravity or through a tremie, or pumped through a pipe. Smaller amounts may be carried in a skip (a metal container which can be tilted or opened to release the contents, usually transported by crane or hoist), or wheelbarrow, or carried in toggle bags for manual placement underwater.",
"title": "Construction with concrete"
},
{
"paragraph_id": 97,
"text": "Extreme weather conditions (extreme heat or cold; windy conditions, and humidity variations) can significantly alter the quality of concrete. Many precautions are observed in cold weather placement. Low temperatures significantly slow the chemical reactions involved in hydration of cement, thus affecting the strength development. Preventing freezing is the most important precaution, as formation of ice crystals can cause damage to the crystalline structure of the hydrated cement paste. If the surface of the concrete pour is insulated from the outside temperatures, the heat of hydration will prevent freezing.",
"title": "Construction with concrete"
},
{
"paragraph_id": 98,
"text": "The American Concrete Institute (ACI) definition of cold weather placement, ACI 306, is:",
"title": "Construction with concrete"
},
{
"paragraph_id": 99,
"text": "In Canada, where temperatures tend to be much lower during the cold season, the following criteria are used by CSA A23.1:",
"title": "Construction with concrete"
},
{
"paragraph_id": 100,
"text": "The minimum strength before exposing concrete to extreme cold is 500 psi (3.4 MPa). CSA A 23.1 specified a compressive strength of 7.0 MPa to be considered safe for exposure to freezing.",
"title": "Construction with concrete"
},
{
"paragraph_id": 101,
"text": "Concrete may be placed and cured underwater. Care must be taken in the placement method to prevent washing out the cement. Underwater placement methods include the tremie, pumping, skip placement, manual placement using toggle bags, and bagwork.",
"title": "Construction with concrete"
},
{
"paragraph_id": 102,
"text": "Grouted aggregate is an alternative method of forming a concrete mass underwater, where the forms are filled with coarse aggregate and the voids then completely filled with pumped grout.",
"title": "Construction with concrete"
},
{
"paragraph_id": 103,
"text": "Concrete roads are more fuel efficient to drive on, more reflective and last significantly longer than other paving surfaces, yet have a much smaller market share than other paving solutions. Modern-paving methods and design practices have changed the economics of concrete paving, so that a well-designed and placed concrete pavement will be less expensive on initial costs and significantly less expensive over the life cycle. Another major benefit is that pervious concrete can be used, which eliminates the need to place storm drains near the road, and reducing the need for slightly sloped roadway to help rainwater to run off. No longer requiring discarding rainwater through use of drains also means that less electricity is needed (more pumping is otherwise needed in the water-distribution system), and no rainwater gets polluted as it no longer mixes with polluted water. Rather, it is immediately absorbed by the ground.",
"title": "Construction with concrete"
},
{
"paragraph_id": 104,
"text": "The manufacture and use of concrete produce a wide range of environmental, economic and social impacts.",
"title": "Environment, health and safety"
},
{
"paragraph_id": 105,
"text": "A major component of concrete is cement, a fine powder used mainly to bind sand and coarser aggregates together in concrete. Although a variety of cement types exist, the most common is \"Portland cement\", which is produced by mixing clinker with smaller quantities of other additives such as gypsum and ground limestone. The production of clinker, the main constituent of cement, is responsible for the bulk of the sector's greenhouse gas emissions, including both energy intensity and process emissions.",
"title": "Environment, health and safety"
},
{
"paragraph_id": 106,
"text": "The cement industry is one of the three primary producers of carbon dioxide, a major greenhouse gas – the other two being energy production and transportation industries. On average, every tonne of cement produced releases one tonne of CO2 into the atmosphere. Pioneer cement manufacturers have claimed to reach lower carbon intensities, with 590 kg of CO2eq per tonne of cement produced. The emissions are due to combustion and calcination processes, which roughly account for 40% and 60% of the greenhouse gases, respectively. Considering that cement is only a fraction of the constituents of concrete, it is estimated that a tonne of concrete is responsible for emitting about 100–200 kg of CO2. Every year more than 10 billion tonnes of concrete are used worldwide. In the coming years, large quantities of concrete will continue to be used, and the mitigation of CO2 emissions from the sector will be even more critical.",
"title": "Environment, health and safety"
},
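A quick arithmetic check of the figures quoted above, using only the numbers already given in that paragraph:

# Back-of-the-envelope check using the figures quoted above (no new data).
co2_per_tonne_concrete_kg = (100, 200)   # kg CO2 per tonne of concrete (from the text)
concrete_tonnes_per_year = 10e9          # tonnes of concrete used worldwide per year (from the text)

low = co2_per_tonne_concrete_kg[0] / 1000 * concrete_tonnes_per_year    # tonnes of CO2
high = co2_per_tonne_concrete_kg[1] / 1000 * concrete_tonnes_per_year
print(f"Implied global CO2 from concrete: {low / 1e9:.0f}-{high / 1e9:.0f} billion tonnes per year")
# Roughly 1-2 billion tonnes of CO2 per year from concrete alone.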
{
"paragraph_id": 107,
"text": "Concrete is used to create hard surfaces that contribute to surface runoff, which can cause heavy soil erosion, water pollution, and flooding, but conversely can be used to divert, dam, and control flooding. Concrete dust released by building demolition and natural disasters can be a major source of dangerous air pollution. Concrete is a contributor to the urban heat island effect, though less so than asphalt.",
"title": "Environment, health and safety"
},
{
"paragraph_id": 108,
"text": "Reducing the cement clinker content might have positive effects on the environmental life-cycle assessment of concrete. Some research work on reducing the cement clinker content in concrete has already been carried out. However, there exist different research strategies. Often replacement of some clinker for large amounts of slag or fly ash was investigated based on conventional concrete technology. This could lead to a waste of scarce raw materials such as slag and fly ash. The aim of other research activities is the efficient use of cement and reactive materials like slag and fly ash in concrete based on a modified mix design approach.",
"title": "Environment, health and safety"
},
{
"paragraph_id": 109,
"text": "An environmental investigation found that the embodied carbon of a precast concrete facade can be reduced by 50% when using the presented fiber reinforced high performance concrete in place of typical reinforced concrete cladding.",
"title": "Environment, health and safety"
},
{
"paragraph_id": 110,
"text": "Studies have been conducted about commercialization of low-carbon concretes. Life cycle assessment (LCA) of low-carbon concrete was investigated according to the ground granulated blast-furnace slag (GGBS) and fly ash (FA) replacement ratios. Global warming potential (GWP) of GGBS decreased by 1.1 kg CO2 eq/m, while FA decreased by 17.3 kg CO2 eq/m when the mineral admixture replacement ratio was increased by 10%. This study also compared the compressive strength properties of binary blended low-carbon concrete according to the replacement ratios, and the applicable range of mixing proportions was derived.",
"title": "Environment, health and safety"
},
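A minimal sketch of how the per-10% reductions quoted above would scale with the replacement ratio, assuming the relationship stays linear; the baseline GWP value is a placeholder assumption, not a figure from the study.

# Linear extrapolation of the per-10% GWP reductions quoted above.
def gwp_saving(replacement_pct, saving_per_10pct):
    """kg CO2 eq/m^3 saved for a given mineral-admixture replacement ratio."""
    return (replacement_pct / 10.0) * saving_per_10pct

baseline_gwp = 300.0  # kg CO2 eq/m^3 for a reference mix (placeholder assumption)
for admixture, per_10 in (("GGBS", 1.1), ("Fly ash", 17.3)):
    saved = gwp_saving(30, per_10)   # e.g. a 30% replacement ratio
    print(f"{admixture} at 30% replacement: about {saved:.1f} kg CO2 eq/m^3 saved "
          f"({saved / baseline_gwp:.1%} of the assumed baseline)")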
{
"paragraph_id": 111,
"text": "Researchers at University of Auckland are working on utilizing biochar in concrete applications to reduce carbon emissions during concrete production and to improve strength.",
"title": "Environment, health and safety"
},
{
"paragraph_id": 112,
"text": "High-performance building materials will be particularly important for enhancing resilience, including for flood defenses and critical-infrastructure protection. Risks to infrastructure and cities posed by extreme weather events are especially serious for those places exposed to flood and hurricane damage, but also where residents need protection from extreme summer temperatures. Traditional concrete can come under strain when exposed to humidity and higher concentrations of atmospheric CO2. While concrete is likely to remain important in applications where the environment is challenging, novel, smarter and more adaptable materials are also needed.",
"title": "Environment, health and safety"
},
{
"paragraph_id": 113,
"text": "Grinding of concrete can produce hazardous dust. Exposure to cement dust can lead to issues such as silicosis, kidney disease, skin irritation and similar effects. The U.S. National Institute for Occupational Safety and Health in the United States recommends attaching local exhaust ventilation shrouds to electric concrete grinders to control the spread of this dust. In addition, the Occupational Safety and Health Administration (OSHA) has placed more stringent regulations on companies whose workers regularly come into contact with silica dust. An updated silica rule, which OSHA put into effect 23 September 2017 for construction companies, restricted the amount of breathable crystalline silica workers could legally come into contact with to 50 micro grams per cubic meter of air per 8-hour workday. That same rule went into effect 23 June 2018 for general industry, hydraulic fracturing and maritime. That deadline was extended to 23 June 2021 for engineering controls in the hydraulic fracturing industry. Companies which fail to meet the tightened safety regulations can face financial charges and extensive penalties. The presence of some substances in concrete, including useful and unwanted additives, can cause health concerns due to toxicity and radioactivity. Fresh concrete (before curing is complete) is highly alkaline and must be handled with proper protective equipment.",
"title": "Environment, health and safety"
},
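To make the 50 micrograms per cubic metre limit concrete, exposure is assessed as an 8-hour time-weighted average (TWA); the sampled concentrations and durations in this sketch are hypothetical, not measurements from any cited study.

# 8-hour time-weighted average (TWA) check against the OSHA limit quoted above.
# The concentrations and durations below are hypothetical example values.
PEL_UG_M3 = 50.0  # respirable crystalline silica, micrograms per m^3, 8-hour TWA

samples = [          # (concentration in ug/m^3, duration in hours) over one shift
    (120.0, 2.0),    # dry grinding without a shroud (hypothetical)
    (20.0, 4.0),     # grinding with local exhaust ventilation (hypothetical)
    (5.0, 2.0),      # other tasks (hypothetical)
]
twa = sum(conc * hours for conc, hours in samples) / 8.0
status = "over" if twa > PEL_UG_M3 else "within"
print(f"8-hour TWA: {twa:.1f} ug/m^3 -> {status} the 50 ug/m^3 limit")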
{
"paragraph_id": 114,
"text": "Concrete is an excellent material with which to make long-lasting and energy-efficient buildings. However, even with good design, human needs change and potential waste will be generated.",
"title": "Circular economy"
},
{
"paragraph_id": 115,
"text": "Concrete can be damaged by many processes, such as the expansion of corrosion products of the steel reinforcement bars, freezing of trapped water, fire or radiant heat, aggregate expansion, sea water effects, bacterial corrosion, leaching, erosion by fast-flowing water, physical damage and chemical damage (from carbonatation, chlorides, sulfates and distillate water). The micro fungi Aspergillus alternaria and Cladosporium were able to grow on samples of concrete used as a radioactive waste barrier in the Chernobyl reactor; leaching aluminium, iron, calcium, and silicon.",
"title": "Circular economy"
},
{
"paragraph_id": 116,
"text": "Concrete may be considered waste according to the European Commission decision of 2014/955/EU for the List of Waste under the codes: 17 (construction and demolition wastes, including excavated soil from contaminated sites) 01 (concrete, bricks, tiles and ceramics), 01 (concrete), and 17.01.06* (mixtures of, separate fractions of concrete, bricks, tiles and ceramics containing hazardous substances), and 17.01.07 (mixtures of, separate fractions of concrete, bricks, tiles and ceramics other than those mentioned in 17.01.06). It is estimated that in 2018 the European Union generated 371,910 thousand tons of mineral waste from construction and demolition, and close to 4% of this quantity is considered hazardous. Germany, France and the United Kingdom were the top three polluters with 86,412 thousand tons, 68,976 and 68,732 thousand tons of construction waste generation, respectively.",
"title": "Circular economy"
},
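The shares implied by the figures in that paragraph can be checked directly, with no new data introduced:

# Shares implied by the 2018 EU figures quoted above (thousand tonnes).
eu_total = 371_910
top3 = {"Germany": 86_412, "France": 68_976, "United Kingdom": 68_732}

hazardous = 0.04 * eu_total                    # "close to 4%" of the total
top3_share = sum(top3.values()) / eu_total
print(f"Hazardous fraction: about {hazardous:,.0f} thousand tonnes")
print(f"Top three countries: roughly {top3_share:.0%} of the EU total")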
{
"paragraph_id": 117,
"text": "Currently, there is not an End-of-Waste criteria for concrete materials in the EU. However, different sectors have been proposing alternatives for concrete waste and re purposing it as a secondary raw material in various applications, including concrete manufacturing itself.",
"title": "Circular economy"
},
{
"paragraph_id": 118,
"text": "Reuse of blocks in original form, or by cutting into smaller blocks, has even less environmental impact; however, only a limited market currently exists. Improved building designs that allow for slab reuse and building transformation without demolition could increase this use. Hollow core concrete slabs are easy to dismantle and the span is normally constant, making them good for reuse.",
"title": "Circular economy"
},
{
"paragraph_id": 119,
"text": "Other cases of re-use are possible with pre-cast concrete pieces: through selective demolition, such pieces can be disassembled and collected for further use in other building sites. Studies show that back-building and remounting plans for building units (i.e., re-use of pre-fabricated concrete) is an alternative for a kind of construction which protects resources and saves energy. Especially long-living, durable, energy-intensive building materials, such as concrete, can be kept in the life-cycle longer through recycling. Prefabricated constructions are the prerequisites for constructions necessarily capable of being taken apart. In the case of optimal application in the building carcass, savings in costs are estimated in 26%, a lucrative complement to new building methods. However, this depends on several courses to be set. The viability of this alternative has to be studied as the logistics associated with transporting heavy pieces of concrete can impact the operation financially and also increase the carbon footprint of the project. Also, ever changing regulations on new buildings worldwide may require higher quality standards for construction elements and inhibit the use of old elements which may be classified as obsolete.",
"title": "Circular economy"
},
{
"paragraph_id": 120,
"text": "Concrete recycling is an increasingly common method for disposing of concrete structures. Concrete debris were once routinely shipped to landfills for disposal, but recycling is increasing due to improved environmental awareness, governmental laws and economic benefits.",
"title": "Circular economy"
},
{
"paragraph_id": 121,
"text": "Contrary to general belief, concrete recovery is achievable – concrete can be crushed and reused as aggregate in new projects.",
"title": "Circular economy"
},
{
"paragraph_id": 122,
"text": "Recycling or recovering concrete reduces natural resource exploitation and associated transportation costs, and reduces waste landfill. However, it has little impact on reducing greenhouse gas emissions as most emissions occur when cement is made, and cement alone cannot be recycled. At present, most recovered concrete is used for road sub-base and civil engineering projects. From a sustainability viewpoint, these relatively low-grade uses currently provide the optimal outcome.",
"title": "Circular economy"
},
{
"paragraph_id": 123,
"text": "The recycling process can be done in situ, with mobile plants, or in specific recycling units. The input material can be returned concrete which is fresh (wet) from ready-mix trucks, production waste at a pre-cast production facility, or waste from construction and demolition. The most significant source is demolition waste, preferably pre-sorted from selective demolition processes.",
"title": "Circular economy"
},
{
"paragraph_id": 124,
"text": "By far the most common method for recycling dry and hardened concrete involves crushing. Mobile sorters and crushers are often installed on construction sites to allow on-site processing. In other situations, specific processing sites are established, which are usually able to produce higher quality aggregate. Screens are used to achieve desired particle size, and remove dirt, foreign particles and fine material from the coarse aggregate.",
"title": "Circular economy"
},
{
"paragraph_id": 125,
"text": "Chloride and sulfates are undesired contaminants originated from soil and weathering and can provoke corrosion problems on aluminium and steel structures. The final product, Recycled Concrete Aggregate (RCA), presents interesting properties such as: angular shape, rougher surface, lower specific gravity (20%), higher water absorption, and pH greater than 11 – this elevated pH increases the risk of alkali reactions.",
"title": "Circular economy"
},
{
"paragraph_id": 126,
"text": "The lower density of RCA usually Increases project efficiency and improve job cost – recycled concrete aggregates yield more volume by weight (up to 15%). The physical properties of coarse aggregates made from crushed demolition concrete make it the preferred material for applications such as road base and sub-base. This is because recycled aggregates often have better compaction properties and require less cement for sub-base uses. Furthermore, it is generally cheaper to obtain than virgin material.",
"title": "Circular economy"
},
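The link between RCA's lower density and its extra volume per tonne is simple to show; the density reductions used below are illustrative values chosen to bracket the figures quoted above.

# Relationship between a lower aggregate density and the extra bulk volume per tonne.
# The density reductions below are illustrative values bracketing the figures above.
def extra_volume(density_reduction):
    """Fractional extra volume per tonne for a given fractional density reduction."""
    return 1.0 / (1.0 - density_reduction) - 1.0

for reduction in (0.13, 0.20):
    print(f"{reduction:.0%} lower density -> about {extra_volume(reduction):.0%} more volume per tonne")
# A ~13% lower density gives ~15% more volume; a 20% lower density gives ~25% more.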
{
"paragraph_id": 127,
"text": "The main commercial applications of the final recycled concrete aggregate are:",
"title": "Circular economy"
},
{
"paragraph_id": 128,
"text": "The applications developed for RCA so far are not exhaustive, and many more uses are to be developed as regulations, institutions and norms find ways to accommodate construction and demolition waste as secondary raw materials in a safe and economic way. However, considering the purpose of having a circularity of resources in the concrete life cycle, the only application of RCA that could be considered as recycling of concrete is the replacement of natural aggregates on concrete mixes. All the other applications would fall under the category of downcycling. It is estimated that even near complete recovery of concrete from construction and demolition waste will only supply about 20% of total aggregate needs in the developed world.",
"title": "Circular economy"
},
{
"paragraph_id": 129,
"text": "The path towards circularity goes beyond concrete technology itself, depending on multilateral advances in the cement industry, research and development of alternative materials, building design and management, and demolition as well as conscious use of spaces in urban areas to reduce consumption.",
"title": "Circular economy"
},
{
"paragraph_id": 130,
"text": "The world record for the largest concrete pour in a single project is the Three Gorges Dam in Hubei Province, China by the Three Gorges Corporation. The amount of concrete used in the construction of the dam is estimated at 16 million cubic meters over 17 years. The previous record was 12.3 million cubic meters held by Itaipu hydropower station in Brazil.",
"title": "World records"
},
{
"paragraph_id": 131,
"text": "The world record for concrete pumping was set on 7 August 2009 during the construction of the Parbati Hydroelectric Project, near the village of Suind, Himachal Pradesh, India, when the concrete mix was pumped through a vertical height of 715 m (2,346 ft).",
"title": "World records"
},
{
"paragraph_id": 132,
"text": "The Polavaram dam works in Andhra Pradesh on 6 January 2019 entered the Guinness World Records by pouring 32,100 cubic metres of concrete in 24 hours. The world record for the largest continuously poured concrete raft was achieved in August 2007 in Abu Dhabi by contracting firm Al Habtoor-CCC Joint Venture and the concrete supplier is Unibeton Ready Mix. The pour (a part of the foundation for the Abu Dhabi's Landmark Tower) was 16,000 cubic meters of concrete poured within a two-day period. The previous record, 13,200 cubic meters poured in 54 hours despite a severe tropical storm requiring the site to be covered with tarpaulins to allow work to continue, was achieved in 1992 by joint Japanese and South Korean consortiums Hazama Corporation and the Samsung C&T Corporation for the construction of the Petronas Towers in Kuala Lumpur, Malaysia.",
"title": "World records"
},
{
"paragraph_id": 133,
"text": "The world record for largest continuously poured concrete floor was completed 8 November 1997, in Louisville, Kentucky by design-build firm EXXCEL Project Management. The monolithic placement consisted of 225,000 square feet (20,900 m) of concrete placed in 30 hours, finished to a flatness tolerance of FF 54.60 and a levelness tolerance of FL 43.83. This surpassed the previous record by 50% in total volume and 7.5% in total area.",
"title": "World records"
},
{
"paragraph_id": 134,
"text": "The record for the largest continuously placed underwater concrete pour was completed 18 October 2010, in New Orleans, Louisiana by contractor C. J. Mahan Construction Company, LLC of Grove City, Ohio. The placement consisted of 10,251 cubic yards of concrete placed in 58.5 hours using two concrete pumps and two dedicated concrete batch plants. Upon curing, this placement allows the 50,180-square-foot (4,662 m) cofferdam to be dewatered approximately 26 feet (7.9 m) below sea level to allow the construction of the Inner Harbor Navigation Canal Sill & Monolith Project to be completed in the dry.",
"title": "World records"
}
] | Concrete is a composite material composed of aggregate bonded together with a fluid cement that cures over time. Concrete is the second-most-used substance in the world after water, and is the most widely used building material. Its usage worldwide, ton for ton, is twice that of steel, wood, plastics, and aluminium combined. When aggregate is mixed with dry Portland cement and water, the mixture forms a fluid slurry that is easily poured and molded into shape. The cement reacts with the water through a process called concrete hydration that hardens it over several hours to form a hard matrix that binds the materials together into a durable stone-like material that has many uses. This time allows concrete to not only be cast in forms, but also to have a variety of tooled processes performed. The hydration process is exothermic, which means ambient temperature plays a significant role in how long it takes concrete to set. Often, additives are included in the mixture to improve the physical properties of the wet mix, delay or accelerate the curing time, or otherwise change the finished material. Most concrete is poured with reinforcing materials embedded to provide tensile strength, yielding reinforced concrete. In the past, lime based cement binders, such as lime putty, were often used but sometimes with other hydraulic cements, such as a calcium aluminate cement or with Portland cement to form Portland cement concrete. Many other non-cementitious types of concrete exist with other methods of binding aggregate together, including asphalt concrete with a bitumen binder, which is frequently used for road surfaces, and polymer concretes that use polymers as a binder. Concrete is distinct from mortar. Whereas concrete is itself a building material, mortar is a bonding agent that typically holds bricks, tiles and other masonry units together. Grout is another material associated with concrete and cement. It does not contain coarse aggregates and is usually either pourable or thixotropic, and is used to fill gaps between masonry components or coarse aggregate which has already been put in place. Some methods of concrete manufacture and repair involve pumping grout into the gaps to make up a solid mass in situ. | 2001-04-03T16:23:33Z | 2023-12-07T07:49:32Z | [
"Template:Use dmy dates",
"Template:Main",
"Template:Nowrap",
"Template:CO2",
"Template:Cite book",
"Template:ISBN",
"Template:Road types",
"Template:More citations needed",
"Template:Em",
"Template:Main article",
"Template:Cite web",
"Template:Space",
"Template:Section link",
"Template:Citation needed",
"Template:Annotated link",
"Template:Cite arXiv",
"Template:Webarchive",
"Template:Distinguish",
"Template:Reflist",
"Template:Cite journal",
"Template:Commons category-inline",
"Template:Authority control",
"Template:Cite conference",
"Template:Short description",
"Template:About",
"Template:Toclimit",
"Template:Convert",
"Template:Cite news",
"Template:Cite encyclopedia",
"Template:Stonemasonry",
"Template:See also",
"Template:Cn",
"Template:Citation",
"Template:Page needed",
"Template:Skeptoid",
"Template:YouTube",
"Template:Components of Cement, Comparison of Chemical and Physical Characteristics",
"Template:Visible anchor",
"Template:Concrete navbox"
] | https://en.wikipedia.org/wiki/Concrete |
5,373 | Coitus interruptus | Coitus interruptus, also known as withdrawal, pulling out or the pull-out method, is a method of birth control during penetrative sexual intercourse, whereby the penis is withdrawn from a vagina prior to ejaculation so that the ejaculate (semen) may be directed away from the vagina in an effort to avoid insemination.
This method was used by an estimated 38 million couples worldwide in 1991. Coitus interruptus does not protect against sexually transmitted infections (STIs/STDs).
Perhaps the oldest description of the use of the withdrawal method to avoid pregnancy is the story of Onan in the Torah and the Bible. This text is believed to have been written down over 2,500 years ago. Societies in the ancient civilizations of Greece and Rome preferred small families and are known to have practiced a variety of birth control methods. There are references that have led historians to believe withdrawal was sometimes used as birth control. However, these societies viewed birth control as a woman's responsibility, and the only well-documented contraception methods were female-controlled devices (both possibly effective, such as pessaries, and ineffective, such as amulets).
After the decline of the Roman Empire in the 5th century AD, contraceptive practices fell out of use in Europe; the use of contraceptive pessaries, for example, is not documented again until the 15th century. If withdrawal was used during the Roman Empire, knowledge of the practice may have been lost during its decline.
From the 18th century until the development of modern methods, withdrawal was one of the most popular methods of birth-control in Europe, North America, and elsewhere.
Like many methods of birth control, reliable effect is achieved only by correct and consistent use. Observed failure rates of withdrawal vary depending on the population being studied: American studies have found actual failure rates of 15–28% per year. One U.S. study, based on self-reported data from the 2006-2010 cycle of the National Survey of Family Growth, found significant differences in failure rate based on parity status. Women with 0 previous births had a 12-month failure rate of only 8.4%, which then increased to 20.4% for those with 1 prior birth and again to 27.7% for those with 2 or more.
An analysis of Demographic and Health Surveys in 43 developing countries between 1990 and 2013 found a median 12-month failure rate across subregions of 13.4%, with a range of 7.8-17.1%. Individual countries within the subregions were even more varied. A large scale study of women in England and Scotland during 1968–1974 to determine the efficacy of various contraceptive methods found a failure rate of 6.7 per 100 woman-years of use. This was a “typical use” failure rate, including user failure to use the method correctly. In comparison, the combined oral contraceptive pill has an actual use failure rate of 2–8%, while intrauterine devices (IUDs) have an actual use failure rate of 0.1–0.8%. Condoms have an actual use failure rate of 10–18%. However, some authors suggest that actual effectiveness of withdrawal could be similar to the effectiveness of condoms; this area needs further research. (See Comparison of birth control methods.)
For couples that use coitus interruptus consistently and correctly at every act of intercourse, the failure rate is 4% per year. This rate is derived from an educated guess based on a modest chance of sperm in the pre-ejaculate. In comparison, the pill has a perfect-use failure rate of 0.3%, IUDs a rate of 0.1-0.6%, and internal condoms a rate of 2%.
It has been suggested that the pre-ejaculate ("Cowper's fluid") emitted by the penis prior to ejaculation may contain spermatozoa (sperm cells), which would compromise the effectiveness of the method. However, several small studies have failed to find any viable sperm in the fluid. While no large conclusive studies have been done, it is believed by some that the cause of method (correct-use) failure is the pre-ejaculate fluid picking up sperm from a previous ejaculation. For this reason, it is recommended that the male partner urinate between ejaculations, to clear the urethra of sperm, and wash any ejaculate from objects that might come near the woman's vulva (e.g. hands and penis).
However, recent research suggests that this might not be accurate. A contrary, yet non-generalizable study that found mixed evidence, including individual cases of a high sperm concentration, was published in March 2011. A noted limitation to these previous studies' findings is that pre-ejaculate samples were analyzed after the critical two-minute point. That is, looking for motile sperm in small amounts of pre-ejaculate via microscope after two minutes – when the sample has most likely dried – makes examination and evaluation "extremely difficult". Thus, in March 2011 a team of researchers assembled 27 male volunteers and analyzed their pre-ejaculate samples within two minutes after producing them. The researchers found that 11 of the 27 men (41%) produced pre-ejaculatory samples that contained sperm, and 10 of these samples (37%) contained a "fair amount" of motile sperm (i.e. as few as 1 million to as many as 35 million). This study therefore recommends, in order to minimize unintended pregnancy and disease transmission, the use of condoms from the first moment of genital contact. As a point of reference, a study showed that, of couples who conceived within a year of trying, only 2.5% included a male partner with a total sperm count (per ejaculate) of 23 million sperm or less. However, across a wide range of observed values, total sperm count (as with other identified semen and sperm characteristics) has weak power to predict which couples are at risk of pregnancy. Regardless, this study introduced the concept that some men may consistently have sperm in their pre-ejaculate, due to a "leakage," while others may not.
Similarly, another robust study performed in 2016 found motile sperm in the pre-ejaculate of 16.7% (7/42) of healthy men. What is more, this study attempted to exclude contamination of sperm from ejaculate by drying the pre-ejaculate specimens to reveal a fern-like pattern, a characteristic of true pre-ejaculate. All pre-ejaculate specimens were examined within an hour of production and then dried; all pre-ejaculate specimens were found to be true pre-ejaculate.
It is widely believed that urinating after an ejaculation will flush the urethra of remaining sperm. However, some of the subjects in the March 2011 study who produced sperm in their pre-ejaculate did urinate (sometimes more than once) before producing their sample. Therefore, some males can release the pre-ejaculate fluid containing sperm without a previous ejaculation.
The advantage of coitus interruptus is that it can be used by people who have objections to, or do not have access to, other forms of contraception. Some people prefer it so they can avoid possible adverse effects of hormonal contraceptives or so that they can have a full experience and be able to "feel" their partner. Other reasons for the popularity of this method are it has no direct monetary cost, requires no artificial devices, has no physical side effects, can be practiced without a prescription or medical consultation, and provides no barriers to stimulation.
Compared to the other common reversible methods of contraception such as IUDs, hormonal contraceptives, and male condoms, coitus interruptus is less effective at preventing pregnancy. As a result, it is also less cost-effective than many more effective methods: although the method itself has no direct cost, users have a greater chance of incurring the risks and expenses of either child-birth or abortion. Only models that assume all couples practice perfect use of the method find cost savings associated with the choice of withdrawal as a birth control method.
The method is largely ineffective in the prevention of sexually transmitted infections (STIs/STDs), like HIV, since pre-ejaculate may carry viral particles or bacteria which may infect the partner if this fluid comes in contact with mucous membranes. However, a reduction in the volume of bodily fluids exchanged during intercourse may reduce the likelihood of disease transmission compared to using no method due to the smaller number of pathogens present.
Based on data from surveys conducted during the late 1990s, 3% of women of childbearing age worldwide rely on withdrawal as their primary method of contraception. Regional popularity of the method varies widely, from a low of 1% in Africa to 16% in Western Asia.
In the United States, according to the National Survey of Family Growth (NSFG) in 2014, 8.1% of reproductive-aged women reported using withdrawal as a primary contraceptive method. This was a significant increase from 2012, when 4.8% of women reported the use of withdrawal as their most effective method. However, when withdrawal is used in addition to or in rotation with another contraceptive method, the percentage of women using withdrawal jumps from 5% for sole use to 11% for any withdrawal use in 2002, and for adolescents from 7.1% for sole withdrawal use to 14.6% for any withdrawal use in 2006–2008.
When asked whether withdrawal was used at least once in the past month, women's reported use of withdrawal increased from 13% as sole use to 33% for any use in the past month. These increases are even more pronounced for adolescents 15 to 19 years old and young women 20 to 24 years old. Similarly, the NSFG reports that 9.8% of unmarried men who had had sexual intercourse in the last three months in 2002 used withdrawal, which then increased to 14.5% in 2006–2010, and then to 18.8% in 2011–2015. The use of withdrawal varied by the unmarried man's age and cohabiting status, but not by ethnicity or race. The use of withdrawal decreased significantly with increasing age groups, ranging from 26.2% among men aged 15–19 to 12% among men aged 35–44. The use of withdrawal was significantly higher for never-married men (23.0%) compared with formerly married (16.3%) and cohabiting (13.0%) men.
For 1998, about 18% of married men in Turkey reported using withdrawal as a contraceptive method. | [
{
"paragraph_id": 0,
"text": "Coitus interruptus, also known as withdrawal, pulling out or the pull-out method, is a method of birth control during penetrative sexual intercourse, whereby the penis is withdrawn from a vagina prior to ejaculation so that the ejaculate (semen) may be directed away from the vagina in an effort to avoid insemination.",
"title": ""
},
{
"paragraph_id": 1,
"text": "This method was used by an estimated 38 million couples worldwide in 1991. Coitus interruptus does not protect against sexually transmitted infections (STIs/STDs).",
"title": ""
},
{
"paragraph_id": 2,
"text": "Perhaps the oldest description of the use of the withdrawal method to avoid pregnancy is the story of Onan in the Torah and the Bible. This text is believed to have been written down over 2,500 years ago. Societies in the ancient civilizations of Greece and Rome preferred small families and are known to have practiced a variety of birth control methods. There are references that have led historians to believe withdrawal was sometimes used as birth control. However, these societies viewed birth control as a woman's responsibility, and the only well-documented contraception methods were female-controlled devices (both possibly effective, such as pessaries, and ineffective, such as amulets).",
"title": "History"
},
{
"paragraph_id": 3,
"text": "After the decline of the Roman Empire in the 5th century AD, contraceptive practices fell out of use in Europe; the use of contraceptive pessaries, for example, is not documented again until the 15th century. If withdrawal was used during the Roman Empire, knowledge of the practice may have been lost during its decline.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "From the 18th century until the development of modern methods, withdrawal was one of the most popular methods of birth-control in Europe, North America, and elsewhere.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Like many methods of birth control, reliable effect is achieved only by correct and consistent use. Observed failure rates of withdrawal vary depending on the population being studied: American studies have found actual failure rates of 15–28% per year. One U.S. study, based on self-reported data from the 2006-2010 cycle of the National Survey of Family Growth, found significant differences in failure rate based on parity status. Women with 0 previous births had a 12-month failure rate of only 8.4%, which then increased to 20.4% for those with 1 prior birth and again to 27.7% for those with 2 or more.",
"title": "Effects"
},
{
"paragraph_id": 6,
"text": "An analysis of Demographic and Health Surveys in 43 developing countries between 1990 and 2013 found a median 12-month failure rate across subregions of 13.4%, with a range of 7.8-17.1%. Individual countries within the subregions were even more varied. A large scale study of women in England and Scotland during 1968–1974 to determine the efficacy of various contraceptive methods found a failure rate of 6.7 per 100 woman-years of use. This was a “typical use” failure rate, including user failure to use the method correctly. In comparison, the combined oral contraceptive pill has an actual use failure rate of 2–8%, while intrauterine devices (IUDs) have an actual use failure rate of 0.1–0.8%. Condoms have an actual use failure rate of 10–18%. However, some authors suggest that actual effectiveness of withdrawal could be similar to the effectiveness of condoms; this area needs further research. (See Comparison of birth control methods.)",
"title": "Effects"
},
{
"paragraph_id": 7,
"text": "For couples that use coitus interruptus consistently and correctly at every act of intercourse, the failure rate is 4% per year. This rate is derived from an educated guess based on a modest chance of sperm in the pre-ejaculate. In comparison, the pill has a perfect-use failure rate of 0.3%, IUDs a rate of 0.1-0.6%, and internal condoms a rate of 2%.",
"title": "Effects"
},
{
"paragraph_id": 8,
"text": "It has been suggested that the pre-ejaculate (\"Cowper's fluid\") emitted by the penis prior to ejaculation may contain spermatozoa (sperm cells), which would compromise the effectiveness of the method. However, several small studies have failed to find any viable sperm in the fluid. While no large conclusive studies have been done, it is believed by some that the cause of method (correct-use) failure is the pre-ejaculate fluid picking up sperm from a previous ejaculation. For this reason, it is recommended that the male partner urinate between ejaculations, to clear the urethra of sperm, and wash any ejaculate from objects that might come near the woman's vulva (e.g. hands and penis).",
"title": "Effects"
},
{
"paragraph_id": 9,
"text": "However, recent research suggests that this might not be accurate. A contrary, yet non-generalizable study that found mixed evidence, including individual cases of a high sperm concentration, was published in March 2011. A noted limitation to these previous studies' findings is that pre-ejaculate samples were analyzed after the critical two-minute point. That is, looking for motile sperm in small amounts of pre-ejaculate via microscope after two minutes – when the sample has most likely dried – makes examination and evaluation \"extremely difficult\". Thus, in March 2011 a team of researchers assembled 27 male volunteers and analyzed their pre-ejaculate samples within two minutes after producing them. The researchers found that 11 of the 27 men (41%) produced pre-ejaculatory samples that contained sperm, and 10 of these samples (37%) contained a \"fair amount\" of motile sperm (i.e. as few as 1 million to as many as 35 million). This study therefore recommends, in order to minimize unintended pregnancy and disease transmission, the use of condoms from the first moment of genital contact. As a point of reference, a study showed that, of couples who conceived within a year of trying, only 2.5% included a male partner with a total sperm count (per ejaculate) of 23 million sperm or less. However, across a wide range of observed values, total sperm count (as with other identified semen and sperm characteristics) has weak power to predict which couples are at risk of pregnancy. Regardless, this study introduced the concept that some men may consistently have sperm in their pre-ejaculate, due to a \"leakage,\" while others may not.",
"title": "Effects"
},
{
"paragraph_id": 10,
"text": "Similarly, another robust study performed in 2016 found motile sperm in the pre-ejaculate of 16.7% (7/42) healthy men. What more, this study attempted to exclude contamination of sperm from ejaculate by drying the pre-ejaculate specimens to reveal a fern-like pattern, characteristics of true pre-ejaculate. All pre-ejaculate specimens were examined within an hour of production and then dried; all pre-ejaculate specimens were found to be true pre-ejaculate.",
"title": "Effects"
},
{
"paragraph_id": 11,
"text": "It is widely believed that urinating after an ejaculation will flush the urethra of remaining sperm. However, some of the subjects in the March 2011 study who produced sperm in their pre-ejaculate did urinate (sometimes more than once) before producing their sample. Therefore, some males can release the pre-ejaculate fluid containing sperm without a previous ejaculation.",
"title": "Effects"
},
{
"paragraph_id": 12,
"text": "The advantage of coitus interruptus is that it can be used by people who have objections to, or do not have access to, other forms of contraception. Some people prefer it so they can avoid possible adverse effects of hormonal contraceptives or so that they can have a full experience and be able to \"feel\" their partner. Other reasons for the popularity of this method are it has no direct monetary cost, requires no artificial devices, has no physical side effects, can be practiced without a prescription or medical consultation, and provides no barriers to stimulation.",
"title": "Advantages"
},
{
"paragraph_id": 13,
"text": "Compared to the other common reversible methods of contraception such as IUDs, hormonal contraceptives, and male condoms, coitus interruptus is less effective at preventing pregnancy. As a result, it is also less cost-effective than many more effective methods: although the method itself has no direct cost, users have a greater chance of incurring the risks and expenses of either child-birth or abortion. Only models that assume all couples practice perfect use of the method find cost savings associated with the choice of withdrawal as a birth control method.",
"title": "Disadvantages"
},
{
"paragraph_id": 14,
"text": "The method is largely ineffective in the prevention of sexually transmitted infections (STIs/STDs), like HIV, since pre-ejaculate may carry viral particles or bacteria which may infect the partner if this fluid comes in contact with mucous membranes. However, a reduction in the volume of bodily fluids exchanged during intercourse may reduce the likelihood of disease transmission compared to using no method due to the smaller number of pathogens present.",
"title": "Disadvantages"
},
{
"paragraph_id": 15,
"text": "Based on data from surveys conducted during the late 1990s, 3% of women of childbearing age worldwide rely on withdrawal as their primary method of contraception. Regional popularity of the method varies widely, from a low of 1% in Africa to 16% in Western Asia.",
"title": "Prevalence"
},
{
"paragraph_id": 16,
"text": "In the United States, according to the National Survey of Family Growth (NSFG) in 2014, 8.1% of reproductive-aged women reported using withdrawal as a primary contraceptive method. This was a significant increase from 2012 when 4.8% of women reported the use of withdrawal as their most effective method. However, when withdrawal is used in addition to or in rotation with another contraceptive method, the percentage of women using withdrawal jumps from 5% for sole use and 11% for any withdrawal use in 2002, and for adolescents from 7.1% of sole withdrawal use to 14.6% of any withdrawal use in 2006–2008.",
"title": "Prevalence"
},
{
"paragraph_id": 17,
"text": "When asked if withdrawal was used at least once in the past month by women, use of withdrawal increased from 13% as sole use to 33% ever use in the past month. These increases are even more pronounced for adolescents 15 to 19 years old and young women 20 to 24 years old Similarly, the NSFG reports that 9.8% of unmarried men who have had sexual intercourse in the last three months in 2002 used withdrawal, which then increased to 14.5% in 2006–2010, and then to 18.8% in 2011–2015. The use of withdrawal varied by the unmarried man's age and cohabiting status, but not by ethnicity or race. The use of withdrawal decreased significantly with increasing age groups, ranging from 26.2% among men aged 15–19 to 12% among men aged 35–44. The use of withdrawal was significantly higher for never-married men (23.0%) compared with formerly married (16.3%) and cohabiting (13.0%) men.",
"title": "Prevalence"
},
{
"paragraph_id": 18,
"text": "For 1998, about 18% of married men in Turkey reported using withdrawal as a contraceptive method.",
"title": "Prevalence"
}
] | Coitus interruptus, also known as withdrawal, pulling out or the pull-out method, is a method of birth control during penetrative sexual intercourse, whereby the penis is withdrawn from a vagina prior to ejaculation so that the ejaculate (semen) may be directed away from the vagina in an effort to avoid insemination. This method was used by an estimated 38 million couples worldwide in 1991. Coitus interruptus does not protect against sexually transmitted infections (STIs/STDs). | 2001-08-25T07:33:04Z | 2023-11-27T17:03:47Z | [
"Template:Spoken Wikipedia",
"Template:Birth control methods",
"Template:Redirect",
"Template:Italic title",
"Template:Cite journal",
"Template:Subscription required",
"Template:Bibleverse",
"Template:Cite book",
"Template:Short description",
"Template:Infobox birth control",
"Template:Rp",
"Template:Reflist",
"Template:Cite web"
] | https://en.wikipedia.org/wiki/Coitus_interruptus |
5,374 | Condom | A condom is a sheath-shaped barrier device used during sexual intercourse to reduce the probability of pregnancy or a sexually transmitted infection (STI). There are both male and female condoms.
The male condom is rolled onto an erect penis before intercourse and works by forming a physical barrier which blocks semen from entering the body of a sexual partner. Male condoms are typically made from latex and, less commonly, from polyurethane, polyisoprene, or lamb intestine. Male condoms have the advantages of ease of use, ease of access, and few side effects. Individuals with latex allergy should use condoms made from a material other than latex, such as polyurethane. Female condoms are typically made from polyurethane and may be used multiple times.
With proper use—and use at every act of intercourse—women whose partners use male condoms experience a 2% per-year pregnancy rate. With typical use, the rate of pregnancy is 18% per-year. Their use greatly decreases the risk of gonorrhea, chlamydia, trichomoniasis, hepatitis B, and HIV/AIDS. To a lesser extent, they also protect against genital herpes, human papillomavirus (HPV), and syphilis.
Condoms as a method of preventing STIs have been used since at least 1564. Rubber condoms became available in 1855, followed by latex condoms in the 1920s. The condom is on the World Health Organization's List of Essential Medicines. As of 2019, globally around 21% of those using birth control use the condom, making it the second-most common method after female sterilization (24%). Rates of condom use are highest in East and Southeast Asia, Europe and North America. About six to nine billion are sold a year.
The effectiveness of condoms, as of most forms of contraception, can be assessed two ways. Perfect use or method effectiveness rates only include people who use condoms properly and consistently. Actual use, or typical use effectiveness rates are of all condom users, including those who use condoms incorrectly or do not use condoms at every act of intercourse. Rates are generally presented for the first year of use. Most commonly the Pearl Index is used to calculate effectiveness rates, but some studies use decrement tables.
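As a sketch of how the Pearl Index works, it expresses observed pregnancies per 100 woman-years of exposure; the counts below are made-up example data, not results from any study cited here.

# Pearl Index: pregnancies per 100 woman-years of exposure.
# The pregnancy count and exposure below are made-up example data.
def pearl_index(pregnancies, woman_months_of_exposure):
    return pregnancies * 1200.0 / woman_months_of_exposure

# For example, 12 pregnancies among 700 women each followed for 12 months:
print(f"Pearl Index: {pearl_index(12, 700 * 12):.1f} per 100 woman-years")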
The typical use pregnancy rate among condom users varies depending on the population being studied, ranging from 10 to 18% per year. The perfect use pregnancy rate of condoms is 2% per year. Condoms may be combined with other forms of contraception (such as spermicide) for greater protection.
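Because these are per-year rates, the risk compounds over time. A small sketch, assuming the quoted annual rates stay constant from year to year (a simplification):

# Cumulative chance of at least one pregnancy over several years of use,
# assuming the per-year failure rates quoted above stay constant.
def cumulative_risk(annual_rate, years):
    return 1.0 - (1.0 - annual_rate) ** years

for label, rate in (("perfect use (2%/yr)", 0.02), ("typical use (18%/yr)", 0.18)):
    print(f"{label}: {cumulative_risk(rate, 5):.0%} chance of pregnancy within 5 years")
# Perfect use: about 10% over 5 years; typical use (upper estimate): about 63%.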
Condoms are widely recommended for the prevention of sexually transmitted infections (STIs). They have been shown to be effective in reducing infection rates in both men and women. While not perfect, the condom is effective at reducing the transmission of organisms that cause AIDS, genital herpes, cervical cancer, genital warts, syphilis, chlamydia, gonorrhea, and other diseases. Condoms are often recommended as an adjunct to more effective birth control methods (such as IUDs) in situations where STI protection is also desired. For this reason, condoms are frequently used by those in the swinging community.
According to a 2000 report by the National Institutes of Health (NIH), consistent use of latex condoms reduces the risk of HIV transmission by approximately 85% relative to risk when unprotected, putting the seroconversion rate (infection rate) at 0.9 per 100 person-years with condom, down from 6.7 per 100 person-years. Analysis published in 2007 from the University of Texas Medical Branch and the World Health Organization found similar risk reductions of 80–95%.
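The quoted reduction can be checked against the two incidence rates in the same sentence:

1 - \frac{0.9}{6.7} \approx 1 - 0.13 \approx 0.87

i.e. roughly an 85–87% lower seroconversion rate with consistent condom use, consistent with the approximately 85% figure reported.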
The 2000 NIH review concluded that condom use significantly reduces the risk of gonorrhea for men. A 2006 study reports that proper condom use decreases the risk of transmission of human papillomavirus (HPV) to women by approximately 70%. Another study in the same year found consistent condom use was effective at reducing transmission of herpes simplex virus-2, also known as genital herpes, in both men and women.
Although a condom is effective in limiting exposure, some disease transmission may occur even with a condom. Infectious areas of the genitals, especially when symptoms are present, may not be covered by a condom, and as a result, some diseases like HPV and herpes may be transmitted by direct contact. The primary effectiveness issue with using condoms to prevent STIs, however, is inconsistent use.
Condoms may also be useful in treating potentially precancerous cervical changes. Exposure to human papillomavirus, even in individuals already infected with the virus, appears to increase the risk of precancerous changes. The use of condoms helps promote regression of these changes. In addition, researchers in the UK suggest that a hormone in semen can aggravate existing cervical cancer; condom use during sex can prevent exposure to the hormone.
Condoms may slip off the penis after ejaculation, break due to improper application or physical damage (such as tears caused when opening the package), or break or slip due to latex degradation (typically from usage past the expiration date, improper storage, or exposure to oils). The rate of breakage is between 0.4% and 2.3%, while the rate of slippage is between 0.6% and 1.3%. Even if no breakage or slippage is observed, 1–3% of women will test positive for semen residue after intercourse with a condom. Failure rates are higher for anal sex, and until 2022, condoms were only approved by the FDA for vaginal sex. The One Male Condom received FDA approval for anal sex on 23 February 2022.
"Double bagging", using two condoms at once, is often believed to cause a higher rate of failure due to the friction of rubber on rubber. This claim is not supported by research. The limited studies that have been done found that the simultaneous use of multiple condoms decreases the risk of condom breakage.
Different modes of condom failure result in different levels of semen exposure. If a failure occurs during application, the damaged condom may be disposed of and a new condom applied before intercourse begins – such failures generally pose no risk to the user. One study found that semen exposure from a broken condom was about half that of unprotected intercourse; semen exposure from a slipped condom was about one-fifth that of unprotected intercourse.
Standard condoms will fit almost any penis, with varying degrees of comfort or risk of slippage. Many condom manufacturers offer "snug" or "magnum" sizes. Some manufacturers also offer custom sized-to-fit condoms, with claims that they are more reliable and offer improved sensation/comfort. Some studies have associated larger penises and smaller condoms with increased breakage and decreased slippage rates (and vice versa), but other studies have been inconclusive.
Condom manufacturers are advised to avoid very thick and very thin condoms, because both are considered less effective. Some authors encourage users to choose thinner condoms "for greater durability, sensation, and comfort", but others warn that "the thinner the condom, the smaller the force required to break it".
Experienced condom users are significantly less likely to have a condom slip or break compared to first-time users, although users who experience one slippage or breakage are more likely to suffer a second such failure. An article in Population Reports suggests that education on condom use reduces behaviors that increase the risk of breakage and slippage. A Family Health International publication also offers the view that education can reduce the risk of breakage and slippage, but emphasizes that more research needs to be done to determine all of the causes of breakage and slippage.
Among people who intend condoms to be their form of birth control, pregnancy may occur when the user has sex without a condom. The person may have run out of condoms, or be traveling and not have a condom with them, or dislike the feel of condoms and decide to "take a chance". This behavior is the primary cause of typical use failure (as opposed to method or perfect use failure).
Another possible cause of condom failure is sabotage. One motive is to have a child against a partner's wishes or consent. Some commercial sex workers from Nigeria reported clients sabotaging condoms in retaliation for being coerced into condom use. Using a fine needle to make several pinholes at the tip of the condom is believed to significantly reduce its effectiveness. Cases of such condom sabotage have occurred.
The use of latex condoms by people with an allergy to latex can cause allergic symptoms, such as skin irritation. In people with severe latex allergies, using a latex condom can potentially be life-threatening. Repeated use of latex condoms can also cause the development of a latex allergy in some people. Irritation may also occur due to spermicides that may be present.
Male condoms are usually packaged inside a foil or plastic wrapper, in a rolled-up form, and are designed to be applied to the tip of the penis and then unrolled over the erect penis. It is important that some space be left in the tip of the condom so that semen has a place to collect; otherwise it may be forced out of the base of the device. Most condoms have a teat end for this purpose. After use, it is recommended the condom be wrapped in tissue or tied in a knot, then disposed of in a trash receptacle. Condoms are used to reduce the likelihood of pregnancy during intercourse and to reduce the likelihood of contracting sexually transmitted infections (STIs). Condoms are also used during fellatio to reduce the likelihood of contracting STIs.
Some couples find that putting on a condom interrupts sex, although others incorporate condom application as part of their foreplay. Some men and women find the physical barrier of a condom dulls sensation. Advantages of dulled sensation can include prolonged erection and delayed ejaculation; disadvantages might include a loss of some sexual excitement. Advocates of condom use also cite their advantages of being inexpensive, easy to use, and having few side effects.
In 2012, proponents gathered 372,000 voter signatures through a citizens' initiative to place Measure B, a law requiring the use of condoms in the production of pornographic films, on the Los Angeles County ballot, and the measure passed. This requirement has received much criticism and is said by some to be counter-productive, merely forcing companies that make pornographic films to relocate to places without such a requirement. Producers claim that condom use depresses sales.
Condoms are often used in sex education programs, because they have the capability to reduce the chances of pregnancy and the spread of some sexually transmitted infections when used correctly. A recent American Psychological Association (APA) press release supported the inclusion of information about condoms in sex education, saying "comprehensive sexuality education programs ... discuss the appropriate use of condoms", and "promote condom use for those who are sexually active."
In the United States, teaching about condoms in public schools is opposed by some religious organizations. Planned Parenthood, which advocates family planning and sex education, argues that no studies have shown abstinence-only programs to result in delayed intercourse, and cites surveys showing that 76% of American parents want their children to receive comprehensive sexuality education including condom use.
Common procedures in infertility treatment such as semen analysis and intrauterine insemination (IUI) require collection of semen samples. These are most commonly obtained through masturbation, but an alternative to masturbation is use of a special collection condom to collect semen during sexual intercourse.
Collection condoms are made from silicone or polyurethane, as latex is somewhat harmful to sperm. They also provide an option for men whose religion prohibits masturbation entirely. Compared with samples obtained from masturbation, semen samples from collection condoms have higher total sperm counts, sperm motility, and percentage of sperm with normal morphology. For this reason, they are believed to give more accurate results when used for semen analysis, and to improve the chances of pregnancy when used in procedures such as intracervical or intrauterine insemination. Adherents of religions that prohibit contraception, such as Catholicism, may use collection condoms with holes pricked in them.
For fertility treatments, a collection condom may be used to collect semen during sexual intercourse where the semen is provided by the woman's partner. Private sperm donors may also use a collection condom to obtain samples through masturbation or by sexual intercourse with a partner and will transfer the ejaculate from the collection condom to a specially designed container. The sperm is transported in such containers, in the case of a donor, to a recipient woman to be used for insemination, and in the case of a woman's partner, to a fertility clinic for processing and use. However, transportation may reduce the fecundity of the sperm. Collection condoms may also be used where semen is produced at a sperm bank or fertility clinic.
Condom therapy is sometimes prescribed to infertile couples when the female has high levels of antisperm antibodies. The theory is that preventing exposure to her partner's semen will lower her level of antisperm antibodies, and thus increase her chances of pregnancy when condom therapy is discontinued. However, condom therapy has not been shown to increase subsequent pregnancy rates.
Condoms excel as multipurpose containers and barriers because they are waterproof, elastic, durable, and (for military and espionage uses) will not arouse suspicion if found.
Ongoing military utilization began during World War II, and includes covering the muzzles of rifle barrels to prevent fouling, the waterproofing of firing assemblies in underwater demolitions, and storage of corrosive materials and garrotes by paramilitary agencies.
Condoms have also been used to smuggle alcohol, cocaine, heroin, and other drugs across borders and into prisons by filling the condom with drugs, tying it in a knot and then either swallowing it or inserting it into the rectum. These methods are very dangerous and potentially lethal; if the condom breaks, the drugs inside become absorbed into the bloodstream and can cause an overdose.
Medically, condoms can be used to cover endovaginal ultrasound probes, or in field chest needle decompressions they can be used to make a one-way valve.
Condoms have also been used to protect scientific samples from the environment, and to waterproof microphones for underwater recording.
Most condoms have a reservoir tip or teat end, making it easier to accommodate the man's ejaculate. Condoms come in different sizes and shapes.
They also come in a variety of surfaces intended to stimulate the user's partner. Condoms are usually supplied with a lubricant coating to facilitate penetration, while flavored condoms are principally used for oral sex. As mentioned above, most condoms are made of latex, but polyurethane and lambskin condoms also exist.
Male condoms have a tight ring to form a seal around the penis, while female condoms usually have a large stiff ring to prevent them from slipping into the body orifice. The Female Health Company produced a female condom that was initially made of polyurethane, but newer versions are made of nitrile rubber. Medtech Products produces a female condom made of latex.
Latex has outstanding elastic properties: Its tensile strength exceeds 30 MPa, and latex condoms may be stretched in excess of 800% before breaking. In 1990 the ISO set standards for condom production (ISO 4074, Natural latex rubber condoms), and the EU followed suit with its CEN standard (Directive 93/42/EEC concerning medical devices). Every latex condom is tested for holes with an electric current. If the condom passes, it is rolled and packaged. In addition, a portion of each batch of condoms is subject to water leak and air burst testing.
While the advantages of latex have made it the most popular condom material, it does have some drawbacks. Latex condoms are damaged when used with oil-based substances as lubricants, such as petroleum jelly, cooking oil, baby oil, mineral oil, skin lotions, suntan lotions, cold creams, butter or margarine. Contact with oil makes latex condoms more likely to break or slip off due to loss of elasticity caused by the oils. Additionally, latex allergy precludes use of latex condoms and is one of the principal reasons for the use of other materials. In May 2009, the U.S. Food and Drug Administration (FDA) granted approval for the production of condoms composed of Vytex, latex that has been treated to remove 90% of the proteins responsible for allergic reactions. An allergen-free condom made of synthetic latex (polyisoprene) is also available.
The most common non-latex condoms are made from polyurethane. Condoms may also be made from other synthetic materials, such as AT-10 resin and, most recently, polyisoprene.
Polyurethane condoms tend to be the same width and thickness as latex condoms, with most polyurethane condoms between 0.04 mm and 0.07 mm thick.
Polyurethane can be considered better than latex in several ways: it conducts heat better than latex, is not as sensitive to temperature and ultraviolet light (and so has less rigid storage requirements and a longer shelf life), can be used with oil-based lubricants, is less allergenic than latex, and does not have an odor. Polyurethane condoms have gained FDA approval for sale in the United States as an effective method of contraception and HIV prevention, and under laboratory conditions have been shown to be just as effective as latex for these purposes.
However, polyurethane condoms are less elastic than latex ones; they may be more likely to slip or break, are more prone to losing their shape or bunching up, and are more expensive.
Polyisoprene is a synthetic version of natural rubber latex. While significantly more expensive, polyisoprene condoms have the advantages of latex (such as being softer and more elastic than polyurethane condoms) without the protein responsible for latex allergies. Unlike polyurethane condoms, however, they cannot be used with oil-based lubricants.
Condoms made from sheep intestines, labeled "lambskin", are also available. Although they are generally effective as a contraceptive by blocking sperm, it is presumed that they are less effective than latex in preventing the transmission of sexually transmitted infections because of pores in the material. This is based on the idea that intestines, by their nature, are porous, permeable membranes, and while sperm are too large to pass through the pores, the viruses that cause HIV, herpes, and genital warts are small enough to pass through. However, there are to date no clinical data confirming or denying this theory.
As a result of laboratory data on condom porosity, in 1989, the FDA began requiring lambskin condom manufacturers to indicate that the products were not to be used for the prevention of sexually transmitted infections. This was based on the presumption that lambskin condoms would be less effective than latex in preventing HIV transmission, rather than a conclusion that lambskin condoms lack efficacy in STI prevention altogether. An FDA publication in 1992 states that lambskin condoms "provide good birth control and a varying degree of protection against some, but not all, sexually transmitted diseases" and that the labelling requirement was decided upon because the FDA "cannot expect people to know which STDs they need to be protected against", and since "the reality is that you don't know what your partner has, we wanted natural-membrane condoms to have labels that don't allow the user to assume they're effective against the small viral STDs."
Some believe that lambskin condoms provide a more "natural" sensation and lack the allergens inherent to latex. Still, because of their lesser protection against infection, other hypoallergenic materials such as polyurethane are recommended for latex-allergic users and partners. Lambskin condoms are also significantly more expensive than other types, and as slaughter by-products, they are not vegetarian.
Some latex condoms are lubricated at the manufacturer with a small amount of nonoxynol-9, a spermicidal chemical. According to Consumer Reports, condoms lubricated with spermicide have no additional benefit in preventing pregnancy, have a shorter shelf life, and may cause urinary tract infections in women. In contrast, application of separately packaged spermicide is believed to increase the contraceptive efficacy of condoms.
Nonoxynol-9 was once believed to offer additional protection against STIs (including HIV), but recent studies have shown that, with frequent use, nonoxynol-9 may increase the risk of HIV transmission. The World Health Organization says that spermicidally lubricated condoms should no longer be promoted. However, it recommends using a nonoxynol-9 lubricated condom over no condom at all. As of 2005, nine condom manufacturers had stopped manufacturing condoms with nonoxynol-9, and Planned Parenthood had discontinued the distribution of condoms so lubricated.
Textured condoms include studded and ribbed condoms which can provide extra sensations to both partners. The studs or ribs can be located on the inside, outside, or both; alternatively, they are located in specific sections to provide directed stimulation to either the G-spot or frenulum. Many textured condoms which advertise "mutual pleasure" also are bulb-shaped at the top, to provide extra stimulation to the penis. Some women experience irritation during vaginal intercourse with studded condoms.
The anti-rape condom is another variation designed to be worn by women. It is designed to cause pain to the attacker, giving the victim a chance to escape.
A collection condom is used to collect semen for fertility treatments or sperm analysis. These condoms are designed to maximize sperm life.
Some condom-like devices are intended for entertainment only, such as glow-in-the-dark condoms. These novelty condoms may not provide protection against pregnancy and STIs.
In February 2022, the U.S. Food and Drug Administration (FDA) approved the first condoms specifically indicated to help reduce transmission of sexually transmitted infections (STIs) during anal intercourse.
The prevalence of condom use varies greatly between countries. Most surveys of contraceptive use are among married women, or women in informal unions. Japan has the highest rate of condom usage in the world: in that country, condoms account for almost 80% of contraceptive use by married women. On average, in developed countries, condoms are the most popular method of birth control: 28% of married contraceptive users rely on condoms. In the average less-developed country, condoms are less common: only 6–8% of married contraceptive users choose condoms.
Whether condoms were used in ancient civilizations is debated by archaeologists and historians. In ancient Egypt, Greece, and Rome, pregnancy prevention was generally seen as a woman's responsibility, and the only well documented contraception methods were female-controlled devices. In Asia before the 15th century, some use of glans condoms (devices covering only the head of the penis) is recorded. Condoms seem to have been used for contraception, and to have been known only by members of the upper classes. In China, glans condoms may have been made of oiled silk paper, or of lamb intestines. In Japan, condoms called Kabuto-gata (甲形) were made of tortoise shell or animal horn.
In 16th-century Italy, anatomist and physician Gabriele Falloppio wrote a treatise on syphilis. The earliest documented strain of syphilis, first appearing in Europe in a 1490s outbreak, caused severe symptoms and often death within a few months of contracting the disease. Falloppio's treatise is the earliest uncontested description of condom use: it describes linen sheaths soaked in a chemical solution and allowed to dry before use. The cloths he described were sized to cover the glans of the penis, and were held on with a ribbon. Falloppio claimed that an experimental trial of the linen sheath demonstrated protection against syphilis.
After this, the use of penis coverings to protect from disease is described in a wide variety of literature throughout Europe. The first indication that these devices were used for birth control, rather than disease prevention, is the 1605 theological publication De iustitia et iure (On justice and law) by Catholic theologian Leonardus Lessius, who condemned them as immoral. In 1666, the English Birth Rate Commission attributed a recent downward fertility rate to use of "condons", the first documented use of that word or any similar spelling. Other early spellings include "condam" and "quondam"; a derivation from the Italian guantone, from guanto, "a glove", has also been suggested.
In addition to linen, condoms during the Renaissance were made out of intestines and bladder. In the late 16th century, Dutch traders introduced condoms made from "fine leather" to Japan. Unlike the horn condoms used previously, these leather condoms covered the entire penis.
In the 18th century, Casanova was one of the first reported to have used "assurance caps" to prevent impregnating his mistresses.
From at least the 18th century, condom use was opposed in some legal, religious, and medical circles for essentially the same reasons that are given today: condoms reduce the likelihood of pregnancy, which some thought immoral or undesirable for the nation; they do not provide full protection against sexually transmitted infections, while belief in their protective powers was thought to encourage sexual promiscuity; and, they are not used consistently due to inconvenience, expense, or loss of sensation.
Despite some opposition, the condom market grew rapidly. In the 18th century, condoms were available in a variety of qualities and sizes, made from either linen treated with chemicals, or "skin" (bladder or intestine softened by treatment with sulfur and lye). They were sold at pubs, barbershops, chemist shops, open-air markets, and at the theater throughout Europe and Russia. They later spread to America, although in every place they were generally used only by the middle and upper classes, due to both expense and lack of sex education.
The early 19th century saw contraceptives promoted to the poorer classes for the first time. Writers on contraception tended to prefer other birth control methods to the condom. By the late 19th century, many feminists expressed distrust of the condom as a contraceptive, as its use was controlled and decided upon by men alone. They advocated instead for methods controlled by women, such as diaphragms and spermicidal douches. Other writers cited both the expense of condoms and their unreliability (they were often riddled with holes and often fell off or tore). Still, they discussed condoms as a good option for some and the only contraceptive that protects from disease.
Many countries passed laws impeding the manufacture and promotion of contraceptives. In spite of these restrictions, condoms were promoted by traveling lecturers and in newspaper advertisements, using euphemisms in places where such ads were illegal. Instructions on how to make condoms at home were distributed in the United States and Europe. Despite social and legal opposition, at the end of the 19th century the condom was the Western world's most popular birth control method.
Beginning in the second half of the 19th century, American rates of sexually transmitted infections skyrocketed. Causes cited by historians include the effects of the American Civil War and the ignorance of prevention methods promoted by the Comstock laws. To fight the growing epidemic, sex education classes were introduced to public schools for the first time, teaching about venereal diseases and how they were transmitted. They generally taught that abstinence was the only way to avoid sexually transmitted infections. Condoms were not promoted for disease prevention because the medical community and moral watchdogs considered STIs to be punishment for sexual misbehavior. The stigma against people with these diseases was so significant that many hospitals refused to treat people with syphilis.
The German military was the first to promote condom use among its soldiers in the later 19th century. Early 20th century experiments by the American military concluded that providing condoms to soldiers significantly lowered rates of sexually transmitted infections. During World War I, the United States and (at the beginning of the war only) Britain were the only countries with soldiers in Europe who did not provide condoms and promote their use.
In the decades after World War I, there remained social and legal obstacles to condom use throughout the U.S. and Europe. Founder of psychoanalysis Sigmund Freud opposed all methods of birth control because their failure rates were too high. Freud was especially opposed to the condom because he thought it cut down on sexual pleasure. Some feminists continued to oppose male-controlled contraceptives such as condoms. In 1920 the Church of England's Lambeth Conference condemned all "unnatural means of conception avoidance". The Bishop of London, Arthur Winnington-Ingram, complained of the huge number of condoms discarded in alleyways and parks, especially after weekends and holidays.
However, European militaries continued to provide condoms to their members for disease protection, even in countries where they were illegal for the general population. Through the 1920s, catchy names and slick packaging became an increasingly important marketing technique for many consumer items, including condoms and cigarettes. Quality testing became more common, involving filling each condom with air followed by one of several methods intended to detect loss of pressure. Worldwide, condom sales doubled in the 1920s.
In 1839, Charles Goodyear discovered how to process natural rubber, which is too stiff when cold and too soft when warm, so as to make it elastic. This proved to have advantages for the manufacture of condoms; unlike the sheep's gut condoms, they could stretch and did not tear quickly when used. The rubber vulcanization process was patented by Goodyear in 1844. The first rubber condom was produced in 1855. The earliest rubber condoms had a seam and were as thick as a bicycle inner tube. Besides this type, small rubber condoms covering only the glans were often used in England and the United States. There was more risk of losing them, and if the rubber ring was too tight, it would constrict the penis. This type of condom was the original "capote" (French for condom), perhaps because of its resemblance to a woman's bonnet worn at that time, also called a capote.
For many decades, rubber condoms were manufactured by wrapping strips of raw rubber around penis-shaped molds, then dipping the wrapped molds in a chemical solution to cure the rubber. In 1912, Polish-born inventor Julius Fromm developed a new, improved manufacturing technique for condoms: dipping glass molds into a raw rubber solution. Called cement dipping, this method required adding gasoline or benzene to the rubber to make it liquid. Around 1920 patent lawyer and vice-president of the United States Rubber Company Ernest Hopkinson invented a new technique of converting latex into rubber without a coagulant (demulsifier), which featured using water as a solvent and warm air to dry the solution, as well as optionally preserving liquid latex with ammonia. Condoms made this way, commonly called "latex" ones, required less labor to produce than cement-dipped rubber condoms, which had to be smoothed by rubbing and trimming. The use of water to suspend the rubber instead of gasoline and benzene eliminated the fire hazard previously associated with all condom factories. Latex condoms also performed better for the consumer: they were stronger and thinner than rubber condoms, and had a shelf life of five years (compared to three months for rubber).
Until the 1920s, all condoms were individually hand-dipped by semi-skilled workers. Throughout that decade, advances were made in the automation of the condom assembly line. The first fully automated line was patented in 1930. Major condom manufacturers bought or leased conveyor systems, and small manufacturers were driven out of business. The skin condom, now significantly more expensive than the latex variety, became restricted to a niche high-end market.
In 1930 the Anglican Church's Lambeth Conference sanctioned the use of birth control by married couples. In 1931 the Federal Council of Churches in the U.S. issued a similar statement. The Roman Catholic Church responded by issuing the encyclical Casti connubii affirming its opposition to all contraceptives, a stance it has never reversed. In the 1930s, legal restrictions on condoms began to be relaxed. But during this period Fascist Italy and Nazi Germany increased restrictions on condoms (limited sales as disease preventatives were still allowed). During the Depression, condom lines by Schmid gained in popularity. Schmid still used the cement-dipping method of manufacture which had two advantages over the latex variety. Firstly, cement-dipped condoms could be safely used with oil-based lubricants. Secondly, while less comfortable, these older-style rubber condoms could be reused and so were more economical, a valued feature in hard times. More attention was brought to quality issues in the 1930s, and the U.S. Food and Drug Administration began to regulate the quality of condoms sold in the United States.
Throughout World War II, condoms were not only distributed to male U.S. military members, but also heavily promoted with films, posters, and lectures. European and Asian militaries on both sides of the conflict also provided condoms to their troops throughout the war, even Germany, which outlawed all civilian use of condoms in 1941. In part because condoms were readily available, soldiers found a number of non-sexual uses for the devices, many of which continue to this day. After the war, condom sales continued to grow. From 1955 to 1965, 42% of Americans of reproductive age relied on condoms for birth control. In Britain from 1950 to 1960, 60% of married couples used condoms. The birth control pill became the world's most popular method of birth control in the years after its 1960 début, but condoms remained a strong second. The U.S. Agency for International Development pushed condom use in developing countries to help solve the "world population crises": by 1970 hundreds of millions of condoms were being used each year in India alone. (This number has grown in recent decades: in 2004, the government of India purchased 1.9 billion condoms for distribution at family planning clinics.)
In the 1960s and 1970s quality regulations tightened, and more legal barriers to condom use were removed. In Ireland, legal condom sales were allowed for the first time in 1978. Advertising, however, was one area that continued to have legal restrictions. In the late 1950s, the American National Association of Broadcasters banned condom advertisements from national television; this policy remained in place until 1979.
After it was discovered in the early 1980s that AIDS can be a sexually transmitted infection, the use of condoms was encouraged to prevent transmission of HIV. Despite opposition by some political, religious, and other figures, national condom promotion campaigns occurred in the U.S. and Europe. These campaigns increased condom use significantly.
Due to increased demand and greater social acceptance, condoms began to be sold in a wider variety of retail outlets, including in supermarkets and in discount department stores such as Walmart. Condom sales increased every year until 1994, when media attention to the AIDS pandemic began to decline. The phenomenon of decreasing use of condoms as disease preventatives has been called prevention fatigue or condom fatigue. Observers have cited condom fatigue in both Europe and North America. As one response, manufacturers have changed the tone of their advertisements from scary to humorous.
New developments continued to occur in the condom market, with the first polyurethane condom, branded Avanti and produced by the manufacturer of Durex, introduced in the 1990s. Worldwide condom use is expected to continue to grow: one study predicted that developing nations would need 18.6 billion condoms by 2015. As of September 2013, condoms are available inside prisons in Canada, most of the European Union, Australia, Brazil, Indonesia, South Africa, and the US state of Vermont (on 17 September 2013, the California State Senate approved a bill for condom distribution inside the state's prisons, but the bill was not yet law at the time of approval).
The global condom market was estimated at US$9.2 billion in 2020.
The term condom first appears in the early 18th century: early forms include condum (1706 and 1717), condon (1708) and cundum (1744). The word's etymology is unknown. In popular tradition, the invention and naming of the condom came to be attributed to an associate of England's King Charles II, one "Dr. Condom" or "Earl of Condom". There is, however, no evidence of the existence of such a person, and condoms had been used for over one hundred years before King Charles II acceded to the throne in 1660.
A variety of unproven Latin etymologies have been proposed, including condon (receptacle), condamina (house), and cumdum (scabbard or case). It has also been speculated to be from the Italian word guantone, derived from guanto, meaning glove. William E. Kruck wrote an article in 1981 concluding that, "As for the word 'condom', I need state only that its origin remains completely unknown, and there ends this search for an etymology." Modern dictionaries may also list the etymology as "unknown".
Other terms are also commonly used to describe condoms. In North America condoms are also commonly known as prophylactics, or rubbers. In Britain they may be called French letters or rubber johnnies. Additionally, condoms may be referred to using the manufacturer's name.
Some moral and scientific criticism of condoms exists despite their many benefits, which are agreed upon by scientific consensus and sexual health experts.
Condom usage is typically recommended for new couples who have yet to develop full trust in their partner with regard to STIs. Established couples, on the other hand, have few concerns about STIs and can use other methods of birth control, such as the pill, which does not act as a barrier to intimate sexual contact. The debate over condom usage is also shaped by the group at which an argument is directed: age, the stability of the partnership, and the distinction between heterosexual and homosexual couples, who have different kinds of sex and face different risk factors and consequences, are all relevant.
Among the prime objections to condom usage is the blocking of erotic sensation, or the intimacy that barrier-free sex provides. As the condom is held tightly to the skin of the penis, it diminishes the delivery of stimulation through rubbing and friction. Condom proponents claim this has the benefit of making sex last longer, by diminishing sensation and delaying male ejaculation. Those who promote condom-free heterosexual sex (slang: "bareback") claim that the condom puts a barrier between partners, diminishing what is normally a highly sensual, intimate, and spiritual connection between partners.
The United Church of Christ (UCC), a Reformed denomination of the Congregationalist tradition, promotes the distribution of condoms in churches and faith-based educational settings. Michael Shuenemeyer, a UCC minister, has stated that "The practice of safer sex is a matter of life and death. People of faith make condoms available because we have chosen life so that we and our children may live."
On the other hand, the Roman Catholic Church opposes all kinds of sexual acts outside of marriage, as well as any sexual act in which the chance of successful conception has been reduced by direct and intentional acts (for example, surgery to prevent conception) or foreign objects (for example, condoms).
The use of condoms to prevent STI transmission is not specifically addressed by Catholic doctrine, and is currently a topic of debate among theologians and high-ranking Catholic authorities. A few, such as Belgian Cardinal Godfried Danneels, believe the Catholic Church should actively support condoms used to prevent disease, especially serious diseases such as AIDS. However, the majority view—including all statements from the Vatican—is that condom-promotion programs encourage promiscuity, thereby actually increasing STI transmission. This view was most recently reiterated in 2009 by Pope Benedict XVI.
The Roman Catholic Church is the largest organized body of any world religion. The church has hundreds of programs dedicated to fighting the AIDS epidemic in Africa, but its opposition to condom use in these programs has been highly controversial.
In a November 2010 interview, Pope Benedict XVI discussed for the first time the use of condoms to prevent STI transmission. He said that the use of a condom can be justified in a few individual cases if the purpose is to reduce the risk of an HIV infection. He gave male prostitutes as an example. There was some confusion at first about whether the statement applied only to homosexual prostitutes and thus not to heterosexual intercourse at all. However, Federico Lombardi, spokesman for the Vatican, clarified that it applied to heterosexual and transsexual prostitutes, whether male or female, as well. He did, however, also clarify that the Vatican's principles on sexuality and contraception had not been changed.
More generally, some scientific researchers have expressed objective concern over certain ingredients sometimes added to condoms, notably talc and nitrosamines. Dry dusting powders are applied to latex condoms before packaging to prevent the condom from sticking to itself when rolled up. Previously, talc was used by most manufacturers, but cornstarch is currently the most popular dusting powder. Although rare during normal use, talc is known to be potentially irritating to mucous membranes (such as in the vagina). Cornstarch is generally believed to be safe; however, some researchers have raised concerns over its use as well.
Nitrosamines, which are potentially carcinogenic in humans, are believed to be present in a substance used to improve elasticity in latex condoms. A 2001 review stated that humans regularly receive 1,000 to 10,000 times greater nitrosamine exposure from food and tobacco than from condom use and concluded that the risk of cancer from condom use is very low. However, a 2004 study in Germany detected nitrosamines in 29 out of 32 condom brands tested, and concluded that exposure from condoms might exceed the exposure from food by 1.5- to 3-fold.
In addition, the large-scale use of disposable condoms has resulted in concerns over their environmental impact via littering and in landfills, where they can eventually wind up in wildlife environments if not incinerated or otherwise permanently disposed of first. Polyurethane condoms in particular, given they are a form of plastic, are not biodegradable, and latex condoms take a very long time to break down. Experts, such as AVERT, recommend condoms be disposed of in a garbage receptacle, as flushing them down the toilet (which some people do) may cause plumbing blockages and other problems. Furthermore, the plastic and foil wrappers condoms are packaged in are also not biodegradable. However, the benefits condoms offer are widely considered to offset their small landfill mass. Frequent condom or wrapper disposal in public areas such as parks has been seen as a persistent litter problem.
While biodegradable, latex condoms damage the environment when disposed of improperly. According to the Ocean Conservancy, condoms, along with certain other types of trash, cover the coral reefs and smother sea grass and other bottom dwellers. The United States Environmental Protection Agency also has expressed concerns that many animals might mistake the litter for food.
In much of the Western world, the introduction of the pill in the 1960s was associated with a decline in condom use. In Japan, oral contraceptives were not approved for use until September 1999, and even then access was more restricted than in other industrialized nations. Perhaps because of this restricted access to hormonal contraception, Japan has the highest rate of condom usage in the world: in 2008, 80% of contraceptive users relied on condoms.
Cultural attitudes toward gender roles, contraception, and sexual activity vary greatly around the world, and range from extremely conservative to extremely liberal. But in places where condoms are misunderstood, mischaracterised, demonised, or looked upon with overall cultural disapproval, the prevalence of condom use is directly affected. In less-developed countries and among less-educated populations, misperceptions about how disease transmission and conception work negatively affect the use of condoms; additionally, in cultures with more traditional gender roles, women may feel uncomfortable demanding that their partners use condoms.
As an example, Latino immigrants in the United States often face cultural barriers to condom use. A study on female HIV prevention published in the Journal of Sex Health Research asserts that Latino women often lack the attitudes needed to negotiate safe sex due to traditional gender-role norms in the Latino community, and may be afraid to bring up the subject of condom use with their partners. Women who participated in the study often reported that because of the general machismo subtly encouraged in Latino culture, their male partners would be angry or possibly violent at the woman's suggestion that they use condoms. A similar phenomenon has been noted in a survey of low-income American black women; the women in this study also reported a fear of violence at the suggestion to their male partners that condoms be used.
A telephone survey conducted by Rand Corporation and Oregon State University, and published in the Journal of Acquired Immune Deficiency Syndromes showed that belief in AIDS conspiracy theories among United States black men is linked to rates of condom use. As conspiracy beliefs about AIDS grow in a given sector of these black men, consistent condom use drops in that same sector. Female use of condoms was not similarly affected.
On the African continent, condom promotion in some areas has been impeded by anti-condom campaigns by some Muslim and Catholic clerics. Among the Maasai in Tanzania, condom use is hampered by an aversion to "wasting" sperm, which is given sociocultural importance beyond reproduction. Sperm is believed to be an "elixir" to women and to have beneficial health effects. Maasai women believe that, after conceiving a child, they must have sexual intercourse repeatedly so that the additional sperm aids the child's development. Frequent condom use is also considered by some Maasai to cause impotence. Some women in Africa believe that condoms are "for prostitutes" and that respectable women should not use them. A few clerics even promote the lie that condoms are deliberately laced with HIV. In the United States, possession of many condoms has been used by police to accuse women of engaging in prostitution. The Presidential Advisory Council on HIV/AIDS has condemned this practice, and there are efforts to end it.
Because of the strong desire and social pressure to establish fertility as soon as possible within marriage, Middle Eastern couples who have not yet had children rarely use condoms.
In 2017, India restricted TV advertisements for condoms to between the hours of 10 pm and 6 am. Family planning advocates were against this, saying it was liable to "undo decades of progress on sexual and reproductive health".
One analyst described the size of the condom market as something that "boggles the mind". Numerous small manufacturers, nonprofit groups, and government-run manufacturing plants exist around the world. Within the condom market, there are several major contributors, among them both for-profit businesses and philanthropic organizations. Most large manufacturers have ties to the business that reach back to the end of the 19th century.
In the United States condoms usually cost less than US$1.00.
A spray-on condom made of latex is intended to be easier to apply and more successful in preventing the transmission of diseases. As of 2009, the spray-on condom was not going to market because the drying time could not be reduced below two to three minutes.
The Invisible Condom, developed at Université Laval in Quebec, Canada, is a gel that hardens upon increased temperature after insertion into the vagina or rectum. In the lab, it has been shown to effectively block HIV and herpes simplex virus. The barrier breaks down and liquefies after several hours. As of 2005, the invisible condom is in the clinical trial phase, and has not yet been approved for use.
Also developed in 2005 is a condom treated with an erectogenic compound. The drug-treated condom is intended to help the wearer maintain his erection, which should also help reduce slippage. If approved, the condom would be marketed under the Durex brand. As of 2007, it was still in clinical trials. In 2009, Ansell Healthcare, the makers of Lifestyle condoms, introduced the X2 condom lubricated with "Excite Gel" which contains the amino acid L-arginine and is intended to improve the strength of the erectile response.
In March 2013, philanthropist Bill Gates offered US$100,000 grants through his foundation for a condom design that "significantly preserves or enhances pleasure" to encourage more males to adopt the use of condoms for safer sex. The grant information stated: "The primary drawback from the male perspective is that condoms decrease pleasure as compared to no condom, creating a trade-off that many men find unacceptable, particularly given that the decisions about use must be made just prior to intercourse. Is it possible to develop a product without this stigma, or better, one that is felt to enhance pleasure?" In November of the same year, 11 research teams were selected to receive the grant money.
},
{
"paragraph_id": 40,
"text": "Polyurethane condoms tend to be the same width and thickness as latex condoms, with most polyurethane condoms between 0.04 mm and 0.07 mm thick.",
"title": "Types"
},
{
"paragraph_id": 41,
"text": "Polyurethane can be considered better than latex in several ways: it conducts heat better than latex, is not as sensitive to temperature and ultraviolet light (and so has less rigid storage requirements and a longer shelf life), can be used with oil-based lubricants, is less allergenic than latex, and does not have an odor. Polyurethane condoms have gained FDA approval for sale in the United States as an effective method of contraception and HIV prevention, and under laboratory conditions have been shown to be just as effective as latex for these purposes.",
"title": "Types"
},
{
"paragraph_id": 42,
"text": "However, polyurethane condoms are less elastic than latex ones, and may be more likely to slip or break than latex, lose their shape or bunch up more than latex, and are more expensive.",
"title": "Types"
},
{
"paragraph_id": 43,
"text": "Polyisoprene is a synthetic version of natural rubber latex. While significantly more expensive, it has the advantages of latex (such as being softer and more elastic than polyurethane condoms) without the protein which is responsible for latex allergies. Unlike polyurethane condoms, they cannot be used with an oil-based lubricant.",
"title": "Types"
},
{
"paragraph_id": 44,
"text": "Condoms made from sheep intestines, labeled \"lambskin\", are also available. Although they are generally effective as a contraceptive by blocking sperm, it is presumed that they are less effective than latex in preventing the transmission of sexually transmitted infections because of pores in the material. This is based on the idea that intestines, by their nature, are porous, permeable membranes, and while sperm are too large to pass through the pores, viruses — such as HIV, herpes, and genital warts — are small enough to pass. However, there are to date no clinical data confirming or denying this theory.",
"title": "Types"
},
{
"paragraph_id": 45,
"text": "As a result of laboratory data on condom porosity, in 1989, the FDA began requiring lambskin condom manufacturers to indicate that the products were not to be used for the prevention of sexually transmitted infections. This was based on the presumption that lambskin condoms would be less effective than latex in preventing HIV transmission, rather than a conclusion that lambskin condoms lack efficacy in STI prevention altogether. An FDA publication in 1992 states that lambskin condoms \"provide good birth control and a varying degree of protection against some, but not all, sexually transmitted diseases\" and that the labelling requirement was decided upon because the FDA \"cannot expect people to know which STDs they need to be protected against\", and since \"the reality is that you don't know what your partner has, we wanted natural-membrane condoms to have labels that don't allow the user to assume they're effective against the small viral STDs.\"",
"title": "Types"
},
{
"paragraph_id": 46,
"text": "Some believe that lambskin condoms provide a more \"natural\" sensation and lack the allergens inherent to latex. Still, because of their lesser protection against infection, other hypoallergenic materials such as polyurethane are recommended for latex-allergic users and partners. Lambskin condoms are also significantly more expensive than different types, and as slaughter by-products, they are also not vegetarian.",
"title": "Types"
},
{
"paragraph_id": 47,
"text": "Some latex condoms are lubricated at the manufacturer with a small amount of a nonoxynol-9, a spermicidal chemical. According to Consumer Reports, condoms lubricated with spermicide have no additional benefit in preventing pregnancy, have a shorter shelf life, and may cause urinary tract infections in women. In contrast, application of separately packaged spermicide is believed to increase the contraceptive efficacy of condoms.",
"title": "Types"
},
{
"paragraph_id": 48,
"text": "Nonoxynol-9 was once believed to offer additional protection against STIs (including HIV) but recent studies have shown that, with frequent use, nonoxynol-9 may increase the risk of HIV transmission. The World Health Organization says that spermicidally lubricated condoms should no longer be promoted. However, it recommends using a nonoxynol-9 lubricated condom over no condom at all. As of 2005, nine condom manufacturers have stopped manufacturing condoms with nonoxynol-9 and Planned Parenthood has discontinued the distribution of condoms so lubricated.",
"title": "Types"
},
{
"paragraph_id": 49,
"text": "Textured condoms include studded and ribbed condoms which can provide extra sensations to both partners. The studs or ribs can be located on the inside, outside, or both; alternatively, they are located in specific sections to provide directed stimulation to either the G-spot or frenulum. Many textured condoms which advertise \"mutual pleasure\" also are bulb-shaped at the top, to provide extra stimulation to the penis. Some women experience irritation during vaginal intercourse with studded condoms.",
"title": "Types"
},
{
"paragraph_id": 50,
"text": "The anti-rape condom is another variation designed to be worn by women. It is designed to cause pain to the attacker, hopefully allowing the victim a chance to escape.",
"title": "Types"
},
{
"paragraph_id": 51,
"text": "A collection condom is used to collect semen for fertility treatments or sperm analysis. These condoms are designed to maximize sperm life.",
"title": "Types"
},
{
"paragraph_id": 52,
"text": "Some condom-like devices are intended for entertainment only, such as glow-in-the dark condoms. These novelty condoms may not provide protection against pregnancy and STIs.",
"title": "Types"
},
{
"paragraph_id": 53,
"text": "In February 2022, the U.S. Food and Drug Administration (FDA) approved the first condoms specifically indicated to help reduce transmission of sexually transmitted infections (STIs) during anal intercourse.",
"title": "Types"
},
{
"paragraph_id": 54,
"text": "The prevalence of condom use varies greatly between countries. Most surveys of contraceptive use are among married women, or women in informal unions. Japan has the highest rate of condom usage in the world: in that country, condoms account for almost 80% of contraceptive use by married women. On average, in developed countries, condoms are the most popular method of birth control: 28% of married contraceptive users rely on condoms. In the average less-developed country, condoms are less common: only 6–8% of married contraceptive users choose condoms.",
"title": "Prevalence"
},
{
"paragraph_id": 55,
"text": "Whether condoms were used in ancient civilizations is debated by archaeologists and historians. In ancient Egypt, Greece, and Rome, pregnancy prevention was generally seen as a woman's responsibility, and the only well documented contraception methods were female-controlled devices. In Asia before the 15th century, some use of glans condoms (devices covering only the head of the penis) is recorded. Condoms seem to have been used for contraception, and to have been known only by members of the upper classes. In China, glans condoms may have been made of oiled silk paper, or of lamb intestines. In Japan, condoms called Kabuto-gata (甲形) were made of tortoise shell or animal horn.",
"title": "History"
},
{
"paragraph_id": 56,
"text": "In 16th-century Italy, anatomist and physician Gabriele Falloppio wrote a treatise on syphilis. The earliest documented strain of syphilis, first appearing in Europe in a 1490s outbreak, caused severe symptoms and often death within a few months of contracting the disease. Falloppio's treatise is the earliest uncontested description of condom use: it describes linen sheaths soaked in a chemical solution and allowed to dry before use. The cloths he described were sized to cover the glans of the penis, and were held on with a ribbon. Falloppio claimed that an experimental trial of the linen sheath demonstrated protection against syphilis.",
"title": "History"
},
{
"paragraph_id": 57,
"text": "After this, the use of penis coverings to protect from disease is described in a wide variety of literature throughout Europe. The first indication that these devices were used for birth control, rather than disease prevention, is the 1605 theological publication De iustitia et iure (On justice and law) by Catholic theologian Leonardus Lessius, who condemned them as immoral. In 1666, the English Birth Rate Commission attributed a recent downward fertility rate to use of \"condons\", the first documented use of that word or any similar spelling. Other early spellings include \"condam\" and \"quondam\", from which the Italian derivation guantone has been suggested, from guanto, \"a glove\".",
"title": "History"
},
{
"paragraph_id": 58,
"text": "In addition to linen, condoms during the Renaissance were made out of intestines and bladder. In the late 16th century, Dutch traders introduced condoms made from \"fine leather\" to Japan. Unlike the horn condoms used previously, these leather condoms covered the entire penis.",
"title": "History"
},
{
"paragraph_id": 59,
"text": "Casanova in the 18th century was one of the first reported using \"assurance caps\" to prevent impregnating his mistresses.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "From at least the 18th century, condom use was opposed in some legal, religious, and medical circles for essentially the same reasons that are given today: condoms reduce the likelihood of pregnancy, which some thought immoral or undesirable for the nation; they do not provide full protection against sexually transmitted infections, while belief in their protective powers was thought to encourage sexual promiscuity; and, they are not used consistently due to inconvenience, expense, or loss of sensation.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "Despite some opposition, the condom market grew rapidly. In the 18th century, condoms were available in a variety of qualities and sizes, made from either linen treated with chemicals, or \"skin\" (bladder or intestine softened by treatment with sulfur and lye). They were sold at pubs, barbershops, chemist shops, open-air markets, and at the theater throughout Europe and Russia. They later spread to America, although in every place there were generally used only by the middle and upper classes, due to both expense and lack of sex education.",
"title": "History"
},
{
"paragraph_id": 62,
"text": "The early 19th century saw contraceptives promoted to the poorer classes for the first time. Writers on contraception tended to prefer other birth control methods to the condom. By the late 19th century, many feminists expressed distrust of the condom as a contraceptive, as its use was controlled and decided upon by men alone. They advocated instead for methods controlled by women, such as diaphragms and spermicidal douches. Other writers cited both the expense of condoms and their unreliability (they were often riddled with holes and often fell off or tore). Still, they discussed condoms as a good option for some and the only contraceptive that protects from disease.",
"title": "History"
},
{
"paragraph_id": 63,
"text": "Many countries passed laws impeding the manufacture and promotion of contraceptives. In spite of these restrictions, condoms were promoted by traveling lecturers and in newspaper advertisements, using euphemisms in places where such ads were illegal. Instructions on how to make condoms at home were distributed in the United States and Europe. Despite social and legal opposition, at the end of the 19th century the condom was the Western world's most popular birth control method.",
"title": "History"
},
{
"paragraph_id": 64,
"text": "Beginning in the second half of the 19th century, American rates of sexually transmitted infections skyrocketed. Causes cited by historians include the effects of the American Civil War and the ignorance of prevention methods promoted by the Comstock laws. To fight the growing epidemic, sex education classes were introduced to public schools for the first time, teaching about venereal diseases and how they were transmitted. They generally taught abstinence was the only way to avoid sexually transmitted infections. Condoms were not promoted for disease prevention because the medical community and moral watchdogs considered STIs to be punishment for sexual misbehavior. The stigma against people with these diseases was so significant that many hospitals refused to treat people with syphilis.",
"title": "History"
},
{
"paragraph_id": 65,
"text": "The German military was the first to promote condom use among its soldiers in the later 19th century. Early 20th century experiments by the American military concluded that providing condoms to soldiers significantly lowered rates of sexually transmitted infections. During World War I, the United States and (at the beginning of the war only) Britain were the only countries with soldiers in Europe who did not provide condoms and promote their use.",
"title": "History"
},
{
"paragraph_id": 66,
"text": "In the decades after World War I, there remained social and legal obstacles to condom use throughout the U.S. and Europe. Founder of psychoanalysis Sigmund Freud opposed all methods of birth control because their failure rates were too high. Freud was especially opposed to the condom because he thought it cut down on sexual pleasure. Some feminists continued to oppose male-controlled contraceptives such as condoms. In 1920 the Church of England's Lambeth Conference condemned all \"unnatural means of conception avoidance\". The Bishop of London, Arthur Winnington-Ingram, complained of the huge number of condoms discarded in alleyways and parks, especially after weekends and holidays.",
"title": "History"
},
{
"paragraph_id": 67,
"text": "However, European militaries continued to provide condoms to their members for disease protection, even in countries where they were illegal for the general population. Through the 1920s, catchy names and slick packaging became an increasingly important marketing technique for many consumer items, including condoms and cigarettes. Quality testing became more common, involving filling each condom with air followed by one of several methods intended to detect loss of pressure. Worldwide, condom sales doubled in the 1920s.",
"title": "History"
},
{
"paragraph_id": 68,
"text": "In 1839, Charles Goodyear discovered a way of processing natural rubber, which is too stiff when cold and too soft when warm, in such a way as to make it elastic. This proved to have advantages for the manufacture of condoms; unlike the sheep's gut condoms, they could stretch and did not tear quickly when used. The rubber vulcanization process was patented by Goodyear in 1844. The first rubber condom was produced in 1855. The earliest rubber condoms had a seam and were as thick as a bicycle inner tube. Besides this type, small rubber condoms covering only the glans were often used in England and the United States. There was more risk of losing them and if the rubber ring was too tight, it would constrict the penis. This type of condom was the original \"capote\" (French for condom), perhaps because of its resemblance to a woman's bonnet worn at that time, also called a capote.",
"title": "History"
},
{
"paragraph_id": 69,
"text": "For many decades, rubber condoms were manufactured by wrapping strips of raw rubber around penis-shaped molds, then dipping the wrapped molds in a chemical solution to cure the rubber. In 1912, Polish-born inventor Julius Fromm developed a new, improved manufacturing technique for condoms: dipping glass molds into a raw rubber solution. Called cement dipping, this method required adding gasoline or benzene to the rubber to make it liquid. Around 1920 patent lawyer and vice-president of the United States Rubber Company Ernest Hopkinson invented a new technique of converting latex into rubber without a coagulant (demulsifier), which featured using water as a solvent and warm air to dry the solution, as well as optionally preserving liquid latex with ammonia. Condoms made this way, commonly called \"latex\" ones, required less labor to produce than cement-dipped rubber condoms, which had to be smoothed by rubbing and trimming. The use of water to suspend the rubber instead of gasoline and benzene eliminated the fire hazard previously associated with all condom factories. Latex condoms also performed better for the consumer: they were stronger and thinner than rubber condoms, and had a shelf life of five years (compared to three months for rubber).",
"title": "History"
},
{
"paragraph_id": 70,
"text": "Until the twenties, all condoms were individually hand-dipped by semi-skilled workers. Throughout the decade of the 1920s, advances in the automation of the condom assembly line were made. The first fully automated line was patented in 1930. Major condom manufacturers bought or leased conveyor systems, and small manufacturers were driven out of business. The skin condom, now significantly more expensive than the latex variety, became restricted to a niche high-end market.",
"title": "History"
},
{
"paragraph_id": 71,
"text": "In 1930 the Anglican Church's Lambeth Conference sanctioned the use of birth control by married couples. In 1931 the Federal Council of Churches in the U.S. issued a similar statement. The Roman Catholic Church responded by issuing the encyclical Casti connubii affirming its opposition to all contraceptives, a stance it has never reversed. In the 1930s, legal restrictions on condoms began to be relaxed. But during this period Fascist Italy and Nazi Germany increased restrictions on condoms (limited sales as disease preventatives were still allowed). During the Depression, condom lines by Schmid gained in popularity. Schmid still used the cement-dipping method of manufacture which had two advantages over the latex variety. Firstly, cement-dipped condoms could be safely used with oil-based lubricants. Secondly, while less comfortable, these older-style rubber condoms could be reused and so were more economical, a valued feature in hard times. More attention was brought to quality issues in the 1930s, and the U.S. Food and Drug Administration began to regulate the quality of condoms sold in the United States.",
"title": "History"
},
{
"paragraph_id": 72,
"text": "Throughout World War II, condoms were not only distributed to male U.S. military members, but also heavily promoted with films, posters, and lectures. European and Asian militaries on both sides of the conflict also provided condoms to their troops throughout the war, even Germany which outlawed all civilian use of condoms in 1941. In part because condoms were readily available, soldiers found a number of non-sexual uses for the devices, many of which continue to this day. After the war, condom sales continued to grow. From 1955 to 1965, 42% of Americans of reproductive age relied on condoms for birth control. In Britain from 1950 to 1960, 60% of married couples used condoms. The birth control pill became the world's most popular method of birth control in the years after its 1960 début, but condoms remained a strong second. The U.S. Agency for International Development pushed condom use in developing countries to help solve the \"world population crises\": by 1970 hundreds of millions of condoms were being used each year in India alone.(This number has grown in recent decades: in 2004, the government of India purchased 1.9 billion condoms for distribution at family planning clinics.)",
"title": "History"
},
{
"paragraph_id": 73,
"text": "In the 1960s and 1970s quality regulations tightened, and more legal barriers to condom use were removed. In Ireland, legal condom sales were allowed for the first time in 1978. Advertising, however was one area that continued to have legal restrictions. In the late 1950s, the American National Association of Broadcasters banned condom advertisements from national television; this policy remained in place until 1979.",
"title": "History"
},
{
"paragraph_id": 74,
"text": "After it was discovered in the early 1980s that AIDS can be a sexually transmitted infection, the use of condoms was encouraged to prevent transmission of HIV. Despite opposition by some political, religious, and other figures, national condom promotion campaigns occurred in the U.S. and Europe. These campaigns increased condom use significantly.",
"title": "History"
},
{
"paragraph_id": 75,
"text": "Due to increased demand and greater social acceptance, condoms began to be sold in a wider variety of retail outlets, including in supermarkets and in discount department stores such as Walmart. Condom sales increased every year until 1994, when media attention to the AIDS pandemic began to decline. The phenomenon of decreasing use of condoms as disease preventatives has been called prevention fatigue or condom fatigue. Observers have cited condom fatigue in both Europe and North America. As one response, manufacturers have changed the tone of their advertisements from scary to humorous.",
"title": "History"
},
{
"paragraph_id": 76,
"text": "New developments continued to occur in the condom market, with the first polyurethane condom—branded Avanti and produced by the manufacturer of Durex—introduced in the 1990s. Worldwide condom use is expected to continue to grow: one study predicted that developing nations would need 18.6 billion condoms by 2015. As of September 2013, condoms are available inside prisons in Canada, most of the European Union, Australia, Brazil, Indonesia, South Africa, and the US states of Vermont (on 17 September 2013, the Californian Senate approved a bill for condom distribution inside the state's prisons, but the bill was not yet law at the time of approval).",
"title": "History"
},
{
"paragraph_id": 77,
"text": "The global condom market was estimated at US$9.2 billion in 2020.",
"title": "History"
},
{
"paragraph_id": 78,
"text": "The term condom first appears in the early 18th century: early forms include condum (1706 and 1717), condon (1708) and cundum (1744). The word's etymology is unknown. In popular tradition, the invention and naming of the condom came to be attributed to an associate of England's King Charles II, one \"Dr. Condom\" or \"Earl of Condom\". There is however no evidence of the existence of such a person, and condoms had been used for over one hundred years before King Charles II acceded to the throne in 1660.",
"title": "History"
},
{
"paragraph_id": 79,
"text": "A variety of unproven Latin etymologies have been proposed, including condon (receptacle), condamina (house), and cumdum (scabbard or case). It has also been speculated to be from the Italian word guantone, derived from guanto, meaning glove. William E. Kruck wrote an article in 1981 concluding that, \"As for the word 'condom', I need state only that its origin remains completely unknown, and there ends this search for an etymology.\" Modern dictionaries may also list the etymology as \"unknown\".",
"title": "History"
},
{
"paragraph_id": 80,
"text": "Other terms are also commonly used to describe condoms. In North America condoms are also commonly known as prophylactics, or rubbers. In Britain they may be called French letters or rubber johnnies. Additionally, condoms may be referred to using the manufacturer's name.",
"title": "History"
},
{
"paragraph_id": 81,
"text": "Some moral and scientific criticism of condoms exists despite the many benefits of condoms agreed on by scientific consensus and sexual health experts.",
"title": "Society and culture"
},
{
"paragraph_id": 82,
"text": "Condom usage is typically recommended for new couples who have yet to develop full trust in their partner with regard to STIs. Established couples on the other hand have few concerns about STIs, and can use other methods of birth control such as the pill, which does not act as a barrier to intimate sexual contact. Note that the polar debate with regard to condom usage is attenuated by the target group the argument is directed. Notably the age category and stable partner question are factors, as well as the distinction between heterosexual and homosexuals, who have different kinds of sex and have different risk consequences and factors.",
"title": "Society and culture"
},
{
"paragraph_id": 83,
"text": "Among the prime objections to condom usage is the blocking of erotic sensation, or the intimacy that barrier-free sex provides. As the condom is held tightly to the skin of the penis, it diminishes the delivery of stimulation through rubbing and friction. Condom proponents claim this has the benefit of making sex last longer, by diminishing sensation and delaying male ejaculation. Those who promote condom-free heterosexual sex (slang: \"bareback\") claim that the condom puts a barrier between partners, diminishing what is normally a highly sensual, intimate, and spiritual connection between partners.",
"title": "Society and culture"
},
{
"paragraph_id": 84,
"text": "The United Church of Christ (UCC), a Reformed denomination of the Congregationalist tradition, promotes the distribution of condoms in churches and faith-based educational settings. Michael Shuenemeyer, a UCC minister, has stated that \"The practice of safer sex is a matter of life and death. People of faith make condoms available because we have chosen life so that we and our children may live.\"",
"title": "Society and culture"
},
{
"paragraph_id": 85,
"text": "On the other hand, the Roman Catholic Church opposes all kinds of sexual acts outside of marriage, as well as any sexual act in which the chance of successful conception has been reduced by direct and intentional acts (for example, surgery to prevent conception) or foreign objects (for example, condoms).",
"title": "Society and culture"
},
{
"paragraph_id": 86,
"text": "The use of condoms to prevent STI transmission is not specifically addressed by Catholic doctrine, and is currently a topic of debate among theologians and high-ranking Catholic authorities. A few, such as Belgian Cardinal Godfried Danneels, believe the Catholic Church should actively support condoms used to prevent disease, especially serious diseases such as AIDS. However, the majority view—including all statements from the Vatican—is that condom-promotion programs encourage promiscuity, thereby actually increasing STI transmission. This view was most recently reiterated in 2009 by Pope Benedict XVI.",
"title": "Society and culture"
},
{
"paragraph_id": 87,
"text": "The Roman Catholic Church is the largest organized body of any world religion. The church has hundreds of programs dedicated to fighting the AIDS epidemic in Africa, but its opposition to condom use in these programs has been highly controversial.",
"title": "Society and culture"
},
{
"paragraph_id": 88,
"text": "In a November 2011 interview, Pope Benedict XVI discussed for the first time the use of condoms to prevent STI transmission. He said that the use of a condom can be justified in a few individual cases if the purpose is to reduce the risk of an HIV infection. He gave as an example male prostitutes. There was some confusion at first whether the statement applied only to homosexual prostitutes and thus not to heterosexual intercourse at all. However, Federico Lombardi, spokesman for the Vatican, clarified that it applied to heterosexual and transsexual prostitutes, whether male or female, as well. He did, however, also clarify that the Vatican's principles on sexuality and contraception had not been changed.",
"title": "Society and culture"
},
{
"paragraph_id": 89,
"text": "More generally, some scientific researchers have expressed objective concern over certain ingredients sometimes added to condoms, notably talc and nitrosamines. Dry dusting powders are applied to latex condoms before packaging to prevent the condom from sticking to itself when rolled up. Previously, talc was used by most manufacturers, but cornstarch is currently the most popular dusting powder. Although rare during normal use, talc is known to be potentially irritant to mucous membranes (such as in the vagina). Cornstarch is generally believed to be safe; however, some researchers have raised concerns over its use as well.",
"title": "Society and culture"
},
{
"paragraph_id": 90,
"text": "Nitrosamines, which are potentially carcinogenic in humans, are believed to be present in a substance used to improve elasticity in latex condoms. A 2001 review stated that humans regularly receive 1,000 to 10,000 times greater nitrosamine exposure from food and tobacco than from condom use and concluded that the risk of cancer from condom use is very low. However, a 2004 study in Germany detected nitrosamines in 29 out of 32 condom brands tested, and concluded that exposure from condoms might exceed the exposure from food by 1.5- to 3-fold.",
"title": "Society and culture"
},
{
"paragraph_id": 91,
"text": "In addition, the large-scale use of disposable condoms has resulted in concerns over their environmental impact via littering and in landfills, where they can eventually wind up in wildlife environments if not incinerated or otherwise permanently disposed of first. Polyurethane condoms in particular, given they are a form of plastic, are not biodegradable, and latex condoms take a very long time to break down. Experts, such as AVERT, recommend condoms be disposed of in a garbage receptacle, as flushing them down the toilet (which some people do) may cause plumbing blockages and other problems. Furthermore, the plastic and foil wrappers condoms are packaged in are also not biodegradable. However, the benefits condoms offer are widely considered to offset their small landfill mass. Frequent condom or wrapper disposal in public areas such as a parks have been seen as a persistent litter problem.",
"title": "Society and culture"
},
{
"paragraph_id": 92,
"text": "While biodegradable, latex condoms damage the environment when disposed of improperly. According to the Ocean Conservancy, condoms, along with certain other types of trash, cover the coral reefs and smother sea grass and other bottom dwellers. The United States Environmental Protection Agency also has expressed concerns that many animals might mistake the litter for food.",
"title": "Society and culture"
},
{
"paragraph_id": 93,
"text": "In much of the Western world, the introduction of the pill in the 1960s was associated with a decline in condom use. In Japan, oral contraceptives were not approved for use until September 1999, and even then access was more restricted than in other industrialized nations. Perhaps because of this restricted access to hormonal contraception, Japan has the highest rate of condom usage in the world: in 2008, 80% of contraceptive users relied on condoms.",
"title": "Society and culture"
},
{
"paragraph_id": 94,
"text": "Cultural attitudes toward gender roles, contraception, and sexual activity vary greatly around the world, and range from extremely conservative to extremely liberal. But in places where condoms are misunderstood, mischaracterised, demonised, or looked upon with overall cultural disapproval, the prevalence of condom use is directly affected. In less-developed countries and among less-educated populations, misperceptions about how disease transmission and conception work negatively affect the use of condoms; additionally, in cultures with more traditional gender roles, women may feel uncomfortable demanding that their partners use condoms.",
"title": "Society and culture"
},
{
"paragraph_id": 95,
"text": "As an example, Latino immigrants in the United States often face cultural barriers to condom use. A study on female HIV prevention published in the Journal of Sex Health Research asserts that Latino women often lack the attitudes needed to negotiate safe sex due to traditional gender-role norms in the Latino community, and may be afraid to bring up the subject of condom use with their partners. Women who participated in the study often reported that because of the general machismo subtly encouraged in Latino culture, their male partners would be angry or possibly violent at the woman's suggestion that they use condoms. A similar phenomenon has been noted in a survey of low-income American black women; the women in this study also reported a fear of violence at the suggestion to their male partners that condoms be used.",
"title": "Society and culture"
},
{
"paragraph_id": 96,
"text": "A telephone survey conducted by Rand Corporation and Oregon State University, and published in the Journal of Acquired Immune Deficiency Syndromes showed that belief in AIDS conspiracy theories among United States black men is linked to rates of condom use. As conspiracy beliefs about AIDS grow in a given sector of these black men, consistent condom use drops in that same sector. Female use of condoms was not similarly affected.",
"title": "Society and culture"
},
{
"paragraph_id": 97,
"text": "In the African continent, condom promotion in some areas has been impeded by anti-condom campaigns by some Muslim and Catholic clerics. Among the Maasai in Tanzania, condom use is hampered by an aversion to \"wasting\" sperm, which is given sociocultural importance beyond reproduction. Sperm is believed to be an \"elixir\" to women and to have beneficial health effects. Maasai women believe that, after conceiving a child, they must have sexual intercourse repeatedly so that the additional sperm aids the child's development. Frequent condom use is also considered by some Maasai to cause impotence. Some women in Africa believe that condoms are \"for prostitutes\" and that respectable women should not use them. A few clerics even promote the lie that condoms are deliberately laced with HIV. In the United States, possession of many condoms has been used by police to accuse women of engaging in prostitution. The Presidential Advisory Council on HIV/AIDS has condemned this practice and there are efforts to end it.",
"title": "Society and culture"
},
{
"paragraph_id": 98,
"text": "Middle-Eastern couples who have not had children, because of the strong desire and social pressure to establish fertility as soon as possible within marriage, rarely use condoms.",
"title": "Society and culture"
},
{
"paragraph_id": 99,
"text": "In 2017, India restricted TV advertisements for condoms to between the hours of 10 pm to 6 am. Family planning advocates were against this, saying it was liable to \"undo decades of progress on sexual and reproductive health\".",
"title": "Society and culture"
},
{
"paragraph_id": 100,
"text": "One analyst described the size of the condom market as something that \"boggles the mind\". Numerous small manufacturers, nonprofit groups, and government-run manufacturing plants exist around the world. Within the condom market, there are several major contributors, among them both for-profit businesses and philanthropic organizations. Most large manufacturers have ties to the business that reach back to the end of the 19th century.",
"title": "Society and culture"
},
{
"paragraph_id": 101,
"text": "In the United States condoms usually cost less than US$1.00.",
"title": "Society and culture"
},
{
"paragraph_id": 102,
"text": "A spray-on condom made of latex is intended to be easier to apply and more successful in preventing the transmission of diseases. As of 2009, the spray-on condom was not going to market because the drying time could not be reduced below two to three minutes.",
"title": "Research"
},
{
"paragraph_id": 103,
"text": "The Invisible Condom, developed at Université Laval in Quebec, Canada, is a gel that hardens upon increased temperature after insertion into the vagina or rectum. In the lab, it has been shown to effectively block HIV and herpes simplex virus. The barrier breaks down and liquefies after several hours. As of 2005, the invisible condom is in the clinical trial phase, and has not yet been approved for use.",
"title": "Research"
},
{
"paragraph_id": 104,
"text": "Also developed in 2005 is a condom treated with an erectogenic compound. The drug-treated condom is intended to help the wearer maintain his erection, which should also help reduce slippage. If approved, the condom would be marketed under the Durex brand. As of 2007, it was still in clinical trials. In 2009, Ansell Healthcare, the makers of Lifestyle condoms, introduced the X2 condom lubricated with \"Excite Gel\" which contains the amino acid L-arginine and is intended to improve the strength of the erectile response.",
"title": "Research"
},
{
"paragraph_id": 105,
"text": "In March 2013, philanthropist Bill Gates offered US$100,000 grants through his foundation for a condom design that \"significantly preserves or enhances pleasure\" to encourage more males to adopt the use of condoms for safer sex. The grant information stated: \"The primary drawback from the male perspective is that condoms decrease pleasure as compared to no condom, creating a trade-off that many men find unacceptable, particularly given that the decisions about use must be made just prior to intercourse. Is it possible to develop a product without this stigma, or better, one that is felt to enhance pleasure?\" In November of the same year, 11 research teams were selected to receive the grant money.",
"title": "Research"
}
] | A condom is a sheath-shaped barrier device used during sexual intercourse to reduce the probability of pregnancy or a sexually transmitted infection (STI). There are both male and female condoms. The male condom is rolled onto an erect penis before intercourse and works by forming a physical barrier which blocks semen from entering the body of a sexual partner. Male condoms are typically made from latex and, less commonly, from polyurethane, polyisoprene, or lamb intestine. Male condoms have the advantages of ease of use, ease of access, and few side effects. Individuals with latex allergy should use condoms made from a material other than latex, such as polyurethane. Female condoms are typically made from polyurethane and may be used multiple times. With proper use—and use at every act of intercourse—women whose partners use male condoms experience a 2% per-year pregnancy rate. With typical use, the rate of pregnancy is 18% per-year. Their use greatly decreases the risk of gonorrhea, chlamydia, trichomoniasis, hepatitis B, and HIV/AIDS. To a lesser extent, they also protect against genital herpes, human papillomavirus (HPV), and syphilis. Condoms as a method of preventing STIs have been used since at least 1564. Rubber condoms became available in 1855, followed by latex condoms in the 1920s. It is on the World Health Organization's List of Essential Medicines. As of 2019, globally around 21% of those using birth control use the condom, making it the second-most common method after female sterilization (24%). Rates of condom use are highest in East and Southeast Asia, Europe and North America. About six to nine billion are sold a year. | 2001-11-05T23:43:28Z | 2023-12-18T09:07:52Z | [
"Template:Cite news",
"Template:Cite journal",
"Template:Rp",
"Template:As of",
"Template:Lang",
"Template:Unreferenced section",
"Template:Reflist",
"Template:Webarchive",
"Template:Commons category",
"Template:Portal bar",
"Template:Cite conference",
"Template:About",
"Template:Use dmy dates",
"Template:Cs1 config",
"Template:Anchor",
"Template:Authority control",
"Template:Page needed",
"Template:Cbignore",
"Template:Wikibooks",
"Template:Birth control methods",
"Template:Cite book",
"Template:Cite web",
"Template:Cite press release",
"Template:Sex",
"Template:ISBN",
"Template:OED",
"Template:Condom",
"Template:Sprotect",
"Template:Infobox Birth control",
"Template:See also",
"Template:Main",
"Template:Cite report",
"Template:Dead link",
"Template:Citation-attribution",
"Template:US patent",
"Template:Short description",
"Template:Pp",
"Template:TOC limit",
"Template:Citation needed",
"Template:Human sexuality"
] | https://en.wikipedia.org/wiki/Condom |
5,375 | Country code | A country code is a short alphanumeric identification code for countries and dependent areas. Its primary use is in data processing and communications. Several identification systems have been developed.
The term country code frequently refers to ISO 3166-1 alpha-2, as well as the telephone country code, which is embodied in the E.164 recommendation by the International Telecommunication Union (ITU).
The standard ISO 3166-1 defines short identification codes for most countries and dependent areas: a two-letter code (alpha-2), a three-letter code (alpha-3), and a three-digit numeric code (numeric-3).
The two-letter codes are used as the basis for other codes and applications, for example, the country code top-level domains (ccTLDs) on the Internet and the first two letters of ISO 4217 currency codes.
Other applications are defined in ISO 3166-1 alpha-2.
In telecommunication, a country code, or international subscriber dialing (ISD) code, is a telephone number prefix used in international direct dialing (IDD) and for destination routing of telephone calls to a country other than the caller's. A country or region with an autonomous telephone administration must apply for membership in the International Telecommunication Union (ITU) to participate in the international public switched telephone network (PSTN). Country codes are defined by the ITU-T section of the ITU in standards E.123 and E.164.
Country codes constitute the international telephone numbering plan, and are dialed only when calling a telephone number in another country. They are dialed before the national telephone number. International calls require at least one additional prefix, the international call prefix, to be dialed before the country code to connect the call to international circuits. When printing telephone numbers this is indicated by a plus sign (+) in front of a complete international telephone number, per recommendation E.164 by the ITU.
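The dialing arithmetic described above can be sketched in a few lines of Python. The calling-code table and the helper below are illustrative assumptions only (a tiny subset, not an authoritative ITU data source), and the sketch deliberately ignores country-specific trunk prefixes such as a leading 0, which must be dropped from the national number when dialing internationally.
# Minimal sketch: map a few ISO 3166-1 alpha-2 codes to their E.164 country
# calling codes and print a number in the "+<country code><national number>"
# form. The table is an illustrative subset, not a complete or official list.
CALLING_CODES = {
    "US": "1",   # United States (North American Numbering Plan)
    "DE": "49",  # Germany
    "JP": "81",  # Japan
    "IN": "91",  # India
}

def to_e164(alpha2: str, national_number: str) -> str:
    """Prefix a national significant number with its country calling code."""
    code = CALLING_CODES[alpha2.upper()]
    digits = "".join(ch for ch in national_number if ch.isdigit())
    # Trunk prefixes (e.g. a leading 0 in many national plans) are not handled;
    # the caller is assumed to pass the national significant number.
    return f"+{code}{digits}"

print(to_e164("US", "212 555 0100"))  # -> +12125550100
When a call is actually placed, the plus sign stands for the caller's international call prefix, for example 00 in much of Europe or 011 in North America.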
The developers of ISO 3166 intended that in time it would replace other coding systems.
Country identities may be encoded in the following coding systems:
A - B - C - D–E - F - G - H–I - J–K - L - M - N - O–Q - R - S - T - U–Z | [
{
"paragraph_id": 0,
"text": "A country code is a short alphanumeric identification code for countries and dependent areas. Its primary use is in data processing and communications. Several identification systems have been developed.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The term country code frequently refers to ISO 3166-1 alpha-2, as well as the telephone country code, which is embodied in the E.164 recommendation by the International Telecommunication Union (ITU).",
"title": ""
},
{
"paragraph_id": 2,
"text": "The standard ISO 3166-1 defines short identification codes for most countries and dependent areas:",
"title": "ISO 3166-1"
},
{
"paragraph_id": 3,
"text": "The two-letter codes are used as the basis for other codes and applications, for example,",
"title": "ISO 3166-1"
},
{
"paragraph_id": 4,
"text": "Other applications are defined in ISO 3166-1 alpha-2.",
"title": "ISO 3166-1"
},
{
"paragraph_id": 5,
"text": "In telecommunication, a country code, or international subscriber dialing (ISD) code, is a telephone number prefix used in international direct dialing (IDD) and for destination routing of telephone calls to a country other than the caller's. A country or region with an autonomous telephone administration must apply for membership in the International Telecommunication Union (ITU) to participate in the international public switched telephone network (PSTN). County codes are defined by the ITU-T section of the ITU in standards E.123 and E.164.",
"title": "ITU country codes"
},
{
"paragraph_id": 6,
"text": "Country codes constitute the international telephone numbering plan, and are dialed only when calling a telephone number in another country. They are dialed before the national telephone number. International calls require at least one additional prefix to be dialing before the country code, to connect the call to international circuits, the international call prefix. When printing telephone numbers this is indicated by a plus-sign (+) in front of a complete international telephone number, per recommendation E164 by the ITU.",
"title": "ITU country codes"
},
{
"paragraph_id": 7,
"text": "The developers of ISO 3166 intended that in time it would replace other coding systems.",
"title": "Other country codes"
},
{
"paragraph_id": 8,
"text": "Country identities may be encoded in the following coding systems:",
"title": "Other codings"
},
{
"paragraph_id": 9,
"text": "A - B - C - D–E - F - G - H–I - J–K - L - M - N - O–Q - R - S - T - U–Z",
"title": "Lists of country codes by country"
}
] | A country code is a short alphanumeric identification code for countries and dependent areas. Its primary use is in data processing and communications. Several identification systems have been developed. The term country code frequently refers to ISO 3166-1 alpha-2, as well as the telephone country code, which is embodied in the E.164 recommendation by the International Telecommunication Union (ITU). | 2001-11-17T17:48:59Z | 2023-12-12T01:05:27Z | [
"Template:Cite journal",
"Template:Statoids",
"Template:Telecommunications",
"Template:Short description",
"Template:Reflist"
] | https://en.wikipedia.org/wiki/Country_code |
5,376 | Cladistics | Cladistics (/kləˈdɪstɪks/; from Ancient Greek κλάδος (kládos) 'branch') is an approach to biological classification in which organisms are categorized in groups ("clades") based on hypotheses of most recent common ancestry. The evidence for hypothesized relationships is typically shared derived characteristics (synapomorphies) that are not present in more distant groups and ancestors. However, from an empirical perspective, common ancestors are inferences based on a cladistic hypothesis of relationships of taxa whose character states can be observed. Theoretically, a last common ancestor and all its descendants constitute a (minimal) clade. Importantly, all descendants stay in their overarching ancestral clade. For example, if the terms worms or fishes were used within a strict cladistic framework, these terms would include humans. Many of these terms are normally used paraphyletically, outside of cladistics, e.g. as a 'grade', which are fruitless to precisely delineate, especially when including extinct species. Radiation results in the generation of new subclades by bifurcation, but in practice sexual hybridization may blur very closely related groupings.
As a hypothesis, a clade can be rejected only if some groupings were explicitly excluded. It may then be found that the excluded group did actually descend from the last common ancestor of the group, and thus emerged within the group. ("Evolved from" is misleading, because in cladistics all descendants stay in the ancestral group). Upon finding that the group is paraphyletic this way, either the excluded groups should be included in the clade, or the group should be abolished.
Branches down to the divergence to the next significant (e.g. extant) sister are considered stem-groupings of the clade, but in principle each level stands on its own, to be assigned a unique name. For a fully bifurcated tree, adding a group to a tree also adds an additional (named) clade, and a new level on that branch. In particular, extinct groups are always placed on a side branch, without distinguishing whether an actual ancestor of other groupings has been found.
The techniques and nomenclature of cladistics have been applied to disciplines other than biology. (See phylogenetic nomenclature.)
Cladistic findings pose a difficulty for taxonomy, where the rank and (genus-)naming of established groupings may turn out to be inconsistent.
Cladistics is now the most commonly used method to classify organisms.
The original methods used in cladistic analysis and the school of taxonomy derived from the work of the German entomologist Willi Hennig, who referred to it as phylogenetic systematics (also the title of his 1966 book); the terms "cladistics" and "clade" were popularized by other researchers. Cladistics in the original sense refers to a particular set of methods used in phylogenetic analysis, although it is now sometimes used to refer to the whole field.
What is now called the cladistic method appeared as early as 1901 with a work by Peter Chalmers Mitchell for birds and subsequently by Robert John Tillyard (for insects) in 1921, and W. Zimmermann (for plants) in 1943. The term "clade" was introduced in 1958 by Julian Huxley after having been coined by Lucien Cuénot in 1940, "cladogenesis" in 1958, "cladistic" by Arthur Cain and Harrison in 1960, "cladist" (for an adherent of Hennig's school) by Ernst Mayr in 1965, and "cladistics" in 1966. Hennig referred to his own approach as "phylogenetic systematics". From the time of his original formulation until the end of the 1970s, cladistics competed as an analytical and philosophical approach to systematics with phenetics and so-called evolutionary taxonomy. Phenetics was championed at this time by the numerical taxonomists Peter Sneath and Robert Sokal, and evolutionary taxonomy by Ernst Mayr.
Originally conceived, if only in essence, by Willi Hennig in a book published in 1950, cladistics did not flourish until its translation into English in 1966 (Lewin 1997). Today, cladistics is the most popular method for inferring phylogenetic trees from morphological data.
In the 1990s, the development of effective polymerase chain reaction techniques allowed the application of cladistic methods to biochemical and molecular genetic traits of organisms, vastly expanding the amount of data available for phylogenetics. At the same time, cladistics rapidly became popular in evolutionary biology, because computers made it possible to process large quantities of data about organisms and their characteristics.
The cladistic method interprets each shared character state transformation as a potential piece of evidence for grouping. Synapomorphies (shared, derived character states) are viewed as evidence of grouping, while symplesiomorphies (shared ancestral character states) are not. The outcome of a cladistic analysis is a cladogram – a tree-shaped diagram (dendrogram) that is interpreted to represent the best hypothesis of phylogenetic relationships. Although traditionally such cladograms were generated largely on the basis of morphological characters and originally calculated by hand, genetic sequencing data and computational phylogenetics are now commonly used in phylogenetic analyses, and the parsimony criterion has been abandoned by many phylogeneticists in favor of more "sophisticated" but less parsimonious evolutionary models of character state transformation. Cladists contend that these models are unjustified because there is no evidence that they recover more "true" or "correct" results from actual empirical data sets.
Every cladogram is based on a particular dataset analyzed with a particular method. Datasets are tables consisting of molecular, morphological, ethological and/or other characters and a list of operational taxonomic units (OTUs), which may be genes, individuals, populations, species, or larger taxa that are presumed to be monophyletic and therefore to form, all together, one large clade; phylogenetic analysis infers the branching pattern within that clade. Different datasets and different methods, not to mention violations of the mentioned assumptions, often result in different cladograms. Only scientific investigation can show which is more likely to be correct.
Until recently, for example, cladograms in which turtles branch off first, outside a clade containing lizards, crocodilians, and birds, were generally accepted as accurate representations of the ancestral relations among these four groups.
If this phylogenetic hypothesis is correct, then the last common ancestor of turtles and birds lived earlier than the last common ancestor of lizards and birds. Most molecular evidence, however, produces cladograms in which turtles are the sister group of crocodilians and birds, with lizards branching off earlier.
If this is accurate, then the last common ancestor of turtles and birds lived later than the last common ancestor of lizards and birds. Since the cladograms show two mutually exclusive hypotheses to describe the evolutionary history, at most one of them is correct.
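As a rough illustration of how competing cladograms are compared, the sketch below scores the two topologies just described with Fitch parsimony, the classic way of counting the minimum number of character-state changes a tree requires. Everything in it is hypothetical: the binary character matrix is invented purely for illustration, a mammal outgroup is added only to root the trees, and real analyses use far larger datasets and dedicated phylogenetics software rather than a toy script.
# Fitch parsimony sketch. Trees are nested tuples of taxon names; for each
# character, fitch() returns the possible root states and the minimum number
# of state changes the tree requires for that character.
def fitch(tree, states):
    if isinstance(tree, str):                 # leaf: its observed state, zero cost
        return {states[tree]}, 0
    child_sets, cost = [], 0
    for child in tree:
        s, c = fitch(child, states)
        child_sets.append(s)
        cost += c
    common = set.intersection(*child_sets)
    if common:                                # children can agree: no extra change
        return common, cost
    return set.union(*child_sets), cost + 1   # disagreement: one more change

def parsimony_score(tree, matrix):
    """Total number of changes required over all characters."""
    return sum(fitch(tree, states)[1] for states in matrix.values())

# The two hypotheses from the text, rooted with a hypothetical mammal outgroup.
traditional = ("mammals", ("turtles", ("lizards", ("crocodilians", "birds"))))
molecular   = ("mammals", ("lizards", ("turtles", ("crocodilians", "birds"))))

# Invented binary characters (0 = ancestral, 1 = derived), for illustration only.
matrix = {
    "c1": {"mammals": 0, "turtles": 0, "lizards": 1, "crocodilians": 1, "birds": 1},
    "c2": {"mammals": 0, "turtles": 1, "lizards": 0, "crocodilians": 1, "birds": 1},
    "c3": {"mammals": 0, "turtles": 1, "lizards": 0, "crocodilians": 1, "birds": 1},
    "c4": {"mammals": 0, "turtles": 0, "lizards": 0, "crocodilians": 1, "birds": 1},
}

for name, tree in [("traditional", traditional), ("molecular", molecular)]:
    print(name, parsimony_score(tree, matrix))   # traditional: 6, molecular: 5
With this made-up matrix the molecular topology requires one fewer change and would be preferred under the parsimony criterion; on real data, parsimony and model-based programs search over many candidate trees rather than comparing just two.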
The cladogram of the primates represents the current universally accepted hypothesis that all primates, including strepsirrhines like the lemurs and lorises, had a common ancestor all of whose descendants are or were primates, and so form a clade; the name Primates is therefore recognized for this clade. Within the primates, all anthropoids (monkeys, apes, and humans) are hypothesized to have had a common ancestor all of whose descendants are or were anthropoids, so they form the clade called Anthropoidea. The "prosimians", on the other hand, form a paraphyletic taxon. The name Prosimii is not used in phylogenetic nomenclature, which names only clades; the "prosimians" are instead divided between the clades Strepsirhini and Haplorhini, where the latter contains Tarsiiformes and Anthropoidea.
Lemurs and tarsiers may look closely related to humans, in the sense of being close to humans on the evolutionary tree. However, from the perspective of a tarsier, humans and lemurs would look close in exactly the same sense. Cladistics forces a neutral perspective, treating all branches (extant or extinct) in the same manner. It also forces one to try to make statements, and honestly take into account findings, about the exact historic relationships between the groups.
The following terms, coined by Hennig, are used to identify shared or distinct character states among groups:
The terms plesiomorphy and apomorphy are relative; their application depends on the position of a group within a tree. For example, when trying to decide whether the tetrapods form a clade, an important question is whether having four limbs is a synapomorphy of the earliest taxa to be included within Tetrapoda: did all the earliest members of the Tetrapoda inherit four limbs from a common ancestor, whereas all other vertebrates did not, or at least not homologously? By contrast, for a group within the tetrapods, such as birds, having four limbs is a plesiomorphy. Using these two terms allows a greater precision in the discussion of homology, in particular allowing clear expression of the hierarchical relationships among different homologous features.
It can be difficult to decide whether a character state is in fact the same and thus can be classified as a synapomorphy, which may identify a monophyletic group, or whether it only appears to be the same and is thus a homoplasy, which cannot identify such a group. There is a danger of circular reasoning: assumptions about the shape of a phylogenetic tree are used to justify decisions about character states, which are then used as evidence for the shape of the tree. Phylogenetics uses various forms of parsimony to decide such questions; the conclusions reached often depend on the dataset and the methods. Such is the nature of empirical science, and for this reason, most cladists refer to their cladograms as hypotheses of relationship. Cladograms that are supported by a large number and variety of different kinds of characters are viewed as more robust than those based on more limited evidence.
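To make the parsimony comparison concrete, the hypothetical sketch below scores two competing topologies against the same character matrix and prefers the one that implies fewer state changes. The taxa, the 0/1 codings and the helper names are invented purely for illustration; real analyses use dedicated software and far larger matrices.

```python
# Minimal sketch: choose between two candidate trees by total parsimony score.

def fitch(tree, states):
    """(possible ancestral states, minimum changes) for one character on `tree`."""
    if isinstance(tree, str):
        return {states[tree]}, 0
    left, right = tree
    lset, lcost = fitch(left, states)
    rset, rcost = fitch(right, states)
    common = lset & rset
    return (common, lcost + rcost) if common else (lset | rset, lcost + rcost + 1)

def parsimony_score(tree, matrix):
    """Sum of implied state changes over every character column in the matrix."""
    n_chars = len(next(iter(matrix.values())))
    return sum(fitch(tree, {t: row[i] for t, row in matrix.items()})[1]
               for i in range(n_chars))

# Hypothetical matrix: A and B share two derived states, as do C and D.
matrix = {"A": (1, 1, 0, 0), "B": (1, 1, 0, 0), "C": (0, 0, 1, 1), "D": (0, 0, 1, 1)}

tree1 = (("A", "B"), ("C", "D"))   # groups the taxa by their shared derived states
tree2 = (("A", "C"), ("B", "D"))   # an alternative hypothesis
print(parsimony_score(tree1, matrix), parsimony_score(tree2, matrix))   # 4 8
```

Under the parsimony criterion the first topology, requiring four changes instead of eight, is the preferred hypothesis for this dataset; a different matrix or a different optimality criterion could favour another tree, which is why cladograms are treated as hypotheses of relationship.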
Mono-, para- and polyphyletic taxa can be understood based on the shape of the tree (as done above), as well as based on their character states. These are compared in the table below.
Cladistics, either generally or in specific applications, has been criticized from its beginnings. Decisions as to whether particular character states are homologous, a precondition of their being synapomorphies, have been challenged as involving circular reasoning and subjective judgements. Of course, the potential unreliability of evidence is a problem for any systematic method, or for that matter, for any empirical scientific endeavor at all.
Transformed cladistics arose in the late 1970s in an attempt to resolve some of these problems by removing a priori assumptions about phylogeny from cladistic analysis, but it has remained unpopular.
The cladistic method does not identify fossil species as actual ancestors of a clade. Instead, fossil taxa are identified as belonging to separate extinct branches. While a fossil species could be the actual ancestor of a clade, there is no way to know that. Therefore, a more conservative hypothesis is that the fossil taxon is related to other fossil and extant taxa, as implied by the pattern of shared apomorphic features.
An otherwise extinct group with any extant descendants is not considered (literally) extinct and, for instance, does not have a date of extinction.
Anything having to do with biology and sex is complicated and messy, and cladistics is no exception. Many species reproduce sexually and remain capable of interbreeding for millions of years. Worse, during such a period many branches may have radiated, and it may take hundreds of millions of years for them to be whittled down to just two. Only then can one, in theory, assign proper last common ancestors of groupings that do not inadvertently include earlier branches. True cladistic bifurcation can thus take far longer than is usually appreciated. In practice, for recent radiations, cladistically guided findings give only a coarse impression of the complexity; a more detailed account describes the fractions of introgression between groupings, and even their geographic variation. This has been used as an argument for paraphyletic groupings, although other reasons are typically cited.
Horizontal gene transfer is the movement of genetic information between different organisms, with immediate or delayed effects for the recipient host. Several natural processes can cause horizontal gene transfer. It typically does not directly interfere with the ancestry of the organism, but it can complicate the determination of that ancestry. On another level, the horizontal gene transfer processes themselves can be mapped by determining the phylogeny of the individual genes using cladistics.
When mutual relationships are unclear, there are many possible trees, and assigning names to each possible clade may not be prudent. Furthermore, established names are either discarded in cladistics or carry connotations that may no longer hold, for example when additional groups are found to have emerged within them. Naming changes are the direct result of changes in the recognition of mutual relationships, which is often still in flux, especially for extinct species. Hanging on to older names and/or connotations is counter-productive, as they typically do not reflect actual mutual relationships precisely. For example, Archaea, Asgard archaea, protists, slime molds, worms, invertebrata, fishes, reptilia, monkeys, Ardipithecus, Australopithecus and Homo erectus all contain Homo sapiens cladistically, in their sensu lato meaning. For originally extinct stem groups, sensu lato generally means generously keeping previously included groups, which may then come to include even living species. A pruned sensu stricto meaning is often adopted instead, but the group would need to be restricted to a single branch on the stem; other branches then get their own name and level. This is commensurate with the fact that more senior stem branches are in fact more closely related to the resulting group than the more basal stem branches; that those stem branches may have lived for only a short time does not affect that assessment in cladistics.
The comparisons used to acquire data on which cladograms can be based are not limited to the field of biology. Any group of individuals or classes that are hypothesized to have a common ancestor, and to which a set of common characteristics may or may not apply, can be compared pairwise. Cladograms can be used to depict the hypothetical descent relationships within groups of items in many different academic realms. The only requirement is that the items have characteristics that can be identified and measured.
Anthropology and archaeology: Cladistic methods have been used to reconstruct the development of cultures or artifacts using groups of cultural traits or artifact features.
Comparative mythology and folktale studies: Cladistic methods have been used to reconstruct the protoversions of many myths. Mythological phylogenies constructed with mythemes support low rates of horizontal transmission (borrowing), historical (sometimes Palaeolithic) diffusion, and punctuated evolution. They are also a powerful way to test hypotheses about cross-cultural relationships among folktales.
Literature: Cladistic methods have been used in the classification of the surviving manuscripts of the Canterbury Tales, and the manuscripts of the Sanskrit Charaka Samhita.
Historical linguistics: Cladistic methods have been used to reconstruct the phylogeny of languages using linguistic features. This is similar to the traditional comparative method of historical linguistics, but is more explicit in its use of parsimony and allows much faster analysis of large datasets (computational phylogenetics).
Textual criticism or stemmatics: Cladistic methods have been used to reconstruct the phylogeny of manuscripts of the same work (and reconstruct the lost original) using distinctive copying errors as apomorphies. This differs from traditional historical-comparative linguistics in enabling the editor to evaluate and place in genetic relationship large groups of manuscripts with large numbers of variants that would be impossible to handle manually. It also enables parsimony analysis of contaminated traditions of transmission that would be impossible to evaluate manually in a reasonable period of time.
Astrophysics: Cladistic methods have been used to infer the history of relationships between galaxies and to create branching-diagram hypotheses of galaxy diversification.
5,377 | Calendar | A calendar is a system of organizing days. This is done by giving names to periods of time, typically days, weeks, months and years. A date is the designation of a single and specific day within such a system. A calendar is also a physical record (often paper) of such a system. A calendar can also mean a list of planned events, such as a court calendar, or a partly or fully chronological list of documents, such as a calendar of wills.
Periods in a calendar (such as years and months) are usually, though not necessarily, synchronized with the cycle of the sun or the moon. The most common type of pre-modern calendar was the lunisolar calendar, a lunar calendar that occasionally adds one intercalary month to remain synchronized with the solar year over the long term.
The term calendar is taken from kalendae, the term for the first day of the month in the Roman calendar, related to the verb calare 'to call out', referring to the "calling" of the new moon when it was first seen. Latin calendarium meant 'account book, register' (as accounts were settled and debts were collected on the calends of each month). The Latin term was adopted in Old French as calendier and from there in Middle English as calender by the 13th century (the spelling calendar is early modern).
The course of the sun and the moon are the most salient regularly recurring natural events useful for timekeeping, and in pre-modern societies around the world lunation and the year were most commonly used as time units. Nevertheless, the Roman calendar contained remnants of a very ancient pre-Etruscan 10-month solar year.
The first recorded physical calendars, dependent on the development of writing in the Ancient Near East, are the Bronze Age Egyptian and Sumerian calendars.
During the Vedic period India developed a sophisticated timekeeping methodology and calendars for Vedic rituals. According to Yukio Ohashi, the Vedanga calendar in ancient India was based on astronomical studies during the Vedic Period and was not derived from other cultures.
A large number of calendar systems in the Ancient Near East were based on the Babylonian calendar dating from the Iron Age, among them the calendar system of the Persian Empire, which in turn gave rise to the Zoroastrian calendar and the Hebrew calendar.
A great number of Hellenic calendars were developed in Classical Greece, and during the Hellenistic period they gave rise to the ancient Roman calendar and to various Hindu calendars.
Calendars in antiquity were lunisolar, depending on the introduction of intercalary months to align the solar and the lunar years. This was mostly based on observation, but there may have been early attempts to model the pattern of intercalation algorithmically, as evidenced in the fragmentary 2nd-century Coligny calendar.
The Roman calendar was reformed by Julius Caesar in 46 BC. His "Julian" calendar was no longer dependent on the observation of the new moon, but followed an algorithm of introducing a leap day every four years. This created a dissociation of the calendar month from lunation. The Gregorian calendar, introduced in 1582, corrected most of the remaining difference between the Julian calendar and the solar year.
The Islamic calendar is based on the prohibition of intercalation (nasi') by Muhammad, in Islamic tradition dated to a sermon given on 9 Dhu al-Hijjah AH 10 (Julian date: 6 March 632). This resulted in an observation-based lunar calendar that shifts relative to the seasons of the solar year.
There have been several modern proposals for reform of the modern calendar, such as the World Calendar, the International Fixed Calendar, the Holocene calendar, and the Hanke–Henry Permanent Calendar. Such ideas are mooted from time to time, but have failed to gain traction because of the loss of continuity and the massive upheaval that implementing them would involve, as well as their effect on cycles of religious activity.
A full calendar system has a different calendar date for every day. Thus the week cycle is by itself not a full calendar system; neither is a system to name the days within a year without a system for identifying the years.
The simplest calendar system just counts time periods from a reference date, as with the Julian day number or Unix time. Virtually the only possible variation is using a different reference date, in particular, one less distant in the past to make the numbers smaller. Computations in these systems are just a matter of addition and subtraction.
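As a minimal sketch of such a system (using the Unix epoch as the reference date; the helper names below are our own, not a standard API), dates can be reduced to integer day counts so that date arithmetic becomes plain addition and subtraction:

```python
# Minimal sketch of a "count from a reference date" calendar: a date becomes an
# integer number of whole days since the Unix epoch (1 January 1970 UTC).
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def to_day_number(d: datetime) -> int:
    return (d - EPOCH) // timedelta(days=1)     # whole days since the epoch

def from_day_number(n: int) -> datetime:
    return EPOCH + timedelta(days=n)

d = datetime(2024, 3, 1, tzinfo=timezone.utc)
n = to_day_number(d)
print(n, from_day_number(n + 90).date())        # adding 90 days is simply n + 90
```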
Other calendars have one (or multiple) larger units of time.
Calendars that contain one level of cycles:
Calendars with two levels of cycles:
Cycles can be synchronized with periodic phenomena:
Very commonly a calendar includes more than one type of cycle or has both cyclic and non-cyclic elements.
Most calendars incorporate more complex cycles. For example, the vast majority of them track years, months, weeks and days. The seven-day week is practically universal, though its use varies. It has run uninterrupted for millennia.
Solar calendars assign a date to each solar day. A day may consist of the period between sunrise and sunset, with a following period of night, or it may be a period between successive events such as two sunsets. The length of the interval between two such successive events may be allowed to vary slightly during the year, or it may be averaged into a mean solar day. Other types of calendar may also use a solar day.
Not all calendars use the solar year as a unit. A lunar calendar is one in which days are numbered within each lunar phase cycle. Because the length of the lunar month is not an even fraction of the length of the tropical year, a purely lunar calendar quickly drifts against the seasons, which do not vary much near the equator. It does, however, stay constant with respect to other phenomena, notably tides. An example is the Islamic calendar. Alexander Marshack, in a controversial reading, believed that marks on a bone baton (c. 25,000 BC) represented a lunar calendar. Other marked bones may also represent lunar calendars. Similarly, Michael Rappenglueck believes that marks on a 15,000-year-old cave painting represent a lunar calendar.
A lunisolar calendar is a lunar calendar that compensates by adding an extra month as needed to realign the months with the seasons. Prominent examples of lunisolar calendar are Hindu calendar and Buddhist calendar that are popular in South Asia and Southeast Asia. Another example is the Hebrew calendar, which uses a 19-year cycle.
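A quick arithmetic check shows why a 19-year cycle works: 19 tropical years come out to almost exactly 235 lunations, which is 19 years of 12 months plus 7 intercalary months (the Metonic relation). The constants below are rounded mean values.

```python
# Back-of-the-envelope check of the 19-year (Metonic) lunisolar cycle.
TROPICAL_YEAR = 365.2422      # mean days per tropical year
SYNODIC_MONTH = 29.530588     # mean days per lunation

print(19 * TROPICAL_YEAR / SYNODIC_MONTH)   # ~234.997 lunations in 19 years
print(19 * 12 + 7)                          # 235 months: 19 ordinary years + 7 leap months
```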
Nearly all calendar systems group consecutive days into "months" and also into "years". In a solar calendar a year approximates Earth's tropical year (that is, the time it takes for a complete cycle of seasons), traditionally used to facilitate the planning of agricultural activities. In a lunar calendar, the month approximates the cycle of the moon phase. Consecutive days may be grouped into other periods such as the week.
Because the number of days in the tropical year is not a whole number, a solar calendar must have a different number of days in different years. This may be handled, for example, by adding an extra day in leap years. The same applies to months in a lunar calendar and also the number of months in a year in a lunisolar calendar. This is generally known as intercalation. Even if a calendar is solar, but not lunar, the year cannot be divided entirely into months that never vary in length.
Cultures may define other units of time, such as the week, for the purpose of scheduling regular activities that do not easily coincide with months or years. Many cultures use different baselines for their calendars' starting years. Historically, several countries have based their calendars on regnal years, a calendar based on the reign of their current sovereign. For example, the year 2006 in Japan is year 18 Heisei, with Heisei being the era name of Emperor Akihito.
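For example, the regnal-year figure quoted above is just an offset from the era's first year; a minimal sketch, assuming the usual convention that the era's opening calendar year counts as year 1:

```python
# Regnal-year conversion for the Heisei era, which began in 1989 (Heisei 1).
HEISEI_START = 1989

def heisei_year(gregorian_year: int) -> int:
    return gregorian_year - HEISEI_START + 1

print(heisei_year(2006))   # 18, i.e. "year 18 Heisei"
```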
An astronomical calendar is based on ongoing observation; examples are the religious Islamic calendar and the old religious Jewish calendar in the time of the Second Temple. Such a calendar is also referred to as an observation-based calendar. The advantage of such a calendar is that it is perfectly and perpetually accurate. The disadvantage is that working out when a particular date would occur is difficult.
An arithmetic calendar is one that is based on a strict set of rules; an example is the current Jewish calendar. Such a calendar is also referred to as a rule-based calendar. The advantage of such a calendar is the ease of calculating when a particular date occurs. The disadvantage is imperfect accuracy. Furthermore, even if the calendar is very accurate, its accuracy diminishes slowly over time, owing to changes in Earth's rotation. This limits the lifetime of an accurate arithmetic calendar to a few thousand years. After then, the rules would need to be modified from observations made since the invention of the calendar.
Calendars may be either complete or incomplete. Complete calendars provide a way of naming each consecutive day, while incomplete calendars do not. The early Roman calendar, which had no way of designating the days of the winter months other than to lump them together as "winter", is an example of an incomplete calendar, while the Gregorian calendar is an example of a complete calendar.
The primary practical use of a calendar is to identify days: to be informed about or to agree on a future event and to record an event that has happened. Days may be significant for agricultural, civil, religious, or social reasons. For example, a calendar provides a way to determine when to start planting or harvesting, which days are religious or civil holidays, which days mark the beginning and end of business accounting periods, and which days have legal significance, such as the day taxes are due or a contract expires. Also, a calendar may, by identifying a day, provide other useful information about the day such as its season.
Calendars are also used as part of a complete timekeeping system: date and time of day together specify a moment in time. In the modern world, timekeepers can show time, date, and weekday. Some may also show the lunar phase.
The Gregorian calendar is the de facto international standard and is used almost everywhere in the world for civil purposes. Its solar aspect is a cycle of leap days within a 400-year cycle, designed to keep the duration of the calendar year aligned with the solar year. It also has a lunar aspect, which approximates the position of the moon during the year and is used in the calculation of the date of Easter. Each Gregorian year has either 365 or 366 days (the leap day being inserted as 29 February), amounting to an average Gregorian year of 365.2425 days (compared to a solar year of 365.2422 days).
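A minimal sketch of the leap-day rule and the mean year length it produces; the figures follow directly from counting leap days over one 400-year cycle:

```python
# Gregorian leap-day rule and the resulting mean year length.
def is_gregorian_leap(year: int) -> bool:
    # every 4th year, except century years, unless divisible by 400
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

leap_days = sum(is_gregorian_leap(y) for y in range(2000, 2400))
print(leap_days)               # 97 leap days per 400-year cycle
print(365 + leap_days / 400)   # 365.2425 mean days per Gregorian year
print(365.25 - 365.2425)       # 0.0075 days/year shorter than the Julian mean year,
                               # i.e. roughly the 0.002% correction noted below
```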
The calendar was introduced in 1582 as a refinement to the Julian calendar, which had been in use throughout the European Middle Ages, amounting to a 0.002% correction in the length of the year. During the Early Modern period, its adoption was mostly limited to Roman Catholic nations, but by the 19th century it had become widely adopted for the sake of convenience in international trade. The last European country to adopt it was Greece, in 1923.
The calendar epoch used by the Gregorian calendar is inherited from the medieval convention established by Dionysius Exiguus and associated with the Julian calendar. The year number is variously given as AD (for Anno Domini) or CE (for Common Era or Christian Era).
The most important use of pre-modern calendars is keeping track of the liturgical year and the observation of religious feast days.
While the Gregorian calendar is itself historically motivated to the calculation of the Easter date, it is now in worldwide secular use as the de facto standard. Alongside the use of the Gregorian calendar for secular matters, there remain several calendars in use for religious purposes.
Western Christian liturgical calendars are based on the cycle of the Roman Rite of the Catholic Church and generally include the liturgical seasons of Advent, Christmas, Ordinary Time (Time after Epiphany), Lent, Easter, and Ordinary Time (Time after Pentecost). Some Christian calendars do not include Ordinary Time and every day falls into a denominated season.
Eastern Christians, including the Orthodox Church, use the Julian calendar.
The Islamic calendar or Hijri calendar is a lunar calendar consisting of 12 lunar months in a year of 354 or 355 days. It is used to date events in most of the Muslim countries (concurrently with the Gregorian calendar) and used by Muslims everywhere to determine the proper day on which to celebrate Islamic holy days and festivals. Its epoch is the Hijra (corresponding to AD 622). With an annual drift of 11 or 12 days, the seasonal relation is repeated approximately every 33 Islamic years.
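A rough arithmetic check of that drift, using rounded mean astronomical values:

```python
# Why a purely lunar year drifts through the seasons.
SYNODIC_MONTH = 29.530588
SOLAR_YEAR = 365.2422

islamic_year = 12 * SYNODIC_MONTH                # ~354.37 days
drift = SOLAR_YEAR - islamic_year                # ~10.9 days of drift per year
print(islamic_year, drift, SOLAR_YEAR / drift)   # a full seasonal cycle in ~33.6 lunar years
```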
Various Hindu calendars remain in use in the Indian subcontinent, including the Nepali calendars, Bengali calendar, Malayalam calendar, Tamil calendar, Vikrama Samvat used in Northern India, and Shalivahana calendar in the Deccan states.
The Buddhist calendar and the traditional lunisolar calendars of Cambodia, Laos, Myanmar, Sri Lanka and Thailand are also based on an older version of the Hindu calendar.
Most of the Hindu calendars are inherited from a system first enunciated in Vedanga Jyotisha of Lagadha, standardized in the Sūrya Siddhānta and subsequently reformed by astronomers such as Āryabhaṭa (AD 499), Varāhamihira (6th century) and Bhāskara II (12th century).
The Hebrew calendar is used by Jews worldwide for religious and cultural affairs; it also influences civil matters in Israel (such as national holidays) and can be used there in business dealings (such as for the dating of cheques).
Followers of the Baháʼí Faith use the Baháʼí calendar, also known as the Badi calendar, which was first established by the Bab in the Kitab-i-Asma. It is a purely solar calendar and comprises 19 months of 19 days each.
The Chinese, Hebrew, Hindu, and Julian calendars are widely used for religious and social purposes.
The Iranian (Persian) calendar is used in Iran and some parts of Afghanistan. The Assyrian calendar is in use by the members of the Assyrian community in the Middle East (mainly Iraq, Syria, Turkey, and Iran) and the diaspora. The first year of the calendar is exactly 4750 years prior to the start of the Gregorian calendar. The Ethiopian calendar or Ethiopic calendar is the principal calendar used in Ethiopia and Eritrea, with the Oromo calendar also in use in some areas. In neighboring Somalia, the Somali calendar co-exists alongside the Gregorian and Islamic calendars. In Thailand, where the Thai solar calendar is used, the months and days have adopted the western standard, although the years are still based on the traditional Buddhist calendar.
A fiscal calendar generally means the accounting year of a government or a business. It is used for budgeting, keeping accounts, and taxation. It is a set of 12 months that may start at any date in a year. The US government's fiscal year starts on 1 October and ends on 30 September. The government of India's fiscal year starts on 1 April and ends on 31 March. Small traditional businesses in India start the fiscal year on Diwali festival and end the day before the next year's Diwali festival.
In accounting (and particularly accounting software), a fiscal calendar (such as a 4/4/5 calendar) fixes each month at a specific number of weeks to facilitate comparisons from month to month and year to year. In a 4/4/5 calendar, for example, January always has exactly 4 weeks (Sunday through Saturday), February has 4 weeks, March has 5 weeks, and so on. Such a calendar normally needs to add a 53rd week every fifth or sixth year, which might be added to December or might not be, depending on how the organization uses those dates. There is an international standard for week numbering (the ISO week): ISO weeks start on a Monday and end on a Sunday, and week 1 is always the week that contains 4 January in the Gregorian calendar.
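The ISO week rule is available directly in Python's standard library; a minimal sketch, with example dates chosen only to show the year-boundary behaviour:

```python
# ISO-8601 week numbering: weeks run Monday to Sunday, and week 1 is the week
# containing 4 January.
from datetime import date

for d in (date(2021, 1, 1), date(2021, 1, 4)):
    iso_year, iso_week, iso_weekday = d.isocalendar()
    print(d, "-> ISO year", iso_year, "week", iso_week, "weekday", iso_weekday)

# 1 January 2021 (a Friday) belongs to week 53 of ISO year 2020, because week 1
# of 2021 only starts on Monday 4 January 2021.
```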
The term calendar applies not only to a given scheme of timekeeping but also to a specific record or device displaying such a scheme, for example, an appointment book in the form of a pocket calendar (or personal organizer), desktop calendar, a wall calendar, etc.
In a paper calendar, one or two sheets can show a single day, a week, a month, or a year. If a sheet is for a single day, it easily shows the date and the weekday. If a sheet is for multiple days it shows a conversion table to convert from weekday to date and back. With a special pointing device, or by crossing out past days, it may indicate the current date and weekday. This is the most common usage of the word.
In the US, Sunday is considered the first day of the week and so appears on the far left, with Saturday, the last day of the week, on the far right. In Britain, the weekend may appear at the end of the week, so the first day is Monday and the last day is Sunday. The US calendar display is also used in Britain.
It is common to display the Gregorian calendar in separate monthly grids of seven columns (from Monday to Sunday, or Sunday to Saturday depending on which day is considered to start the week – this varies according to country) and five to six rows (or rarely, four rows when the month of February contains 28 days in common years beginning on the first day of the week), with the day of the month numbered in each cell, beginning with 1. The sixth row is sometimes eliminated by marking 23/30 and 24/31 together as necessary.
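For instance, Python's standard calendar module produces exactly this kind of monthly grid, with the first weekday configurable; the rare four-row February mentioned above can be seen for February 2021 (28 days, beginning on a Monday):

```python
# Rendering a month as the usual weekly grid with the standard library.
import calendar

print(calendar.TextCalendar(firstweekday=calendar.MONDAY).formatmonth(2021, 2))
# February 2021: 28 days beginning on a Monday, so a Monday-first grid needs
# only four rows.

print(calendar.TextCalendar(firstweekday=calendar.SUNDAY).formatmonth(2024, 2))
# The same idea with a Sunday-first (US-style) grid for a leap-year February.
```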
When working with weeks rather than months, a continuous format is sometimes more convenient, where no blank cells are inserted to ensure that the first day of a new month begins on a fresh row.
Calendaring software provides users with an electronic version of a calendar, and may additionally provide an appointment book, address book, or contact list. Calendaring is a standard feature of many PDAs, EDAs, and smartphones. The software may be a local package designed for individual use (e.g., Lightning extension for Mozilla Thunderbird, Microsoft Outlook without Exchange Server, or Windows Calendar) or may be a networked package that allows for the sharing of information between users (e.g., Mozilla Sunbird, Windows Live Calendar, Google Calendar, or Microsoft Outlook with Exchange Server).
{
"paragraph_id": 0,
"text": "A calendar is a system of organizing days. This is done by giving names to periods of time, typically days, weeks, months and years. A date is the designation of a single and specific day within such a system. A calendar is also a physical record (often paper) of such a system. A calendar can also mean a list of planned events, such as a court calendar, or a partly or fully chronological list of documents, such as a calendar of wills.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Periods in a calendar (such as years and months) are usually, though not necessarily, synchronized with the cycle of the sun or the moon. The most common type of pre-modern calendar was the lunisolar calendar, a lunar calendar that occasionally adds one intercalary month to remain synchronized with the solar year over the long term.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The term calendar is taken from kalendae, the term for the first day of the month in the Roman calendar, related to the verb calare 'to call out', referring to the \"calling\" of the new moon when it was first seen. Latin calendarium meant 'account book, register' (as accounts were settled and debts were collected on the calends of each month). The Latin term was adopted in Old French as calendier and from there in Middle English as calender by the 13th century (the spelling calendar is early modern).",
"title": "Etymology"
},
{
"paragraph_id": 3,
"text": "The course of the sun and the moon are the most salient regularly recurring natural events useful for timekeeping, and in pre-modern societies around the world lunation and the year were most commonly used as time units. Nevertheless, the Roman calendar contained remnants of a very ancient pre-Etruscan 10-month solar year.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The first recorded physical calendars, dependent on the development of writing in the Ancient Near East, are the Bronze Age Egyptian and Sumerian calendars.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "During the Vedic period India developed a sophisticated timekeeping methodology and calendars for Vedic rituals. According to Yukio Ohashi, the Vedanga calendar in ancient India was based on astronomical studies during the Vedic Period and was not derived from other cultures.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "A large number of calendar systems in the Ancient Near East were based on the Babylonian calendar dating from the Iron Age, among them the calendar system of the Persian Empire, which in turn gave rise to the Zoroastrian calendar and the Hebrew calendar.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "A great number of Hellenic calendars were developed in Classical Greece, and during the Hellenistic period they gave rise to the ancient Roman calendar and to various Hindu calendars.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Calendars in antiquity were lunisolar, depending on the introduction of intercalary months to align the solar and the lunar years. This was mostly based on observation, but there may have been early attempts to model the pattern of intercalation algorithmically, as evidenced in the fragmentary 2nd-century Coligny calendar.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The Roman calendar was reformed by Julius Caesar in 46 BC. His \"Julian\" calendar was no longer dependent on the observation of the new moon, but followed an algorithm of introducing a leap day every four years. This created a dissociation of the calendar month from lunation. The Gregorian calendar, introduced in 1582, corrected most of the remaining difference between the Julian calendar and the solar year.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The Islamic calendar is based on the prohibition of intercalation (nasi') by Muhammad, in Islamic tradition dated to a sermon given on 9 Dhu al-Hijjah AH 10 (Julian date: 6 March 632). This resulted in an observation-based lunar calendar that shifts relative to the seasons of the solar year.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "There have been several modern proposals for reform of the modern calendar, such as the World Calendar, the International Fixed Calendar, the Holocene calendar, and the Hanke–Henry Permanent Calendar. Such ideas are mooted from time to time, but have failed to gain traction because of the loss of continuity and the massive upheaval that implementing them would involve, as well as their effect on cycles of religious activity.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "A full calendar system has a different calendar date for every day. Thus the week cycle is by itself not a full calendar system; neither is a system to name the days within a year without a system for identifying the years.",
"title": "Systems"
},
{
"paragraph_id": 13,
"text": "The simplest calendar system just counts time periods from a reference date. This applies for the Julian day or Unix Time. Virtually the only possible variation is using a different reference date, in particular, one less distant in the past to make the numbers smaller. Computations in these systems are just a matter of addition and subtraction.",
"title": "Systems"
},
{
"paragraph_id": 14,
"text": "Other calendars have one (or multiple) larger units of time.",
"title": "Systems"
},
{
"paragraph_id": 15,
"text": "Calendars that contain one level of cycles:",
"title": "Systems"
},
{
"paragraph_id": 16,
"text": "Calendars with two levels of cycles:",
"title": "Systems"
},
{
"paragraph_id": 17,
"text": "Cycles can be synchronized with periodic phenomena:",
"title": "Systems"
},
{
"paragraph_id": 18,
"text": "Very commonly a calendar includes more than one type of cycle or has both cyclic and non-cyclic elements.",
"title": "Systems"
},
{
"paragraph_id": 19,
"text": "Most calendars incorporate more complex cycles. For example, the vast majority of them track years, months, weeks and days. The seven-day week is practically universal, though its use varies. It has run uninterrupted for millennia.",
"title": "Systems"
},
{
"paragraph_id": 20,
"text": "Solar calendars assign a date to each solar day. A day may consist of the period between sunrise and sunset, with a following period of night, or it may be a period between successive events such as two sunsets. The length of the interval between two such successive events may be allowed to vary slightly during the year, or it may be averaged into a mean solar day. Other types of calendar may also use a solar day.",
"title": "Systems"
},
{
"paragraph_id": 21,
"text": "Not all calendars use the solar year as a unit. A lunar calendar is one in which days are numbered within each lunar phase cycle. Because the length of the lunar month is not an even fraction of the length of the tropical year, a purely lunar calendar quickly drifts against the seasons, which do not vary much near the equator. It does, however, stay constant with respect to other phenomena, notably tides. An example is the Islamic calendar. Alexander Marshack, in a controversial reading, believed that marks on a bone baton (c. 25,000 BC) represented a lunar calendar. Other marked bones may also represent lunar calendars. Similarly, Michael Rappenglueck believes that marks on a 15,000-year-old cave painting represent a lunar calendar.",
"title": "Systems"
},
{
"paragraph_id": 22,
"text": "A lunisolar calendar is a lunar calendar that compensates by adding an extra month as needed to realign the months with the seasons. Prominent examples of lunisolar calendar are Hindu calendar and Buddhist calendar that are popular in South Asia and Southeast Asia. Another example is the Hebrew calendar, which uses a 19-year cycle.",
"title": "Systems"
},
{
"paragraph_id": 23,
"text": "Nearly all calendar systems group consecutive days into \"months\" and also into \"years\". In a solar calendar a year approximates Earth's tropical year (that is, the time it takes for a complete cycle of seasons), traditionally used to facilitate the planning of agricultural activities. In a lunar calendar, the month approximates the cycle of the moon phase. Consecutive days may be grouped into other periods such as the week.",
"title": "Subdivisions"
},
{
"paragraph_id": 24,
"text": "Because the number of days in the tropical year is not a whole number, a solar calendar must have a different number of days in different years. This may be handled, for example, by adding an extra day in leap years. The same applies to months in a lunar calendar and also the number of months in a year in a lunisolar calendar. This is generally known as intercalation. Even if a calendar is solar, but not lunar, the year cannot be divided entirely into months that never vary in length.",
"title": "Subdivisions"
},
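A minimal sketch of the intercalation rule mentioned in the preceding entry, using the Gregorian leap-day rule (a leap day every fourth year, except in century years not divisible by 400); Python's standard library provides the equivalent calendar.isleap:

```python
import calendar

def is_gregorian_leap_year(year: int) -> bool:
    """Gregorian intercalation: insert a leap day every 4th year,
    except in century years that are not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 1900 was not a leap year; 2000 and 2024 were.
assert [is_gregorian_leap_year(y) for y in (1900, 2000, 2024)] == [False, True, True]
assert all(is_gregorian_leap_year(y) == calendar.isleap(y) for y in range(1583, 2400))
```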
{
"paragraph_id": 25,
"text": "Cultures may define other units of time, such as the week, for the purpose of scheduling regular activities that do not easily coincide with months or years. Many cultures use different baselines for their calendars' starting years. Historically, several countries have based their calendars on regnal years, a calendar based on the reign of their current sovereign. For example, the year 2006 in Japan is year 18 Heisei, with Heisei being the era name of Emperor Akihito.",
"title": "Subdivisions"
},
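The Heisei example in the preceding entry can be reproduced with a small regnal-year conversion; the era boundaries below (Heisei ran from 1989 to 2019) are our own assumption and are not stated in the text:

```python
def gregorian_to_heisei(year: int) -> int:
    """Convert a Gregorian year to a Heisei regnal year (Heisei 1 = 1989)."""
    if not 1989 <= year <= 2019:
        raise ValueError("the Heisei era covers 1989-2019 only")
    return year - 1988

assert gregorian_to_heisei(2006) == 18  # matches "year 18 Heisei" above
```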
{
"paragraph_id": 26,
"text": "",
"title": "Other types"
},
{
"paragraph_id": 27,
"text": "An astronomical calendar is based on ongoing observation; examples are the religious Islamic calendar and the old religious Jewish calendar in the time of the Second Temple. Such a calendar is also referred to as an observation-based calendar. The advantage of such a calendar is that it is perfectly and perpetually accurate. The disadvantage is that working out when a particular date would occur is difficult.",
"title": "Other types"
},
{
"paragraph_id": 28,
"text": "An arithmetic calendar is one that is based on a strict set of rules; an example is the current Jewish calendar. Such a calendar is also referred to as a rule-based calendar. The advantage of such a calendar is the ease of calculating when a particular date occurs. The disadvantage is imperfect accuracy. Furthermore, even if the calendar is very accurate, its accuracy diminishes slowly over time, owing to changes in Earth's rotation. This limits the lifetime of an accurate arithmetic calendar to a few thousand years. After then, the rules would need to be modified from observations made since the invention of the calendar.",
"title": "Other types"
},
{
"paragraph_id": 29,
"text": "Calendars may be either complete or incomplete. Complete calendars provide a way of naming each consecutive day, while incomplete calendars do not. The early Roman calendar, which had no way of designating the days of the winter months other than to lump them together as \"winter\", is an example of an incomplete calendar, while the Gregorian calendar is an example of a complete calendar.",
"title": "Other types"
},
{
"paragraph_id": 30,
"text": "The primary practical use of a calendar is to identify days: to be informed about or to agree on a future event and to record an event that has happened. Days may be significant for agricultural, civil, religious, or social reasons. For example, a calendar provides a way to determine when to start planting or harvesting, which days are religious or civil holidays, which days mark the beginning and end of business accounting periods, and which days have legal significance, such as the day taxes are due or a contract expires. Also, a calendar may, by identifying a day, provide other useful information about the day such as its season.",
"title": "Usage"
},
{
"paragraph_id": 31,
"text": "Calendars are also used as part of a complete timekeeping system: date and time of day together specify a moment in time. In the modern world, timekeepers can show time, date, and weekday. Some may also show the lunar phase.",
"title": "Usage"
},
{
"paragraph_id": 32,
"text": "The Gregorian calendar is the de facto international standard and is used almost everywhere in the world for civil purposes. The widely-used solar aspect is a cycle of leap days in a 400-year cycle designed to keep the duration of the year aligned with the solar year. There is a lunar aspect which approximates the position of the moon during the year, and is used in the calculation of the date of Easter. Each Gregorian year has either 365 or 366 days (the leap day being inserted as 29 February), amounting to an average Gregorian year of 365.2425 days (compared to a solar year of 365.2422 days).",
"title": "Usage"
},
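The 365.2425-day average quoted in the preceding entry can be checked directly from the leap-day rule: a 400-year Gregorian cycle contains 97 leap days (one every fourth year, minus the three century years not divisible by 400), so

```latex
\frac{400 \times 365 + 97}{400} \;=\; \frac{146097}{400} \;=\; 365.2425 \ \text{days per year on average.}
```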
{
"paragraph_id": 33,
"text": "The calendar was introduced in 1582 as a refinement to the Julian calendar, which had been in use throughout the European Middle Ages, amounting to a 0.002% correction in the length of the year. During the Early Modern period, its adoption was mostly limited to Roman Catholic nations, but by the 19th century it had become widely adopted for the sake of convenience in international trade. The last European country to adopt it was Greece, in 1923.",
"title": "Usage"
},
{
"paragraph_id": 34,
"text": "The calendar epoch used by the Gregorian calendar is inherited from the medieval convention established by Dionysius Exiguus and associated with the Julian calendar. The year number is variously given as AD (for Anno Domini) or CE (for Common Era or Christian Era).",
"title": "Usage"
},
{
"paragraph_id": 35,
"text": "The most important use of pre-modern calendars is keeping track of the liturgical year and the observation of religious feast days.",
"title": "Usage"
},
{
"paragraph_id": 36,
"text": "While the Gregorian calendar is itself historically motivated to the calculation of the Easter date, it is now in worldwide secular use as the de facto standard. Alongside the use of the Gregorian calendar for secular matters, there remain several calendars in use for religious purposes.",
"title": "Usage"
},
{
"paragraph_id": 37,
"text": "Western Christian liturgical calendars are based on the cycle of the Roman Rite of the Catholic Church and generally include the liturgical seasons of Advent, Christmas, Ordinary Time (Time after Epiphany), Lent, Easter, and Ordinary Time (Time after Pentecost). Some Christian calendars do not include Ordinary Time and every day falls into a denominated season.",
"title": "Usage"
},
{
"paragraph_id": 38,
"text": "Eastern Christians, including the Orthodox Church, use the Julian calendar.",
"title": "Usage"
},
{
"paragraph_id": 39,
"text": "The Islamic calendar or Hijri calendar is a lunar calendar consisting of 12 lunar months in a year of 354 or 355 days. It is used to date events in most of the Muslim countries (concurrently with the Gregorian calendar) and used by Muslims everywhere to determine the proper day on which to celebrate Islamic holy days and festivals. Its epoch is the Hijra (corresponding to AD 622). With an annual drift of 11 or 12 days, the seasonal relation is repeated approximately every 33 Islamic years.",
"title": "Usage"
},
{
"paragraph_id": 40,
"text": "Various Hindu calendars remain in use in the Indian subcontinent, including the Nepali calendars, Bengali calendar, Malayalam calendar, Tamil calendar, Vikrama Samvat used in Northern India, and Shalivahana calendar in the Deccan states.",
"title": "Usage"
},
{
"paragraph_id": 41,
"text": "The Buddhist calendar and the traditional lunisolar calendars of Cambodia, Laos, Myanmar, Sri Lanka and Thailand are also based on an older version of the Hindu calendar.",
"title": "Usage"
},
{
"paragraph_id": 42,
"text": "Most of the Hindu calendars are inherited from a system first enunciated in Vedanga Jyotisha of Lagadha, standardized in the Sūrya Siddhānta and subsequently reformed by astronomers such as Āryabhaṭa (AD 499), Varāhamihira (6th century) and Bhāskara II (12th century).",
"title": "Usage"
},
{
"paragraph_id": 43,
"text": "The Hebrew calendar is used by Jews worldwide for religious and cultural affairs, also influences civil matters in Israel (such as national holidays) and can be used business dealings (such as for the dating of cheques).",
"title": "Usage"
},
{
"paragraph_id": 44,
"text": "Followers of the Baháʼí Faith use the Baháʼí calendar. The Baháʼí Calendar, also known as the Badi Calendar was first established by the Bab in the Kitab-i-Asma. The Baháʼí Calendar is also purely a solar calendar and comprises 19 months each having nineteen days.",
"title": "Usage"
},
{
"paragraph_id": 45,
"text": "The Chinese, Hebrew, Hindu, and Julian calendars are widely used for religious and social purposes.",
"title": "Usage"
},
{
"paragraph_id": 46,
"text": "The Iranian (Persian) calendar is used in Iran and some parts of Afghanistan. The Assyrian calendar is in use by the members of the Assyrian community in the Middle East (mainly Iraq, Syria, Turkey, and Iran) and the diaspora. The first year of the calendar is exactly 4750 years prior to the start of the Gregorian calendar. The Ethiopian calendar or Ethiopic calendar is the principal calendar used in Ethiopia and Eritrea, with the Oromo calendar also in use in some areas. In neighboring Somalia, the Somali calendar co-exists alongside the Gregorian and Islamic calendars. In Thailand, where the Thai solar calendar is used, the months and days have adopted the western standard, although the years are still based on the traditional Buddhist calendar.",
"title": "Usage"
},
{
"paragraph_id": 47,
"text": "A fiscal calendar generally means the accounting year of a government or a business. It is used for budgeting, keeping accounts, and taxation. It is a set of 12 months that may start at any date in a year. The US government's fiscal year starts on 1 October and ends on 30 September. The government of India's fiscal year starts on 1 April and ends on 31 March. Small traditional businesses in India start the fiscal year on Diwali festival and end the day before the next year's Diwali festival.",
"title": "Usage"
},
{
"paragraph_id": 48,
"text": "In accounting (and particularly accounting software), a fiscal calendar (such as a 4/4/5 calendar) fixes each month at a specific number of weeks to facilitate comparisons from month to month and year to year. January always has exactly 4 weeks (Sunday through Saturday), February has 4 weeks, March has 5 weeks, etc. Note that this calendar will normally need to add a 53rd week to every 5th or 6th year, which might be added to December or might not be, depending on how the organization uses those dates. There exists an international standard way to do this (the ISO week). The ISO week starts on a Monday and ends on a Sunday. Week 1 is always the week that contains 4 January in the Gregorian calendar.",
"title": "Usage"
},
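The ISO week rule described in the preceding entry (weeks run Monday to Sunday; week 1 contains 4 January) is exposed directly by Python's standard library; the example dates below are arbitrary:

```python
import datetime

# isocalendar() returns the ISO year, ISO week number, and ISO weekday (Monday = 1).
print(datetime.date(2021, 1, 4).isocalendar())  # ISO year 2021, week 1: the week containing 4 January
print(datetime.date(2021, 1, 1).isocalendar())  # ISO year 2020, week 53: 1 January 2021 still
                                                # falls in the 53rd week of the previous ISO year
```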
{
"paragraph_id": 49,
"text": "The term calendar applies not only to a given scheme of timekeeping but also to a specific record or device displaying such a scheme, for example, an appointment book in the form of a pocket calendar (or personal organizer), desktop calendar, a wall calendar, etc.",
"title": "Formats"
},
{
"paragraph_id": 50,
"text": "In a paper calendar, one or two sheets can show a single day, a week, a month, or a year. If a sheet is for a single day, it easily shows the date and the weekday. If a sheet is for multiple days it shows a conversion table to convert from weekday to date and back. With a special pointing device, or by crossing out past days, it may indicate the current date and weekday. This is the most common usage of the word.",
"title": "Formats"
},
{
"paragraph_id": 51,
"text": "In the US Sunday is considered the first day of the week and so appears on the far left and Saturday the last day of the week appearing on the far right. In Britain, the weekend may appear at the end of the week so the first day is Monday and the last day is Sunday. The US calendar display is also used in Britain.",
"title": "Formats"
},
{
"paragraph_id": 52,
"text": "It is common to display the Gregorian calendar in separate monthly grids of seven columns (from Monday to Sunday, or Sunday to Saturday depending on which day is considered to start the week – this varies according to country) and five to six rows (or rarely, four rows when the month of February contains 28 days in common years beginning on the first day of the week), with the day of the month numbered in each cell, beginning with 1. The sixth row is sometimes eliminated by marking 23/30 and 24/31 together as necessary.",
"title": "Formats"
},
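The seven-column monthly grid described in the preceding entries can be reproduced with Python's calendar module, which also lets the first day of the week be switched between Monday (the module default) and Sunday, as in the US-style display; a minimal sketch:

```python
import calendar

# Default display: weeks run Monday..Sunday in a seven-column grid.
print(calendar.month(2024, 2))

# US-style display with Sunday in the leftmost column.
us_cal = calendar.TextCalendar(firstweekday=calendar.SUNDAY)
print(us_cal.formatmonth(2024, 2))
```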
{
"paragraph_id": 53,
"text": "When working with weeks rather than months, a continuous format is sometimes more convenient, where no blank cells are inserted to ensure that the first day of a new month begins on a fresh row.",
"title": "Formats"
},
{
"paragraph_id": 54,
"text": "Calendaring software provides users with an electronic version of a calendar, and may additionally provide an appointment book, address book, or contact list. Calendaring is a standard feature of many PDAs, EDAs, and smartphones. The software may be a local package designed for individual use (e.g., Lightning extension for Mozilla Thunderbird, Microsoft Outlook without Exchange Server, or Windows Calendar) or maybe a networked package that allows for the sharing of information between users (e.g., Mozilla Sunbird, Windows Live Calendar, Google Calendar, or Microsoft Outlook with Exchange Server).",
"title": "Software"
}
] | A calendar is a system of organizing days. This is done by giving names to periods of time, typically days, weeks, months and years. A date is the designation of a single and specific day within such a system. A calendar is also a physical record of such a system. A calendar can also mean a list of planned events, such as a court calendar, or a partly or fully chronological list of documents, such as a calendar of wills. Periods in a calendar are usually, though not necessarily, synchronized with the cycle of the sun or the moon. The most common type of pre-modern calendar was the lunisolar calendar, a lunar calendar that occasionally adds one intercalary month to remain synchronized with the solar year over the long term. | 2001-11-10T19:24:44Z | 2023-12-24T03:34:08Z | [
"Template:Main",
"Template:Unreferenced section",
"Template:Commons category",
"Template:Chronology",
"Template:Reflist",
"Template:Cite news",
"Template:Globalize",
"Template:See also",
"Template:Cn",
"Template:Div col",
"Template:Citation needed",
"Template:Citation",
"Template:Authority control",
"Template:Sfn",
"Template:Cite journal",
"Template:Cite EB9",
"Template:Time topics",
"Template:Pp",
"Template:Short description",
"Template:Lang",
"Template:Further",
"Template:Circa",
"Template:Webarchive",
"Template:Cite web",
"Template:About",
"Template:Redirect",
"Template:Use dmy dates",
"Template:Use American English",
"Template:Cite Americana",
"Template:Cite EB1911",
"Template:Distinguish",
"Template:Anchor",
"Template:Disputed section",
"Template:Wiktionary",
"Template:Time measurement and standards",
"Template:Calendar",
"Template:Div col end",
"Template:Cite book",
"Template:Calendars"
] | https://en.wikipedia.org/wiki/Calendar |
5,378 | Physical cosmology | Physical cosmology is a branch of cosmology concerned with the study of cosmological models. A cosmological model, or simply cosmology, provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin, structure, evolution, and ultimate fate. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed those physical laws to be understood.
Physical cosmology, as it is now understood, began with the development in 1915 of Albert Einstein's general theory of relativity, followed by major observational discoveries in the 1920s: first, Edwin Hubble discovered that the universe contains a huge number of external galaxies beyond the Milky Way; then, work by Vesto Slipher and others showed that the universe is expanding. These advances made it possible to speculate about the origin of the universe, and allowed the establishment of the Big Bang theory, by Georges Lemaître, as the leading cosmological model. A few researchers still advocate a handful of alternative cosmologies; however, most cosmologists agree that the Big Bang theory best explains the observations.
Dramatic advances in observational cosmology since the 1990s, including the cosmic microwave background, distant supernovae and galaxy redshift surveys, have led to the development of a standard model of cosmology. This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations.
Cosmology draws heavily on the work of many disparate areas of research in theoretical and applied physics. Areas relevant to cosmology include particle physics experiments and theory, theoretical and observational astrophysics, general relativity, quantum mechanics, and plasma physics.
Modern cosmology developed along tandem tracks of theory and observation. In 1916, Albert Einstein published his theory of general relativity, which provided a unified description of gravity as a geometric property of space and time. At the time, Einstein believed in a static universe, but found that his original formulation of the theory did not permit it. This is because masses distributed throughout the universe gravitationally attract, and move toward each other over time. However, he realized that his equations permitted the introduction of a constant term which could counteract the attractive force of gravity on the cosmic scale. Einstein published his first paper on relativistic cosmology in 1917, in which he added this cosmological constant to his field equations in order to force them to model a static universe. The Einstein model describes a static universe; space is finite and unbounded (analogous to the surface of a sphere, which has a finite area but no edges). However, this so-called Einstein model is unstable to small perturbations—it will eventually start to expand or contract. It was later realized that Einstein's model was just one of a larger set of possibilities, all of which were consistent with general relativity and the cosmological principle. The cosmological solutions of general relativity were found by Alexander Friedmann in the early 1920s. His equations describe the Friedmann–Lemaître–Robertson–Walker universe, which may expand or contract, and whose geometry may be open, flat, or closed.
In the 1910s, Vesto Slipher (and later Carl Wilhelm Wirtz) interpreted the red shift of spiral nebulae as a Doppler shift that indicated they were receding from Earth. However, it is difficult to determine the distance to astronomical objects. One way is to compare the physical size of an object to its angular size, but a physical size must be assumed to do this. Another method is to measure the brightness of an object and assume an intrinsic luminosity, from which the distance may be determined using the inverse-square law. Due to the difficulty of using these methods, they did not realize that the nebulae were actually galaxies outside our own Milky Way, nor did they speculate about the cosmological implications. In 1927, the Belgian Roman Catholic priest Georges Lemaître independently derived the Friedmann–Lemaître–Robertson–Walker equations and proposed, on the basis of the recession of spiral nebulae, that the universe began with the "explosion" of a "primeval atom"—which was later called the Big Bang. In 1929, Edwin Hubble provided an observational basis for Lemaître's theory. Hubble showed that the spiral nebulae were galaxies by determining their distances using measurements of the brightness of Cepheid variable stars. He discovered a relationship between the redshift of a galaxy and its distance. He interpreted this as evidence that the galaxies are receding from Earth in every direction at speeds proportional to their distance. This fact is now known as Hubble's law, though the numerical factor Hubble found relating recessional velocity and distance was off by a factor of ten, due to not knowing about the types of Cepheid variables.
Given the cosmological principle, Hubble's law suggested that the universe was expanding. Two primary explanations were proposed for the expansion. One was Lemaître's Big Bang theory, advocated and developed by George Gamow. The other explanation was Fred Hoyle's steady state model in which new matter is created as the galaxies move away from each other. In this model, the universe is roughly the same at any point in time.
For a number of years, support for these theories was evenly divided. However, the observational evidence began to support the idea that the universe evolved from a hot dense state. The discovery of the cosmic microwave background in 1965 lent strong support to the Big Bang model, and since the precise measurements of the cosmic microwave background by the Cosmic Background Explorer in the early 1990s, few cosmologists have seriously proposed other theories of the origin and evolution of the cosmos. One consequence of this is that in standard general relativity, the universe began with a singularity, as demonstrated by Roger Penrose and Stephen Hawking in the 1960s.
An alternative view to extend the Big Bang model, suggesting the universe had no beginning or singularity and the age of the universe is infinite, has been presented.
In September 2023, astrophysicists questioned the overall current view of the universe, in the form of the Standard Model of Cosmology, based on the latest James Webb Space Telescope studies.
The lightest chemical elements, primarily hydrogen and helium, were created during the Big Bang through the process of nucleosynthesis. In a sequence of stellar nucleosynthesis reactions, smaller atomic nuclei are then combined into larger atomic nuclei, ultimately forming stable iron group elements such as iron and nickel, which have the highest nuclear binding energies. The net process results in a later energy release, i.e. one occurring after the Big Bang. Such reactions of nuclear particles can lead to sudden energy releases from cataclysmic variable stars such as novae. Gravitational collapse of matter into black holes also powers the most energetic processes, generally seen in the nuclear regions of galaxies, forming quasars and active galaxies.
Cosmologists cannot explain all cosmic phenomena exactly, such as those related to the accelerating expansion of the universe, using conventional forms of energy. Instead, cosmologists propose a new form of energy called dark energy that permeates all space. One hypothesis is that dark energy is just the vacuum energy, a component of empty space that is associated with the virtual particles that exist due to the uncertainty principle.
There is no clear way to define the total energy in the universe using the most widely accepted theory of gravity, general relativity. Therefore, it remains controversial whether the total energy is conserved in an expanding universe. For instance, each photon that travels through intergalactic space loses energy due to the redshift effect. This energy is not transferred to any other system, so seems to be permanently lost. On the other hand, some cosmologists insist that energy is conserved in some sense; this follows the law of conservation of energy.
Different forms of energy may dominate the cosmos: relativistic particles, which are referred to as radiation, or non-relativistic particles, referred to as matter. Relativistic particles are particles whose rest mass is zero or negligible compared to their kinetic energy, and so move at the speed of light or very close to it; non-relativistic particles have a rest mass much higher than their kinetic energy and so move much slower than the speed of light.
As the universe expands, both matter and radiation become diluted. However, the energy densities of radiation and matter dilute at different rates. As a particular volume expands, mass-energy density is changed only by the increase in volume, but the energy density of radiation is changed both by the increase in volume and by the increase in the wavelength of the photons that make it up. Thus the energy of radiation becomes a smaller part of the universe's total energy than that of matter as it expands. The very early universe is said to have been 'radiation dominated' and radiation controlled the deceleration of expansion. Later, as the average energy per photon becomes roughly 10 eV and lower, matter dictates the rate of deceleration and the universe is said to be 'matter dominated'. The intermediate case is not treated well analytically. As the expansion of the universe continues, matter dilutes even further and the cosmological constant becomes dominant, leading to an acceleration in the universe's expansion.
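The dilution rates described above are conventionally written in terms of the scale factor a of the expanding universe (a symbol not introduced in the paragraph itself): matter is diluted only by the growing volume, while radiation is additionally redshifted, so

```latex
\rho_{\mathrm{matter}} \propto a^{-3}, \qquad \rho_{\mathrm{radiation}} \propto a^{-4}.
```

The extra factor of 1/a for radiation reflects the stretching of each photon's wavelength, which is why radiation dominance eventually gives way to matter dominance as the universe expands.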
The history of the universe is a central issue in cosmology. The history of the universe is divided into different periods called epochs, according to the dominant forces and processes in each period. The standard cosmological model is known as the Lambda-CDM model.
Within the standard cosmological model, the equations of motion governing the universe as a whole are derived from general relativity with a small, positive cosmological constant. The solution is an expanding universe; due to this expansion, the radiation and matter in the universe cool down and become diluted. At first, the expansion is slowed down by gravitation attracting the radiation and matter in the universe. However, as these become diluted, the cosmological constant becomes more dominant and the expansion of the universe starts to accelerate rather than decelerate. In our universe this happened billions of years ago.
During the earliest moments of the universe, the average energy density was very high, making knowledge of particle physics critical to understanding this environment. Hence, scattering processes and decay of unstable elementary particles are important for cosmological models of this period.
As a rule of thumb, a scattering or a decay process is cosmologically important in a certain epoch if the time scale describing that process is smaller than, or comparable to, the time scale of the expansion of the universe. The time scale that describes the expansion of the universe is 1/H, with H being the Hubble parameter, which varies with time. The expansion timescale 1/H is roughly equal to the age of the universe at each point in time.
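As a rough numerical illustration of the statement that 1/H is comparable to the age of the universe, the sketch below plugs in an assumed present-day value of about 70 km/s/Mpc for the Hubble parameter (a number not given in the text):

```python
# Rough check that 1/H0 is of the order of the age of the universe.
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in one billion years

H0 = 70.0                               # assumed Hubble constant, km/s/Mpc
hubble_time_s = KM_PER_MPC / H0         # 1/H0 in seconds
print(hubble_time_s / SECONDS_PER_GYR)  # about 14 billion years, close to the
                                        # observed age of roughly 13.8 billion years
```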
Observations suggest that the universe began around 13.8 billion years ago. Since then, the evolution of the universe has passed through three phases. The very early universe, which is still poorly understood, was the split second in which the universe was so hot that particles had energies higher than those currently accessible in particle accelerators on Earth. Therefore, while the basic features of this epoch have been worked out in the Big Bang theory, the details are largely based on educated guesses. Following this, in the early universe, the evolution of the universe proceeded according to known high energy physics. This is when the first protons, electrons and neutrons formed, then nuclei and finally atoms. With the formation of neutral hydrogen, the cosmic microwave background was emitted. Finally, the epoch of structure formation began, when matter started to aggregate into the first stars and quasars, and ultimately galaxies, clusters of galaxies and superclusters formed. The future of the universe is not yet firmly known, but according to the ΛCDM model it will continue expanding forever.
Below, some of the most active areas of inquiry in cosmology are described, in roughly chronological order. This does not include all of the Big Bang cosmology, which is presented in Timeline of the Big Bang.
The early, hot universe appears to be well explained by the Big Bang from roughly 10 seconds onwards, but there are several problems. One is that there is no compelling reason, using current particle physics, for the universe to be flat, homogeneous, and isotropic (see the cosmological principle). Moreover, grand unified theories of particle physics suggest that there should be magnetic monopoles in the universe, which have not been found. These problems are resolved by a brief period of cosmic inflation, which drives the universe to flatness, smooths out anisotropies and inhomogeneities to the observed level, and exponentially dilutes the monopoles. The physical model behind cosmic inflation is extremely simple, but it has not yet been confirmed by particle physics, and there are difficult problems reconciling inflation and quantum field theory. Some cosmologists think that string theory and brane cosmology will provide an alternative to inflation.
Another major problem in cosmology is what caused the universe to contain far more matter than antimatter. Cosmologists can observationally deduce that the universe is not split into regions of matter and antimatter. If it were, there would be X-rays and gamma rays produced as a result of annihilation, but this is not observed. Therefore, some process in the early universe must have created a small excess of matter over antimatter, and this (currently not understood) process is called baryogenesis. Three required conditions for baryogenesis were derived by Andrei Sakharov in 1967, and requires a violation of the particle physics symmetry, called CP-symmetry, between matter and antimatter. However, particle accelerators measure too small a violation of CP-symmetry to account for the baryon asymmetry. Cosmologists and particle physicists look for additional violations of the CP-symmetry in the early universe that might account for the baryon asymmetry.
Both the problems of baryogenesis and cosmic inflation are very closely related to particle physics, and their resolution might come from high energy theory and experiment, rather than through observations of the universe.
Big Bang nucleosynthesis is the theory of the formation of the elements in the early universe. It finished when the universe was about three minutes old and its temperature dropped below that at which nuclear fusion could occur. Big Bang nucleosynthesis had a brief period during which it could operate, so only the very lightest elements were produced. Starting from hydrogen ions (protons), it principally produced deuterium, helium-4, and lithium. Other elements were produced in only trace abundances. The basic theory of nucleosynthesis was developed in 1948 by George Gamow, Ralph Asher Alpher, and Robert Herman. It was used for many years as a probe of physics at the time of the Big Bang, as the theory of Big Bang nucleosynthesis connects the abundances of primordial light elements with the features of the early universe. Specifically, it can be used to test the equivalence principle, to probe dark matter, and test neutrino physics. Some cosmologists have proposed that Big Bang nucleosynthesis suggests there is a fourth "sterile" species of neutrino.
The ΛCDM (Lambda cold dark matter) or Lambda-CDM model is a parametrization of the Big Bang cosmological model in which the universe contains a cosmological constant, denoted by Lambda (Greek Λ), associated with dark energy, and cold dark matter (abbreviated CDM). It is frequently referred to as the standard model of Big Bang cosmology.
The cosmic microwave background is radiation left over from decoupling after the epoch of recombination when neutral atoms first formed. At this point, radiation produced in the Big Bang stopped Thomson scattering from charged ions. The radiation, first observed in 1965 by Arno Penzias and Robert Woodrow Wilson, has a perfect thermal black-body spectrum. It has a temperature of 2.7 kelvins today and is isotropic to one part in 100,000. Cosmological perturbation theory, which describes the evolution of slight inhomogeneities in the early universe, has allowed cosmologists to precisely calculate the angular power spectrum of the radiation, and it has been measured by the recent satellite experiments (COBE and WMAP) and many ground and balloon-based experiments (such as Degree Angular Scale Interferometer, Cosmic Background Imager, and Boomerang). One of the goals of these efforts is to measure the basic parameters of the Lambda-CDM model with increasing accuracy, as well as to test the predictions of the Big Bang model and look for new physics. The results of measurements made by WMAP, for example, have placed limits on the neutrino masses.
Newer experiments, such as QUIET and the Atacama Cosmology Telescope, are trying to measure the polarization of the cosmic microwave background. These measurements are expected to provide further confirmation of the theory as well as information about cosmic inflation, and the so-called secondary anisotropies, such as the Sunyaev-Zel'dovich effect and Sachs-Wolfe effect, which are caused by interaction between galaxies and clusters with the cosmic microwave background.
On 17 March 2014, astronomers of the BICEP2 Collaboration announced the apparent detection of B-mode polarization of the CMB, considered to be evidence of primordial gravitational waves that are predicted by the theory of inflation to occur during the earliest phase of the Big Bang. However, later that year the Planck collaboration provided a more accurate measurement of cosmic dust, concluding that the B-mode signal from dust is the same strength as that reported from BICEP2. On 30 January 2015, a joint analysis of BICEP2 and Planck data was published and the European Space Agency announced that the signal can be entirely attributed to interstellar dust in the Milky Way.
Understanding the formation and evolution of the largest and earliest structures (i.e., quasars, galaxies, clusters and superclusters) is one of the largest efforts in cosmology. Cosmologists study a model of hierarchical structure formation in which structures form from the bottom up, with smaller objects forming first, while the largest objects, such as superclusters, are still assembling. One way to study structure in the universe is to survey the visible galaxies, in order to construct a three-dimensional picture of the galaxies in the universe and measure the matter power spectrum. This is the approach of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey.
Another tool for understanding structure formation is simulations, which cosmologists use to study the gravitational aggregation of matter in the universe, as it clusters into filaments, superclusters and voids. Most simulations contain only non-baryonic cold dark matter, which should suffice to understand the universe on the largest scales, as there is much more dark matter in the universe than visible, baryonic matter. More advanced simulations are starting to include baryons and study the formation of individual galaxies. Cosmologists study these simulations to see if they agree with the galaxy surveys, and to understand any discrepancy.
Other, complementary observations to measure the distribution of matter in the distant universe and to probe reionization include:
These will help cosmologists settle the question of when and how structure formed in the universe.
Evidence from Big Bang nucleosynthesis, the cosmic microwave background, structure formation, and galaxy rotation curves suggests that about 23% of the mass of the universe consists of non-baryonic dark matter, whereas only 4% consists of visible, baryonic matter. The gravitational effects of dark matter are well understood, as it behaves like a cold, non-radiative fluid that forms haloes around galaxies. Dark matter has never been detected in the laboratory, and the particle physics nature of dark matter remains completely unknown. Without observational constraints, there are a number of candidates, such as a stable supersymmetric particle, a weakly interacting massive particle, a gravitationally-interacting massive particle, an axion, and a massive compact halo object. Alternatives to the dark matter hypothesis include a modification of gravity at small accelerations (MOND) or an effect from brane cosmology. TeVeS is a version of MOND that can explain gravitational lensing.
If the universe is flat, there must be an additional component making up 73% (in addition to the 23% dark matter and 4% baryons) of the energy density of the universe. This is called dark energy. In order not to interfere with Big Bang nucleosynthesis and the cosmic microwave background, it must not cluster in haloes like baryons and dark matter. There is strong observational evidence for dark energy, as the total energy density of the universe is known through constraints on the flatness of the universe, but the amount of clustering matter is tightly measured, and is much less than this. The case for dark energy was strengthened in 1999, when measurements demonstrated that the expansion of the universe has begun to gradually accelerate.
Apart from its density and its clustering properties, nothing is known about dark energy. Quantum field theory predicts a cosmological constant (CC) much like dark energy, but 120 orders of magnitude larger than that observed. Steven Weinberg and a number of string theorists (see string landscape) have invoked the 'weak anthropic principle': i.e. the reason that physicists observe a universe with such a small cosmological constant is that no physicists (or any life) could exist in a universe with a larger cosmological constant. Many cosmologists find this an unsatisfying explanation: perhaps because while the weak anthropic principle is self-evident (given that living observers exist, there must be at least one universe with a cosmological constant which allows for life to exist) it does not attempt to explain the context of that universe. For example, the weak anthropic principle alone does not distinguish between:
Other possible explanations for dark energy include quintessence or a modification of gravity on the largest scales. The effect on cosmology of the dark energy that these models describe is given by the dark energy's equation of state, which varies depending upon the theory. The nature of dark energy is one of the most challenging problems in cosmology.
A better understanding of dark energy is likely to solve the problem of the ultimate fate of the universe. In the current cosmological epoch, the accelerated expansion due to dark energy is preventing structures larger than superclusters from forming. It is not known whether the acceleration will continue indefinitely, perhaps even increasing until a big rip, or whether it will eventually reverse, lead to a Big Freeze, or follow some other scenario.
Gravitational waves are ripples in the curvature of spacetime that propagate as waves at the speed of light, generated in certain gravitational interactions that propagate outward from their source. Gravitational-wave astronomy is an emerging branch of observational astronomy which aims to use gravitational waves to collect observational data about sources of detectable gravitational waves such as binary star systems composed of white dwarfs, neutron stars, and black holes; and events such as supernovae, and the formation of the early universe shortly after the Big Bang.
In 2016, the LIGO Scientific Collaboration and Virgo Collaboration teams announced that they had made the first observation of gravitational waves, originating from a pair of merging black holes using the Advanced LIGO detectors. On 15 June 2016, a second detection of gravitational waves from coalescing black holes was announced. Besides LIGO, many other gravitational-wave observatories (detectors) are under construction.
Cosmologists also study: | [
{
"paragraph_id": 0,
"text": "Physical cosmology is a branch of cosmology concerned with the study of cosmological models. A cosmological model, or simply cosmology, provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin, structure, evolution, and ultimate fate. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed those physical laws to be understood.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Physical cosmology, as it is now understood, began with the development in 1915 of Albert Einstein's general theory of relativity, followed by major observational discoveries in the 1920s: first, Edwin Hubble discovered that the universe contains a huge number of external galaxies beyond the Milky Way; then, work by Vesto Slipher and others showed that the universe is expanding. These advances made it possible to speculate about the origin of the universe, and allowed the establishment of the Big Bang theory, by Georges Lemaître, as the leading cosmological model. A few researchers still advocate a handful of alternative cosmologies; however, most cosmologists agree that the Big Bang theory best explains the observations.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Dramatic advances in observational cosmology since the 1990s, including the cosmic microwave background, distant supernovae and galaxy redshift surveys, have led to the development of a standard model of cosmology. This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Cosmology draws heavily on the work of many disparate areas of research in theoretical and applied physics. Areas relevant to cosmology include particle physics experiments and theory, theoretical and observational astrophysics, general relativity, quantum mechanics, and plasma physics.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Modern cosmology developed along tandem tracks of theory and observation. In 1916, Albert Einstein published his theory of general relativity, which provided a unified description of gravity as a geometric property of space and time. At the time, Einstein believed in a static universe, but found that his original formulation of the theory did not permit it. This is because masses distributed throughout the universe gravitationally attract, and move toward each other over time. However, he realized that his equations permitted the introduction of a constant term which could counteract the attractive force of gravity on the cosmic scale. Einstein published his first paper on relativistic cosmology in 1917, in which he added this cosmological constant to his field equations in order to force them to model a static universe. The Einstein model describes a static universe; space is finite and unbounded (analogous to the surface of a sphere, which has a finite area but no edges). However, this so-called Einstein model is unstable to small perturbations—it will eventually start to expand or contract. It was later realized that Einstein's model was just one of a larger set of possibilities, all of which were consistent with general relativity and the cosmological principle. The cosmological solutions of general relativity were found by Alexander Friedmann in the early 1920s. His equations describe the Friedmann–Lemaître–Robertson–Walker universe, which may expand or contract, and whose geometry may be open, flat, or closed.",
"title": "Subject history"
},
{
"paragraph_id": 5,
"text": "In the 1910s, Vesto Slipher (and later Carl Wilhelm Wirtz) interpreted the red shift of spiral nebulae as a Doppler shift that indicated they were receding from Earth. However, it is difficult to determine the distance to astronomical objects. One way is to compare the physical size of an object to its angular size, but a physical size must be assumed to do this. Another method is to measure the brightness of an object and assume an intrinsic luminosity, from which the distance may be determined using the inverse-square law. Due to the difficulty of using these methods, they did not realize that the nebulae were actually galaxies outside our own Milky Way, nor did they speculate about the cosmological implications. In 1927, the Belgian Roman Catholic priest Georges Lemaître independently derived the Friedmann–Lemaître–Robertson–Walker equations and proposed, on the basis of the recession of spiral nebulae, that the universe began with the \"explosion\" of a \"primeval atom\"—which was later called the Big Bang. In 1929, Edwin Hubble provided an observational basis for Lemaître's theory. Hubble showed that the spiral nebulae were galaxies by determining their distances using measurements of the brightness of Cepheid variable stars. He discovered a relationship between the redshift of a galaxy and its distance. He interpreted this as evidence that the galaxies are receding from Earth in every direction at speeds proportional to their distance. This fact is now known as Hubble's law, though the numerical factor Hubble found relating recessional velocity and distance was off by a factor of ten, due to not knowing about the types of Cepheid variables.",
"title": "Subject history"
},
{
"paragraph_id": 6,
"text": "Given the cosmological principle, Hubble's law suggested that the universe was expanding. Two primary explanations were proposed for the expansion. One was Lemaître's Big Bang theory, advocated and developed by George Gamow. The other explanation was Fred Hoyle's steady state model in which new matter is created as the galaxies move away from each other. In this model, the universe is roughly the same at any point in time.",
"title": "Subject history"
},
{
"paragraph_id": 7,
"text": "For a number of years, support for these theories was evenly divided. However, the observational evidence began to support the idea that the universe evolved from a hot dense state. The discovery of the cosmic microwave background in 1965 lent strong support to the Big Bang model, and since the precise measurements of the cosmic microwave background by the Cosmic Background Explorer in the early 1990s, few cosmologists have seriously proposed other theories of the origin and evolution of the cosmos. One consequence of this is that in standard general relativity, the universe began with a singularity, as demonstrated by Roger Penrose and Stephen Hawking in the 1960s.",
"title": "Subject history"
},
{
"paragraph_id": 8,
"text": "An alternative view to extend the Big Bang model, suggesting the universe had no beginning or singularity and the age of the universe is infinite, has been presented.",
"title": "Subject history"
},
{
"paragraph_id": 9,
"text": "In September 2023, astrophysicists questioned the overall current view of the universe, in the form of the Standard Model of Cosmology, based on the latest James Webb Space Telescope studies.",
"title": "Subject history"
},
{
"paragraph_id": 10,
"text": "The lightest chemical elements, primarily hydrogen and helium, were created during the Big Bang through the process of nucleosynthesis. In a sequence of stellar nucleosynthesis reactions, smaller atomic nuclei are then combined into larger atomic nuclei, ultimately forming stable iron group elements such as iron and nickel, which have the highest nuclear binding energies. The net process results in a later energy release, meaning subsequent to the Big Bang. Such reactions of nuclear particles can lead to sudden energy releases from cataclysmic variable stars such as novae. Gravitational collapse of matter into black holes also powers the most energetic processes, generally seen in the nuclear regions of galaxies, forming quasars and active galaxies.",
"title": "Energy of the cosmos"
},
{
"paragraph_id": 11,
"text": "Cosmologists cannot explain all cosmic phenomena exactly, such as those related to the accelerating expansion of the universe, using conventional forms of energy. Instead, cosmologists propose a new form of energy called dark energy that permeates all space. One hypothesis is that dark energy is just the vacuum energy, a component of empty space that is associated with the virtual particles that exist due to the uncertainty principle.",
"title": "Energy of the cosmos"
},
{
"paragraph_id": 12,
"text": "There is no clear way to define the total energy in the universe using the most widely accepted theory of gravity, general relativity. Therefore, it remains controversial whether the total energy is conserved in an expanding universe. For instance, each photon that travels through intergalactic space loses energy due to the redshift effect. This energy is not transferred to any other system, so seems to be permanently lost. On the other hand, some cosmologists insist that energy is conserved in some sense; this follows the law of conservation of energy.",
"title": "Energy of the cosmos"
},
{
"paragraph_id": 13,
"text": "Different forms of energy may dominate the cosmos—relativistic particles which are referred to as radiation, or non-relativistic particles referred to as matter. Relativistic particles are particles whose rest mass is zero or negligible compared to their kinetic energy, and so move at the speed of light or very close to it; non-relativistic particles have much higher rest mass than their energy and so move much slower than the speed of light.",
"title": "Energy of the cosmos"
},
{
"paragraph_id": 14,
"text": "As the universe expands, both matter and radiation become diluted. However, the energy densities of radiation and matter dilute at different rates. As a particular volume expands, mass-energy density is changed only by the increase in volume, but the energy density of radiation is changed both by the increase in volume and by the increase in the wavelength of the photons that make it up. Thus the energy of radiation becomes a smaller part of the universe's total energy than that of matter as it expands. The very early universe is said to have been 'radiation dominated' and radiation controlled the deceleration of expansion. Later, as the average energy per photon becomes roughly 10 eV and lower, matter dictates the rate of deceleration and the universe is said to be 'matter dominated'. The intermediate case is not treated well analytically. As the expansion of the universe continues, matter dilutes even further and the cosmological constant becomes dominant, leading to an acceleration in the universe's expansion.",
"title": "Energy of the cosmos"
},
{
"paragraph_id": 15,
"text": "The history of the universe is a central issue in cosmology. The history of the universe is divided into different periods called epochs, according to the dominant forces and processes in each period. The standard cosmological model is known as the Lambda-CDM model.",
"title": "History of the universe"
},
{
"paragraph_id": 16,
"text": "Within the standard cosmological model, the equations of motion governing the universe as a whole are derived from general relativity with a small, positive cosmological constant. The solution is an expanding universe; due to this expansion, the radiation and matter in the universe cool down and become diluted. At first, the expansion is slowed down by gravitation attracting the radiation and matter in the universe. However, as these become diluted, the cosmological constant becomes more dominant and the expansion of the universe starts to accelerate rather than decelerate. In our universe this happened billions of years ago.",
"title": "History of the universe"
},
{
"paragraph_id": 17,
"text": "During the earliest moments of the universe, the average energy density was very high, making knowledge of particle physics critical to understanding this environment. Hence, scattering processes and decay of unstable elementary particles are important for cosmological models of this period.",
"title": "History of the universe"
},
{
"paragraph_id": 18,
"text": "As a rule of thumb, a scattering or a decay process is cosmologically important in a certain epoch if the time scale describing that process is smaller than, or comparable to, the time scale of the expansion of the universe. The time scale that describes the expansion of the universe is 1 / H {\\displaystyle 1/H} with H {\\displaystyle H} being the Hubble parameter, which varies with time. The expansion timescale 1 / H {\\displaystyle 1/H} is roughly equal to the age of the universe at each point in time.",
"title": "History of the universe"
},
{
"paragraph_id": 19,
"text": "Observations suggest that the universe began around 13.8 billion years ago. Since then, the evolution of the universe has passed through three phases. The very early universe, which is still poorly understood, was the split second in which the universe was so hot that particles had energies higher than those currently accessible in particle accelerators on Earth. Therefore, while the basic features of this epoch have been worked out in the Big Bang theory, the details are largely based on educated guesses. Following this, in the early universe, the evolution of the universe proceeded according to known high energy physics. This is when the first protons, electrons and neutrons formed, then nuclei and finally atoms. With the formation of neutral hydrogen, the cosmic microwave background was emitted. Finally, the epoch of structure formation began, when matter started to aggregate into the first stars and quasars, and ultimately galaxies, clusters of galaxies and superclusters formed. The future of the universe is not yet firmly known, but according to the ΛCDM model it will continue expanding forever.",
"title": "History of the universe"
},
{
"paragraph_id": 20,
"text": "Below, some of the most active areas of inquiry in cosmology are described, in roughly chronological order. This does not include all of the Big Bang cosmology, which is presented in Timeline of the Big Bang.",
"title": "Areas of study"
},
{
"paragraph_id": 21,
"text": "The early, hot universe appears to be well explained by the Big Bang from roughly 10 seconds onwards, but there are several problems. One is that there is no compelling reason, using current particle physics, for the universe to be flat, homogeneous, and isotropic (see the cosmological principle). Moreover, grand unified theories of particle physics suggest that there should be magnetic monopoles in the universe, which have not been found. These problems are resolved by a brief period of cosmic inflation, which drives the universe to flatness, smooths out anisotropies and inhomogeneities to the observed level, and exponentially dilutes the monopoles. The physical model behind cosmic inflation is extremely simple, but it has not yet been confirmed by particle physics, and there are difficult problems reconciling inflation and quantum field theory. Some cosmologists think that string theory and brane cosmology will provide an alternative to inflation.",
"title": "Areas of study"
},
{
"paragraph_id": 22,
"text": "Another major problem in cosmology is what caused the universe to contain far more matter than antimatter. Cosmologists can observationally deduce that the universe is not split into regions of matter and antimatter. If it were, there would be X-rays and gamma rays produced as a result of annihilation, but this is not observed. Therefore, some process in the early universe must have created a small excess of matter over antimatter, and this (currently not understood) process is called baryogenesis. Three required conditions for baryogenesis were derived by Andrei Sakharov in 1967, and requires a violation of the particle physics symmetry, called CP-symmetry, between matter and antimatter. However, particle accelerators measure too small a violation of CP-symmetry to account for the baryon asymmetry. Cosmologists and particle physicists look for additional violations of the CP-symmetry in the early universe that might account for the baryon asymmetry.",
"title": "Areas of study"
},
{
"paragraph_id": 23,
"text": "Both the problems of baryogenesis and cosmic inflation are very closely related to particle physics, and their resolution might come from high energy theory and experiment, rather than through observations of the universe.",
"title": "Areas of study"
},
{
"paragraph_id": 24,
"text": "Big Bang nucleosynthesis is the theory of the formation of the elements in the early universe. It finished when the universe was about three minutes old and its temperature dropped below that at which nuclear fusion could occur. Big Bang nucleosynthesis had a brief period during which it could operate, so only the very lightest elements were produced. Starting from hydrogen ions (protons), it principally produced deuterium, helium-4, and lithium. Other elements were produced in only trace abundances. The basic theory of nucleosynthesis was developed in 1948 by George Gamow, Ralph Asher Alpher, and Robert Herman. It was used for many years as a probe of physics at the time of the Big Bang, as the theory of Big Bang nucleosynthesis connects the abundances of primordial light elements with the features of the early universe. Specifically, it can be used to test the equivalence principle, to probe dark matter, and test neutrino physics. Some cosmologists have proposed that Big Bang nucleosynthesis suggests there is a fourth \"sterile\" species of neutrino.",
"title": "Areas of study"
},
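A standard back-of-the-envelope estimate (textbook material included here only as an illustration, not something stated above) links the neutron-to-proton ratio at the onset of nucleosynthesis, roughly n/p ≈ 1/7, to the primordial helium-4 mass fraction: {\displaystyle Y_{p}\approx {\frac {2\,(n/p)}{1+(n/p)}}={\frac {2/7}{8/7}}=0.25,} which is close to the observed abundance of about 25% helium-4 by mass.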
{
"paragraph_id": 25,
"text": "The ΛCDM (Lambda cold dark matter) or Lambda-CDM model is a parametrization of the Big Bang cosmological model in which the universe contains a cosmological constant, denoted by Lambda (Greek Λ), associated with dark energy, and cold dark matter (abbreviated CDM). It is frequently referred to as the standard model of Big Bang cosmology.",
"title": "Areas of study"
},
{
"paragraph_id": 26,
"text": "The cosmic microwave background is radiation left over from decoupling after the epoch of recombination when neutral atoms first formed. At this point, radiation produced in the Big Bang stopped Thomson scattering from charged ions. The radiation, first observed in 1965 by Arno Penzias and Robert Woodrow Wilson, has a perfect thermal black-body spectrum. It has a temperature of 2.7 kelvins today and is isotropic to one part in 10. Cosmological perturbation theory, which describes the evolution of slight inhomogeneities in the early universe, has allowed cosmologists to precisely calculate the angular power spectrum of the radiation, and it has been measured by the recent satellite experiments (COBE and WMAP) and many ground and balloon-based experiments (such as Degree Angular Scale Interferometer, Cosmic Background Imager, and Boomerang). One of the goals of these efforts is to measure the basic parameters of the Lambda-CDM model with increasing accuracy, as well as to test the predictions of the Big Bang model and look for new physics. The results of measurements made by WMAP, for example, have placed limits on the neutrino masses.",
"title": "Areas of study"
},
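As a rough numerical aside (the redshift of recombination, z ≈ 1100, is a standard value assumed here rather than taken from the text above): because the background temperature simply scales with redshift, {\displaystyle T(z)=T_{0}(1+z)\approx 2.7\,{\text{K}}\times 1100\approx 3000\,{\text{K}},} which is roughly the temperature at which neutral hydrogen first survives and the universe becomes transparent.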
{
"paragraph_id": 27,
"text": "Newer experiments, such as QUIET and the Atacama Cosmology Telescope, are trying to measure the polarization of the cosmic microwave background. These measurements are expected to provide further confirmation of the theory as well as information about cosmic inflation, and the so-called secondary anisotropies, such as the Sunyaev-Zel'dovich effect and Sachs-Wolfe effect, which are caused by interaction between galaxies and clusters with the cosmic microwave background.",
"title": "Areas of study"
},
{
"paragraph_id": 28,
"text": "On 17 March 2014, astronomers of the BICEP2 Collaboration announced the apparent detection of B-mode polarization of the CMB, considered to be evidence of primordial gravitational waves that are predicted by the theory of inflation to occur during the earliest phase of the Big Bang. However, later that year the Planck collaboration provided a more accurate measurement of cosmic dust, concluding that the B-mode signal from dust is the same strength as that reported from BICEP2. On 30 January 2015, a joint analysis of BICEP2 and Planck data was published and the European Space Agency announced that the signal can be entirely attributed to interstellar dust in the Milky Way.",
"title": "Areas of study"
},
{
"paragraph_id": 29,
"text": "Understanding the formation and evolution of the largest and earliest structures (i.e., quasars, galaxies, clusters and superclusters) is one of the largest efforts in cosmology. Cosmologists study a model of hierarchical structure formation in which structures form from the bottom up, with smaller objects forming first, while the largest objects, such as superclusters, are still assembling. One way to study structure in the universe is to survey the visible galaxies, in order to construct a three-dimensional picture of the galaxies in the universe and measure the matter power spectrum. This is the approach of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey.",
"title": "Areas of study"
},
{
"paragraph_id": 30,
"text": "Another tool for understanding structure formation is simulations, which cosmologists use to study the gravitational aggregation of matter in the universe, as it clusters into filaments, superclusters and voids. Most simulations contain only non-baryonic cold dark matter, which should suffice to understand the universe on the largest scales, as there is much more dark matter in the universe than visible, baryonic matter. More advanced simulations are starting to include baryons and study the formation of individual galaxies. Cosmologists study these simulations to see if they agree with the galaxy surveys, and to understand any discrepancy.",
"title": "Areas of study"
},
{
"paragraph_id": 31,
"text": "Other, complementary observations to measure the distribution of matter in the distant universe and to probe reionization include:",
"title": "Areas of study"
},
{
"paragraph_id": 32,
"text": "These will help cosmologists settle the question of when and how structure formed in the universe.",
"title": "Areas of study"
},
{
"paragraph_id": 33,
"text": "Evidence from Big Bang nucleosynthesis, the cosmic microwave background, structure formation, and galaxy rotation curves suggests that about 23% of the mass of the universe consists of non-baryonic dark matter, whereas only 4% consists of visible, baryonic matter. The gravitational effects of dark matter are well understood, as it behaves like a cold, non-radiative fluid that forms haloes around galaxies. Dark matter has never been detected in the laboratory, and the particle physics nature of dark matter remains completely unknown. Without observational constraints, there are a number of candidates, such as a stable supersymmetric particle, a weakly interacting massive particle, a gravitationally-interacting massive particle, an axion, and a massive compact halo object. Alternatives to the dark matter hypothesis include a modification of gravity at small accelerations (MOND) or an effect from brane cosmology. TeVeS is a version of MOND that can explain gravitational lensing.",
"title": "Areas of study"
},
{
"paragraph_id": 34,
"text": "If the universe is flat, there must be an additional component making up 73% (in addition to the 23% dark matter and 4% baryons) of the energy density of the universe. This is called dark energy. In order not to interfere with Big Bang nucleosynthesis and the cosmic microwave background, it must not cluster in haloes like baryons and dark matter. There is strong observational evidence for dark energy, as the total energy density of the universe is known through constraints on the flatness of the universe, but the amount of clustering matter is tightly measured, and is much less than this. The case for dark energy was strengthened in 1999, when measurements demonstrated that the expansion of the universe has begun to gradually accelerate.",
"title": "Areas of study"
},
{
"paragraph_id": 35,
"text": "Apart from its density and its clustering properties, nothing is known about dark energy. Quantum field theory predicts a cosmological constant (CC) much like dark energy, but 120 orders of magnitude larger than that observed. Steven Weinberg and a number of string theorists (see string landscape) have invoked the 'weak anthropic principle': i.e. the reason that physicists observe a universe with such a small cosmological constant is that no physicists (or any life) could exist in a universe with a larger cosmological constant. Many cosmologists find this an unsatisfying explanation: perhaps because while the weak anthropic principle is self-evident (given that living observers exist, there must be at least one universe with a cosmological constant which allows for life to exist) it does not attempt to explain the context of that universe. For example, the weak anthropic principle alone does not distinguish between:",
"title": "Areas of study"
},
{
"paragraph_id": 36,
"text": "Other possible explanations for dark energy include quintessence or a modification of gravity on the largest scales. The effect on cosmology of the dark energy that these models describe is given by the dark energy's equation of state, which varies depending upon the theory. The nature of dark energy is one of the most challenging problems in cosmology.",
"title": "Areas of study"
},
{
"paragraph_id": 37,
"text": "A better understanding of dark energy is likely to solve the problem of the ultimate fate of the universe. In the current cosmological epoch, the accelerated expansion due to dark energy is preventing structures larger than superclusters from forming. It is not known whether the acceleration will continue indefinitely, perhaps even increasing until a big rip, or whether it will eventually reverse, lead to a Big Freeze, or follow some other scenario.",
"title": "Areas of study"
},
{
"paragraph_id": 38,
"text": "Gravitational waves are ripples in the curvature of spacetime that propagate as waves at the speed of light, generated in certain gravitational interactions that propagate outward from their source. Gravitational-wave astronomy is an emerging branch of observational astronomy which aims to use gravitational waves to collect observational data about sources of detectable gravitational waves such as binary star systems composed of white dwarfs, neutron stars, and black holes; and events such as supernovae, and the formation of the early universe shortly after the Big Bang.",
"title": "Areas of study"
},
{
"paragraph_id": 39,
"text": "In 2016, the LIGO Scientific Collaboration and Virgo Collaboration teams announced that they had made the first observation of gravitational waves, originating from a pair of merging black holes using the Advanced LIGO detectors. On 15 June 2016, a second detection of gravitational waves from coalescing black holes was announced. Besides LIGO, many other gravitational-wave observatories (detectors) are under construction.",
"title": "Areas of study"
},
{
"paragraph_id": 40,
"text": "Cosmologists also study:",
"title": "Areas of study"
}
] | Physical cosmology is a branch of cosmology concerned with the study of cosmological models. A cosmological model, or simply cosmology, provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin, structure, evolution, and ultimate fate. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed those physical laws to be understood. Physical cosmology, as it is now understood, began with the development in 1915 of Albert Einstein's general theory of relativity, followed by major observational discoveries in the 1920s: first, Edwin Hubble discovered that the universe contains a huge number of external galaxies beyond the Milky Way; then, work by Vesto Slipher and others showed that the universe is expanding. These advances made it possible to speculate about the origin of the universe, and allowed the establishment of the Big Bang theory, by Georges Lemaître, as the leading cosmological model. A few researchers still advocate a handful of alternative cosmologies; however, most cosmologists agree that the Big Bang theory best explains the observations. Dramatic advances in observational cosmology since the 1990s, including the cosmic microwave background, distant supernovae and galaxy redshift surveys, have led to the development of a standard model of cosmology. This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations. Cosmology draws heavily on the work of many disparate areas of research in theoretical and applied physics. Areas relevant to cosmology include particle physics experiments and theory, theoretical and observational astrophysics, general relativity, quantum mechanics, and plasma physics. | 2001-10-02T04:47:53Z | 2023-11-27T00:52:30Z | [
"Template:Cosmology",
"Template:Reflist",
"Template:Cite news",
"Template:Cite conference",
"Template:Commonscat",
"Template:Astronomy subfields",
"Template:Use dmy dates",
"Template:Main",
"Template:Div col end",
"Template:Cite book",
"Template:Astronomy navbox",
"Template:Authority control",
"Template:Clarify",
"Template:Cite journal",
"Template:Cbignore",
"Template:Cosmology topics",
"Template:Big History",
"Template:See also",
"Template:Nature timeline",
"Template:Vague",
"Template:Speculation inline",
"Template:About",
"Template:Cite web",
"Template:Cite arXiv",
"Template:Portal bar",
"Template:Div col",
"Template:Short description"
] | https://en.wikipedia.org/wiki/Physical_cosmology |
5,382 | Inflation (cosmology) | In physical cosmology, cosmic inflation, cosmological inflation, or just inflation, is a theory of exponential expansion of space in the early universe. The inflationary epoch is believed to have lasted from 10^−36 seconds to between 10^−33 and 10^−32 seconds after the Big Bang. Following the inflationary period, the universe continued to expand, but at a slower rate. The acceleration of this expansion due to dark energy began after the universe was already over 7.7 billion years old (5.4 billion years ago).
Inflation theory was developed in the late 1970s and early 80s, with notable contributions by several theoretical physicists, including Alexei Starobinsky at Landau Institute for Theoretical Physics, Alan Guth at Cornell University, and Andrei Linde at Lebedev Physical Institute. Alexei Starobinsky, Alan Guth, and Andrei Linde won the 2014 Kavli Prize "for pioneering the theory of cosmic inflation". It was developed further in the early 1980s. It explains the origin of the large-scale structure of the cosmos. Quantum fluctuations in the microscopic inflationary region, magnified to cosmic size, become the seeds for the growth of structure in the Universe (see galaxy formation and evolution and structure formation). Many physicists also believe that inflation explains why the universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the universe is flat, and why no magnetic monopoles have been observed.
The detailed particle physics mechanism responsible for inflation is unknown. The basic inflationary paradigm is accepted by most physicists, as a number of inflation model predictions have been confirmed by observation; however, a substantial minority of scientists dissent from this position. The hypothetical field thought to be responsible for inflation is called the inflaton.
In 2002 three of the original architects of the theory were recognized for their major contributions; physicists Alan Guth of M.I.T., Andrei Linde of Stanford, and Paul Steinhardt of Princeton shared the prestigious Dirac Prize "for development of the concept of inflation in cosmology". In 2012 Guth and Linde were awarded the Breakthrough Prize in Fundamental Physics for their invention and development of inflationary cosmology.
Around 1930, Edwin Hubble discovered that light from remote galaxies was redshifted; the more remote, the more shifted. This implies that the galaxies are receding from the Earth, with more distant galaxies receding more rapidly, such that galaxies also recede from each other. This expansion of the universe was previously predicted by Alexander Friedmann and Georges Lemaître from the theory of general relativity. It can be understood as a consequence of an initial impulse, which sent the contents of the universe flying apart at such a rate that their mutual gravitational attraction has not reversed their separation.
Inflation may provide this initial impulse. According to the Friedmann equations that describe the dynamics of an expanding universe, a fluid with sufficiently negative pressure exerts gravitational repulsion in the cosmological context. A field in a positive-energy false vacuum state could represent such a fluid, and the resulting repulsion would set the universe into exponential expansion. This inflation phase was originally proposed by Alan Guth in 1979 because the exponential expansion could dilute exotic relics, such as magnetic monopoles, that were predicted by grand unified theories at the time. This would explain why such relics were not seen. It was quickly realized that such accelerated expansion would resolve the horizon problem and the flatness problem. These problems arise from the notion that to look like it does today, the Universe must have started from very finely tuned, or "special", initial conditions at the Big Bang.
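For reference, the acceleration form of the Friedmann equations (standard general relativity, quoted here only to make "sufficiently negative pressure" precise) is {\displaystyle {\frac {\ddot {a}}{a}}=-{\frac {4\pi G}{3}}\left(\rho +{\frac {3p}{c^{2}}}\right),} so the expansion accelerates whenever {\displaystyle p<-\rho c^{2}/3}; a false-vacuum fluid with {\displaystyle p=-\rho c^{2}} comfortably satisfies this condition.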
An expanding universe generally has a cosmological horizon, which, by analogy with the more familiar horizon caused by the curvature of Earth's surface, marks the boundary of the part of the Universe that an observer can see. Light (or other radiation) emitted by objects beyond the cosmological horizon in an accelerating universe never reaches the observer, because the space in between the observer and the object is expanding too rapidly.
The observable universe is one causal patch of a much larger unobservable universe; other parts of the Universe cannot communicate with Earth yet. These parts of the Universe are outside our current cosmological horizon. In the standard hot big bang model, without inflation, the cosmological horizon moves out, bringing new regions into view. Yet as a local observer sees such a region for the first time, it looks no different from any other region of space the local observer has already seen: its background radiation is at nearly the same temperature as the background radiation of other regions, and its space-time curvature is evolving lock-step with the others. This presents a mystery: how did these new regions know what temperature and curvature they were supposed to have? They couldn't have learned it by getting signals, because they were not previously in communication with our past light cone.
Inflation answers this question by postulating that all the regions come from an earlier era with a big vacuum energy, or cosmological constant. A space with a cosmological constant is qualitatively different: instead of moving outward, the cosmological horizon stays put. For any one observer, the distance to the cosmological horizon is constant. With exponentially expanding space, two nearby observers are separated very quickly; so much so, that the distance between them quickly exceeds the limits of communications. The spatial slices are expanding very fast to cover huge volumes. Things are constantly moving beyond the cosmological horizon, which is a fixed distance away, and everything becomes homogeneous.
As the inflationary field slowly relaxes to the vacuum, the cosmological constant goes to zero and space begins to expand normally. The new regions that come into view during the normal expansion phase are exactly the same regions that were pushed out of the horizon during inflation, and so they are at nearly the same temperature and curvature, because they come from the same originally small patch of space.
The theory of inflation thus explains why the temperatures and curvatures of different regions are so nearly equal. It also predicts that the total curvature of a space-slice at constant global time is zero. This prediction implies that the total ordinary matter, dark matter and residual vacuum energy in the Universe have to add up to the critical density, and the evidence supports this. More strikingly, inflation allows physicists to calculate the minute differences in temperature of different regions from quantum fluctuations during the inflationary era, and many of these quantitative predictions have been confirmed.
In a space that expands exponentially (or nearly exponentially) with time, any pair of free-floating objects that are initially at rest will move apart from each other at an accelerating rate, at least as long as they are not bound together by any force. From the point of view of one such object, the spacetime is something like an inside-out Schwarzschild black hole—each object is surrounded by a spherical event horizon. Once the other object has fallen through this horizon it can never return, and even light signals it sends will never reach the first object (at least so long as the space continues to expand exponentially).
In the approximation that the expansion is exactly exponential, the horizon is static and remains a fixed physical distance away. This patch of an inflating universe can be described by the following metric:
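A static-patch form of the de Sitter metric is given here as a plausible reconstruction (the precise normalization of Λ is an assumption, since conventions differ by a factor of 3): {\displaystyle ds^{2}=-\left(1-\Lambda r^{2}\right)dt^{2}+{\frac {dr^{2}}{1-\Lambda r^{2}}}+r^{2}\,d\Omega ^{2},} in which the horizon sits at the fixed coordinate radius {\displaystyle r=\Lambda ^{-1/2}}.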
This exponentially expanding spacetime is called a de Sitter space, and to sustain it there must be a cosmological constant, a vacuum energy density that is constant in space and time and proportional to Λ in the above metric. For the case of exactly exponential expansion, the vacuum energy has a negative pressure p equal in magnitude to its energy density ρ; the equation of state is p=−ρ.
Inflation is typically not an exactly exponential expansion, but rather quasi- or near-exponential. In such a universe the horizon will slowly grow with time as the vacuum energy density gradually decreases.
Because the accelerating expansion of space stretches out any initial variations in density or temperature to very large length scales, an essential feature of inflation is that it smooths out inhomogeneities and anisotropies, and reduces the curvature of space. This pushes the Universe into a very simple state in which it is completely dominated by the inflaton field and the only significant inhomogeneities are tiny quantum fluctuations. Inflation also dilutes exotic heavy particles, such as the magnetic monopoles predicted by many extensions to the Standard Model of particle physics. If the Universe was only hot enough to form such particles before a period of inflation, they would not be observed in nature, as they would be so rare that it is quite likely that there are none in the observable universe. Together, these effects are called the inflationary "no-hair theorem" by analogy with the no hair theorem for black holes.
The "no-hair" theorem works essentially because the cosmological horizon is no different from a black-hole horizon, except for not testable disagreements about what is on the other side. The interpretation of the no-hair theorem is that the Universe (observable and unobservable) expands by an enormous factor during inflation. In an expanding universe, energy densities generally fall, or get diluted, as the volume of the Universe increases. For example, the density of ordinary "cold" matter (dust) goes down as the inverse of the volume: when linear dimensions double, the energy density goes down by a factor of eight; the radiation energy density goes down even more rapidly as the Universe expands since the wavelength of each photon is stretched (redshifted), in addition to the photons being dispersed by the expansion. When linear dimensions are doubled, the energy density in radiation falls by a factor of sixteen (see the solution of the energy density continuity equation for an ultra-relativistic fluid). During inflation, the energy density in the inflaton field is roughly constant. However, the energy density in everything else, including inhomogeneities, curvature, anisotropies, exotic particles, and standard-model particles is falling, and through sufficient inflation these all become negligible. This leaves the Universe flat and symmetric, and (apart from the homogeneous inflaton field) mostly empty, at the moment inflation ends and reheating begins.
A key requirement is that inflation must continue long enough to produce the present observable universe from a single, small inflationary Hubble volume. This is necessary to ensure that the Universe appears flat, homogeneous and isotropic at the largest observable scales. This requirement is generally thought to be satisfied if the Universe expanded by a factor of at least 10^26 during inflation.
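Expansion factors of this size are usually quoted as e-folds; as a rough check (the 60-e-fold figure is a conventional benchmark rather than something fixed by the text above), {\displaystyle N=\ln {\frac {a_{\text{end}}}{a_{\text{start}}}}\gtrsim \ln 10^{26}\approx 60.}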
Inflation is a period of supercooled expansion, when the temperature drops by a factor of 100,000 or so. (The exact drop is model-dependent, but in the first models it was typically from 10^27 K down to 10^22 K.) This relatively low temperature is maintained during the inflationary phase. When inflation ends the temperature returns to the pre-inflationary temperature; this is called reheating or thermalization because the large potential energy of the inflaton field decays into particles and fills the Universe with Standard Model particles, including electromagnetic radiation, starting the radiation dominated phase of the Universe. Because the nature of the inflaton field is not known, this process is still poorly understood, although it is believed to take place through a parametric resonance.
Inflation resolves several problems in Big Bang cosmology that were discovered in the 1970s. Inflation was first proposed by Alan Guth in 1979 while investigating the problem of why no magnetic monopoles are seen today; he found that a positive-energy false vacuum would, according to general relativity, generate an exponential expansion of space. It was very quickly realised that such an expansion would resolve many other long-standing problems. These problems arise from the observation that to look like it does today, the Universe would have to have started from very finely tuned, or "special" initial conditions at the Big Bang. Inflation attempts to resolve these problems by providing a dynamical mechanism that drives the Universe to this special state, thus making a universe like ours much more likely in the context of the Big Bang theory.
The horizon problem is the problem of determining why the Universe appears statistically homogeneous and isotropic in accordance with the cosmological principle. For example, molecules in a canister of gas are distributed homogeneously and isotropically because they are in thermal equilibrium: gas throughout the canister has had enough time to interact to dissipate inhomogeneities and anisotropies. The situation is quite different in the big bang model without inflation, because gravitational expansion does not give the early universe enough time to equilibrate. In a big bang with only the matter and radiation known in the Standard Model, two widely separated regions of the observable universe cannot have equilibrated because they move apart from each other faster than the speed of light and thus have never come into causal contact. In the early Universe, it was not possible to send a light signal between the two regions. Because they have had no interaction, it is difficult to explain why they have the same temperature (are thermally equilibrated). Historically, proposed solutions included the Phoenix universe of Georges Lemaître, the related oscillatory universe of Richard Chace Tolman, and the Mixmaster universe of Charles Misner. Lemaître and Tolman proposed that a universe undergoing a number of cycles of contraction and expansion could come into thermal equilibrium. Their models failed, however, because of the buildup of entropy over several cycles. Misner made the (ultimately incorrect) conjecture that the Mixmaster mechanism, which made the Universe more chaotic, could lead to statistical homogeneity and isotropy.
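The phrase "never come into causal contact" can be made precise with the particle horizon of standard FRW kinematics, written here (as a proper distance) for reference: {\displaystyle d_{H}(t)=a(t)\int _{0}^{t}{\frac {c\,dt'}{a(t')}}.} In a radiation- or matter-dominated big bang this integral is finite and small at early times, so regions separated by more than d_H at decoupling could not yet have exchanged any signal, which is exactly the puzzle described above.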
The flatness problem is sometimes called one of the Dicke coincidences (along with the cosmological constant problem). It became known in the 1960s that the density of matter in the Universe was comparable to the critical density necessary for a flat universe (that is, a universe whose large scale geometry is the usual Euclidean geometry, rather than a non-Euclidean hyperbolic or spherical geometry).
Therefore, regardless of the shape of the universe the contribution of spatial curvature to the expansion of the Universe could not be much greater than the contribution of matter. But as the Universe expands, the curvature redshifts away more slowly than matter and radiation. Extrapolated into the past, this presents a fine-tuning problem because the contribution of curvature to the Universe must be exponentially small (sixteen orders of magnitude less than the density of radiation at Big Bang nucleosynthesis, for example). This problem is exacerbated by recent observations of the cosmic microwave background that have demonstrated that the Universe is flat to within a few percent.
The magnetic monopole problem, sometimes called "the exotic-relics problem", says that if the early universe were very hot, a large number of very heavy, stable magnetic monopoles would have been produced.
Stable magnetic monopoles are a problem for Grand Unified Theories, which propose that at high temperatures (such as in the early universe) the electromagnetic force, strong, and weak nuclear forces are not actually fundamental forces but arise due to spontaneous symmetry breaking from a single gauge theory. These theories predict a number of heavy, stable particles that have not been observed in nature. The most notorious is the magnetic monopole, a kind of stable, heavy "charge" of magnetic field.
Monopoles are predicted to be copiously produced following Grand Unified Theories at high temperature, and they should have persisted to the present day, to such an extent that they would become the primary constituent of the Universe. Not only is that not the case, but all searches for them have failed, placing stringent limits on the density of relic magnetic monopoles in the Universe.
A period of inflation that occurs below the temperature where magnetic monopoles can be produced would offer a possible resolution of this problem: Monopoles would be separated from each other as the Universe around them expands, potentially lowering their observed density by many orders of magnitude. Though, as cosmologist Martin Rees has written,
In the early days of General Relativity, Albert Einstein introduced the cosmological constant to allow a static solution, which was a three-dimensional sphere with a uniform density of matter. Later, Willem de Sitter found a highly symmetric inflating universe, which described a universe with a cosmological constant that is otherwise empty. It was discovered that Einstein's universe is unstable, and that small fluctuations cause it to collapse or turn into a de Sitter universe.
In the early 1970s, Zeldovich noticed the flatness and horizon problems of Big Bang cosmology; before his work, cosmology was presumed to be symmetrical on purely philosophical grounds. In the Soviet Union, this and other considerations led Belinski and Khalatnikov to analyze the chaotic BKL singularity in General Relativity. Misner's Mixmaster universe attempted to use this chaotic behavior to solve the cosmological problems, with limited success.
In the late 1970s, Sidney Coleman applied the instanton techniques developed by Alexander Polyakov and collaborators to study the fate of the false vacuum in quantum field theory. Like a metastable phase in statistical mechanics—water below the freezing temperature or above the boiling point—a quantum field would need to nucleate a large enough bubble of the new vacuum, the new phase, in order to make a transition. Coleman found the most likely decay pathway for vacuum decay and calculated the inverse lifetime per unit volume. He eventually noted that gravitational effects would be significant, but he did not calculate these effects and did not apply the results to cosmology.
The universe could have been spontaneously created from nothing (no space, time, nor matter) by quantum fluctuations of a metastable false vacuum causing an expanding bubble of true vacuum.
In the Soviet Union, Alexei Starobinsky noted that quantum corrections to general relativity should be important for the early universe. These generically lead to curvature-squared corrections to the Einstein–Hilbert action and a form of f(R) modified gravity. The solution to Einstein's equations in the presence of curvature squared terms, when the curvatures are large, leads to an effective cosmological constant. Therefore, he proposed that the early universe went through an inflationary de Sitter era. This resolved the cosmology problems and led to specific predictions for the corrections to the microwave background radiation, corrections that were then calculated in detail. Starobinsky used the action (written here in a common normalization, with M the mass scale of the curvature-squared term) {\displaystyle S={\frac {M_{\text{Pl}}^{2}}{2}}\int d^{4}x\,{\sqrt {-g}}\left(R+{\frac {R^{2}}{6M^{2}}}\right),}
which corresponds to the potential {\displaystyle V(\varphi )=\Lambda ^{4}\left(1-e^{-{\sqrt {2/3}}\,\varphi /M_{\text{Pl}}}\right)^{2}}
in the Einstein frame. This results in the observables: {\displaystyle n_{s}=1-{\frac {2}{N}},\qquad r={\frac {12}{N^{2}}}.}
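As a quick numerical illustration (taking N = 60 e-folds, a common benchmark that is an assumption here rather than part of the statement above): {\displaystyle n_{s}=1-{\frac {2}{60}}\approx 0.967,\qquad r={\frac {12}{60^{2}}}\approx 0.0033,} values comfortably consistent with the Planck measurements quoted later in this article.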
In 1978, Zeldovich noted the magnetic monopole problem, which was an unambiguous quantitative version of the horizon problem, this time in a subfield of particle physics, which led to several speculative attempts to resolve it. In 1980 Alan Guth realized that false vacuum decay in the early universe would solve the problem, leading him to propose a scalar-driven inflation. Starobinsky's and Guth's scenarios both predicted an initial de Sitter phase, differing only in mechanistic details.
Guth proposed inflation in January 1981 to explain the nonexistence of magnetic monopoles; it was Guth who coined the term "inflation". At the same time, Starobinsky argued that quantum corrections to gravity would replace the supposed initial singularity of the Universe with an exponentially expanding de Sitter phase. In October 1980, Demosthenes Kazanas suggested that exponential expansion could eliminate the particle horizon and perhaps solve the horizon problem, while Sato suggested that an exponential expansion could eliminate domain walls (another kind of exotic relic). In 1981 Einhorn and Sato published a model similar to Guth's and showed that it would resolve the puzzle of the magnetic monopole abundance in Grand Unified Theories. Like Guth, they concluded that such a model not only required fine tuning of the cosmological constant, but also would likely lead to a much too granular universe, i.e., to large density variations resulting from bubble wall collisions.
Guth proposed that as the early universe cooled, it was trapped in a false vacuum with a high energy density, which is much like a cosmological constant. As the very early universe cooled it was trapped in a metastable state (it was supercooled), which it could only decay out of through the process of bubble nucleation via quantum tunneling. Bubbles of true vacuum spontaneously form in the sea of false vacuum and rapidly begin expanding at the speed of light. Guth recognized that this model was problematic because the model did not reheat properly: when the bubbles nucleated, they did not generate any radiation. Radiation could only be generated in collisions between bubble walls. But if inflation lasted long enough to solve the initial conditions problems, collisions between bubbles became exceedingly rare. In any one causal patch it is likely that only one bubble would nucleate.
... Kazanas (1980) called this phase of the early Universe "de Sitter's phase." The name "inflation" was given by Guth (1981). ... Guth himself did not refer to work of Kazanas until he published a book on the subject under the title The Inflationary Universe: The quest for a new theory of cosmic origin (1997), where he apologizes for not having referenced the work of Kazanas and of others, related to inflation.
The bubble collision problem was solved by Linde and independently by Andreas Albrecht and Paul Steinhardt in a model named new inflation or slow-roll inflation (Guth's model then became known as old inflation). In this model, instead of tunneling out of a false vacuum state, inflation occurred by a scalar field rolling down a potential energy hill. When the field rolls very slowly compared to the expansion of the Universe, inflation occurs. However, when the hill becomes steeper, inflation ends and reheating can occur.
Eventually, it was shown that new inflation does not produce a perfectly symmetric universe, but that quantum fluctuations in the inflaton are created. These fluctuations form the primordial seeds for all structure created in the later universe. These fluctuations were first calculated by Viatcheslav Mukhanov and G. V. Chibisov in analyzing Starobinsky's similar model. In the context of inflation, they were worked out independently of the work of Mukhanov and Chibisov at the three-week 1982 Nuffield Workshop on the Very Early Universe at Cambridge University. The fluctuations were calculated by four groups working separately over the course of the workshop: Stephen Hawking; Starobinsky; Guth and So-Young Pi; and Bardeen, Steinhardt and Turner.
Inflation is a mechanism for realizing the cosmological principle, which is the basis of the standard model of physical cosmology: it accounts for the homogeneity and isotropy of the observable universe. In addition, it accounts for the observed flatness and absence of magnetic monopoles. Since Guth's early work, each of these observations has received further confirmation, most impressively by the detailed observations of the cosmic microwave background made by the Planck spacecraft. This analysis shows that the Universe is flat to within 1/2 percent, and that it is homogeneous and isotropic to one part in 100,000.
Inflation predicts that the structures visible in the Universe today formed through the gravitational collapse of perturbations that were formed as quantum mechanical fluctuations in the inflationary epoch. The detailed form of the spectrum of perturbations, called a nearly-scale-invariant Gaussian random field, is very specific and has only two free parameters. One is the amplitude of the spectrum and the spectral index, which measures the slight deviation from scale invariance predicted by inflation (perfect scale invariance corresponds to the idealized de Sitter universe). The other free parameter is the tensor-to-scalar ratio. The simplest inflation models, those without fine-tuning, predict a tensor-to-scalar ratio near 0.1.
Inflation predicts that the observed perturbations should be in thermal equilibrium with each other (these are called adiabatic or isentropic perturbations). This structure for the perturbations has been confirmed by the Planck spacecraft, WMAP spacecraft and other cosmic microwave background (CMB) experiments, and galaxy surveys, especially the ongoing Sloan Digital Sky Survey. These experiments have shown that the one part in 100,000 inhomogeneities observed have exactly the form predicted by theory. There is evidence for a slight deviation from scale invariance. The spectral index, ns, is one for a scale-invariant Harrison–Zel'dovich spectrum. The simplest inflation models predict that ns is between 0.92 and 0.98. This is the range that is possible without fine-tuning of the parameters related to energy. From Planck data it can be inferred that ns = 0.968 ± 0.006, and a tensor-to-scalar ratio that is less than 0.11. These are considered an important confirmation of the theory of inflation.
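A simple consistency check (using the slow-roll relation ns = 1 − 2/N of the Starobinsky-type model quoted earlier, with 50 to 60 e-folds assumed): {\displaystyle n_{s}=1-{\frac {2}{N}}\in [0.960,\,0.967]\quad {\text{for}}\quad N\in [50,60],} which overlaps the measured ns = 0.968 ± 0.006 at roughly the one-sigma level.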
Various inflation theories have been proposed that make radically different predictions, but they generally have much more fine-tuning than should be necessary. As a physical model, however, inflation is most valuable in that it robustly predicts the initial conditions of the Universe based on only two adjustable parameters: the spectral index (that can only change in a small range) and the amplitude of the perturbations. Except in contrived models, this is true regardless of how inflation is realized in particle physics.
Occasionally, effects are observed that appear to contradict the simplest models of inflation. The first-year WMAP data suggested that the spectrum might not be nearly scale-invariant, but might instead have a slight curvature. However, the third-year data revealed that the effect was a statistical anomaly. Another effect remarked upon since the first cosmic microwave background satellite, the Cosmic Background Explorer, is that the amplitude of the quadrupole moment of the CMB is unexpectedly low and the other low multipoles appear to be preferentially aligned with the ecliptic plane. Some have claimed that this is a signature of non-Gaussianity and thus contradicts the simplest models of inflation. Others have suggested that the effect may be due to other new physics, foreground contamination, or even publication bias.
An experimental program is underway to further test inflation with more precise CMB measurements. In particular, high precision measurements of the so-called "B-modes" of the polarization of the background radiation could provide evidence of the gravitational radiation produced by inflation, and could also show whether the energy scale of inflation predicted by the simplest models (10^15–10^16 GeV) is correct. In March 2014, the BICEP2 team announced the detection of B-mode CMB polarization, presenting it as a confirmation of inflation. The team announced that the tensor-to-scalar power ratio r was between 0.15 and 0.27 (rejecting the null hypothesis; r is expected to be 0 in the absence of inflation). However, on 19 June 2014, lowered confidence in confirming the findings was reported; on 19 September 2014, a further reduction in confidence was reported and, on 30 January 2015, even less confidence yet was reported. By 2018, additional data suggested, with 95% confidence, that r is 0.06 or lower: consistent with the null hypothesis, but still also consistent with many remaining models of inflation.
Other potentially corroborating measurements are expected from the Planck spacecraft, although it is unclear if the signal will be visible, or if contamination from foreground sources will interfere. Other forthcoming measurements, such as those of 21 centimeter radiation (radiation emitted and absorbed from neutral hydrogen before the first stars formed), may measure the power spectrum with even greater resolution than the CMB and galaxy surveys, although it is not known if these measurements will be possible or if interference with radio sources on Earth and in the galaxy will be too great.
Is the theory of cosmological inflation correct, and if so, what are the details of this epoch? What is the hypothetical inflaton field giving rise to inflation?
In Guth's early proposal, it was thought that the inflaton was the Higgs field, the field that explains the mass of the elementary particles. It is now believed by some that the inflaton cannot be the Higgs field although the recent discovery of the Higgs boson has increased the number of works considering the Higgs field as inflaton. One problem of this identification is the current tension with experimental data at the electroweak scale, which is currently under study at the Large Hadron Collider (LHC). Other models of inflation relied on the properties of Grand Unified Theories. Since the simplest models of grand unification have failed, it is now thought by many physicists that inflation will be included in a supersymmetric theory such as string theory or a supersymmetric grand unified theory. At present, while inflation is understood principally by its detailed predictions of the initial conditions for the hot early universe, the particle physics is largely ad hoc modelling. As such, although predictions of inflation have been consistent with the results of observational tests, many open questions remain.
One of the most severe challenges for inflation arises from the need for fine tuning. In new inflation, the slow-roll conditions must be satisfied for inflation to occur. The slow-roll conditions say that the inflaton potential must be flat (compared to the large vacuum energy) and that the inflaton particles must have a small mass. New inflation requires the Universe to have a scalar field with an especially flat potential and special initial conditions. However, explanations for these fine-tunings have been proposed. For example, classically scale invariant field theories, where scale invariance is broken by quantum effects, provide an explanation of the flatness of inflationary potentials, as long as the theory can be studied through perturbation theory.
Linde proposed a theory known as chaotic inflation in which he suggested that the conditions for inflation were actually satisfied quite generically. Inflation will occur in virtually any universe that begins in a chaotic, high energy state that has a scalar field with unbounded potential energy. However, in his model the inflaton field necessarily takes values larger than one Planck unit: for this reason, these are often called large field models and the competing new inflation models are called small field models. In this situation, the predictions of effective field theory are thought to be invalid, as renormalization should cause large corrections that could prevent inflation. This problem has not yet been resolved and some cosmologists argue that the small field models, in which inflation can occur at a much lower energy scale, are better models. While inflation depends on quantum field theory (and the semiclassical approximation to quantum gravity) in an important way, it has not been completely reconciled with these theories.
Brandenberger commented on fine-tuning in another situation. The amplitude of the primordial inhomogeneities produced in inflation is directly tied to the energy scale of inflation. This scale is suggested to be around 10^16 GeV, or 10^−3 times the Planck energy. The natural scale is naïvely the Planck scale so this small value could be seen as another form of fine-tuning (called a hierarchy problem): the energy density given by the scalar potential is down by a factor of 10^−12 compared to the Planck density. This is not usually considered to be a critical problem, however, because the scale of inflation corresponds naturally to the scale of gauge unification.
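The factor of 10^−12 follows from the fourth-power relation between an energy scale and an energy density (spelled out here only to connect the two numbers above; the scales themselves are the usual rough estimates): {\displaystyle {\frac {\rho _{\text{inflation}}}{\rho _{\text{Planck}}}}\sim \left({\frac {10^{16}\,{\text{GeV}}}{10^{19}\,{\text{GeV}}}}\right)^{4}=10^{-12}.}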
In many models, the inflationary phase of the Universe's expansion lasts forever in at least some regions of the Universe. This occurs because inflating regions expand very rapidly, reproducing themselves. Unless the rate of decay to the non-inflating phase is sufficiently fast, new inflating regions are produced more rapidly than non-inflating regions. In such models, most of the volume of the Universe is continuously inflating at any given time.
All models of eternal inflation produce an infinite, hypothetical multiverse, typically a fractal. The multiverse theory has created significant dissension in the scientific community about the viability of the inflationary model.
Paul Steinhardt, one of the original architects of the inflationary model, introduced the first example of eternal inflation in 1983. He showed that the inflation could proceed forever by producing bubbles of non-inflating space filled with hot matter and radiation surrounded by empty space that continues to inflate. The bubbles could not grow fast enough to keep up with the inflation. Later that same year, Alexander Vilenkin showed that eternal inflation is generic.
Although new inflation is classically rolling down the potential, quantum fluctuations can sometimes lift it to previous levels. These regions in which the inflaton fluctuates upwards expand much faster than regions in which the inflaton has a lower potential energy, and tend to dominate in terms of physical volume. It has been shown that any inflationary theory with an unbounded potential is eternal. There are well-known theorems that this steady state cannot continue forever into the past. Inflationary spacetime, which is similar to de Sitter space, is incomplete without a contracting region. However, unlike de Sitter space, fluctuations in a contracting inflationary space collapse to form a gravitational singularity, a point where densities become infinite. Therefore, it is necessary to have a theory for the Universe's initial conditions.
In eternal inflation, regions with inflation have an exponentially growing volume, while regions that are not inflating don't. This suggests that the volume of the inflating part of the Universe in the global picture is always unimaginably larger than the part that has stopped inflating, even though inflation eventually ends as seen by any single pre-inflationary observer. Scientists disagree about how to assign a probability distribution to this hypothetical anthropic landscape. If the probability of different regions is counted by volume, one should expect that inflation will never end, or, applying the boundary condition that a local observer exists to observe it, that inflation will end as late as possible.
Some physicists believe this paradox can be resolved by weighting observers by their pre-inflationary volume. Others believe that there is no resolution to the paradox and that the multiverse is a critical flaw in the inflationary paradigm. Paul Steinhardt, who first introduced the eternal inflationary model, later became one of its most vocal critics for this reason.
Some physicists have tried to avoid the initial conditions problem by proposing models for an eternally inflating universe with no origin. These models propose that while the Universe, on the largest scales, expands exponentially it was, is and always will be, spatially infinite and has existed, and will exist, forever.
Other proposals attempt to describe the ex nihilo creation of the Universe based on quantum cosmology and the following inflation. Vilenkin put forth one such scenario. Hartle and Hawking offered the no-boundary proposal for the initial creation of the Universe in which inflation comes about naturally.
Guth described the inflationary universe as the "ultimate free lunch": new universes, similar to our own, are continually produced in a vast inflating background. Gravitational interactions, in this case, circumvent (but do not violate) the first law of thermodynamics (energy conservation) and the second law of thermodynamics (entropy and the arrow of time problem). However, while many accept that this solves the initial conditions problem, some have disputed it, arguing that it is much more likely that the Universe came about by a quantum fluctuation. Don Page was an outspoken critic of inflation because of this anomaly. He stressed that the thermodynamic arrow of time necessitates low entropy initial conditions, which would be highly unlikely. According to these critics, rather than solving this problem, the inflation theory aggravates it – the reheating at the end of the inflation era increases entropy, making it necessary for the initial state of the Universe to be even more orderly than in other Big Bang theories with no inflation phase.
Hawking and Page later found ambiguous results when they attempted to compute the probability of inflation in the Hartle-Hawking initial state. Other authors have argued that, since inflation is eternal, the probability doesn't matter as long as it is not precisely zero: once it starts, inflation perpetuates itself and quickly dominates the Universe. However, Albrecht and Lorenzo Sorbo argued that the probability of an inflationary cosmos, consistent with today's observations, emerging by a random fluctuation from some pre-existent state is much higher than that of a non-inflationary cosmos. This is because the "seed" amount of non-gravitational energy required for the inflationary cosmos is so much less than that for a non-inflationary alternative, which outweighs any entropic considerations.
Another problem that has occasionally been mentioned is the trans-Planckian problem or trans-Planckian effects. Since the energy scale of inflation and the Planck scale are relatively close, some of the quantum fluctuations that have made up the structure in our universe were smaller than the Planck length before inflation. Therefore, there ought to be corrections from Planck-scale physics, in particular the unknown quantum theory of gravity. Some disagreement remains about the magnitude of this effect: about whether it is just on the threshold of detectability or completely undetectable.
Another kind of inflation, called hybrid inflation, is an extension of new inflation. It introduces additional scalar fields, so that while one of the scalar fields is responsible for normal slow roll inflation, another triggers the end of inflation: when inflation has continued for sufficiently long, it becomes favorable to the second field to decay into a much lower energy state.
In hybrid inflation, one scalar field is responsible for most of the energy density (thus determining the rate of expansion), while another is responsible for the slow roll (thus determining the period of inflation and its termination). Thus fluctuations in the former inflaton would not affect inflation termination, while fluctuations in the latter would not affect the rate of expansion. Therefore, hybrid inflation is not eternal. When the second (slow-rolling) inflaton reaches the bottom of its potential, it changes the location of the minimum of the first inflaton's potential, which leads to a fast roll of the inflaton down its potential, leading to termination of inflation.
Dark energy is broadly similar to inflation and is thought to be causing the expansion of the present-day universe to accelerate. However, the energy scale of dark energy is much lower, 10^−12 GeV, roughly 27 orders of magnitude less than the scale of inflation.
The discovery of flux compactifications opened the way for reconciling inflation and string theory. Brane inflation suggests that inflation arises from the motion of D-branes in the compactified geometry, usually towards a stack of anti-D-branes. This theory, governed by the Dirac-Born-Infeld action, is different from ordinary inflation. The dynamics are not completely understood. It appears that special conditions are necessary since inflation occurs in tunneling between two vacua in the string landscape. The process of tunneling between two vacua is a form of old inflation, but new inflation must then occur by some other mechanism.
When investigating the effects the theory of loop quantum gravity would have on cosmology, a loop quantum cosmology model has evolved that provides a possible mechanism for cosmological inflation. Loop quantum gravity assumes a quantized spacetime. If the energy density is larger than can be held by the quantized spacetime, it is thought to bounce back.
Other models have been advanced that are claimed to explain some or all of the observations addressed by inflation.
The big bounce hypothesis attempts to replace the cosmic singularity with a cosmic contraction and bounce, thereby explaining the initial conditions that led to the big bang. The flatness and horizon problems are naturally solved in the Einstein-Cartan-Sciama-Kibble theory of gravity, without needing an exotic form of matter or free parameters. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. The minimal coupling between torsion and Dirac spinors generates a spin-spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction averts the unphysical Big Bang singularity, replacing it with a cusp-like bounce at a finite minimum scale factor, before which the Universe was contracting. The rapid expansion immediately after the Big Bounce explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic. As the density of the Universe decreases, the effects of torsion weaken and the Universe smoothly enters the radiation-dominated era.
The ekpyrotic and cyclic models are also considered adjuncts to inflation. These models solve the horizon problem through an expanding epoch well before the Big Bang, and then generate the required spectrum of primordial density perturbations during a contracting phase leading to a Big Crunch. The Universe passes through the Big Crunch and emerges in a hot Big Bang phase. In this sense they are reminiscent of Richard Chace Tolman's oscillatory universe; in Tolman's model, however, the total age of the Universe is necessarily finite, while in these models this is not necessarily so. Whether the correct spectrum of density fluctuations can be produced, and whether the Universe can successfully navigate the Big Bang/Big Crunch transition, remains a topic of controversy and current research. Ekpyrotic models avoid the magnetic monopole problem as long as the temperature at the Big Crunch/Big Bang transition remains below the Grand Unified Scale, as this is the temperature required to produce magnetic monopoles in the first place. As things stand, there is no evidence of any 'slowing down' of the expansion, but this is not surprising as each cycle is expected to last on the order of a trillion years.
String theory requires that, in addition to the three observable spatial dimensions, additional dimensions exist that are curled up or compactified (see also Kaluza–Klein theory). Extra dimensions appear as a frequent component of supergravity models and other approaches to quantum gravity. This raised the contingent question of why four space-time dimensions became large and the rest became unobservably small. An attempt to address this question, called string gas cosmology, was proposed by Robert Brandenberger and Cumrun Vafa. This model focuses on the dynamics of the early universe considered as a hot gas of strings. Brandenberger and Vafa show that a dimension of spacetime can only expand if the strings that wind around it can efficiently annihilate each other. Each string is a one-dimensional object, and the largest number of dimensions in which two strings will generically intersect (and, presumably, annihilate) is three. Therefore, the most likely number of non-compact (large) spatial dimensions is three. Current work on this model centers on whether it can succeed in stabilizing the size of the compactified dimensions and produce the correct spectrum of primordial density perturbations. The original model did not "solve the entropy and flatness problems of standard cosmology", although Brandenberger and coauthors later argued that these problems can be eliminated by implementing string gas cosmology in the context of a bouncing-universe scenario.
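The dimension counting behind that statement can be made concrete: each string sweeps out a two-dimensional worldsheet, and two generic submanifolds of dimensions p and q in a D-dimensional spacetime intersect only if p + q ≥ D. A short sketch (the range of dimensions scanned is arbitrary):

```python
# Brandenberger-Vafa counting argument (sketch): a string sweeps out a
# 2-dimensional worldsheet, and two generic submanifolds of dimensions p and q
# in D-dimensional spacetime intersect only if p + q >= D.
WORLDSHEET_DIM = 2

for spacetime_dim in range(4, 11):  # arbitrary illustrative range
    can_meet = 2 * WORLDSHEET_DIM >= spacetime_dim
    spatial_dims = spacetime_dim - 1
    verdict = "can" if can_meet else "cannot"
    print(f"D = {spacetime_dim:2d} ({spatial_dims} spatial): winding strings {verdict} generically intersect")
# Generic intersections (and hence efficient annihilation of winding strings)
# stop beyond D = 4, i.e. beyond three large spatial dimensions.
```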
Cosmological models employing a variable speed of light (VSL) have been proposed to resolve the horizon problem and provide an alternative to cosmic inflation. In the VSL models, the fundamental constant c, denoting the speed of light in vacuum, is greater in the early universe than its present value, effectively increasing the particle horizon at the time of decoupling sufficiently to account for the observed isotropy of the CMB.
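As a toy illustration only, not any specific VSL model: in a radiation-dominated universe the particle horizon grows roughly as d_H ≈ 2ct, so boosting c in the early epoch enlarges the horizon at decoupling by the same factor. The decoupling time and boost factors below are assumed round numbers:

```python
# Toy estimate (not a specific VSL model): in a radiation-dominated universe
# the particle horizon is roughly d_H ~ 2 * c * t, so a larger early-universe c
# proportionally enlarges the horizon at decoupling.
C_TODAY = 3.0e8          # m/s, present-day speed of light
T_DECOUPLING = 1.2e13    # s, roughly 380,000 years (assumed round value)

def particle_horizon(c: float, t: float) -> float:
    return 2.0 * c * t

for boost in (1.0, 1e10, 1e30):  # hypothetical early-universe enhancement factors
    d = particle_horizon(boost * C_TODAY, T_DECOUPLING)
    print(f"c boosted by {boost:8.0e}: horizon at decoupling ~ {d:.2e} m")
```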
Since its introduction by Alan Guth in 1980, the inflationary paradigm has become widely accepted. Nevertheless, many physicists, mathematicians, and philosophers of science have voiced criticisms, claiming untestable predictions and a lack of serious empirical support. In 1999, John Earman and Jesús Mosterín published a thorough critical review of inflationary cosmology, concluding,
As pointed out by Roger Penrose from 1986 on, in order to work, inflation requires extremely specific initial conditions of its own, so that the problem (or pseudo-problem) of initial conditions is not solved:
The problem of specific or "fine-tuned" initial conditions would not have been solved; it would have gotten worse. At a conference in 2015, Penrose said that
A recurrent criticism of inflation is that the invoked inflaton field does not correspond to any known physical field, and that its potential energy curve seems to be an ad hoc contrivance to accommodate almost any data obtainable. Paul Steinhardt, one of the founding fathers of inflationary cosmology, has recently become one of its sharpest critics. He calls 'bad inflation' a period of accelerated expansion whose outcome conflicts with observations, and 'good inflation' one compatible with them:
Together with Anna Ijjas and Abraham Loeb, he wrote articles claiming that the inflationary paradigm is in trouble in view of the data from the Planck satellite.
Counter-arguments were presented by Alan Guth, David Kaiser, and Yasunori Nomura and by Andrei Linde, saying that | [
{
"paragraph_id": 0,
"text": "In physical cosmology, cosmic inflation, cosmological inflation, or just inflation, is a theory of exponential expansion of space in the early universe. The inflationary epoch is believed to have lasted from 10 seconds to between 10 and 10 seconds after the Big Bang. Following the inflationary period, the universe continued to expand, but at a slower rate. The acceleration of this expansion due to dark energy began after the universe was already over 7.7 billion years old (5.4 billion years ago).",
"title": ""
},
{
"paragraph_id": 1,
"text": "Inflation theory was developed in the late 1970s and early 80s, with notable contributions by several theoretical physicists, including Alexei Starobinsky at Landau Institute for Theoretical Physics, Alan Guth at Cornell University, and Andrei Linde at Lebedev Physical Institute. Alexei Starobinsky, Alan Guth, and Andrei Linde won the 2014 Kavli Prize \"for pioneering the theory of cosmic inflation\". It was developed further in the early 1980s. It explains the origin of the large-scale structure of the cosmos. Quantum fluctuations in the microscopic inflationary region, magnified to cosmic size, become the seeds for the growth of structure in the Universe (see galaxy formation and evolution and structure formation). Many physicists also believe that inflation explains why the universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the universe is flat, and why no magnetic monopoles have been observed.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The detailed particle physics mechanism responsible for inflation is unknown. The basic inflationary paradigm is accepted by most physicists, as a number of inflation model predictions have been confirmed by observation; however, a substantial minority of scientists dissent from this position. The hypothetical field thought to be responsible for inflation is called the inflaton.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In 2002 three of the original architects of the theory were recognized for their major contributions; physicists Alan Guth of M.I.T., Andrei Linde of Stanford, and Paul Steinhardt of Princeton shared the prestigious Dirac Prize \"for development of the concept of inflation in cosmology\". In 2012 Guth and Linde were awarded the Breakthrough Prize in Fundamental Physics for their invention and development of inflationary cosmology.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Around 1930, Edwin Hubble discovered that light from remote galaxies was redshifted; the more remote, the more shifted. This implies that the galaxies are receding from the Earth, with more distant galaxies receding more rapidly, such that galaxies also recede from each other. This expansion of the universe was previously predicted by Alexander Friedmann and Georges Lemaître from the theory of general relativity. It can be understood as a consequence of an initial impulse, which sent the contents of the universe flying apart at such a rate that their mutual gravitational attraction has not reversed their separation.",
"title": "Overview"
},
{
"paragraph_id": 5,
"text": "Inflation may provide this initial impulse. According to the Friedmann equations that describe the dynamics of an expanding universe, a fluid with sufficiently negative pressure exerts gravitational repulsion in the cosmological context. A field in a positive-energy false vacuum state could represent such a fluid, and the resulting repulsion would set the universe into exponential expansion. This inflation phase was originally proposed by Alan Guth in 1979 because the exponential expansion could dilute exotic relics, such as magnetic monopoles, that were predicted by grand unified theories at the time. This would explain why such relics were not seen. It was quickly realized that such accelerated expansion would resolve the horizon problem and the flatness problem. These problems arise from the notion that to look like it does today, the Universe must have started from very finely tuned, or \"special\", initial conditions at the Big Bang.",
"title": "Overview"
},
{
"paragraph_id": 6,
"text": "An expanding universe generally has a cosmological horizon, which, by analogy with the more familiar horizon caused by the curvature of Earth's surface, marks the boundary of the part of the Universe that an observer can see. Light (or other radiation) emitted by objects beyond the cosmological horizon in an accelerating universe never reaches the observer, because the space in between the observer and the object is expanding too rapidly.",
"title": "Theory"
},
{
"paragraph_id": 7,
"text": "The observable universe is one causal patch of a much larger unobservable universe; other parts of the Universe cannot communicate with Earth yet. These parts of the Universe are outside our current cosmological horizon. In the standard hot big bang model, without inflation, the cosmological horizon moves out, bringing new regions into view. Yet as a local observer sees such a region for the first time, it looks no different from any other region of space the local observer has already seen: its background radiation is at nearly the same temperature as the background radiation of other regions, and its space-time curvature is evolving lock-step with the others. This presents a mystery: how did these new regions know what temperature and curvature they were supposed to have? They couldn't have learned it by getting signals, because they were not previously in communication with our past light cone.",
"title": "Theory"
},
{
"paragraph_id": 8,
"text": "Inflation answers this question by postulating that all the regions come from an earlier era with a big vacuum energy, or cosmological constant. A space with a cosmological constant is qualitatively different: instead of moving outward, the cosmological horizon stays put. For any one observer, the distance to the cosmological horizon is constant. With exponentially expanding space, two nearby observers are separated very quickly; so much so, that the distance between them quickly exceeds the limits of communications. The spatial slices are expanding very fast to cover huge volumes. Things are constantly moving beyond the cosmological horizon, which is a fixed distance away, and everything becomes homogeneous.",
"title": "Theory"
},
{
"paragraph_id": 9,
"text": "As the inflationary field slowly relaxes to the vacuum, the cosmological constant goes to zero and space begins to expand normally. The new regions that come into view during the normal expansion phase are exactly the same regions that were pushed out of the horizon during inflation, and so they are at nearly the same temperature and curvature, because they come from the same originally small patch of space.",
"title": "Theory"
},
{
"paragraph_id": 10,
"text": "The theory of inflation thus explains why the temperatures and curvatures of different regions are so nearly equal. It also predicts that the total curvature of a space-slice at constant global time is zero. This prediction implies that the total ordinary matter, dark matter and residual vacuum energy in the Universe have to add up to the critical density, and the evidence supports this. More strikingly, inflation allows physicists to calculate the minute differences in temperature of different regions from quantum fluctuations during the inflationary era, and many of these quantitative predictions have been confirmed.",
"title": "Theory"
},
{
"paragraph_id": 11,
"text": "In a space that expands exponentially (or nearly exponentially) with time, any pair of free-floating objects that are initially at rest will move apart from each other at an accelerating rate, at least as long as they are not bound together by any force. From the point of view of one such object, the spacetime is something like an inside-out Schwarzschild black hole—each object is surrounded by a spherical event horizon. Once the other object has fallen through this horizon it can never return, and even light signals it sends will never reach the first object (at least so long as the space continues to expand exponentially).",
"title": "Theory"
},
{
"paragraph_id": 12,
"text": "In the approximation that the expansion is exactly exponential, the horizon is static and remains a fixed physical distance away. This patch of an inflating universe can be described by the following metric:",
"title": "Theory"
},
{
"paragraph_id": 13,
"text": "This exponentially expanding spacetime is called a de Sitter space, and to sustain it there must be a cosmological constant, a vacuum energy density that is constant in space and time and proportional to Λ in the above metric. For the case of exactly exponential expansion, the vacuum energy has a negative pressure p equal in magnitude to its energy density ρ; the equation of state is p=−ρ.",
"title": "Theory"
},
{
"paragraph_id": 14,
"text": "Inflation is typically not an exactly exponential expansion, but rather quasi- or near-exponential. In such a universe the horizon will slowly grow with time as the vacuum energy density gradually decreases.",
"title": "Theory"
},
{
"paragraph_id": 15,
"text": "Because the accelerating expansion of space stretches out any initial variations in density or temperature to very large length scales, an essential feature of inflation is that it smooths out inhomogeneities and anisotropies, and reduces the curvature of space. This pushes the Universe into a very simple state in which it is completely dominated by the inflaton field and the only significant inhomogeneities are tiny quantum fluctuations. Inflation also dilutes exotic heavy particles, such as the magnetic monopoles predicted by many extensions to the Standard Model of particle physics. If the Universe was only hot enough to form such particles before a period of inflation, they would not be observed in nature, as they would be so rare that it is quite likely that there are none in the observable universe. Together, these effects are called the inflationary \"no-hair theorem\" by analogy with the no hair theorem for black holes.",
"title": "Theory"
},
{
"paragraph_id": 16,
"text": "The \"no-hair\" theorem works essentially because the cosmological horizon is no different from a black-hole horizon, except for not testable disagreements about what is on the other side. The interpretation of the no-hair theorem is that the Universe (observable and unobservable) expands by an enormous factor during inflation. In an expanding universe, energy densities generally fall, or get diluted, as the volume of the Universe increases. For example, the density of ordinary \"cold\" matter (dust) goes down as the inverse of the volume: when linear dimensions double, the energy density goes down by a factor of eight; the radiation energy density goes down even more rapidly as the Universe expands since the wavelength of each photon is stretched (redshifted), in addition to the photons being dispersed by the expansion. When linear dimensions are doubled, the energy density in radiation falls by a factor of sixteen (see the solution of the energy density continuity equation for an ultra-relativistic fluid). During inflation, the energy density in the inflaton field is roughly constant. However, the energy density in everything else, including inhomogeneities, curvature, anisotropies, exotic particles, and standard-model particles is falling, and through sufficient inflation these all become negligible. This leaves the Universe flat and symmetric, and (apart from the homogeneous inflaton field) mostly empty, at the moment inflation ends and reheating begins.",
"title": "Theory"
},
{
"paragraph_id": 17,
"text": "A key requirement is that inflation must continue long enough to produce the present observable universe from a single, small inflationary Hubble volume. This is necessary to ensure that the Universe appears flat, homogeneous and isotropic at the largest observable scales. This requirement is generally thought to be satisfied if the Universe expanded by a factor of at least 10 during inflation.",
"title": "Theory"
},
{
"paragraph_id": 18,
"text": "Inflation is a period of supercooled expansion, when the temperature drops by a factor of 100,000 or so. (The exact drop is model-dependent, but in the first models it was typically from 10 K down to 10 K.) This relatively low temperature is maintained during the inflationary phase. When inflation ends the temperature returns to the pre-inflationary temperature; this is called reheating or thermalization because the large potential energy of the inflaton field decays into particles and fills the Universe with Standard Model particles, including electromagnetic radiation, starting the radiation dominated phase of the Universe. Because the nature of the inflaton field is not known, this process is still poorly understood, although it is believed to take place through a parametric resonance.",
"title": "Theory"
},
{
"paragraph_id": 19,
"text": "Inflation resolves several problems in Big Bang cosmology that were discovered in the 1970s. Inflation was first proposed by Alan Guth in 1979 while investigating the problem of why no magnetic monopoles are seen today; he found that a positive-energy false vacuum would, according to general relativity, generate an exponential expansion of space. It was very quickly realised that such an expansion would resolve many other long-standing problems. These problems arise from the observation that to look like it does today, the Universe would have to have started from very finely tuned, or \"special\" initial conditions at the Big Bang. Inflation attempts to resolve these problems by providing a dynamical mechanism that drives the Universe to this special state, thus making a universe like ours much more likely in the context of the Big Bang theory.",
"title": "Motivations"
},
{
"paragraph_id": 20,
"text": "The horizon problem is the problem of determining why the Universe appears statistically homogeneous and isotropic in accordance with the cosmological principle. For example, molecules in a canister of gas are distributed homogeneously and isotropically because they are in thermal equilibrium: gas throughout the canister has had enough time to interact to dissipate inhomogeneities and anisotropies. The situation is quite different in the big bang model without inflation, because gravitational expansion does not give the early universe enough time to equilibrate. In a big bang with only the matter and radiation known in the Standard Model, two widely separated regions of the observable universe cannot have equilibrated because they move apart from each other faster than the speed of light and thus have never come into causal contact. In the early Universe, it was not possible to send a light signal between the two regions. Because they have had no interaction, it is difficult to explain why they have the same temperature (are thermally equilibrated). Historically, proposed solutions included the Phoenix universe of Georges Lemaître, the related oscillatory universe of Richard Chase Tolman, and the Mixmaster universe of Charles Misner. Lemaître and Tolman proposed that a universe undergoing a number of cycles of contraction and expansion could come into thermal equilibrium. Their models failed, however, because of the buildup of entropy over several cycles. Misner made the (ultimately incorrect) conjecture that the Mixmaster mechanism, which made the Universe more chaotic, could lead to statistical homogeneity and isotropy.",
"title": "Motivations"
},
{
"paragraph_id": 21,
"text": "The flatness problem is sometimes called one of the Dicke coincidences (along with the cosmological constant problem). It became known in the 1960s that the density of matter in the Universe was comparable to the critical density necessary for a flat universe (that is, a universe whose large scale geometry is the usual Euclidean geometry, rather than a non-Euclidean hyperbolic or spherical geometry).",
"title": "Motivations"
},
{
"paragraph_id": 22,
"text": "Therefore, regardless of the shape of the universe the contribution of spatial curvature to the expansion of the Universe could not be much greater than the contribution of matter. But as the Universe expands, the curvature redshifts away more slowly than matter and radiation. Extrapolated into the past, this presents a fine-tuning problem because the contribution of curvature to the Universe must be exponentially small (sixteen orders of magnitude less than the density of radiation at Big Bang nucleosynthesis, for example). This problem is exacerbated by recent observations of the cosmic microwave background that have demonstrated that the Universe is flat to within a few percent.",
"title": "Motivations"
},
{
"paragraph_id": 23,
"text": "The magnetic monopole problem, sometimes called \"the exotic-relics problem\", says that if the early universe were very hot, a large number of very heavy, stable magnetic monopoles would have been produced.",
"title": "Motivations"
},
{
"paragraph_id": 24,
"text": "Stable magnetic monopoles are a problem for Grand Unified Theories, which propose that at high temperatures (such as in the early universe) the electromagnetic force, strong, and weak nuclear forces are not actually fundamental forces but arise due to spontaneous symmetry breaking from a single gauge theory. These theories predict a number of heavy, stable particles that have not been observed in nature. The most notorious is the magnetic monopole, a kind of stable, heavy \"charge\" of magnetic field.",
"title": "Motivations"
},
{
"paragraph_id": 25,
"text": "Monopoles are predicted to be copiously produced following Grand Unified Theories at high temperature, and they should have persisted to the present day, to such an extent that they would become the primary constituent of the Universe. Not only is that not the case, but all searches for them have failed, placing stringent limits on the density of relic magnetic monopoles in the Universe.",
"title": "Motivations"
},
{
"paragraph_id": 26,
"text": "A period of inflation that occurs below the temperature where magnetic monopoles can be produced would offer a possible resolution of this problem: Monopoles would be separated from each other as the Universe around them expands, potentially lowering their observed density by many orders of magnitude. Though, as cosmologist Martin Rees has written,",
"title": "Motivations"
},
{
"paragraph_id": 27,
"text": "In the early days of General Relativity, Albert Einstein introduced the cosmological constant to allow a static solution, which was a three-dimensional sphere with a uniform density of matter. Later, Willem de Sitter found a highly symmetric inflating universe, which described a universe with a cosmological constant that is otherwise empty. It was discovered that Einstein's universe is unstable, and that small fluctuations cause it to collapse or turn into a de Sitter universe.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "In the early 1970s, Zeldovich noticed the flatness and horizon problems of Big Bang cosmology; before his work, cosmology was presumed to be symmetrical on purely philosophical grounds. In the Soviet Union, this and other considerations led Belinski and Khalatnikov to analyze the chaotic BKL singularity in General Relativity. Misner's Mixmaster universe attempted to use this chaotic behavior to solve the cosmological problems, with limited success.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "In the late 1970s, Sidney Coleman applied the instanton techniques developed by Alexander Polyakov and collaborators to study the fate of the false vacuum in quantum field theory. Like a metastable phase in statistical mechanics—water below the freezing temperature or above the boiling point—a quantum field would need to nucleate a large enough bubble of the new vacuum, the new phase, in order to make a transition. Coleman found the most likely decay pathway for vacuum decay and calculated the inverse lifetime per unit volume. He eventually noted that gravitational effects would be significant, but he did not calculate these effects and did not apply the results to cosmology.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "The universe could have been spontaneously created from nothing (no space, time, nor matter) by quantum fluctuations of metastable false vacuum causing an expanding bubble of true vacuum.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "In the Soviet Union, Alexei Starobinsky noted that quantum corrections to general relativity should be important for the early universe. These generically lead to curvature-squared corrections to the Einstein–Hilbert action and a form of f(R) modified gravity. The solution to Einstein's equations in the presence of curvature squared terms, when the curvatures are large, leads to an effective cosmological constant. Therefore, he proposed that the early universe went through an inflationary de Sitter era. This resolved the cosmology problems and led to specific predictions for the corrections to the microwave background radiation, corrections that were then calculated in detail. Starobinsky used the action",
"title": "History"
},
{
"paragraph_id": 32,
"text": "which corresponds to the potential",
"title": "History"
},
{
"paragraph_id": 33,
"text": "in the Einstein frame. This results in the observables: n s = 1 − 2 N , r = 12 N 2 . {\\displaystyle n_{s}=1-{\\frac {2}{N}},\\quad \\quad r={\\frac {12}{N^{2}}}.}",
"title": "History"
},
{
"paragraph_id": 34,
"text": "In 1978, Zeldovich noted the magnetic monopole problem, which was an unambiguous quantitative version of the horizon problem, this time in a subfield of particle physics, which led to several speculative attempts to resolve it. In 1980 Alan Guth realized that false vacuum decay in the early universe would solve the problem, leading him to propose a scalar-driven inflation. Starobinsky's and Guth's scenarios both predicted an initial de Sitter phase, differing only in mechanistic details.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "Guth proposed inflation in January 1981 to explain the nonexistence of magnetic monopoles; it was Guth who coined the term \"inflation\". At the same time, Starobinsky argued that quantum corrections to gravity would replace the supposed initial singularity of the Universe with an exponentially expanding de Sitter phase. In October 1980, Demosthenes Kazanas suggested that exponential expansion could eliminate the particle horizon and perhaps solve the horizon problem, while Sato suggested that an exponential expansion could eliminate domain walls (another kind of exotic relic). In 1981 Einhorn and Sato published a model similar to Guth's and showed that it would resolve the puzzle of the magnetic monopole abundance in Grand Unified Theories. Like Guth, they concluded that such a model not only required fine tuning of the cosmological constant, but also would likely lead to a much too granular universe, i.e., to large density variations resulting from bubble wall collisions.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "Guth proposed that as the early universe cooled, it was trapped in a false vacuum with a high energy density, which is much like a cosmological constant. As the very early universe cooled it was trapped in a metastable state (it was supercooled), which it could only decay out of through the process of bubble nucleation via quantum tunneling. Bubbles of true vacuum spontaneously form in the sea of false vacuum and rapidly begin expanding at the speed of light. Guth recognized that this model was problematic because the model did not reheat properly: when the bubbles nucleated, they did not generate any radiation. Radiation could only be generated in collisions between bubble walls. But if inflation lasted long enough to solve the initial conditions problems, collisions between bubbles became exceedingly rare. In any one causal patch it is likely that only one bubble would nucleate.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "... Kazanas (1980) called this phase of the early Universe \"de Sitter's phase.\" The name \"inflation\" was given by Guth (1981). ... Guth himself did not refer to work of Kazanas until he published a book on the subject under the title The Inflationary Universe: The quest for a new theory of cosmic origin (1997), where he apologizes for not having referenced the work of Kazanas and of others, related to inflation.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "The bubble collision problem was solved by Linde and independently by Andreas Albrecht and Paul Steinhardt in a model named new inflation or slow-roll inflation (Guth's model then became known as old inflation). In this model, instead of tunneling out of a false vacuum state, inflation occurred by a scalar field rolling down a potential energy hill. When the field rolls very slowly compared to the expansion of the Universe, inflation occurs. However, when the hill becomes steeper, inflation ends and reheating can occur.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "Eventually, it was shown that new inflation does not produce a perfectly symmetric universe, but that quantum fluctuations in the inflaton are created. These fluctuations form the primordial seeds for all structure created in the later universe. These fluctuations were first calculated by Viatcheslav Mukhanov and G. V. Chibisov in analyzing Starobinsky's similar model. In the context of inflation, they were worked out independently of the work of Mukhanov and Chibisov at the three-week 1982 Nuffield Workshop on the Very Early Universe at Cambridge University. The fluctuations were calculated by four groups working separately over the course of the workshop: Stephen Hawking; Starobinsky; Guth and So-Young Pi; and Bardeen, Steinhardt and Turner.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "Inflation is a mechanism for realizing the cosmological principle, which is the basis of the standard model of physical cosmology: it accounts for the homogeneity and isotropy of the observable universe. In addition, it accounts for the observed flatness and absence of magnetic monopoles. Since Guth's early work, each of these observations has received further confirmation, most impressively by the detailed observations of the cosmic microwave background made by the Planck spacecraft. This analysis shows that the Universe is flat to within 1 /2 percent, and that it is homogeneous and isotropic to one part in 100,000.",
"title": "Observational status"
},
{
"paragraph_id": 41,
"text": "Inflation predicts that the structures visible in the Universe today formed through the gravitational collapse of perturbations that were formed as quantum mechanical fluctuations in the inflationary epoch. The detailed form of the spectrum of perturbations, called a nearly-scale-invariant Gaussian random field is very specific and has only two free parameters. One is the amplitude of the spectrum and the spectral index, which measures the slight deviation from scale invariance predicted by inflation (perfect scale invariance corresponds to the idealized de Sitter universe). The other free parameter is the tensor to scalar ratio. The simplest inflation models, those without fine-tuning, predict a tensor to scalar ratio near 0.1 .",
"title": "Observational status"
},
{
"paragraph_id": 42,
"text": "Inflation predicts that the observed perturbations should be in thermal equilibrium with each other (these are called adiabatic or isentropic perturbations). This structure for the perturbations has been confirmed by the Planck spacecraft, WMAP spacecraft and other cosmic microwave background (CMB) experiments, and galaxy surveys, especially the ongoing Sloan Digital Sky Survey. These experiments have shown that the one part in 100,000 inhomogeneities observed have exactly the form predicted by theory. There is evidence for a slight deviation from scale invariance. The spectral index, ns is one for a scale-invariant Harrison–Zel'dovich spectrum. The simplest inflation models predict that ns is between 0.92 and 0.98 . This is the range that is possible without fine-tuning of the parameters related to energy. From Planck data it can be inferred that ns=0.968 ± 0.006, and a tensor to scalar ratio that is less than 0.11 . These are considered an important confirmation of the theory of inflation.",
"title": "Observational status"
},
{
"paragraph_id": 43,
"text": "Various inflation theories have been proposed that make radically different predictions, but they generally have much more fine-tuning than should be necessary. As a physical model, however, inflation is most valuable in that it robustly predicts the initial conditions of the Universe based on only two adjustable parameters: the spectral index (that can only change in a small range) and the amplitude of the perturbations. Except in contrived models, this is true regardless of how inflation is realized in particle physics.",
"title": "Observational status"
},
{
"paragraph_id": 44,
"text": "Occasionally, effects are observed that appear to contradict the simplest models of inflation. The first-year WMAP data suggested that the spectrum might not be nearly scale-invariant, but might instead have a slight curvature. However, the third-year data revealed that the effect was a statistical anomaly. Another effect remarked upon since the first cosmic microwave background satellite, the Cosmic Background Explorer is that the amplitude of the quadrupole moment of the CMB is unexpectedly low and the other low multipoles appear to be preferentially aligned with the ecliptic plane. Some have claimed that this is a signature of non-Gaussianity and thus contradicts the simplest models of inflation. Others have suggested that the effect may be due to other new physics, foreground contamination, or even publication bias.",
"title": "Observational status"
},
{
"paragraph_id": 45,
"text": "An experimental program is underway to further test inflation with more precise CMB measurements. In particular, high precision measurements of the so-called \"B-modes\" of the polarization of the background radiation could provide evidence of the gravitational radiation produced by inflation, and could also show whether the energy scale of inflation predicted by the simplest models (10~10 GeV) is correct. In March 2014, the BICEP2 team announced B-mode CMB polarization confirming inflation had been demonstrated. The team announced the tensor-to-scalar power ratio r was between 0.15 and 0.27 (rejecting the null hypothesis; r is expected to be 0 in the absence of inflation). However, on 19 June 2014, lowered confidence in confirming the findings was reported; on 19 September 2014, a further reduction in confidence was reported and, on 30 January 2015, even less confidence yet was reported. By 2018, additional data suggested, with 95% confidence, that r {\\displaystyle r} is 0.06 or lower: consistent with the null hypothesis, but still also consistent with many remaining models of inflation.",
"title": "Observational status"
},
{
"paragraph_id": 46,
"text": "Other potentially corroborating measurements are expected from the Planck spacecraft, although it is unclear if the signal will be visible, or if contamination from foreground sources will interfere. Other forthcoming measurements, such as those of 21 centimeter radiation (radiation emitted and absorbed from neutral hydrogen before the first stars formed), may measure the power spectrum with even greater resolution than the CMB and galaxy surveys, although it is not known if these measurements will be possible or if interference with radio sources on Earth and in the galaxy will be too great.",
"title": "Observational status"
},
{
"paragraph_id": 47,
"text": "Is the theory of cosmological inflation correct, and if so, what are the details of this epoch? What is the hypothetical inflaton field giving rise to inflation?",
"title": "Theoretical status"
},
{
"paragraph_id": 48,
"text": "In Guth's early proposal, it was thought that the inflaton was the Higgs field, the field that explains the mass of the elementary particles. It is now believed by some that the inflaton cannot be the Higgs field although the recent discovery of the Higgs boson has increased the number of works considering the Higgs field as inflaton. One problem of this identification is the current tension with experimental data at the electroweak scale, which is currently under study at the Large Hadron Collider (LHC). Other models of inflation relied on the properties of Grand Unified Theories. Since the simplest models of grand unification have failed, it is now thought by many physicists that inflation will be included in a supersymmetric theory such as string theory or a supersymmetric grand unified theory. At present, while inflation is understood principally by its detailed predictions of the initial conditions for the hot early universe, the particle physics is largely ad hoc modelling. As such, although predictions of inflation have been consistent with the results of observational tests, many open questions remain.",
"title": "Theoretical status"
},
{
"paragraph_id": 49,
"text": "One of the most severe challenges for inflation arises from the need for fine tuning. In new inflation, the slow-roll conditions must be satisfied for inflation to occur. The slow-roll conditions say that the inflaton potential must be flat (compared to the large vacuum energy) and that the inflaton particles must have a small mass. New inflation requires the Universe to have a scalar field with an especially flat potential and special initial conditions. However, explanations for these fine-tunings have been proposed. For example, classically scale invariant field theories, where scale invariance is broken by quantum effects, provide an explanation of the flatness of inflationary potentials, as long as the theory can be studied through perturbation theory.",
"title": "Theoretical status"
},
{
"paragraph_id": 50,
"text": "Linde proposed a theory known as chaotic inflation in which he suggested that the conditions for inflation were actually satisfied quite generically. Inflation will occur in virtually any universe that begins in a chaotic, high energy state that has a scalar field with unbounded potential energy. However, in his model the inflaton field necessarily takes values larger than one Planck unit: for this reason, these are often called large field models and the competing new inflation models are called small field models. In this situation, the predictions of effective field theory are thought to be invalid, as renormalization should cause large corrections that could prevent inflation. This problem has not yet been resolved and some cosmologists argue that the small field models, in which inflation can occur at a much lower energy scale, are better models. While inflation depends on quantum field theory (and the semiclassical approximation to quantum gravity) in an important way, it has not been completely reconciled with these theories.",
"title": "Theoretical status"
},
{
"paragraph_id": 51,
"text": "Brandenberger commented on fine-tuning in another situation. The amplitude of the primordial inhomogeneities produced in inflation is directly tied to the energy scale of inflation. This scale is suggested to be around 10 GeV or 10 times the Planck energy. The natural scale is naïvely the Planck scale so this small value could be seen as another form of fine-tuning (called a hierarchy problem): the energy density given by the scalar potential is down by 10 compared to the Planck density. This is not usually considered to be a critical problem, however, because the scale of inflation corresponds naturally to the scale of gauge unification.",
"title": "Theoretical status"
},
{
"paragraph_id": 52,
"text": "In many models, the inflationary phase of the Universe's expansion lasts forever in at least some regions of the Universe. This occurs because inflating regions expand very rapidly, reproducing themselves. Unless the rate of decay to the non-inflating phase is sufficiently fast, new inflating regions are produced more rapidly than non-inflating regions. In such models, most of the volume of the Universe is continuously inflating at any given time.",
"title": "Theoretical status"
},
{
"paragraph_id": 53,
"text": "All models of eternal inflation produce an infinite, hypothetical multiverse, typically a fractal. The multiverse theory has created significant dissension in the scientific community about the viability of the inflationary model.",
"title": "Theoretical status"
},
{
"paragraph_id": 54,
"text": "Paul Steinhardt, one of the original architects of the inflationary model, introduced the first example of eternal inflation in 1983. He showed that the inflation could proceed forever by producing bubbles of non-inflating space filled with hot matter and radiation surrounded by empty space that continues to inflate. The bubbles could not grow fast enough to keep up with the inflation. Later that same year, Alexander Vilenkin showed that eternal inflation is generic.",
"title": "Theoretical status"
},
{
"paragraph_id": 55,
"text": "Although new inflation is classically rolling down the potential, quantum fluctuations can sometimes lift it to previous levels. These regions in which the inflaton fluctuates upwards expand much faster than regions in which the inflaton has a lower potential energy, and tend to dominate in terms of physical volume. It has been shown that any inflationary theory with an unbounded potential is eternal. There are well-known theorems that this steady state cannot continue forever into the past. Inflationary spacetime, which is similar to de Sitter space, is incomplete without a contracting region. However, unlike de Sitter space, fluctuations in a contracting inflationary space collapse to form a gravitational singularity, a point where densities become infinite. Therefore, it is necessary to have a theory for the Universe's initial conditions.",
"title": "Theoretical status"
},
{
"paragraph_id": 56,
"text": "In eternal inflation, regions with inflation have an exponentially growing volume, while regions that are not inflating don't. This suggests that the volume of the inflating part of the Universe in the global picture is always unimaginably larger than the part that has stopped inflating, even though inflation eventually ends as seen by any single pre-inflationary observer. Scientists disagree about how to assign a probability distribution to this hypothetical anthropic landscape. If the probability of different regions is counted by volume, one should expect that inflation will never end or applying boundary conditions that a local observer exists to observe it, that inflation will end as late as possible.",
"title": "Theoretical status"
},
{
"paragraph_id": 57,
"text": "Some physicists believe this paradox can be resolved by weighting observers by their pre-inflationary volume. Others believe that there is no resolution to the paradox and that the multiverse is a critical flaw in the inflationary paradigm. Paul Steinhardt, who first introduced the eternal inflationary model, later became one of its most vocal critics for this reason.",
"title": "Theoretical status"
},
{
"paragraph_id": 58,
"text": "Some physicists have tried to avoid the initial conditions problem by proposing models for an eternally inflating universe with no origin. These models propose that while the Universe, on the largest scales, expands exponentially it was, is and always will be, spatially infinite and has existed, and will exist, forever.",
"title": "Theoretical status"
},
{
"paragraph_id": 59,
"text": "Other proposals attempt to describe the ex nihilo creation of the Universe based on quantum cosmology and the following inflation. Vilenkin put forth one such scenario. Hartle and Hawking offered the no-boundary proposal for the initial creation of the Universe in which inflation comes about naturally.",
"title": "Theoretical status"
},
{
"paragraph_id": 60,
"text": "Guth described the inflationary universe as the \"ultimate free lunch\": new universes, similar to our own, are continually produced in a vast inflating background. Gravitational interactions, in this case, circumvent (but do not violate) the first law of thermodynamics (energy conservation) and the second law of thermodynamics (entropy and the arrow of time problem). However, while there is consensus that this solves the initial conditions problem, some have disputed this, as it is much more likely that the Universe came about by a quantum fluctuation. Don Page was an outspoken critic of inflation because of this anomaly. He stressed that the thermodynamic arrow of time necessitates low entropy initial conditions, which would be highly unlikely. According to them, rather than solving this problem, the inflation theory aggravates it – the reheating at the end of the inflation era increases entropy, making it necessary for the initial state of the Universe to be even more orderly than in other Big Bang theories with no inflation phase.",
"title": "Theoretical status"
},
{
"paragraph_id": 61,
"text": "Hawking and Page later found ambiguous results when they attempted to compute the probability of inflation in the Hartle-Hawking initial state. Other authors have argued that, since inflation is eternal, the probability doesn't matter as long as it is not precisely zero: once it starts, inflation perpetuates itself and quickly dominates the Universe. However, Albrecht and Lorenzo Sorbo argued that the probability of an inflationary cosmos, consistent with today's observations, emerging by a random fluctuation from some pre-existent state is much higher than that of a non-inflationary cosmos. This is because the \"seed\" amount of non-gravitational energy required for the inflationary cosmos is so much less than that for a non-inflationary alternative, which outweighs any entropic considerations.",
"title": "Theoretical status"
},
{
"paragraph_id": 62,
"text": "Another problem that has occasionally been mentioned is the trans-Planckian problem or trans-Planckian effects. Since the energy scale of inflation and the Planck scale are relatively close, some of the quantum fluctuations that have made up the structure in our universe were smaller than the Planck length before inflation. Therefore, there ought to be corrections from Planck-scale physics, in particular the unknown quantum theory of gravity. Some disagreement remains about the magnitude of this effect: about whether it is just on the threshold of detectability or completely undetectable.",
"title": "Theoretical status"
},
{
"paragraph_id": 63,
"text": "Another kind of inflation, called hybrid inflation, is an extension of new inflation. It introduces additional scalar fields, so that while one of the scalar fields is responsible for normal slow roll inflation, another triggers the end of inflation: when inflation has continued for sufficiently long, it becomes favorable to the second field to decay into a much lower energy state.",
"title": "Theoretical status"
},
{
"paragraph_id": 64,
"text": "In hybrid inflation, one scalar field is responsible for most of the energy density (thus determining the rate of expansion), while another is responsible for the slow roll (thus determining the period of inflation and its termination). Thus fluctuations in the former inflaton would not affect inflation termination, while fluctuations in the latter would not affect the rate of expansion. Therefore, hybrid inflation is not eternal. When the second (slow-rolling) inflaton reaches the bottom of its potential, it changes the location of the minimum of the first inflaton's potential, which leads to a fast roll of the inflaton down its potential, leading to termination of inflation.",
"title": "Theoretical status"
},
{
"paragraph_id": 65,
"text": "Dark energy is broadly similar to inflation and is thought to be causing the expansion of the present-day universe to accelerate. However, the energy scale of dark energy is much lower, 10 GeV, roughly 27 orders of magnitude less than the scale of inflation.",
"title": "Theoretical status"
},
{
"paragraph_id": 66,
"text": "The discovery of flux compactifications opened the way for reconciling inflation and string theory. Brane inflation suggests that inflation arises from the motion of D-branes in the compactified geometry, usually towards a stack of anti-D-branes. This theory, governed by the Dirac-Born-Infeld action, is different from ordinary inflation. The dynamics are not completely understood. It appears that special conditions are necessary since inflation occurs in tunneling between two vacua in the string landscape. The process of tunneling between two vacua is a form of old inflation, but new inflation must then occur by some other mechanism.",
"title": "Theoretical status"
},
{
"paragraph_id": 67,
"text": "When investigating the effects the theory of loop quantum gravity would have on cosmology, a loop quantum cosmology model has evolved that provides a possible mechanism for cosmological inflation. Loop quantum gravity assumes a quantized spacetime. If the energy density is larger than can be held by the quantized spacetime, it is thought to bounce back.",
"title": "Theoretical status"
},
{
"paragraph_id": 68,
"text": "Other models have been advanced that are claimed to explain some or all of the observations addressed by inflation.",
"title": "Alternatives and adjuncts"
},
{
"paragraph_id": 69,
"text": "The big bounce hypothesis attempts to replace the cosmic singularity with a cosmic contraction and bounce, thereby explaining the initial conditions that led to the big bang. The flatness and horizon problems are naturally solved in the Einstein-Cartan-Sciama-Kibble theory of gravity, without needing an exotic form of matter or free parameters. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. The minimal coupling between torsion and Dirac spinors generates a spin-spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction averts the unphysical Big Bang singularity, replacing it with a cusp-like bounce at a finite minimum scale factor, before which the Universe was contracting. The rapid expansion immediately after the Big Bounce explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic. As the density of the Universe decreases, the effects of torsion weaken and the Universe smoothly enters the radiation-dominated era.",
"title": "Alternatives and adjuncts"
},
{
"paragraph_id": 70,
"text": "The ekpyrotic and cyclic models are also considered adjuncts to inflation. These models solve the horizon problem through an expanding epoch well before the Big Bang, and then generate the required spectrum of primordial density perturbations during a contracting phase leading to a Big Crunch. The Universe passes through the Big Crunch and emerges in a hot Big Bang phase. In this sense they are reminiscent of Richard Chace Tolman's oscillatory universe; in Tolman's model, however, the total age of the Universe is necessarily finite, while in these models this is not necessarily so. Whether the correct spectrum of density fluctuations can be produced, and whether the Universe can successfully navigate the Big Bang/Big Crunch transition, remains a topic of controversy and current research. Ekpyrotic models avoid the magnetic monopole problem as long as the temperature at the Big Crunch/Big Bang transition remains below the Grand Unified Scale, as this is the temperature required to produce magnetic monopoles in the first place. As things stand, there is no evidence of any 'slowing down' of the expansion, but this is not surprising as each cycle is expected to last on the order of a trillion years.",
"title": "Alternatives and adjuncts"
},
{
"paragraph_id": 71,
"text": "String theory requires that, in addition to the three observable spatial dimensions, additional dimensions exist that are curled up or compactified (see also Kaluza–Klein theory). Extra dimensions appear as a frequent component of supergravity models and other approaches to quantum gravity. This raised the contingent question of why four space-time dimensions became large and the rest became unobservably small. An attempt to address this question, called string gas cosmology, was proposed by Robert Brandenberger and Cumrun Vafa. This model focuses on the dynamics of the early universe considered as a hot gas of strings. Brandenberger and Vafa show that a dimension of spacetime can only expand if the strings that wind around it can efficiently annihilate each other. Each string is a one-dimensional object, and the largest number of dimensions in which two strings will generically intersect (and, presumably, annihilate) is three. Therefore, the most likely number of non-compact (large) spatial dimensions is three. Current work on this model centers on whether it can succeed in stabilizing the size of the compactified dimensions and produce the correct spectrum of primordial density perturbations. The original model did not \"solve the entropy and flatness problems of standard cosmology\", although Brandenburger and coauthors later argued that these problems can be eliminated by implementing string gas cosmology in the context of a bouncing-universe scenario.",
"title": "Alternatives and adjuncts"
},
{
"paragraph_id": 72,
"text": "Cosmological models employing a variable speed of light have been proposed to resolve the horizon problem of and provide an alternative to cosmic inflation. In the VSL models, the fundamental constant c, denoting the speed of light in vacuum, is greater in the early universe than its present value, effectively increasing the particle horizon at the time of decoupling sufficiently to account for the observed isotropy of the CMB.",
"title": "Alternatives and adjuncts"
},
{
"paragraph_id": 73,
"text": "Since its introduction by Alan Guth in 1980, the inflationary paradigm has become widely accepted. Nevertheless, many physicists, mathematicians, and philosophers of science have voiced criticisms, claiming untestable predictions and a lack of serious empirical support. In 1999, John Earman and Jesús Mosterín published a thorough critical review of inflationary cosmology, concluding,",
"title": "Criticisms"
},
{
"paragraph_id": 74,
"text": "As pointed out by Roger Penrose from 1986 on, in order to work, inflation requires extremely specific initial conditions of its own, so that the problem (or pseudo-problem) of initial conditions is not solved:",
"title": "Criticisms"
},
{
"paragraph_id": 75,
"text": "The problem of specific or \"fine-tuned\" initial conditions would not have been solved; it would have gotten worse. At a conference in 2015, Penrose said that",
"title": "Criticisms"
},
{
"paragraph_id": 76,
"text": "A recurrent criticism of inflation is that the invoked inflaton field does not correspond to any known physical field, and that its potential energy curve seems to be an ad hoc contrivance to accommodate almost any data obtainable. Paul Steinhardt, one of the founding fathers of inflationary cosmology, has recently become one of its sharpest critics. He calls 'bad inflation' a period of accelerated expansion whose outcome conflicts with observations, and 'good inflation' one compatible with them:",
"title": "Criticisms"
},
{
"paragraph_id": 77,
"text": "Together with Anna Ijjas and Abraham Loeb, he wrote articles claiming that the inflationary paradigm is in trouble in view of the data from the Planck satellite.",
"title": "Criticisms"
},
{
"paragraph_id": 78,
"text": "Counter-arguments were presented by Alan Guth, David Kaiser, and Yasunori Nomura and by Andrei Linde, saying that",
"title": "Criticisms"
}
] | In physical cosmology, cosmic inflation, cosmological inflation, or just inflation, is a theory of exponential expansion of space in the early universe. The inflationary epoch is believed to have lasted from 10−36 seconds to between 10−33 and 10−32 seconds after the Big Bang. Following the inflationary period, the universe continued to expand, but at a slower rate. The acceleration of this expansion due to dark energy began after the universe was already over 7.7 billion years old. Inflation theory was developed in the late 1970s and early 80s, with notable contributions by several theoretical physicists, including Alexei Starobinsky at Landau Institute for Theoretical Physics, Alan Guth at Cornell University, and Andrei Linde at Lebedev Physical Institute. Alexei Starobinsky, Alan Guth, and Andrei Linde won the 2014 Kavli Prize "for pioneering the theory of cosmic inflation". It was developed further in the early 1980s. It explains the origin of the large-scale structure of the cosmos. Quantum fluctuations in the microscopic inflationary region, magnified to cosmic size, become the seeds for the growth of structure in the Universe. Many physicists also believe that inflation explains why the universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the universe is flat, and why no magnetic monopoles have been observed. The detailed particle physics mechanism responsible for inflation is unknown. The basic inflationary paradigm is accepted by most physicists, as a number of inflation model predictions have been confirmed by observation; however, a substantial minority of scientists dissent from this position. The hypothetical field thought to be responsible for inflation is called the inflaton. In 2002 three of the original architects of the theory were recognized for their major contributions; physicists Alan Guth of M.I.T., Andrei Linde of Stanford, and Paul Steinhardt of Princeton shared the prestigious Dirac Prize "for development of the concept of inflation in cosmology". In 2012 Guth and Linde were awarded the Breakthrough Prize in Fundamental Physics for their invention and development of inflationary cosmology. | 2001-04-05T00:12:05Z | 2023-12-06T14:25:52Z | [
"Template:Cite arXiv",
"Template:10^",
"Template:Nowrap end",
"Template:Rp",
"Template:Cite magazine",
"Template:Webarchive",
"Template:Short description",
"Template:Sfrac",
"Template:Cite book",
"Template:Cite news",
"Template:Efn",
"Template:Sub",
"Template:Better source needed",
"Template:Redirect2",
"Template:Cosmology",
"Template:Dead link",
"Template:Authority control",
"Template:Div col",
"Template:Cite web",
"Template:Cite conference",
"Template:ISBN",
"Template:Use dmy dates",
"Template:Main",
"Template:Why",
"Template:Unsolved",
"Template:Cite arxiv",
"Template:Nowrap begin",
"Template:Clarify",
"Template:Notelist",
"Template:Cbignore",
"Template:Unreferenced section",
"Template:Further",
"Template:Reflist",
"Template:Cite serial",
"Template:Cite journal",
"Template:Wikiquote",
"Template:Portal bar",
"Template:See also",
"Template:Clear",
"Template:Blockquote",
"Template:Mvar"
] | https://en.wikipedia.org/wiki/Inflation_(cosmology) |
5,385 | Candela | The candela (/kænˈdɛlə/ or /kænˈdiːlə/; symbol: cd) is the unit of luminous intensity in the International System of Units (SI). It measures luminous power per unit solid angle emitted by a light source in a particular direction. Luminous intensity is analogous to radiant intensity, but instead of simply adding up the contributions of every wavelength of light in the source's spectrum, the contribution of each wavelength is weighted by the luminosity function, the model of the sensitivity of the human eye to different wavelengths, standardized by the CIE and ISO. A common wax candle emits light with a luminous intensity of roughly one candela. If emission in some directions is blocked by an opaque barrier, the emission would still be approximately one candela in the directions that are not obscured.
The word candela is Latin for candle. The old name "candle" is still sometimes used, as in foot-candle and the modern definition of candlepower.
The 26th General Conference on Weights and Measures (CGPM) redefined the candela in 2018. The new definition, which took effect on 20 May 2019, is:
The candela [...] is defined by taking the fixed numerical value of the luminous efficacy of monochromatic radiation of frequency 540×10¹² Hz, Kcd, to be 683 when expressed in the unit lm W⁻¹, which is equal to cd sr W⁻¹, or cd sr kg⁻¹ m⁻² s³, where the kilogram, metre and second are defined in terms of h, c and ΔνCs.
The frequency chosen is in the visible spectrum near green, corresponding to a wavelength of about 555 nanometres. The human eye, when adapted for bright conditions, is most sensitive near this frequency. Under these conditions, photopic vision dominates the visual perception of our eyes over the scotopic vision. At other frequencies, more radiant intensity is required to achieve the same luminous intensity, according to the frequency response of the human eye. The luminous intensity for light of a particular wavelength λ is given by
where Iv(λ) is the luminous intensity, Ie(λ) is the radiant intensity and ȳ(λ) is the photopic luminosity function. If more than one wavelength is present (as is usually the case), one must integrate over the spectrum of wavelengths to get the total luminous intensity.
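For reference, the weighting relation described above can be written explicitly; this is the standard photometric form stated with the 683 lm/W constant from the SI definition, given here as an illustration rather than a quotation of the official text:

    I_v(\lambda) \approx 683\,\tfrac{\mathrm{lm}}{\mathrm{W}}\;\overline{y}(\lambda)\,I_e(\lambda)

and, for a source with a continuous spectrum,

    I_v \approx 683\,\tfrac{\mathrm{lm}}{\mathrm{W}}\int_0^{\infty}\overline{y}(\lambda)\,I_e(\lambda)\,\mathrm{d}\lambda .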
Prior to 1948, various standards for luminous intensity were in use in a number of countries. These were typically based on the brightness of the flame from a "standard candle" of defined composition, or the brightness of an incandescent filament of specific design. One of the best-known of these was the English standard of candlepower. One candlepower was the light produced by a pure spermaceti candle weighing one sixth of a pound and burning at a rate of 120 grains per hour. Germany, Austria and Scandinavia used the Hefnerkerze, a unit based on the output of a Hefner lamp.
A better standard for luminous intensity was needed. In 1884, Jules Violle had proposed a standard based on the light emitted by 1 cm² of platinum at its melting point (or freezing point). The resulting unit of intensity, called the "violle", was roughly equal to 60 English candlepower. Platinum was convenient for this purpose because it had a high enough melting point, was not prone to oxidation, and could be obtained in pure form. Violle showed that the intensity emitted by pure platinum was strictly dependent on its temperature, and so platinum at its melting point should have a consistent luminous intensity.
In practice, realizing a standard based on Violle's proposal turned out to be more difficult than expected. Impurities on the surface of the platinum could directly affect its emissivity, and in addition impurities could affect the luminous intensity by altering the melting point. Over the following half century various scientists tried to make a practical intensity standard based on incandescent platinum. The successful approach was to suspend a hollow shell of thorium dioxide with a small hole in it in a bath of molten platinum. The shell (cavity) serves as a black body, producing black-body radiation that depends on the temperature and is not sensitive to details of how the device is constructed.
In 1937, the Commission Internationale de l'Éclairage (International Commission on Illumination) and the CIPM proposed a "new candle" based on this concept, with value chosen to make it similar to the earlier unit candlepower. The decision was promulgated by the CIPM in 1946:
The value of the new candle is such that the brightness of the full radiator at the temperature of solidification of platinum is 60 new candles per square centimetre.
It was then ratified in 1948 by the 9th CGPM which adopted a new name for this unit, the candela. In 1967 the 13th CGPM removed the term "new candle" and gave an amended version of the candela definition, specifying the atmospheric pressure applied to the freezing platinum:
The candela is the luminous intensity, in the perpendicular direction, of a surface of 1 / 600 000 square metre of a black body at the temperature of freezing platinum under a pressure of 101 325 newtons per square metre.
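As a quick arithmetic cross-check (a simple consistency calculation, not part of the official wording): 1/600 000 square metre equals 1/60 square centimetre, so at the stated luminance of 60 new candles per square centimetre the emitting surface has a luminous intensity of

    60\ \mathrm{cd\,cm^{-2}} \times \tfrac{1}{60}\ \mathrm{cm^{2}} = 1\ \mathrm{cd},

i.e. exactly one candela, consistent with the 1946 value.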
In 1979, because of the difficulties in realizing a Planck radiator at high temperatures and the new possibilities offered by radiometry, the 16th CGPM adopted a new definition of the candela:
The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540×10¹² hertz and that has a radiant intensity in that direction of 1/683 watt per steradian.
The definition describes how to produce a light source that (by definition) emits one candela, but does not specify the luminosity function for weighting radiation at other frequencies. Such a source could then be used to calibrate instruments designed to measure luminous intensity with reference to a specified luminosity function. An appendix to the SI Brochure makes it clear that the luminosity function is not uniquely specified, but must be selected to fully define the candela.
The arbitrary (1/683) term was chosen so that the new definition would precisely match the old definition. Although the candela is now defined in terms of the second (an SI base unit) and the watt (a derived SI unit), the candela remains a base unit of the SI system, by definition.
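A minimal numeric sketch of that matching, in Python (illustrative only; the variable names are ours):

    # At the reference frequency 540e12 Hz (~555 nm), the luminous efficacy
    # is defined to be exactly 683 lm/W, so a 1 cd (= 1 lm/sr) source needs
    # this much radiant intensity:
    K_cd = 683.0              # lm/W, defining constant
    I_v = 1.0                 # luminous intensity, cd = lm/sr
    I_e = I_v / K_cd          # radiant intensity, W/sr
    print(f"{I_e * 1e3:.3f} mW/sr")   # -> 1.464 mW/sr, i.e. 1/683 W/sr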
The 26th CGPM approved the modern definition of the candela in 2018 as part of the 2019 redefinition of SI base units, which redefined the SI base units in terms of fundamental physical constants.
If a source emits a known luminous intensity Iv (in candelas) in a well-defined cone, the total luminous flux Φv in lumens is given by
where A is the radiation angle of the lamp—the full vertex angle of the emission cone. For example, a lamp that emits 590 cd with a radiation angle of 40° emits about 224 lumens. See MR16 for emission angles of some common lamps.
If the source emits light uniformly in all directions, the flux can be found by multiplying the intensity by 4π: a uniform 1 candela source emits 12.6 lumens.
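Both figures can be reproduced with a short calculation; a sketch in Python, assuming the usual solid-angle formula for a circular cone, Ω = 2π(1 − cos(A/2)):

    import math

    def luminous_flux(intensity_cd, beam_angle_deg):
        # Flux in lumens from a source of uniform intensity (cd) confined to a
        # cone whose full vertex angle is beam_angle_deg.
        half_angle = math.radians(beam_angle_deg) / 2
        solid_angle = 2 * math.pi * (1 - math.cos(half_angle))   # steradians
        return intensity_cd * solid_angle

    print(round(luminous_flux(590, 40)))    # ~224 lm, the example above
    print(round(4 * math.pi * 1.0, 1))      # ~12.6 lm for a uniform 1 cd source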
For the purpose of measuring illumination, the candela is not a practical unit, as it only applies to idealized point light sources, each approximated by a source small compared to the distance from which its luminous radiation is measured, also assuming that it is done so in the absence of other light sources. What gets directly measured by a light meter is incident light on a sensor of finite area, i.e. illuminance in lm/m² (lux). However, if designing illumination from many point light sources, like light bulbs, of known approximate omnidirectionally uniform intensities, the contributions to illuminance from incoherent light being additive, it is mathematically estimated as follows. If ri is the position of the ith source of uniform intensity Ii, and â is the unit vector normal to the illuminated elemental opaque area dA being measured, and provided that all light sources lie in the same half-space divided by the plane of this area,
In the case of a single point light source of intensity Iv, at a distance r and normally incident, this reduces to
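The summation just described can be sketched in code (illustrative Python; the positions, intensities and the 2 m example are made-up values, with each position vector measured from the surface element to the source):

    import math

    def illuminance(sources, normal):
        # sources: list of ((x, y, z), intensity_cd) pairs, positions in metres
        # normal:  unit vector perpendicular to the illuminated element
        nx, ny, nz = normal
        total = 0.0
        for (x, y, z), intensity in sources:
            r2 = x*x + y*y + z*z
            r = math.sqrt(r2)
            cos_theta = (x*nx + y*ny + z*nz) / r   # cosine of angle of incidence
            total += intensity * cos_theta / r2    # inverse-square law with cosine factor
        return total                               # lux (lm/m²)

    # Single 590 cd source 2 m directly above the element (normal incidence):
    print(illuminance([((0.0, 0.0, 2.0), 590.0)], (0.0, 0.0, 1.0)))   # 590 / 2**2 = 147.5 lux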
Like other SI units, the candela can also be modified by adding a metric prefix that multiplies it by a power of 10, for example millicandela (mcd) for 10⁻³ candela.
{
"paragraph_id": 0,
"text": "The candela (/kænˈdɛlə/ or /kænˈdiːlə/; symbol: cd) is the unit of luminous intensity in the International System of Units (SI). It measures luminous power per unit solid angle emitted by a light source in a particular direction. Luminous intensity is analogous to radiant intensity, but instead of simply adding up the contributions of every wavelength of light in the source's spectrum, the contribution of each wavelength is weighted by the luminosity function, the model of the sensitivity of the human eye to different wavelengths, standardized by the CIE and ISO. A common wax candle emits light with a luminous intensity of roughly one candela. If emission in some directions is blocked by an opaque barrier, the emission would still be approximately one candela in the directions that are not obscured.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The word candela is Latin for candle. The old name \"candle\" is still sometimes used, as in foot-candle and the modern definition of candlepower.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The 26th General Conference on Weights and Measures (CGPM) redefined the candela in 2018. The new definition, which took effect on 20 May 2019, is:",
"title": "Definition"
},
{
"paragraph_id": 3,
"text": "The candela [...] is defined by taking the fixed numerical value of the luminous efficacy of monochromatic radiation of frequency 540×10 Hz, Kcd, to be 683 when expressed in the unit lm W, which is equal to cd sr W, or cd sr kg m s, where the kilogram, metre and second are defined in terms of h, c and ΔνCs.",
"title": "Definition"
},
{
"paragraph_id": 4,
"text": "The frequency chosen is in the visible spectrum near green, corresponding to a wavelength of about 555 nanometres. The human eye, when adapted for bright conditions, is most sensitive near this frequency. Under these conditions, photopic vision dominates the visual perception of our eyes over the scotopic vision. At other frequencies, more radiant intensity is required to achieve the same luminous intensity, according to the frequency response of the human eye. The luminous intensity for light of a particular wavelength λ is given by",
"title": "Definition"
},
{
"paragraph_id": 5,
"text": "where Iv(λ) is the luminous intensity, Ie(λ) is the radiant intensity and y ¯ ( λ ) {\\textstyle \\textstyle {\\overline {y}}(\\lambda )} is the photopic luminosity function. If more than one wavelength is present (as is usually the case), one must integrate over the spectrum of wavelengths to get the total luminous intensity.",
"title": "Definition"
},
{
"paragraph_id": 6,
"text": "Prior to 1948, various standards for luminous intensity were in use in a number of countries. These were typically based on the brightness of the flame from a \"standard candle\" of defined composition, or the brightness of an incandescent filament of specific design. One of the best-known of these was the English standard of candlepower. One candlepower was the light produced by a pure spermaceti candle weighing one sixth of a pound and burning at a rate of 120 grains per hour. Germany, Austria and Scandinavia used the Hefnerkerze, a unit based on the output of a Hefner lamp.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "A better standard for luminous intensity was needed. In 1884, Jules Violle had proposed a standard based on the light emitted by 1 cm of platinum at its melting point (or freezing point). The resulting unit of intensity, called the \"violle\", was roughly equal to 60 English candlepower. Platinum was convenient for this purpose because it had a high enough melting point, was not prone to oxidation, and could be obtained in pure form. Violle showed that the intensity emitted by pure platinum was strictly dependent on its temperature, and so platinum at its melting point should have a consistent luminous intensity.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In practice, realizing a standard based on Violle's proposal turned out to be more difficult than expected. Impurities on the surface of the platinum could directly affect its emissivity, and in addition impurities could affect the luminous intensity by altering the melting point. Over the following half century various scientists tried to make a practical intensity standard based on incandescent platinum. The successful approach was to suspend a hollow shell of thorium dioxide with a small hole in it in a bath of molten platinum. The shell (cavity) serves as a black body, producing black-body radiation that depends on the temperature and is not sensitive to details of how the device is constructed.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In 1937, the Commission Internationale de l'Éclairage (International Commission on Illumination) and the CIPM proposed a \"new candle\" based on this concept, with value chosen to make it similar to the earlier unit candlepower. The decision was promulgated by the CIPM in 1946:",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The value of the new candle is such that the brightness of the full radiator at the temperature of solidification of platinum is 60 new candles per square centimetre.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "It was then ratified in 1948 by the 9th CGPM which adopted a new name for this unit, the candela. In 1967 the 13th CGPM removed the term \"new candle\" and gave an amended version of the candela definition, specifying the atmospheric pressure applied to the freezing platinum:",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The candela is the luminous intensity, in the perpendicular direction, of a surface of 1 / 600 000 square metre of a black body at the temperature of freezing platinum under a pressure of 101 325 newtons per square metre.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In 1979, because of the difficulties in realizing a Planck radiator at high temperatures and the new possibilities offered by radiometry, the 16th CGPM adopted a new definition of the candela:",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540×10 hertz and that has a radiant intensity in that direction of 1/683 watt per steradian.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The definition describes how to produce a light source that (by definition) emits one candela, but does not specify the luminosity function for weighting radiation at other frequencies. Such a source could then be used to calibrate instruments designed to measure luminous intensity with reference to a specified luminosity function. An appendix to the SI Brochure makes it clear that the luminosity function is not uniquely specified, but must be selected to fully define the candela.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "The arbitrary (1/683) term was chosen so that the new definition would precisely match the old definition. Although the candela is now defined in terms of the second (an SI base unit) and the watt (a derived SI unit), the candela remains a base unit of the SI system, by definition.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "The 26th CGPM approved the modern definition of the candela in 2018 as part of the 2019 redefinition of SI base units, which redefined the SI base units in terms of fundamental physical constants.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "",
"title": "SI photometric light units"
},
{
"paragraph_id": 19,
"text": "If a source emits a known luminous intensity Iv (in candelas) in a well-defined cone, the total luminous flux Φv in lumens is given by",
"title": "SI photometric light units"
},
{
"paragraph_id": 20,
"text": "where A is the radiation angle of the lamp—the full vertex angle of the emission cone. For example, a lamp that emits 590 cd with a radiation angle of 40° emits about 224 lumens. See MR16 for emission angles of some common lamps.",
"title": "SI photometric light units"
},
{
"paragraph_id": 21,
"text": "If the source emits light uniformly in all directions, the flux can be found by multiplying the intensity by 4π: a uniform 1 candela source emits 12.6 lumens.",
"title": "SI photometric light units"
},
{
"paragraph_id": 22,
"text": "For the purpose of measuring illumination, the candela is not a practical unit, as it only applies to idealized point light sources, each approximated by a source small compared to the distance from which its luminous radiation is measured, also assuming that it is done so in the absence of other light sources. What gets directly measured by a light meter is incident light on a sensor of finite area, i.e. illuminance in lm/m (lux). However, if designing illumination from many point light sources, like light bulbs, of known approximate omnidirectionally uniform intensities, the contributions to illuminance from incoherent light being additive, it is mathematically estimated as follows. If ri is the position of the ith source of uniform intensity Ii, and â is the unit vector normal to the illuminated elemental opaque area dA being measured, and provided that all light sources lie in the same half-space divided by the plane of this area,",
"title": "SI photometric light units"
},
{
"paragraph_id": 23,
"text": "In the case of a single point light source of intensity Iv, at a distance r and normally incident, this reduces to",
"title": "SI photometric light units"
},
{
"paragraph_id": 24,
"text": "Like other SI units, the candela can also be modified by adding a metric prefix that multiplies it by a power of 10, for example millicandela (mcd) for 10 candela.",
"title": "SI multiples"
}
] | The candela is the unit of luminous intensity in the International System of Units (SI). It measures luminous power per unit solid angle emitted by a light source in a particular direction. Luminous intensity is analogous to radiant intensity, but instead of simply adding up the contributions of every wavelength of light in the source's spectrum, the contribution of each wavelength is weighted by the luminosity function, the model of the sensitivity of the human eye to different wavelengths, standardized by the CIE and ISO. A common wax candle emits light with a luminous intensity of roughly one candela. If emission in some directions is blocked by an opaque barrier, the emission would still be approximately one candela in the directions that are not obscured. The word candela is Latin for candle. The old name "candle" is still sometimes used, as in foot-candle and the modern definition of candlepower. | 2001-04-05T08:21:19Z | 2023-10-27T09:51:48Z | [
"Template:Use dmy dates",
"Template:IPAc-en",
"Template:Sfrac",
"Template:Citation",
"Template:Cite web",
"Template:About",
"Template:Infobox Unit",
"Template:Reflist",
"Template:Cite book",
"Template:Cite journal",
"Template:Short description",
"Template:Nowrap",
"Template:Val",
"Template:Pi",
"Template:SI light units",
"Template:SI units",
"Template:Authority control",
"Template:Math"
] | https://en.wikipedia.org/wiki/Candela |
5,387 | Condensed matter physics | Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter, especially the solid and liquid phases that arise from electromagnetic forces between atoms and electrons. More generally, the subject deals with condensed phases of matter: systems of many constituents with strong interactions among them. More exotic condensed phases include the superconducting phase exhibited by certain materials at extremely low cryogenic temperatures, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, the Bose–Einstein condensates found in ultracold atomic systems, and liquid crystals. Condensed matter physicists seek to understand the behavior of these phases by experiments to measure various material properties, and by applying the physical laws of quantum mechanics, electromagnetism, statistical mechanics, and other physics theories to develop mathematical models and predict the properties of extremely large groups of atoms.
The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division of the American Physical Society. These include solid state and soft matter physicists, who study quantum and non-quantum physical properties of matter respectively. Both types study a great range of materials, providing many research, funding and employment opportunities. The field overlaps with chemistry, materials science, engineering and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics.
A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid-state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the more comprehensive specialty of condensed matter physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. According to the founding director of the Max Planck Institute for Solid State Research, physics professor Manuel Cardona, it was Albert Einstein who created the modern field of condensed matter physics starting with his seminal 1905 article on the photoelectric effect and photoluminescence which opened the fields of photoelectron spectroscopy and photoluminescence spectroscopy, and later his 1907 article on the specific heat of solids which introduced, for the first time, the effect of lattice vibrations on the thermodynamic properties of crystals, in particular the specific heat. Deputy Director of the Yale Quantum Institute A. Douglas Stone makes a similar priority case for Einstein in his work on the synthetic history of quantum mechanics.
According to physicist Philip Warren Anderson, the use of the term "condensed matter" to designate a field of study was coined by him and Volker Heine, when they changed the name of their group at the Cavendish Laboratories, Cambridge from Solid state theory to Theory of Condensed Matter in 1967, as they felt it better included their interest in liquids, nuclear matter, and so on. Although Anderson and Heine helped popularize the name "condensed matter", it had been used in Europe for some years, most prominently in the Springer-Verlag journal Physics of Condensed Matter, launched in 1963. The name "condensed matter physics" emphasized the commonality of scientific problems encountered by physicists working on solids, liquids, plasmas, and other complex matter, whereas "solid state physics" was often associated with restricted industrial applications of metals and semiconductors. In the 1960s and 70s, some physicists felt the more comprehensive name better fit the funding environment and Cold War politics of the time.
References to "condensed" states can be traced to earlier sources. For example, in the introduction to his 1947 book Kinetic Theory of Liquids, Yakov Frenkel proposed that "The kinetic theory of liquids must accordingly be developed as a generalization and extension of the kinetic theory of solid bodies. As a matter of fact, it would be more correct to unify them under the title of 'condensed bodies'".
One of the first studies of condensed states of matter was by English chemist Humphry Davy, in the first decades of the nineteenth century. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility and high electrical and thermal conductivity. This indicated that the atoms in John Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure. Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen could be liquefied under the right conditions and would then behave as metals.
In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all known gaseous elements, except for nitrogen, hydrogen, and oxygen. Shortly after, in 1869, Irish chemist Thomas Andrews studied the phase transition from a liquid to a gas and coined the term critical point to describe the condition where a gas and a liquid were indistinguishable as phases, and Dutch physicist Johannes van der Waals supplied the theoretical framework which allowed the prediction of critical behavior based on measurements at much higher temperatures. By 1908, James Dewar and Heike Kamerlingh Onnes were successfully able to liquefy hydrogen and then newly discovered helium, respectively.
Paul Drude in 1900 proposed the first theoretical model for a classical electron moving through a metallic solid. Drude's model described properties of metals in terms of a gas of free electrons, and was the first microscopic model to explain empirical observations such as the Wiedemann–Franz law. However, despite the success of Drude's free electron model, it had one notable problem: it was unable to correctly explain the electronic contribution to the specific heat and magnetic properties of metals, and the temperature dependence of resistivity at low temperatures.
In 1911, three years after helium was first liquefied, Onnes working at University of Leiden discovered superconductivity in mercury, when he observed the electrical resistivity of mercury to vanish at temperatures below a certain value. The phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades. Albert Einstein, in 1922, said regarding contemporary theories of superconductivity that "with our far-reaching ignorance of the quantum mechanics of composite systems we are very far from being able to compose a theory out of these vague ideas."
Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld, Felix Bloch and other physicists. Pauli realized that the free electrons in metal must obey the Fermi–Dirac statistics. Using this idea, he developed the theory of paramagnetism in 1926. Shortly after, Sommerfeld incorporated the Fermi–Dirac statistics into the free electron model, improving its explanation of the heat capacity. Two years later, Bloch used quantum mechanics to describe the motion of an electron in a periodic lattice. The mathematics of crystal structures developed by Auguste Bravais, Yevgraf Fyodorov and others was used to classify crystals by their symmetry group, and tables of crystal structures were the basis for the series International Tables of Crystallography, first published in 1935. Band structure calculations were first used in 1930 to predict the properties of new materials, and in 1947 John Bardeen, Walter Brattain and William Shockley developed the first semiconductor-based transistor, heralding a revolution in electronics.
In 1879, Edwin Herbert Hall working at the Johns Hopkins University discovered a voltage developed across conductors transverse to an electric current in the conductor and magnetic field perpendicular to the current. This phenomenon arising due to the nature of charge carriers in the conductor came to be termed the Hall effect, but it was not properly explained at the time, since the electron was not experimentally discovered until 18 years later. After the advent of quantum mechanics, Lev Landau in 1930 developed the theory of Landau quantization and laid the foundation for the theoretical explanation for the quantum Hall effect discovered half a century later.
Magnetism as a property of matter has been known in China since 4000 BC. However, the first modern studies of magnetism only started with the development of electrodynamics by Faraday, Maxwell and others in the nineteenth century, which included classifying materials as ferromagnetic, paramagnetic and diamagnetic based on their response to magnetization. Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials. In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets. The first attempt at a microscopic description of magnetism was by Wilhelm Lenz and Ernst Ising through the Ising model that described magnetic materials as consisting of a periodic lattice of spins that collectively acquired magnetization. The Ising model was solved exactly to show that spontaneous magnetization cannot occur in one dimension but is possible in higher-dimensional lattices. Further research such as by Bloch on spin waves and Néel on antiferromagnetism led to developing new magnetic materials with applications to magnetic storage devices.
The Sommerfeld model and spin models for ferromagnetism illustrated the successful application of quantum mechanics to condensed matter problems in the 1930s. However, there still were several unsolved problems, most notably the description of superconductivity and the Kondo effect. After World War II, several ideas from quantum field theory were applied to condensed matter problems. These included recognition of collective excitation modes of solids and the important notion of a quasiparticle. Russian physicist Lev Landau used the idea for the Fermi liquid theory wherein low energy properties of interacting fermion systems were given in terms of what are now termed Landau-quasiparticles. Landau also developed a mean-field theory for continuous phase transitions, which described ordered phases as spontaneous breakdown of symmetry. The theory also introduced the notion of an order parameter to distinguish between ordered phases. Eventually in 1956, John Bardeen, Leon Cooper and John Schrieffer developed the so-called BCS theory of superconductivity, based on the discovery that arbitrarily small attraction between two electrons of opposite spin mediated by phonons in the lattice can give rise to a bound state called a Cooper pair.
The study of phase transitions and the critical behavior of observables, termed critical phenomena, was a major field of interest in the 1960s. Leo Kadanoff, Benjamin Widom and Michael Fisher developed the ideas of critical exponents and Widom scaling. These ideas were unified by Kenneth G. Wilson in 1972, under the formalism of the renormalization group in the context of quantum field theory.
The quantum Hall effect was discovered by Klaus von Klitzing, Dorda and Pepper in 1980 when they observed the Hall conductance to be integer multiples of a fundamental constant, e²/h (see figure). The effect was observed to be independent of parameters such as system size and impurities. In 1981, theorist Robert Laughlin proposed a theory explaining the unanticipated precision of the integral plateau. It also implied that the Hall conductance is proportional to a topological invariant, called Chern number, whose relevance for the band structure of solids was formulated by David J. Thouless and collaborators. Shortly after, in 1982, Horst Störmer and Daniel Tsui observed the fractional quantum Hall effect where the conductance was now a rational multiple of the constant e²/h. Laughlin, in 1983, realized that this was a consequence of quasiparticle interaction in the Hall states and formulated a variational method solution, named the Laughlin wavefunction. The study of topological properties of the fractional Hall effect remains an active field of research. Decades later, the aforementioned topological band theory advanced by David J. Thouless and collaborators was further expanded leading to the discovery of topological insulators.
In 1986, Karl Müller and Johannes Bednorz discovered the first high temperature superconductor, a material which was superconducting at temperatures as high as 50 kelvins. It was realized that the high temperature superconductors are examples of strongly correlated materials where the electron–electron interactions play an important role. A satisfactory theoretical description of high-temperature superconductors is still not known and the field of strongly correlated materials continues to be an active research topic.
In 2009, David Field and researchers at Aarhus University discovered spontaneous electric fields when creating prosaic films of various gases. This has more recently expanded to form the research area of spontelectrics.
In 2012, several groups released preprints which suggest that samarium hexaboride has the properties of a topological insulator in accord with the earlier theoretical predictions. Since samarium hexaboride is an established Kondo insulator, i.e. a strongly correlated electron material, it is expected that the existence of a topological Dirac surface state in this material would lead to a topological insulator with strong electronic correlations.
Theoretical condensed matter physics involves the use of theoretical models to understand properties of states of matter. These include models to study the electronic properties of solids, such as the Drude model, the band structure and the density functional theory. Theoretical models have also been developed to study the physics of phase transitions, such as the Ginzburg–Landau theory, critical exponents and the use of mathematical methods of quantum field theory and the renormalization group. Modern theoretical studies involve the use of numerical computation of electronic structure and mathematical tools to understand phenomena such as high-temperature superconductivity, topological phases, and gauge symmetries.
Theoretical understanding of condensed matter physics is closely related to the notion of emergence, wherein complex assemblies of particles behave in ways dramatically different from their individual constituents. For example, a range of phenomena related to high temperature superconductivity are understood poorly, although the microscopic physics of individual electrons and lattices is well known. Similarly, models of condensed matter systems have been studied where collective excitations behave like photons and electrons, thereby describing electromagnetism as an emergent phenomenon. Emergent properties can also occur at the interface between materials: one example is the lanthanum aluminate-strontium titanate interface, where two band-insulators are joined to create conductivity and superconductivity.
The metallic state has historically been an important building block for studying properties of solids. The first theoretical description of metals was given by Paul Drude in 1900 with the Drude model, which explained electrical and thermal properties by describing a metal as an ideal gas of then-newly discovered electrons. He was able to derive the empirical Wiedemann–Franz law and get results in close agreement with the experiments. This classical model was then improved by Arnold Sommerfeld who incorporated the Fermi–Dirac statistics of electrons and was able to explain the anomalous behavior of the specific heat of metals in the Wiedemann–Franz law. In 1912, the structure of crystalline solids was studied by Max von Laue and Paul Knipping, when they observed the X-ray diffraction pattern of crystals, and concluded that crystals get their structure from periodic lattices of atoms. In 1928, Swiss physicist Felix Bloch provided a wave function solution to the Schrödinger equation with a periodic potential, known as Bloch's theorem.
Calculating electronic properties of metals by solving the many-body wavefunction is often computationally hard, and hence, approximation methods are needed to obtain meaningful predictions. The Thomas–Fermi theory, developed in the 1920s, was used to estimate system energy and electronic density by treating the local electron density as a variational parameter. Later in the 1930s, Douglas Hartree, Vladimir Fock and John Slater developed the so-called Hartree–Fock wavefunction as an improvement over the Thomas–Fermi model. The Hartree–Fock method accounted for exchange statistics of single particle electron wavefunctions. In general, it is very difficult to solve the Hartree–Fock equation. Only the free electron gas case can be solved exactly. Finally in 1964–65, Walter Kohn, Pierre Hohenberg and Lu Jeu Sham proposed the density functional theory (DFT) which gave realistic descriptions for bulk and surface properties of metals. The density functional theory has been widely used since the 1970s for band structure calculations of a variety of solids.
Some states of matter exhibit symmetry breaking, where the relevant laws of physics possess some form of symmetry that is broken. A common example is crystalline solids, which break continuous translational symmetry. Other examples include magnetized ferromagnets, which break rotational symmetry, and more exotic states such as the ground state of a BCS superconductor, that breaks U(1) phase rotational symmetry.
Goldstone's theorem in quantum field theory states that in a system with broken continuous symmetry, there may exist excitations with arbitrarily low energy, called the Goldstone bosons. For example, in crystalline solids, these correspond to phonons, which are quantized versions of lattice vibrations.
Phase transition refers to the change of phase of a system, which is brought about by a change in an external parameter such as temperature, pressure, or molar composition. In a single-component system, a classical phase transition occurs at a temperature (at a specific pressure) where there is an abrupt change in the order of the system. For example, when ice melts and becomes water, the ordered hexagonal crystal structure of ice is modified to a hydrogen bonded, mobile arrangement of water molecules.
In quantum phase transitions, the temperature is set to absolute zero, and the non-thermal control parameter, such as pressure or magnetic field, causes the phase transitions when order is destroyed by quantum fluctuations originating from the Heisenberg uncertainty principle. Here, the different quantum phases of the system refer to distinct ground states of the Hamiltonian matrix. Understanding the behavior of quantum phase transition is important in the difficult tasks of explaining the properties of rare-earth magnetic insulators, high-temperature superconductors, and other substances.
Two classes of phase transitions occur: first-order transitions and second-order or continuous transitions. For the latter, the two phases involved do not co-exist at the transition temperature, also called the critical point. Near the critical point, systems undergo critical behavior, wherein several of their properties such as correlation length, specific heat, and magnetic susceptibility diverge according to power laws characterized by critical exponents. These critical phenomena present serious challenges to physicists because normal macroscopic laws are no longer valid in the region, and novel ideas and methods must be invented to find the new laws that can describe the system.
The simplest theory that can describe continuous phase transitions is the Ginzburg–Landau theory, which works in the so-called mean-field approximation. However, it can only roughly explain continuous phase transitions for ferroelectrics and type I superconductors, which involve long-range microscopic interactions. For other types of systems that involve short-range interactions near the critical point, a better theory is needed.
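For concreteness, the mean-field picture can be illustrated with the textbook Landau expansion of the free-energy density in a scalar order parameter m (a generic illustration, not tied to any particular material discussed here):

    f(m) = f_0 + a\,(T - T_c)\,m^{2} + b\,m^{4}, \qquad a, b > 0 .

Minimizing f gives m = 0 for T > T_c and |m| = \sqrt{a\,(T_c - T)/(2b)} for T < T_c, so the order parameter grows continuously from zero below the transition, which is the defining feature of a continuous transition in this approximation.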
Near the critical point, fluctuations occur over a broad range of size scales, and the behavior of the whole system becomes scale invariant. Renormalization group methods successively average out the shortest wavelength fluctuations in stages while retaining their effects into the next stage. Thus, the changes of a physical system as viewed at different size scales can be investigated systematically. The methods, together with powerful computer simulation, contribute greatly to the explanation of the critical phenomena associated with continuous phase transitions.
Experimental condensed matter physics involves the use of experimental probes to try to discover new properties of materials. Such probes include effects of electric and magnetic fields, measuring response functions, transport properties and thermometry. Commonly used experimental methods include spectroscopy, with probes such as X-rays, infrared light and inelastic neutron scattering; study of thermal response, such as specific heat and measuring transport via thermal and heat conduction.
Several condensed matter experiments involve scattering of an experimental probe, such as X-ray, optical photons, neutrons, etc., on constituents of a material. The choice of scattering probe depends on the observation energy scale of interest. Visible light has energy on the scale of 1 electron volt (eV) and is used as a scattering probe to measure variations in material properties such as the dielectric constant and refractive index. X-rays have energies of the order of 10 keV and hence are able to probe atomic length scales, and are used to measure variations in electron charge density and crystal structure.
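The energy-to-length conversion behind these choices of probe is E = hc/λ; a small illustrative check in Python (standard constants, rounded output):

    # Photon wavelength (nm) for a given photon energy (eV): lambda = h*c / E
    H = 6.62607015e-34       # Planck constant, J s
    C = 2.99792458e8         # speed of light, m/s
    EV = 1.602176634e-19     # joules per electronvolt

    def wavelength_nm(energy_eV):
        return H * C / (energy_eV * EV) * 1e9

    print(round(wavelength_nm(1.0)))        # ~1240 nm: eV-scale photons, optical/near-infrared
    print(round(wavelength_nm(10e3), 3))    # ~0.124 nm: 10 keV X-rays resolve atomic spacings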
Neutrons can also probe atomic length scales and are used to study the scattering off nuclei and electron spins and magnetization (as neutrons have spin but no charge). Coulomb and Mott scattering measurements can be made by using electron beams as scattering probes. Similarly, positron annihilation can be used as an indirect measurement of local electron density. Laser spectroscopy is an excellent tool for studying the microscopic properties of a medium, for example, to study forbidden transitions in media with nonlinear optical spectroscopy.
In experimental condensed matter physics, external magnetic fields act as thermodynamic variables that control the state, phase transitions and properties of material systems. Nuclear magnetic resonance (NMR) is a method by which external magnetic fields are used to find resonance modes of individual nuclei, thus giving information about the atomic, molecular, and bond structure of their environment. NMR experiments can be made in magnetic fields with strengths up to 60 tesla. Higher magnetic fields can improve the quality of NMR measurement data. Quantum oscillations is another experimental method where high magnetic fields are used to study material properties such as the geometry of the Fermi surface. High magnetic fields will be useful in experimental testing of the various theoretical predictions such as the quantized magnetoelectric effect, image magnetic monopole, and the half-integer quantum Hall effect.
The local structure, the structure of the nearest neighbour atoms, of condensed matter can be investigated with methods of nuclear spectroscopy, which are very sensitive to small changes. Using specific and radioactive nuclei, the nucleus becomes the probe that interacts with its surrounding electric and magnetic fields (hyperfine interactions). The methods are suitable to study defects, diffusion, phase change and magnetism. Common methods are e.g. NMR, Mössbauer spectroscopy, or perturbed angular correlation (PAC). PAC is especially ideal for the study of phase changes at extreme temperatures above 2000 °C due to the temperature independence of the method.
Ultracold atom trapping in optical lattices is an experimental tool commonly used in condensed matter physics, and in atomic, molecular, and optical physics. The method involves using optical lasers to form an interference pattern, which acts as a lattice, in which ions or atoms can be placed at very low temperatures. Cold atoms in optical lattices are used as quantum simulators, that is, they act as controllable systems that can model behavior of more complicated systems, such as frustrated magnets. In particular, they are used to engineer one-, two- and three-dimensional lattices for a Hubbard model with pre-specified parameters, and to study phase transitions for antiferromagnetic and spin liquid ordering.
In 1995, a gas of rubidium atoms cooled down to a temperature of 170 nK was used to experimentally realize the Bose–Einstein condensate, a novel state of matter originally predicted by S. N. Bose and Albert Einstein, wherein a large number of atoms occupy one quantum state.
Research in condensed matter physics has given rise to several device applications, such as the development of the semiconductor transistor, laser technology, magnetic storage, liquid crystals, optical fibres and several phenomena studied in the context of nanotechnology. Methods such as scanning-tunneling microscopy can be used to control processes at the nanometer scale, and have given rise to the study of nanofabrication. Molecular machines, for example, were developed by Nobel laureates in chemistry Ben Feringa, Jean-Pierre Sauvage and Fraser Stoddart. Feringa and his team developed multiple molecular machines such as a molecular car, a molecular windmill and many more.
In quantum computation, information is represented by quantum bits, or qubits. The qubits may decohere quickly before useful computation is completed. This serious problem must be solved before quantum computing may be realized. To solve this problem, several promising approaches are proposed in condensed matter physics, including Josephson junction qubits, spintronic qubits using the spin orientation of magnetic materials, or the topological non-Abelian anyons from fractional quantum Hall effect states.
Condensed matter physics also has important uses for biomedicine, for example, the experimental method of magnetic resonance imaging, which is widely used in medical diagnosis. | [
{
"paragraph_id": 0,
"text": "Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter, especially the solid and liquid phases that arise from electromagnetic forces between atoms and electrons. More generally, the subject deals with condensed phases of matter: systems of many constituents with strong interactions among them. More exotic condensed phases include the superconducting phase exhibited by certain materials at extremely low cryogenic temperatures, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, the Bose–Einstein condensates found in ultracold atomic systems, and liquid crystals. Condensed matter physicists seek to understand the behavior of these phases by experiments to measure various material properties, and by applying the physical laws of quantum mechanics, electromagnetism, statistical mechanics, and other physics theories to develop mathematical models and predict the properties of extremely large groups of atoms.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division of the American Physical Society. These include solid state and soft matter physicists, who study quantum and non-quantum physical properties of matter respectively. Both types study a great range of materials, providing many research, funding and employment opportunities. The field overlaps with chemistry, materials science, engineering and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid-state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the more comprehensive specialty of condensed matter physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. According to the founding director of the Max Planck Institute for Solid State Research, physics professor Manuel Cardona, it was Albert Einstein who created the modern field of condensed matter physics starting with his seminal 1905 article on the photoelectric effect and photoluminescence which opened the fields of photoelectron spectroscopy and photoluminescence spectroscopy, and later his 1907 article on the specific heat of solids which introduced, for the first time, the effect of lattice vibrations on the thermodynamic properties of crystals, in particular the specific heat. Deputy Director of the Yale Quantum Institute A. Douglas Stone makes a similar priority case for Einstein in his work on the synthetic history of quantum mechanics.",
"title": ""
},
{
"paragraph_id": 3,
"text": "According to physicist Philip Warren Anderson, the use of the term \"condensed matter\" to designate a field of study was coined by him and Volker Heine, when they changed the name of their group at the Cavendish Laboratories, Cambridge from Solid state theory to Theory of Condensed Matter in 1967, as they felt it better included their interest in liquids, nuclear matter, and so on. Although Anderson and Heine helped popularize the name \"condensed matter\", it had been used in Europe for some years, most prominently in the Springer-Verlag journal Physics of Condensed Matter, launched in 1963. The name \"condensed matter physics\" emphasized the commonality of scientific problems encountered by physicists working on solids, liquids, plasmas, and other complex matter, whereas \"solid state physics\" was often associated with restricted industrial applications of metals and semiconductors. In the 1960s and 70s, some physicists felt the more comprehensive name better fit the funding environment and Cold War politics of the time.",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "References to \"condensed\" states can be traced to earlier sources. For example, in the introduction to his 1947 book Kinetic Theory of Liquids, Yakov Frenkel proposed that \"The kinetic theory of liquids must accordingly be developed as a generalization and extension of the kinetic theory of solid bodies. As a matter of fact, it would be more correct to unify them under the title of 'condensed bodies'\".",
"title": "Etymology"
},
{
"paragraph_id": 5,
"text": "One of the first studies of condensed states of matter was by English chemist Humphry Davy, in the first decades of the nineteenth century. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility and high electrical and thermal conductivity. This indicated that the atoms in John Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure. Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen could be liquefied under the right conditions and would then behave as metals.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all known gaseous elements, except for nitrogen, hydrogen, and oxygen. Shortly after, in 1869, Irish chemist Thomas Andrews studied the phase transition from a liquid to a gas and coined the term critical point to describe the condition where a gas and a liquid were indistinguishable as phases, and Dutch physicist Johannes van der Waals supplied the theoretical framework which allowed the prediction of critical behavior based on measurements at much higher temperatures. By 1908, James Dewar and Heike Kamerlingh Onnes were successfully able to liquefy hydrogen and then newly discovered helium, respectively.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Paul Drude in 1900 proposed the first theoretical model for a classical electron moving through a metallic solid. Drude's model described properties of metals in terms of a gas of free electrons, and was the first microscopic model to explain empirical observations such as the Wiedemann–Franz law. However, despite the success of Drude's free electron model, it had one notable problem: it was unable to correctly explain the electronic contribution to the specific heat and magnetic properties of metals, and the temperature dependence of resistivity at low temperatures.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In 1911, three years after helium was first liquefied, Onnes working at University of Leiden discovered superconductivity in mercury, when he observed the electrical resistivity of mercury to vanish at temperatures below a certain value. The phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades. Albert Einstein, in 1922, said regarding contemporary theories of superconductivity that \"with our far-reaching ignorance of the quantum mechanics of composite systems we are very far from being able to compose a theory out of these vague ideas.\"",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld, Felix Bloch and other physicists. Pauli realized that the free electrons in metal must obey the Fermi–Dirac statistics. Using this idea, he developed the theory of paramagnetism in 1926. Shortly after, Sommerfeld incorporated the Fermi–Dirac statistics into the free electron model and made it better to explain the heat capacity. Two years later, Bloch used quantum mechanics to describe the motion of an electron in a periodic lattice. The mathematics of crystal structures developed by Auguste Bravais, Yevgraf Fyodorov and others was used to classify crystals by their symmetry group, and tables of crystal structures were the basis for the series International Tables of Crystallography, first published in 1935. Band structure calculations was first used in 1930 to predict the properties of new materials, and in 1947 John Bardeen, Walter Brattain and William Shockley developed the first semiconductor-based transistor, heralding a revolution in electronics.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In 1879, Edwin Herbert Hall working at the Johns Hopkins University discovered a voltage developed across conductors transverse to an electric current in the conductor and magnetic field perpendicular to the current. This phenomenon arising due to the nature of charge carriers in the conductor came to be termed the Hall effect, but it was not properly explained at the time, since the electron was not experimentally discovered until 18 years later. After the advent of quantum mechanics, Lev Landau in 1930 developed the theory of Landau quantization and laid the foundation for the theoretical explanation for the quantum Hall effect discovered half a century later.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Magnetism as a property of matter has been known in China since 4000 BC. However, the first modern studies of magnetism only started with the development of electrodynamics by Faraday, Maxwell and others in the nineteenth century, which included classifying materials as ferromagnetic, paramagnetic and diamagnetic based on their response to magnetization. Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials. In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets. The first attempt at a microscopic description of magnetism was by Wilhelm Lenz and Ernst Ising through the Ising model that described magnetic materials as consisting of a periodic lattice of spins that collectively acquired magnetization. The Ising model was solved exactly to show that spontaneous magnetization cannot occur in one dimension but is possible in higher-dimensional lattices. Further research such as by Bloch on spin waves and Néel on antiferromagnetism led to developing new magnetic materials with applications to magnetic storage devices.",
"title": "History"
},
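The Ising model mentioned above lends itself to a compact numerical illustration. The following is a minimal sketch (not taken from the source) of a Metropolis Monte Carlo simulation of the two-dimensional Ising model in units where J = 1 and k_B = 1; the lattice size, temperatures and sweep counts are illustrative choices.

```python
import numpy as np

def ising_metropolis(L=32, T=2.0, sweeps=500, J=1.0, seed=0):
    """Metropolis sampling of the 2D Ising model on an L x L periodic lattice.

    Energy: E = -J * sum over nearest-neighbour pairs of s_i * s_j, with s_i = +/-1.
    Returns the absolute magnetization per spin, averaged over the last half
    of the sweeps (an illustrative observable only).
    """
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            # Sum of the four nearest neighbours with periodic boundaries.
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * J * spins[i, j] * nn  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
        if sweep >= sweeps // 2:
            mags.append(abs(spins.mean()))
    return np.mean(mags)

# Below the exact 2D critical temperature (~2.269 J/k_B) the lattice magnetizes
# spontaneously; well above it the magnetization per spin is close to zero.
print(ising_metropolis(T=1.5), ising_metropolis(T=3.5))
```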
{
"paragraph_id": 12,
"text": "The Sommerfeld model and spin models for ferromagnetism illustrated the successful application of quantum mechanics to condensed matter problems in the 1930s. However, there still were several unsolved problems, most notably the description of superconductivity and the Kondo effect. After World War II, several ideas from quantum field theory were applied to condensed matter problems. These included recognition of collective excitation modes of solids and the important notion of a quasiparticle. Russian physicist Lev Landau used the idea for the Fermi liquid theory wherein low energy properties of interacting fermion systems were given in terms of what are now termed Landau-quasiparticles. Landau also developed a mean-field theory for continuous phase transitions, which described ordered phases as spontaneous breakdown of symmetry. The theory also introduced the notion of an order parameter to distinguish between ordered phases. Eventually in 1956, John Bardeen, Leon Cooper and John Schrieffer developed the so-called BCS theory of superconductivity, based on the discovery that arbitrarily small attraction between two electrons of opposite spin mediated by phonons in the lattice can give rise to a bound state called a Cooper pair.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The study of phase transitions and the critical behavior of observables, termed critical phenomena, was a major field of interest in the 1960s. Leo Kadanoff, Benjamin Widom and Michael Fisher developed the ideas of critical exponents and widom scaling. These ideas were unified by Kenneth G. Wilson in 1972, under the formalism of the renormalization group in the context of quantum field theory.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The quantum Hall effect was discovered by Klaus von Klitzing, Dorda and Pepper in 1980 when they observed the Hall conductance to be integer multiples of a fundamental constant e 2 / h {\\displaystyle e^{2}/h} .(see figure) The effect was observed to be independent of parameters such as system size and impurities. In 1981, theorist Robert Laughlin proposed a theory explaining the unanticipated precision of the integral plateau. It also implied that the Hall conductance is proportional to a topological invariant, called Chern number, whose relevance for the band structure of solids was formulated by David J. Thouless and collaborators. Shortly after, in 1982, Horst Störmer and Daniel Tsui observed the fractional quantum Hall effect where the conductance was now a rational multiple of the constant e 2 / h {\\displaystyle e^{2}/h} . Laughlin, in 1983, realized that this was a consequence of quasiparticle interaction in the Hall states and formulated a variational method solution, named the Laughlin wavefunction. The study of topological properties of the fractional Hall effect remains an active field of research. Decades later, the aforementioned topological band theory advanced by David J. Thouless and collaborators was further expanded leading to the discovery of topological insulators.",
"title": "History"
},
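For reference, a hedged summary of the quantization described above, in its standard textbook form (not taken from the source):

```latex
\[
  \sigma_{xy} \;=\; \nu\,\frac{e^{2}}{h},
  \qquad
  \nu \in \mathbb{Z} \ \ \text{(integer effect)}, \qquad
  \nu = \tfrac{1}{3}, \tfrac{2}{5}, \dots \ \ \text{(fractional effect)},
\]
% where h/e^2 \approx 25.8\,\mathrm{k\Omega} is the von Klitzing constant. In the
% integer case, \nu can be identified with a Chern number of the occupied bands.
```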
{
"paragraph_id": 15,
"text": "In 1986, Karl Müller and Johannes Bednorz discovered the first high temperature superconductor, a material which was superconducting at temperatures as high as 50 kelvins. It was realized that the high temperature superconductors are examples of strongly correlated materials where the electron–electron interactions play an important role. A satisfactory theoretical description of high-temperature superconductors is still not known and the field of strongly correlated materials continues to be an active research topic.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In 2009, David Field and researchers at Aarhus University discovered spontaneous electric fields when creating prosaic films of various gases. This has more recently expanded to form the research area of spontelectrics.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In 2012, several groups released preprints which suggest that samarium hexaboride has the properties of a topological insulator in accord with the earlier theoretical predictions. Since samarium hexaboride is an established Kondo insulator, i.e. a strongly correlated electron material, it is expected that the existence of a topological Dirac surface state in this material would lead to a topological insulator with strong electronic correlations.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Theoretical condensed matter physics involves the use of theoretical models to understand properties of states of matter. These include models to study the electronic properties of solids, such as the Drude model, the band structure and the density functional theory. Theoretical models have also been developed to study the physics of phase transitions, such as the Ginzburg–Landau theory, critical exponents and the use of mathematical methods of quantum field theory and the renormalization group. Modern theoretical studies involve the use of numerical computation of electronic structure and mathematical tools to understand phenomena such as high-temperature superconductivity, topological phases, and gauge symmetries.",
"title": "Theoretical"
},
{
"paragraph_id": 19,
"text": "Theoretical understanding of condensed matter physics is closely related to the notion of emergence, wherein complex assemblies of particles behave in ways dramatically different from their individual constituents. For example, a range of phenomena related to high temperature superconductivity are understood poorly, although the microscopic physics of individual electrons and lattices is well known. Similarly, models of condensed matter systems have been studied where collective excitations behave like photons and electrons, thereby describing electromagnetism as an emergent phenomenon. Emergent properties can also occur at the interface between materials: one example is the lanthanum aluminate-strontium titanate interface, where two band-insulators are joined to create conductivity and superconductivity.",
"title": "Theoretical"
},
{
"paragraph_id": 20,
"text": "The metallic state has historically been an important building block for studying properties of solids. The first theoretical description of metals was given by Paul Drude in 1900 with the Drude model, which explained electrical and thermal properties by describing a metal as an ideal gas of then-newly discovered electrons. He was able to derive the empirical Wiedemann-Franz law and get results in close agreement with the experiments. This classical model was then improved by Arnold Sommerfeld who incorporated the Fermi–Dirac statistics of electrons and was able to explain the anomalous behavior of the specific heat of metals in the Wiedemann–Franz law. In 1912, The structure of crystalline solids was studied by Max von Laue and Paul Knipping, when they observed the X-ray diffraction pattern of crystals, and concluded that crystals get their structure from periodic lattices of atoms. In 1928, Swiss physicist Felix Bloch provided a wave function solution to the Schrödinger equation with a periodic potential, known as Bloch's theorem.",
"title": "Theoretical"
},
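A small numerical aside (not from the source; standard constants and textbook formulas) showing the Wiedemann–Franz ratio predicted by the Sommerfeld free-electron model, compared with a naive classical estimate:

```python
import math

# Sommerfeld's free-electron result for the Wiedemann-Franz ratio (Lorenz number):
#   L = kappa / (sigma * T) = (pi**2 / 3) * (k_B / e)**2
k_B = 1.380649e-23   # Boltzmann constant, J/K
e = 1.602176634e-19  # elementary charge, C

L_sommerfeld = (math.pi**2 / 3) * (k_B / e) ** 2
L_classical = 1.5 * (k_B / e) ** 2   # naive classical-gas estimate, for comparison

print(f"Sommerfeld Lorenz number: {L_sommerfeld:.3e} W Ohm / K^2")  # ~2.44e-8
print(f"Classical estimate:       {L_classical:.3e} W Ohm / K^2")   # ~1.11e-8
```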
{
"paragraph_id": 21,
"text": "Calculating electronic properties of metals by solving the many-body wavefunction is often computationally hard, and hence, approximation methods are needed to obtain meaningful predictions. The Thomas–Fermi theory, developed in the 1920s, was used to estimate system energy and electronic density by treating the local electron density as a variational parameter. Later in the 1930s, Douglas Hartree, Vladimir Fock and John Slater developed the so-called Hartree–Fock wavefunction as an improvement over the Thomas–Fermi model. The Hartree–Fock method accounted for exchange statistics of single particle electron wavefunctions. In general, it is very difficult to solve the Hartree–Fock equation. Only the free electron gas case can be solved exactly. Finally in 1964–65, Walter Kohn, Pierre Hohenberg and Lu Jeu Sham proposed the density functional theory (DFT) which gave realistic descriptions for bulk and surface properties of metals. The density functional theory has been widely used since the 1970s for band structure calculations of variety of solids.",
"title": "Theoretical"
},
{
"paragraph_id": 22,
"text": "Some states of matter exhibit symmetry breaking, where the relevant laws of physics possess some form of symmetry that is broken. A common example is crystalline solids, which break continuous translational symmetry. Other examples include magnetized ferromagnets, which break rotational symmetry, and more exotic states such as the ground state of a BCS superconductor, that breaks U(1) phase rotational symmetry.",
"title": "Theoretical"
},
{
"paragraph_id": 23,
"text": "Goldstone's theorem in quantum field theory states that in a system with broken continuous symmetry, there may exist excitations with arbitrarily low energy, called the Goldstone bosons. For example, in crystalline solids, these correspond to phonons, which are quantized versions of lattice vibrations.",
"title": "Theoretical"
},
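A standard one-dimensional illustration (not from the source) of why phonons are the gapless Goldstone modes of a crystal: for a chain of atoms of mass m connected by springs of stiffness K and lattice spacing a, the dispersion relation is

```latex
\[
  \omega(k) \;=\; 2\sqrt{\frac{K}{m}}\;\Bigl|\sin\frac{ka}{2}\Bigr|
  \;\xrightarrow{\;k \to 0\;}\; a\sqrt{\frac{K}{m}}\,|k| \;\longrightarrow\; 0,
\]
% so excitations of arbitrarily low energy exist, as Goldstone's theorem requires for
% the broken continuous translational symmetry of the crystal.
```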
{
"paragraph_id": 24,
"text": "Phase transition refers to the change of phase of a system, which is brought about by change in an external parameter such as temperature, pressure, or molar composition. In a single-component system, a classical phase transition occurs at a temperature (at a specific pressure) where there is an abrupt change in the order of the system For example, when ice melts and becomes water, the ordered hexagonal crystal structure of ice is modified to a hydrogen bonded, mobile arrangement of water molecules.",
"title": "Theoretical"
},
{
"paragraph_id": 25,
"text": "In quantum phase transitions, the temperature is set to absolute zero, and the non-thermal control parameter, such as pressure or magnetic field, causes the phase transitions when order is destroyed by quantum fluctuations originating from the Heisenberg uncertainty principle. Here, the different quantum phases of the system refer to distinct ground states of the Hamiltonian matrix. Understanding the behavior of quantum phase transition is important in the difficult tasks of explaining the properties of rare-earth magnetic insulators, high-temperature superconductors, and other substances.",
"title": "Theoretical"
},
{
"paragraph_id": 26,
"text": "Two classes of phase transitions occur: first-order transitions and second-order or continuous transitions. For the latter, the two phases involved do not co-exist at the transition temperature, also called the critical point. Near the critical point, systems undergo critical behavior, wherein several of their properties such as correlation length, specific heat, and magnetic susceptibility diverge exponentially. These critical phenomena present serious challenges to physicists because normal macroscopic laws are no longer valid in the region, and novel ideas and methods must be invented to find the new laws that can describe the system.",
"title": "Theoretical"
},
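The divergences referred to above are conventionally written as power laws of the reduced temperature t = (T − T_c)/T_c, with universal critical exponents (standard notation, not from the source):

```latex
\[
  \xi \sim |t|^{-\nu}, \qquad
  C \sim |t|^{-\alpha}, \qquad
  \chi \sim |t|^{-\gamma},
\]
% where \xi is the correlation length, C the specific heat and \chi the magnetic
% susceptibility. The exponents depend only on dimensionality and symmetry, not on
% microscopic details, which is the content of universality.
```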
{
"paragraph_id": 27,
"text": "The simplest theory that can describe continuous phase transitions is the Ginzburg–Landau theory, which works in the so-called mean-field approximation. However, it can only roughly explain continuous phase transition for ferroelectrics and type I superconductors which involves long range microscopic interactions. For other types of systems that involves short range interactions near the critical point, a better theory is needed.",
"title": "Theoretical"
},
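A minimal sketch of the Ginzburg–Landau free energy in the mean-field approximation (standard textbook form, not from the source), for a complex order parameter ψ with a(T) = a₀(T − T_c) and a₀, b, c > 0:

```latex
\[
  F[\psi] \;=\; F_0 + \int d^{d}x \,\Bigl[\, a(T)\,|\psi|^{2}
  + \tfrac{b}{2}\,|\psi|^{4} + c\,|\nabla \psi|^{2} \Bigr].
\]
% For a uniform \psi, minimization gives |\psi|^2 = 0 above T_c and
% |\psi|^2 = a_0 (T_c - T)/b below it, so the order parameter grows continuously
% from zero as (T_c - T)^{1/2}: the mean-field exponent \beta = 1/2.
```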
{
"paragraph_id": 28,
"text": "Near the critical point, the fluctuations happen over broad range of size scales while the feature of the whole system is scale invariant. Renormalization group methods successively average out the shortest wavelength fluctuations in stages while retaining their effects into the next stage. Thus, the changes of a physical system as viewed at different size scales can be investigated systematically. The methods, together with powerful computer simulation, contribute greatly to the explanation of the critical phenomena associated with continuous phase transition.",
"title": "Theoretical"
},
{
"paragraph_id": 29,
"text": "Experimental condensed matter physics involves the use of experimental probes to try to discover new properties of materials. Such probes include effects of electric and magnetic fields, measuring response functions, transport properties and thermometry. Commonly used experimental methods include spectroscopy, with probes such as X-rays, infrared light and inelastic neutron scattering; study of thermal response, such as specific heat and measuring transport via thermal and heat conduction.",
"title": "Experimental"
},
{
"paragraph_id": 30,
"text": "Several condensed matter experiments involve scattering of an experimental probe, such as X-ray, optical photons, neutrons, etc., on constituents of a material. The choice of scattering probe depends on the observation energy scale of interest. Visible light has energy on the scale of 1 electron volt (eV) and is used as a scattering probe to measure variations in material properties such as the dielectric constant and refractive index. X-rays have energies of the order of 10 keV and hence are able to probe atomic length scales, and are used to measure variations in electron charge density and crystal structure.",
"title": "Experimental"
},
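A small numerical aside (not from the source; the example wavelengths are illustrative choices) relating a photon probe's energy to the length scale it resolves via E = hc/λ:

```python
# Relating a probe's photon energy to the length scale it resolves, E = h*c / lambda.
H_C_EV_NM = 1239.84  # h*c in eV*nm (standard value)

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a given wavelength in nanometres."""
    return H_C_EV_NM / wavelength_nm

def wavelength_nm(energy_ev: float) -> float:
    """Wavelength in nanometres for a given photon energy in eV."""
    return H_C_EV_NM / energy_ev

print(photon_energy_ev(500.0))   # visible light, ~2.5 eV: dielectric response
print(wavelength_nm(10_000.0))   # 10 keV X-rays, ~0.12 nm: atomic length scales
```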
{
"paragraph_id": 31,
"text": "Neutrons can also probe atomic length scales and are used to study the scattering off nuclei and electron spins and magnetization (as neutrons have spin but no charge). Coulomb and Mott scattering measurements can be made by using electron beams as scattering probes. Similarly, positron annihilation can be used as an indirect measurement of local electron density. Laser spectroscopy is an excellent tool for studying the microscopic properties of a medium, for example, to study forbidden transitions in media with nonlinear optical spectroscopy.",
"title": "Experimental"
},
{
"paragraph_id": 32,
"text": "In experimental condensed matter physics, external magnetic fields act as thermodynamic variables that control the state, phase transitions and properties of material systems. Nuclear magnetic resonance (NMR) is a method by which external magnetic fields are used to find resonance modes of individual nuclei, thus giving information about the atomic, molecular, and bond structure of their environment. NMR experiments can be made in magnetic fields with strengths up to 60 tesla. Higher magnetic fields can improve the quality of NMR measurement data. Quantum oscillations is another experimental method where high magnetic fields are used to study material properties such as the geometry of the Fermi surface. High magnetic fields will be useful in experimental testing of the various theoretical predictions such as the quantized magnetoelectric effect, image magnetic monopole, and the half-integer quantum Hall effect.",
"title": "Experimental"
},
{
"paragraph_id": 33,
"text": "The local structure, the structure of the nearest neighbour atoms, of condensed matter can be investigated with methods of nuclear spectroscopy, which are very sensitive to small changes. Using specific and radioactive nuclei, the nucleus becomes the probe that interacts with its surrounding electric and magnetic fields (hyperfine interactions). The methods are suitable to study defects, diffusion, phase change and magnetism. Common methods are e.g. NMR, Mössbauer spectroscopy, or perturbed angular correlation (PAC). PAC is especially ideal for the study of phase changes at extreme temperatures above 2000 °C due to the temperature independence of the method.",
"title": "Experimental"
},
{
"paragraph_id": 34,
"text": "Ultracold atom trapping in optical lattices is an experimental tool commonly used in condensed matter physics, and in atomic, molecular, and optical physics. The method involves using optical lasers to form an interference pattern, which acts as a lattice, in which ions or atoms can be placed at very low temperatures. Cold atoms in optical lattices are used as quantum simulators, that is, they act as controllable systems that can model behavior of more complicated systems, such as frustrated magnets. In particular, they are used to engineer one-, two- and three-dimensional lattices for a Hubbard model with pre-specified parameters, and to study phase transitions for antiferromagnetic and spin liquid ordering.",
"title": "Experimental"
},
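For reference, the single-band Hubbard Hamiltonian that such optical-lattice experiments emulate, in its standard form (not taken from the source), with hopping amplitude t and on-site repulsion U:

```latex
\[
  H \;=\; -t \sum_{\langle i,j \rangle,\sigma}
          \bigl( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \bigr)
        \;+\; U \sum_{i} n_{i\uparrow}\, n_{i\downarrow}.
\]
% In the experiments, t and U are tuned through the depth of the optical lattice and,
% via Feshbach resonances, the interatomic scattering length.
```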
{
"paragraph_id": 35,
"text": "In 1995, a gas of rubidium atoms cooled down to a temperature of 170 nK was used to experimentally realize the Bose–Einstein condensate, a novel state of matter originally predicted by S. N. Bose and Albert Einstein, wherein a large number of atoms occupy one quantum state.",
"title": "Experimental"
},
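For a uniform ideal Bose gas of particle mass m and number density n, the condensation temperature takes the standard form below (not from the source; trapped gases such as the 1995 rubidium experiment obey a modified, trap-dependent formula):

```latex
\[
  k_{B} T_{c} \;=\; \frac{2\pi \hbar^{2}}{m}
  \left( \frac{n}{\zeta(3/2)} \right)^{\!2/3},
  \qquad \zeta(3/2) \approx 2.612,
\]
% below which a macroscopic fraction of the atoms occupies the single lowest quantum
% state, which is the defining feature of the condensate.
```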
{
"paragraph_id": 36,
"text": "Research in condensed matter physics has given rise to several device applications, such as the development of the semiconductor transistor, laser technology, magnetic storage, liquid crystals, optical fibres and several phenomena studied in the context of nanotechnology. Methods such as scanning-tunneling microscopy can be used to control processes at the nanometer scale, and have given rise to the study of nanofabrication. Such molecular machines were developed for example by Nobel laurates in chemistry Ben Feringa, Jean-Pierre Sauvage and Fraser Stoddart. Feringa and his team developed multiple molecular machines such as the molecular car, molecular windmill and many more.",
"title": "Applications"
},
{
"paragraph_id": 37,
"text": "In quantum computation, information is represented by quantum bits, or qubits. The qubits may decohere quickly before useful computation is completed. This serious problem must be solved before quantum computing may be realized. To solve this problem, several promising approaches are proposed in condensed matter physics, including Josephson junction qubits, spintronic qubits using the spin orientation of magnetic materials, or the topological non-Abelian anyons from fractional quantum Hall effect states.",
"title": "Applications"
},
{
"paragraph_id": 38,
"text": "Condensed matter physics also has important uses for biomedicine, for example, the experimental method of magnetic resonance imaging, which is widely used in medical diagnosis.",
"title": "Applications"
},
{
"paragraph_id": 39,
"text": "",
"title": "External links"
}
] | Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter, especially the solid and liquid phases that arise from electromagnetic forces between atoms and electrons. More generally, the subject deals with condensed phases of matter: systems of many constituents with strong interactions among them. More exotic condensed phases include the superconducting phase exhibited by certain materials at extremely low cryogenic temperatures, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, the Bose–Einstein condensates found in ultracold atomic systems, and liquid crystals. Condensed matter physicists seek to understand the behavior of these phases by experiments to measure various material properties, and by applying the physical laws of quantum mechanics, electromagnetism, statistical mechanics, and other physics theories to develop mathematical models and predict the properties of extremely large groups of atoms. The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division of the American Physical Society. These include solid state and soft matter physicists, who study quantum and non-quantum physical properties of matter respectively. Both types study a great range of materials, providing many research, funding and employment opportunities. The field overlaps with chemistry, materials science, engineering and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics. A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid-state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the more comprehensive specialty of condensed matter physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. According to the founding director of the Max Planck Institute for Solid State Research, physics professor Manuel Cardona, it was Albert Einstein who created the modern field of condensed matter physics starting with his seminal 1905 article on the photoelectric effect and photoluminescence which opened the fields of photoelectron spectroscopy and photoluminescence spectroscopy, and later his 1907 article on the specific heat of solids which introduced, for the first time, the effect of lattice vibrations on the thermodynamic properties of crystals, in particular the specific heat. Deputy Director of the Yale Quantum Institute A. Douglas Stone makes a similar priority case for Einstein in his work on the synthetic history of quantum mechanics. | 2001-04-06T02:04:42Z | 2023-12-29T17:50:39Z | [
"Template:Div col end",
"Template:Cite journal",
"Template:Cite arXiv",
"Template:ISBN",
"Template:Good article",
"Template:Further",
"Template:Main",
"Template:Reflist",
"Template:Condensed matter physics topics",
"Template:Authority control",
"Template:Short description",
"Template:Clarify",
"Template:NoteFoot",
"Template:Cite web",
"Template:Physics-footer",
"Template:Condensed matter physics",
"Template:Div col",
"Template:Annotated link",
"Template:Cite book",
"Template:Cite news",
"Template:Commons category-inline",
"Template:NoteTag",
"Template:Rp"
] | https://en.wikipedia.org/wiki/Condensed_matter_physics |
5,388 | Cultural anthropology | Cultural anthropology is a branch of anthropology focused on the study of cultural variation among humans. It is in contrast to social anthropology, which perceives cultural variation as a subset of a posited anthropological constant. The term sociocultural anthropology includes both cultural and social anthropology traditions.
Anthropologists have pointed out that through culture, people can adapt to their environment in non-genetic ways, so people living in different environments will often have different cultures. Much of anthropological theory has originated in an appreciation of and interest in the tension between the local (particular cultures) and the global (a universal human nature, or the web of connections between people in distinct places/circumstances).
Cultural anthropology has a rich methodology, including participant observation (often called fieldwork because it requires the anthropologist spending an extended period of time at the research location), interviews, and surveys.
The rise of cultural anthropology took place within the context of the late 19th century, when questions regarding which cultures were "primitive" and which were "civilized" occupied the minds of not only Freud but many others. Colonialism and its processes increasingly brought European thinkers into direct or indirect contact with "primitive others". The first generation of cultural anthropologists were interested in the relative status of various humans, some of whom had modern advanced technologies, while others lacked anything but face-to-face communication techniques and still lived a Paleolithic lifestyle.
One of the earliest articulations of the anthropological meaning of the term "culture" came from Sir Edward Tylor: "Culture, or civilization, taken in its broad, ethnographic sense, is that complex whole which includes knowledge, belief, art, morals, law, custom, and any other capabilities and habits acquired by man as a member of society." The term "civilization" later gave way to definitions given by V. Gordon Childe, with culture forming an umbrella term and civilization becoming a particular kind of culture.
According to Kay Milton, former director of anthropology research at Queen's University Belfast, culture can be general or specific: culture can be something that applies to all human beings, or it can be specific to a certain group of people, such as African American culture or Irish American culture. Specific cultures are structured systems, meaning they are organized very specifically, and adding or taking away any element from such a system may disrupt it.
Anthropology is concerned with the lives of people in different parts of the world, particularly in relation to the discourse of beliefs and practices. A central question is how similar beliefs and practices come to be found in different societies: are they invented independently, or do they spread from one group to another? In addressing this question, ethnologists in the 19th century divided into two schools of thought. Some, like Grafton Elliot Smith, argued that different groups must have learned from one another somehow, however indirectly; in other words, they argued that cultural traits spread from one place to another, or "diffused".
Other ethnologists argued that different groups had the capability of creating similar beliefs and practices independently. Some of those who advocated "independent invention", like Lewis Henry Morgan, additionally supposed that similarities meant that different groups had passed through the same stages of cultural evolution (See also classical social evolutionism). Morgan, in particular, acknowledged that certain forms of society and culture could not possibly have arisen before others. For example, industrial farming could not have been invented before simple farming, and metallurgy could not have developed without previous non-smelting processes involving metals (such as simple ground collection or mining). Morgan, like other 19th century social evolutionists, believed there was a more or less orderly progression from the primitive to the civilized.
20th-century anthropologists largely reject the notion that all human societies must pass through the same stages in the same order, on the grounds that such a notion does not fit the empirical facts. Some 20th-century ethnologists, like Julian Steward, have instead argued that such similarities reflected similar adaptations to similar environments. Although 19th-century ethnologists saw "diffusion" and "independent invention" as mutually exclusive and competing theories, most ethnographers quickly reached a consensus that both processes occur, and that both can plausibly account for cross-cultural similarities. But these ethnographers also pointed out the superficiality of many such similarities. They noted that even traits that spread through diffusion often were given different meanings and function from one society to another. Analyses of large human concentrations in big cities, in multidisciplinary studies by Ronald Daus, show how new methods may be applied to the understanding of people living in a globalized world shaped by the actions of extra-European nations, highlighting the role of ethics in modern anthropology.
Accordingly, most of these anthropologists showed less interest in comparing cultures, generalizing about human nature, or discovering universal laws of cultural development, than in understanding particular cultures in those cultures' own terms. Such ethnographers and their students promoted the idea of "cultural relativism", the view that one can only understand another person's beliefs and behaviors in the context of the culture in which they live or lived.
Others, such as Claude Lévi-Strauss (who was influenced both by American cultural anthropology and by French Durkheimian sociology), have argued that apparently similar patterns of development reflect fundamental similarities in the structure of human thought (see structuralism). By the mid-20th century, the number of examples of people skipping stages, such as going from hunter-gatherers to post-industrial service occupations in one generation, were so numerous that 19th-century evolutionism was effectively disproved.
Cultural relativism is a principle that was established as axiomatic in anthropological research by Franz Boas and later popularized by his students. Boas first articulated the idea in 1887: "...civilization is not something absolute, but ... is relative, and ... our ideas and conceptions are true only so far as our civilization goes." Although Boas did not coin the term, it became common among anthropologists after Boas' death in 1942, to express their synthesis of a number of ideas Boas had developed. Boas believed that the sweep of cultures, to be found in connection with any sub-species, is so vast and pervasive that there cannot be a relationship between culture and race. Cultural relativism involves specific epistemological and methodological claims. Whether or not these claims require a specific ethical stance is a matter of debate. This principle should not be confused with moral relativism.
Cultural relativism was in part a response to Western ethnocentrism. Ethnocentrism may take obvious forms, in which one consciously believes that one's people's arts are the most beautiful, values the most virtuous, and beliefs the most truthful. Boas, originally trained in physics and geography, and heavily influenced by the thought of Kant, Herder, and von Humboldt, argued that one's culture may mediate and thus limit one's perceptions in less obvious ways. This understanding of culture confronts anthropologists with two problems: first, how to escape the unconscious bonds of one's own culture, which inevitably bias our perceptions of and reactions to the world, and second, how to make sense of an unfamiliar culture. The principle of cultural relativism thus forced anthropologists to develop innovative methods and heuristic strategies.
Boas and his students realized that if they were to conduct scientific research in other cultures, they would need to employ methods that would help them escape the limits of their own ethnocentrism. One such method is that of ethnography: basically, they advocated living with people of another culture for an extended period of time, so that they could learn the local language and be enculturated, at least partially, into that culture. In this context, cultural relativism is of fundamental methodological importance, because it calls attention to the importance of the local context in understanding the meaning of particular human beliefs and activities. Thus, in 1948 Virginia Heyer wrote, "Cultural relativity, to phrase it in starkest abstraction, states the relativity of the part to the whole. The part gains its cultural significance by its place in the whole, and cannot retain its integrity in a different situation."
The rubric cultural anthropology is generally applied to ethnographic works that are holistic in approach, are oriented to the ways in which culture affects individual experience, or aim to provide a rounded view of the knowledge, customs, and institutions of a people. Social anthropology is a term applied to ethnographic works that attempt to isolate a particular system of social relations such as those that comprise domestic life, economy, law, politics, or religion; give analytical priority to the organizational bases of social life; and attend to cultural phenomena as somewhat secondary to the main issues of social scientific inquiry.
Parallel with the rise of cultural anthropology in the United States, social anthropology developed as an academic discipline in Britain and in France.
Lewis Henry Morgan (1818–1881), a lawyer from Rochester, New York, became an advocate for and ethnological scholar of the Iroquois. His comparative analyses of religion, government, material culture, and especially kinship patterns proved to be influential contributions to the field of anthropology. Like other scholars of his day (such as Edward Tylor), Morgan argued that human societies could be classified into categories of cultural evolution on a scale of progression that ranged from savagery, to barbarism, to civilization. Generally, Morgan used technology (such as bowmaking or pottery) as an indicator of position on this scale.
Franz Boas (1858–1942) established academic anthropology in the United States in opposition to Morgan's evolutionary perspective. His approach was empirical, skeptical of overgeneralizations, and eschewed attempts to establish universal laws. For example, Boas studied immigrant children to demonstrate that biological race was not immutable, and that human conduct and behavior resulted from nurture, rather than nature.
Influenced by the German tradition, Boas argued that the world was full of distinct cultures, rather than societies whose evolution could be measured by how much or how little "civilization" they had. He believed that each culture has to be studied in its particularity, and argued that cross-cultural generalizations, like those made in the natural sciences, were not possible.
In doing so, he fought discrimination against immigrants, blacks, and indigenous peoples of the Americas. Many American anthropologists adopted his agenda for social reform, and theories of race continue to be popular subjects for anthropologists today. The so-called "Four Field Approach" has its origins in Boasian Anthropology, dividing the discipline into the four crucial and interrelated fields of sociocultural, biological, linguistic, and archaeological anthropology. Anthropology in the United States continues to be deeply influenced by the Boasian tradition, especially its emphasis on culture.
Boas used his positions at Columbia University and the American Museum of Natural History (AMNH) to train and develop multiple generations of students. His first generation of students included Alfred Kroeber, Robert Lowie, Edward Sapir, and Ruth Benedict, who each produced richly detailed studies of indigenous North American cultures. They provided a wealth of details used to attack the theory of a single evolutionary process. Kroeber and Sapir's focus on Native American languages helped establish linguistics as a truly general science and free it from its historical focus on Indo-European languages.
The publication of Alfred Kroeber's textbook Anthropology (1923) marked a turning point in American anthropology. After three decades of amassing material, Boasians felt a growing urge to generalize. This was most obvious in the 'Culture and Personality' studies carried out by younger Boasians such as Margaret Mead and Ruth Benedict. Influenced by psychoanalytic psychologists including Sigmund Freud and Carl Jung, these authors sought to understand the way that individual personalities were shaped by the wider cultural and social forces in which they grew up.
Though such works as Mead's Coming of Age in Samoa (1928) and Benedict's The Chrysanthemum and the Sword (1946) remain popular with the American public, Mead and Benedict never had the impact on the discipline of anthropology that some expected. Boas had planned for Ruth Benedict to succeed him as chair of Columbia's anthropology department, but she was sidelined in favor of Ralph Linton, and Mead was limited to her offices at the AMNH.
In the 1950s and mid-1960s anthropology tended increasingly to model itself after the natural sciences. Some anthropologists, such as Lloyd Fallers and Clifford Geertz, focused on processes of modernization by which newly independent states could develop. Others, such as Julian Steward and Leslie White, focused on how societies evolve and fit their ecological niche—an approach popularized by Marvin Harris.
Economic anthropology, as influenced by Karl Polanyi and practiced by Marshall Sahlins and George Dalton, challenged standard neoclassical economics to take account of cultural and social factors, and incorporated Marxian analysis into anthropological study. In England, British Social Anthropology's paradigm began to fragment as Max Gluckman and Peter Worsley experimented with Marxism and authors such as Rodney Needham and Edmund Leach incorporated Lévi-Strauss's structuralism into their work. Structuralism also influenced a number of developments in the 1960s and 1970s, including cognitive anthropology and componential analysis.
In keeping with the times, much of anthropology became politicized through the Algerian War of Independence and opposition to the Vietnam War; Marxism became an increasingly popular theoretical approach in the discipline. By the 1970s the authors of volumes such as Reinventing Anthropology worried about anthropology's relevance.
Since the 1980s issues of power, such as those examined in Eric Wolf's Europe and the People Without History, have been central to the discipline. In the 1980s books like Anthropology and the Colonial Encounter pondered anthropology's ties to colonial inequality, while the immense popularity of theorists such as Antonio Gramsci and Michel Foucault moved issues of power and hegemony into the spotlight. Gender and sexuality became popular topics, as did the relationship between history and anthropology, influenced by Marshall Sahlins, who drew on Lévi-Strauss and Fernand Braudel to examine the relationship between symbolic meaning, sociocultural structure, and individual agency in the processes of historical transformation. Jean and John Comaroff produced a whole generation of anthropologists at the University of Chicago that focused on these themes. Also influential in these issues were Nietzsche, Heidegger, the critical theory of the Frankfurt School, Derrida and Lacan.
Many anthropologists reacted against the renewed emphasis on materialism and scientific modelling derived from Marx by emphasizing the importance of the concept of culture. Authors such as David Schneider, Clifford Geertz, and Marshall Sahlins developed a more fleshed-out concept of culture as a web of meaning or signification, which proved very popular within and beyond the discipline. Geertz was to state:
"Believing, with Max Weber, that man is an animal suspended in webs of significance he himself has spun, I take culture to be those webs, and the analysis of it to be therefore not an experimental science in search of law but an interpretive one in search of meaning."
Geertz's interpretive method involved what he called "thick description". The cultural symbols of rituals, political and economic action, and of kinship, are "read" by the anthropologist as if they are a document in a foreign language. The interpretation of those symbols must be re-framed for their anthropological audience, i.e. transformed from the "experience-near" but foreign concepts of the other culture, into the "experience-distant" theoretical concepts of the anthropologist. These interpretations must then be reflected back to their originators, and their adequacy as a translation fine-tuned in a repeated way, a process called the hermeneutic circle. Geertz applied his method in a number of areas, creating programs of study that were very productive. His analysis of "religion as a cultural system" was particularly influential outside of anthropology. David Schneider's cultural analysis of American kinship has proven equally influential. Schneider demonstrated that the American folk-cultural emphasis on "blood connections" had an undue influence on anthropological kinship theories, and that kinship is not a biological characteristic but a cultural relationship established on very different terms in different societies.
Prominent British symbolic anthropologists include Victor Turner and Mary Douglas.
In the late 1980s and 1990s authors such as James Clifford pondered ethnographic authority, in particular how and why anthropological knowledge was possible and authoritative. They were reflecting trends in research and discourse initiated by feminists in the academy, although they excused themselves from commenting specifically on those pioneering critics. Nevertheless, key aspects of feminist theory and methods became de rigueur as part of the 'post-modern moment' in anthropology: Ethnographies became more interpretative and reflexive, explicitly addressing the author's methodology; cultural, gendered, and racial positioning; and their influence on the ethnographic analysis. This was part of a more general trend of postmodernism that was popular contemporaneously. Currently anthropologists pay attention to a wide variety of issues pertaining to the contemporary world, including globalization, medicine and biotechnology, indigenous rights, virtual communities, and the anthropology of industrialized societies.
Modern cultural anthropology has its origins in, and developed in reaction to, 19th century ethnology, which involves the organized comparison of human societies. Scholars like E.B. Tylor and J.G. Frazer in England worked mostly with materials collected by others—usually missionaries, traders, explorers, or colonial officials—earning them the moniker of "arm-chair anthropologists".
Participant observation is one of the principal research methods of cultural anthropology. It relies on the assumption that the best way to understand a group of people is to interact with them closely over a long period of time. The method originated in the field research of social anthropologists, especially Bronislaw Malinowski in Britain, the students of Franz Boas in the United States, and in the later urban research of the Chicago School of Sociology. Historically, the group of people being studied was a small, non-Western society. However, today it may be a specific corporation, a church group, a sports team, or a small town. There are no restrictions as to what the subject of participant observation can be, as long as the group of people is studied intimately by the observing anthropologist over a long period of time. This allows the anthropologist to develop trusting relationships with the subjects of study and receive an inside perspective on the culture, which helps him or her to give a richer description when writing about the culture later. Observable details (like daily time allotment) and more hidden details (like taboo behavior) are more easily observed and interpreted over a longer period of time, and researchers can discover discrepancies between what participants say—and often believe—should happen (the formal system) and what actually does happen, or between different aspects of the formal system; in contrast, a one-time survey of people's answers to a set of questions might be quite consistent, but is less likely to show conflicts between different aspects of the social system or between conscious representations and behavior.
Interactions between an ethnographer and a cultural informant must go both ways. Just as an ethnographer may be naive or curious about a culture, the members of that culture may be curious about the ethnographer. To establish connections that will eventually lead to a better understanding of the cultural context of a situation, an anthropologist must be open to becoming part of the group, and willing to develop meaningful relationships with its members. One way to do this is to find a small area of common experience between an anthropologist and their subjects, and then to expand from this common ground into the larger area of difference. Once a single connection has been established, it becomes easier to integrate into the community, and more likely that accurate and complete information is being shared with the anthropologist.
Before participant observation can begin, an anthropologist must choose both a location and a focus of study. This focus may change once the anthropologist is actively observing the chosen group of people, but having an idea of what one wants to study before beginning fieldwork allows an anthropologist to spend time researching background information on their topic. It can also be helpful to know what previous research has been conducted in one's chosen location or on similar topics, and if the participant observation takes place in a location where the spoken language is not one the anthropologist is familiar with, they will usually also learn that language. This allows the anthropologist to become better established in the community. The lack of need for a translator makes communication more direct, and allows the anthropologist to give a richer, more contextualized representation of what they witness. In addition, participant observation often requires permits from governments and research institutions in the area of study, and always needs some form of funding.
The majority of participant observation is based on conversation. This can take the form of casual, friendly dialogue, or can also be a series of more structured interviews. A combination of the two is often used, sometimes along with photography, mapping, artifact collection, and various other methods. In some cases, ethnographers also turn to structured observation, in which an anthropologist's observations are directed by a specific set of questions they are trying to answer. In the case of structured observation, an observer might be required to record the order of a series of events, or describe a certain part of the surrounding environment. While the anthropologist still makes an effort to become integrated into the group they are studying, and still participates in the events as they observe, structured observation is more directed and specific than participant observation in general. This helps to standardize the method of study when ethnographic data is being compared across several groups or is needed to fulfill a specific purpose, such as research for a governmental policy decision.
One common criticism of participant observation is its lack of objectivity. Because each anthropologist has their own background and set of experiences, each individual is likely to interpret the same culture in a different way. Who the ethnographer is has a lot to do with what they will eventually write about a culture, because each researcher is influenced by their own perspective. This is considered a problem especially when anthropologists write in the ethnographic present, a present tense which makes a culture seem stuck in time, and ignores the fact that it may have interacted with other cultures or gradually evolved since the anthropologist made observations. To avoid this, past ethnographers have advocated for strict training, or for anthropologists working in teams. However, these approaches have not generally been successful, and modern ethnographers often choose to include their personal experiences and possible biases in their writing instead.
Participant observation has also raised ethical questions, since an anthropologist is in control of what they report about a culture. In terms of representation, an anthropologist has greater power than their subjects of study, and this has drawn criticism of participant observation in general. Additionally, anthropologists have struggled with the effect their presence has on a culture. Simply by being present, a researcher causes changes in a culture, and anthropologists continue to question whether or not it is appropriate to influence the cultures they study, or possible to avoid having influence.
In the 20th century, most cultural and social anthropologists turned to the crafting of ethnographies. An ethnography is a piece of writing about a people, at a particular place and time. Typically, the anthropologist lives among people in another society for a period of time, simultaneously participating in and observing the social and cultural life of the group.
Numerous other ethnographic techniques have resulted in ethnographic writing or details being preserved, as cultural anthropologists also curate materials, spend long hours in libraries, churches and schools poring over records, investigate graveyards, and decipher ancient scripts. A typical ethnography will also include information about physical geography, climate and habitat. It is meant to be a holistic piece of writing about the people in question, and today often includes the longest possible timeline of past events that the ethnographer can obtain through primary and secondary research.
Bronisław Malinowski developed the ethnographic method, and Franz Boas taught it in the United States. Boas' students such as Alfred L. Kroeber, Ruth Benedict and Margaret Mead drew on his conception of culture and cultural relativism to develop cultural anthropology in the United States. Simultaneously, Malinowski and A.R. Radcliffe-Brown's students were developing social anthropology in the United Kingdom. Whereas cultural anthropology focused on symbols and values, social anthropology focused on social groups and institutions. Today socio-cultural anthropologists attend to all these elements.
In the early 20th century, socio-cultural anthropology developed in different forms in Europe and in the United States. European "social anthropologists" focused on observed social behaviors and on "social structure", that is, on relationships among social roles (for example, husband and wife, or parent and child) and social institutions (for example, religion, economy, and politics).
American "cultural anthropologists" focused on the ways people expressed their view of themselves and their world, especially in symbolic forms, such as art and myths. These two approaches frequently converged and generally complemented one another. For example, kinship and leadership function both as symbolic systems and as social institutions. Today almost all socio-cultural anthropologists refer to the work of both sets of predecessors, and have an equal interest in what people do and in what people say.
One means by which anthropologists combat ethnocentrism is to engage in the process of cross-cultural comparison. It is important to test so-called "human universals" against the ethnographic record. Monogamy, for example, is frequently touted as a universal human trait, yet comparative study shows that it is not. The Human Relations Area Files, Inc. (HRAF) is a research agency based at Yale University. Since 1949, its mission has been to encourage and facilitate worldwide comparative studies of human culture, society, and behavior in the past and present. The name came from the Institute of Human Relations, an interdisciplinary program/building at Yale at the time. The Institute of Human Relations had sponsored HRAF's precursor, the Cross-Cultural Survey (see George Peter Murdock), as part of an effort to develop an integrated science of human behavior and culture. The two eHRAF databases on the Web are expanded and updated annually. eHRAF World Cultures includes materials on cultures, past and present, and covers nearly 400 cultures. The second database, eHRAF Archaeology, covers major archaeological traditions and many more sub-traditions and sites around the world.
Comparison across cultures includes the industrialized (or de-industrialized) West. Cultures in the more traditional standard cross-cultural sample of small-scale societies are:
Ethnography dominates socio-cultural anthropology. Nevertheless, many contemporary socio-cultural anthropologists have rejected earlier models of ethnography as treating local cultures as bounded and isolated. These anthropologists continue to concern themselves with the distinct ways people in different locales experience and understand their lives, but they often argue that one cannot understand these particular ways of life solely from a local perspective; they instead combine a focus on the local with an effort to grasp larger political, economic, and cultural frameworks that impact local lived realities. Notable proponents of this approach include Arjun Appadurai, James Clifford, George Marcus, Sidney Mintz, Michael Taussig, Eric Wolf and Ronald Daus.
A growing trend in anthropological research and analysis is the use of multi-sited ethnography, discussed in George Marcus' article, "Ethnography In/Of the World System: the Emergence of Multi-Sited Ethnography". Looking at culture as embedded in macro-constructions of a global social order, multi-sited ethnography uses traditional methodology in various locations both spatially and temporally. Through this methodology, greater insight can be gained when examining the impact of world-systems on local and global communities.
Also emerging in multi-sited ethnography are greater interdisciplinary approaches to fieldwork, bringing in methods from cultural studies, media studies, science and technology studies, and others. In multi-sited ethnography, research tracks a subject across spatial and temporal boundaries. For example, a multi-sited ethnography may follow a "thing", such as a particular commodity, as it is transported through the networks of global capitalism.
Multi-sited ethnography may also follow ethnic groups in diaspora, stories or rumours that appear in multiple locations and in multiple time periods, metaphors that appear in multiple ethnographic locations, or the biographies of individual people or groups as they move through space and time. It may also follow conflicts that transcend boundaries. An example of multi-sited ethnography is Nancy Scheper-Hughes' work on the international black market for the trade of human organs. In this research, she follows organs as they are transferred through various legal and illegal networks of capitalism, as well as the rumours and urban legends that circulate in impoverished communities about child kidnapping and organ theft.
Sociocultural anthropologists have increasingly turned their investigative eye on to "Western" culture. For example, Philippe Bourgois won the Margaret Mead Award in 1997 for In Search of Respect, a study of the entrepreneurs in a Harlem crack-den. Also growing more popular are ethnographies of professional communities, such as laboratory researchers, Wall Street investors, law firms, or information technology (IT) computer employees.
Kinship refers to the anthropological study of the ways in which humans form and maintain relationships with one another and how those relationships operate within and define social organization.
Research in kinship studies often crosses over into different anthropological subfields including medical, feminist, and public anthropology. This is likely due to its fundamental concepts, as articulated by linguistic anthropologist Patrick McConvell:
Kinship is the bedrock of all human societies that we know. All humans recognize fathers and mothers, sons and daughters, brothers and sisters, uncles and aunts, husbands and wives, grandparents, cousins, and often many more complex types of relationships in the terminologies that they use. That is the matrix into which human children are born in the great majority of cases, and their first words are often kinship terms.
Throughout history, kinship studies have primarily focused on the topics of marriage, descent, and procreation. Anthropologists have written extensively on the variations within marriage across cultures and its legitimacy as a human institution. There are stark differences between communities in terms of marital practice and value, leaving much room for anthropological fieldwork. For instance, the Nuer of Sudan and the Brahmans of Nepal practice polygyny, where one man is married to two or more women. The Nayar of India and Nyimba of Tibet and Nepal practice polyandry, where one woman is often married to two or more men. The marital practice found in most cultures, however, is monogamy, where one woman is married to one man. Anthropologists also study different marital taboos across cultures, most commonly the incest taboo of marriage within sibling and parent-child relationships. It has been found that all cultures have an incest taboo to some degree, but the taboo shifts between cultures when the marriage extends beyond the nuclear family unit.
There are similar foundational differences where the act of procreation is concerned. Although anthropologists have found that biology is acknowledged in every cultural relationship to procreation, there are differences in the ways in which cultures assess the constructs of parenthood. For example, in the Nuyoo municipality of Oaxaca, Mexico, it is believed that a child can have partible maternity and partible paternity. In this case, a child would have multiple biological mothers in the case that it is born of one woman and then breastfed by another. A child would have multiple biological fathers in the case that the mother had sex with multiple men, following the commonplace belief in Nuyoo culture that pregnancy must be preceded by sex with multiple men in order to have the necessary accumulation of semen.
In the twenty-first century, Western ideas of kinship have evolved beyond the traditional assumptions of the nuclear family, raising anthropological questions of consanguinity, lineage, and normative marital expectation. The shift can be traced back to the 1960s, with the reassessment of kinship's basic principles offered by Edmund Leach, Rodney Needham, David Schneider, and others. Instead of relying on narrow ideas of Western normalcy, kinship studies increasingly catered to "more ethnographic voices, human agency, intersecting power structures, and historical context". The study of kinship evolved to accommodate the fact that it cannot be separated from its institutional roots and must pay respect to the society in which it lives, including that society's contradictions, hierarchies, and the individual experiences of those within it. This shift was furthered by the emergence of second-wave feminism in the early 1970s, which introduced ideas of marital oppression, sexual autonomy, and domestic subordination. Other themes that emerged during this time included frequent comparisons between Eastern and Western kinship systems and an increasing amount of attention paid to anthropologists' own societies, a swift turn from the traditional focus on largely "foreign", non-Western communities.
Kinship studies began to gain mainstream recognition in the late 1990s with the surging popularity of feminist anthropology, particularly its work related to biological anthropology and the intersectional critique of gender relations. At this time came the arrival of "Third World feminism", a movement that argued kinship studies could not examine the gender relations of developing countries in isolation and must pay respect to racial and economic nuance as well. This critique became relevant, for instance, in the anthropological study of Jamaica: race and class were seen as the primary obstacles to Jamaican liberation from economic imperialism, while gender as an identity was largely ignored. Third World feminism aimed to combat this in the early twenty-first century by promoting these categories as coexisting factors. In Jamaica, marriage as an institution is often forgone in favor of a series of partners, as poor women cannot rely on regular financial contributions in a climate of economic instability. In addition, there is a common practice of Jamaican women artificially lightening their skin tones in order to secure economic survival. According to Third World feminism, these findings cannot be understood by treating gender, racial, or class differences as separate entities; rather, the categories must be acknowledged as interacting to produce unique individual experiences.
Kinship studies have also seen a rise in interest in reproductive anthropology with the advancement of assisted reproductive technologies (ARTs), including in vitro fertilization (IVF). These advancements have led to new dimensions of anthropological research, as they challenge the Western standard of biogenetically based kinship, relatedness, and parenthood. According to anthropologists Marcia C. Inhorn and Daphna Birenbaum-Carmeli, "ARTs have pluralized notions of relatedness and led to a more dynamic notion of 'kinning', namely, kinship as a process, as something under construction, rather than a natural given". With this technology, questions of kinship have emerged over the difference between biological and genetic relatedness, as gestational surrogates can provide a biological environment for the embryo while the genetic ties remain with a third party. If genetic, surrogate, and adoptive maternities are involved, anthropologists have acknowledged the possibility of three "biological" mothers to a single child. With ARTs, there are also anthropological questions concerning the intersections between wealth and fertility: ARTs are generally only available to those in the highest income bracket, meaning the infertile poor are inherently devalued in the system. There have also been issues of reproductive tourism and bodily commodification, as individuals seek economic security through hormonal stimulation and egg harvesting, which are potentially harmful procedures. With IVF specifically, there have been many questions of embryonic value and the status of life, particularly as they relate to the manufacturing of stem cells, testing, and research.
Current issues in kinship studies, such as adoption, have revealed and challenged the Western cultural disposition towards the genetic, "blood" tie. Western biases against single parent homes have also been explored through similar anthropological research, uncovering that a household with a single parent experiences "greater levels of scrutiny and [is] routinely seen as the 'other' of the nuclear, patriarchal family". The power dynamics in reproduction, when explored through a comparative analysis of "conventional" and "unconventional" families, have been used to dissect the Western assumptions of child bearing and child rearing in contemporary kinship studies.
Kinship, as an anthropological field of inquiry, has been heavily criticized across the discipline. One critique is that, at its inception, the framework of kinship studies was far too structured and formulaic, relying on dense language and stringent rules. Another critique, explored at length by American anthropologist David Schneider, argues that kinship has been limited by its inherent Western ethnocentrism. Schneider proposes that kinship is not a field that can be applied cross-culturally, as the theory itself relies on European assumptions of normalcy. He states in the widely circulated 1984 book A Critique of the Study of Kinship that "[K]inship has been defined by European social scientists, and European social scientists use their own folk culture as the source of many, if not all of their ways of formulating and understanding the world about them". However, this critique has been challenged by the argument that it is linguistics, not cultural divergence, that has allowed for a European bias, and that the bias can be lifted by centering the methodology on fundamental human concepts. Polish anthropologist Anna Wierzbicka argues that "mother" and "father" are examples of such fundamental human concepts, which become Westernized only when conflated with English concepts such as "parent" and "sibling".
A more recent critique of kinship studies is its solipsistic focus on privileged, Western human relations and its promotion of normative ideals of human exceptionalism. In Critical Kinship Studies, social psychologists Elizabeth Peel and Damien Riggs argue for a move beyond this human-centered framework, opting instead to explore kinship through a "posthumanist" vantage point where anthropologists focus on the intersecting relationships of human animals, non-human animals, technologies and practices.
The role of anthropology in institutions has expanded significantly since the end of the 20th century. Much of this development can be attributed to the rise in anthropologists working outside of academia and the increasing importance of globalization in both institutions and the field of anthropology. Anthropologists can be employed by institutions such as for-profit businesses, nonprofit organizations, and governments. For instance, cultural anthropologists are commonly employed by the United States federal government.
The two types of institutions defined in the field of anthropology are total institutions and social institutions. Total institutions are places that comprehensively coordinate the actions of people within them, and examples of total institutions include prisons, convents, and hospitals. Social institutions, on the other hand, are constructs that regulate individuals' day-to-day lives, such as kinship, religion, and economics. Anthropology of institutions may analyze labor unions, businesses ranging from small enterprises to corporations, government, medical organizations, education, prisons, and financial institutions. Nongovernmental organizations have garnered particular interest in the field of institutional anthropology because they are capable of fulfilling roles previously ignored by governments, or previously realized by families or local groups, in an attempt to mitigate social problems.
The types and methods of scholarship performed in the anthropology of institutions can take a number of forms. Institutional anthropologists may study the relationship between organizations or between an organization and other parts of society. Institutional anthropology may also focus on the inner workings of an institution, such as the relationships, hierarchies and cultures formed, and the ways that these elements are transmitted and maintained, transformed, or abandoned over time. Additionally, some anthropology of institutions examines the specific design of institutions and their corresponding strength. More specifically, anthropologists may analyze specific events within an institution, perform semiotic investigations, or analyze the mechanisms by which knowledge and culture are organized and dispersed.
In all manifestations of institutional anthropology, participant observation is critical to understanding the intricacies of the way an institution works and the consequences of actions taken by individuals within it. Simultaneously, anthropology of institutions extends beyond examination of the commonplace involvement of individuals in institutions to discover how and why the organizational principles evolved in the manner that they did.
Common considerations taken by anthropologists in studying institutions include the physical location at which a researcher places themselves, as important interactions often take place in private, and the fact that the members of an institution are often being examined in their workplace and may not have much idle time to discuss the details of their everyday endeavors. The ability of individuals to present the workings of an institution in a particular light or frame must additionally be taken into account when using interviews and document analysis to understand an institution, as the involvement of an anthropologist may be met with distrust when information being released to the public is not directly controlled by the institution and could potentially be damaging.
] | Cultural anthropology is a branch of anthropology focused on the study of cultural variation among humans. It is in contrast to social anthropology, which perceives cultural variation as a subset of a posited anthropological constant. The term sociocultural anthropology includes both cultural and social anthropology traditions. Anthropologists have pointed out that through culture, people can adapt to their environment in non-genetic ways, so people living in different environments will often have different cultures. Much of anthropological theory has originated in an appreciation of and interest in the tension between the local and the global. Cultural anthropology has a rich methodology, including participant observation, interviews, and surveys. | 2001-11-09T19:13:14Z | 2023-12-06T13:31:15Z | [
"Template:Main",
"Template:Cite book",
"Template:Cite web",
"Template:Wikibooks",
"Template:Commons category",
"Template:For",
"Template:Citation needed",
"Template:Div col end",
"Template:Quotation",
"Template:Endflatlist",
"Template:Authority control",
"Template:Flatlist",
"Template:Nowrap",
"Template:Blockquote",
"Template:Reflist",
"Template:Webarchive",
"Template:Social sciences",
"Template:Culture",
"Template:Short description",
"Template:Anthropology",
"Template:Cleanup",
"Template:Div col",
"Template:Annotated link",
"Template:Cite journal"
] | https://en.wikipedia.org/wiki/Cultural_anthropology |
5,390 | Conversion of units | Conversion of units is the conversion between different units of measurement for the same quantity, typically through multiplicative conversion factors which change the measured quantity value without changing its effects. Unit conversion is often easier within the metric or the SI than in others, due to the regular 10-base in all units and the prefixes that increase or decrease by 3 powers of 10 at a time.
The process of conversion depends on the specific situation and the intended purpose. This may be governed by regulation, contract, technical specifications or other published standards. Engineering judgment may include such factors as:
Some conversions from one system of units to another need to be exact, without increasing or decreasing the precision of the first measurement. This is sometimes called soft conversion. It does not involve changing the physical configuration of the item being measured.
By contrast, a hard conversion or an adaptive conversion may not be exactly equivalent. It changes the measurement to convenient and workable numbers and units in the new system. It sometimes involves a slightly different configuration, or size substitution, of the item. Nominal values are sometimes allowed and used.
The factor-label method, also known as the unit-factor method or the unity bracket method, is a widely used technique for unit conversions using the rules of algebra.
The factor-label method is the sequential application of conversion factors expressed as fractions and arranged so that any dimensional unit appearing in both the numerator and denominator of any of the fractions can be cancelled out until only the desired set of dimensional units is obtained. For example, 10 miles per hour can be converted to metres per second by using a sequence of conversion factors as shown below:
Each conversion factor is chosen based on the relationship between one of the original units and one of the desired units (or some intermediary unit), before being re-arranged to create a factor that cancels out the original unit. For example, as "mile" is the numerator in the original fraction and 1 mi = 1609.344 m, "mile" will need to be the denominator in the conversion factor. Dividing both sides of the equation by 1 mile yields 1 mi / 1 mi = 1609.344 m / 1 mi, which when simplified results in the dimensionless 1 = 1609.344 m / 1 mi. Because of the identity property of multiplication, multiplying any quantity (physical or not) by the dimensionless 1 does not change that quantity. Once this and the conversion factor for seconds per hour have been multiplied by the original fraction to cancel out the units mile and hour, 10 miles per hour converts to 4.4704 metres per second.
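The chain of factors can also be evaluated programmatically. A minimal Python sketch of the conversion above, using the exact definitions 1 mi = 1609.344 m and 1 h = 3600 s:
    # Factor-label sketch: convert 10 miles per hour to metres per second.
    # Each factor equals 1 (e.g. 1609.344 m per 1 mi), so multiplying by it
    # changes only the units, not the quantity.
    value_mi_per_h = 10.0
    metres_per_mile = 1609.344      # exact definition of the international mile
    seconds_per_hour = 3600.0
    value_m_per_s = value_mi_per_h * metres_per_mile / seconds_per_hour
    print(value_m_per_s)            # 4.4704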
As a more complex example, the concentration of nitrogen oxides (NOx) in the flue gas from an industrial furnace can be converted to a mass flow rate expressed in grams per hour (g/h) of NOx by using the following information as shown below:
After canceling out any dimensional units that appear both in the numerators and denominators of the fractions in the above equation, the NOx concentration of 10 ppmv converts to mass flow rate of 24.63 grams per hour.
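The supporting data table for this example is not reproduced above; one common set of inputs consistent with the quoted 24.63 g/h figure is a flue-gas flow of 20 cubic metres per minute, NOx expressed as NO2 with a molar mass of 46 g/mol, and a molar volume of 22.414 cubic metres per kilomole at 0 °C and 101.325 kPa. A minimal Python sketch under those assumed inputs:
    # Sketch: NOx concentration (ppmv) to mass flow (g/h).
    # The numerical inputs below are assumptions chosen to be consistent with
    # the 24.63 g/h result quoted above, not values taken from the omitted table.
    ppmv_nox = 10.0                 # volume parts per million of NOx
    flue_gas_m3_per_min = 20.0      # assumed flue-gas volumetric flow
    molar_mass_nox = 46.0           # g/mol, treating NOx as NO2
    molar_volume_m3_per_kmol = 22.414  # ideal gas at 0 degC and 101.325 kPa

    m3_nox_per_hour = flue_gas_m3_per_min * 60.0 * ppmv_nox * 1e-6
    kmol_nox_per_hour = m3_nox_per_hour / molar_volume_m3_per_kmol
    grams_per_hour = kmol_nox_per_hour * molar_mass_nox * 1000.0  # g/kmol = 1000 * g/mol
    print(round(grams_per_hour, 2))  # 24.63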
The factor-label method can also be used on any mathematical equation to check whether or not the dimensional units on the left hand side of the equation are the same as the dimensional units on the right hand side of the equation. Having the same units on both sides of an equation does not ensure that the equation is correct, but having different units on the two sides (when expressed in terms of base units) of an equation implies that the equation is wrong.
For example, check the universal gas law equation of PV = nRT, when:
As can be seen, when the dimensional units appearing in the numerator and denominator of the equation's right hand side are cancelled out, both sides of the equation have the same dimensional units. Dimensional analysis can be used as a tool to construct equations that relate non-associated physico-chemical properties. The equations may reveal hitherto unknown or overlooked properties of matter, in the form of left-over dimensions – dimensional adjusters – that can then be assigned physical significance. It is important to point out that such 'mathematical manipulation' is neither without prior precedent, nor without considerable scientific significance. Indeed, the Planck constant, a fundamental physical constant, was 'discovered' as a purely mathematical abstraction or representation that built on the Rayleigh–Jeans law for preventing the ultraviolet catastrophe. It was assigned and ascended to its quantum physical significance either in tandem or post mathematical dimensional adjustment – not earlier.
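A minimal Python sketch of such a dimensional check for PV = nRT, representing each quantity's dimensions as exponents of the SI base units:
    # Minimal dimensional-consistency check for PV = nRT.
    # Dimensions are dicts mapping SI base units to integer exponents.
    from collections import Counter

    def dim_mul(*dims):
        # Multiply quantities by adding their unit exponents.
        total = Counter()
        for d in dims:
            total.update(d)
        return {unit: exp for unit, exp in total.items() if exp != 0}

    pressure = {"kg": 1, "m": -1, "s": -2}                          # Pa = kg m^-1 s^-2
    volume = {"m": 3}                                               # m^3
    amount = {"mol": 1}                                             # mol
    gas_constant = {"kg": 1, "m": 2, "s": -2, "mol": -1, "K": -1}   # J mol^-1 K^-1
    temperature = {"K": 1}

    left = dim_mul(pressure, volume)
    right = dim_mul(amount, gas_constant, temperature)
    print(left == right)  # True: both sides reduce to kg m^2 s^-2 (joules)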
The factor-label method can convert only unit quantities for which the units are in a linear relationship intersecting at 0. (Ratio scale in Stevens's typology) Most units fit this paradigm. An example for which it cannot be used is the conversion between degrees Celsius and kelvins (or degrees Fahrenheit). Between degrees Celsius and kelvins, there is a constant difference rather than a constant ratio, while between degrees Celsius and degrees Fahrenheit there is neither a constant difference nor a constant ratio. There is, however, an affine transform (x ↦ ax + b, rather than a linear transform x ↦ ax) between them.
For example, the freezing point of water is 0 °C and 32 °F, and a 5 °C change is the same as a 9 °F change. Thus, to convert from units of Fahrenheit to units of Celsius, one subtracts 32 °F (the offset from the point of reference), divides by 9 °F and multiplies by 5 °C (scales by the ratio of units), and adds 0 °C (the offset from the point of reference). Reversing this yields the formula for obtaining a quantity in units of Celsius from units of Fahrenheit; one could have started with the equivalence between 100 °C and 212 °F, though this would yield the same formula at the end.
Hence, to convert the numerical quantity value of a temperature T[F] in degrees Fahrenheit to a numerical quantity value T[C] in degrees Celsius, this formula may be used:
To convert T[C] in degrees Celsius to T[F] in degrees Fahrenheit, this formula may be used:
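The formulas referred to above are the standard affine conversions T[C] = (T[F] − 32) × 5/9 and T[F] = T[C] × 9/5 + 32. A minimal Python sketch:
    # Affine temperature conversions: an offset is involved, so a plain
    # factor-label multiplication alone is not sufficient.
    def fahrenheit_to_celsius(t_f):
        return (t_f - 32.0) * 5.0 / 9.0

    def celsius_to_fahrenheit(t_c):
        return t_c * 9.0 / 5.0 + 32.0

    print(fahrenheit_to_celsius(32.0))    # 0.0   (freezing point of water)
    print(celsius_to_fahrenheit(100.0))   # 212.0 (boiling point of water)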
Starting with:
replace the original unit [Z]_i with its meaning in terms of the desired unit [Z]_j, e.g. if [Z]_i = c_ij × [Z]_j, then:
Now n_i and c_ij are both numerical values, so just calculate their product.
Or, what is mathematically the same thing, multiply Z by unity; the product is still Z:
For example, you have an expression for a physical value Z involving the unit feet per second ([Z]_i) and you want it in terms of the unit miles per hour ([Z]_j):
Or as an example using the metric system, you have a value of fuel economy in the unit litres per 100 kilometres and you want it in terms of the unit microlitres per metre:
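A minimal Python sketch of both substitutions, computing the coefficient c_ij in each case (1 ft/s = 3600/5280 ≈ 0.6818 mi/h, and 1 L/100 km = 10 µL/m):
    # Substituting units by their definitions to obtain the coefficient c_ij.
    # Example 1: feet per second expressed in miles per hour.
    feet_per_mile = 5280.0
    seconds_per_hour = 3600.0
    c_ft_per_s_to_mi_per_h = seconds_per_hour / feet_per_mile
    print(c_ft_per_s_to_mi_per_h)        # 0.6818... mi/h per ft/s

    # Example 2: litres per 100 kilometres expressed in microlitres per metre.
    microlitres_per_litre = 1e6
    metres_per_100_km = 1e5
    c_l_per_100km_to_ul_per_m = microlitres_per_litre / metres_per_100_km
    print(c_l_per_100km_to_ul_per_m)     # 10.0 microlitres per metre per (L/100 km)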
In cases where non-SI units are used, the numerical calculation of a formula can be done by first working out the pre-factor and then plugging in the numerical values of the given/known quantities.
For example, in the study of Bose–Einstein condensate, atomic mass m is usually given in daltons, instead of kilograms, and chemical potential μ is often given in the Boltzmann constant times nanokelvin. The condensate's healing length is given by:
For a Na condensate with chemical potential of (the Boltzmann constant times) 128 nK, the calculation of healing length (in micrometres) can be done in two steps:
Assume that m = 1 Da and μ = k_B ⋅ 1 nK; this gives
which is our pre-factor.
Now, make use of the fact that ξ ∝ 1/√(mμ). With m = 23 Da, μ = 128 k_B ⋅ nK, ξ = 15.574/√(23 ⋅ 128) μm = 0.287 μm.
This method is especially useful for programming and/or making a worksheet, where input quantities take multiple different values; for example, with the pre-factor calculated above, it is very easy to see that the healing length of Yb (m = 174 Da) with chemical potential 20.3 nK is ξ = 15.574/√(174 ⋅ 20.3) μm = 0.262 μm.
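The healing-length formula itself is not reproduced above; assuming the standard expression ξ = ħ/√(2mμ), which reproduces the quoted 15.574 μm pre-factor, the two-step calculation can be sketched in Python as:
    # Pre-factor method, assuming the standard healing length xi = hbar / sqrt(2 * m * mu).
    import math

    hbar = 1.054571817e-34      # J s
    dalton = 1.66053906660e-27  # kg
    k_B = 1.380649e-23          # J/K

    # Step 1: pre-factor for m = 1 Da and mu = k_B * 1 nK, expressed in micrometres.
    prefactor_um = hbar / math.sqrt(2.0 * dalton * k_B * 1e-9) * 1e6
    print(round(prefactor_um, 3))                     # ~15.574

    # Step 2: scale by 1/sqrt(m[Da] * mu[nK]) for the actual atom and chemical potential.
    def healing_length_um(mass_da, mu_nk):
        return prefactor_um / math.sqrt(mass_da * mu_nk)

    print(round(healing_length_um(23.0, 128.0), 3))   # Na: ~0.287 micrometres
    print(round(healing_length_um(174.0, 20.3), 3))   # Yb: ~0.262 micrometres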
There are many conversion tools. They are found in the function libraries of applications such as spreadsheets and databases, in calculators, and in macro packages and plugins for many other applications, including mathematical, scientific, and technical applications.
There are many standalone applications that offer conversions for thousands of different units. For example, the free software movement offers the command-line utility GNU units for Linux and Windows. The Unified Code for Units of Measure is also a popular option.
{
"paragraph_id": 0,
"text": "Conversion of units is the conversion between different units of measurement for the same quantity, typically through multiplicative conversion factors which change the measured quantity value without changing its effects. Unit conversion is often easier within the metric or the SI than in others, due to the regular 10-base in all units and the prefixes that increase or decrease by 3 powers of 10 at a time.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The process of conversion depends on the specific situation and the intended purpose. This may be governed by regulation, contract, technical specifications or other published standards. Engineering judgment may include such factors as:",
"title": "Overview"
},
{
"paragraph_id": 2,
"text": "Some conversions from one system of units to another need to be exact, without increasing or decreasing the precision of the first measurement. This is sometimes called soft conversion. It does not involve changing the physical configuration of the item being measured.",
"title": "Overview"
},
{
"paragraph_id": 3,
"text": "By contrast, a hard conversion or an adaptive conversion may not be exactly equivalent. It changes the measurement to convenient and workable numbers and units in the new system. It sometimes involves a slightly different configuration, or size substitution, of the item. Nominal values are sometimes allowed and used.",
"title": "Overview"
},
{
"paragraph_id": 4,
"text": "The factor-label method, also known as the unit-factor method or the unity bracket method, is a widely used technique for unit conversions using the rules of algebra.",
"title": "Factor-label method"
},
{
"paragraph_id": 5,
"text": "The factor-label method is the sequential application of conversion factors expressed as fractions and arranged so that any dimensional unit appearing in both the numerator and denominator of any of the fractions can be cancelled out until only the desired set of dimensional units is obtained. For example, 10 miles per hour can be converted to metres per second by using a sequence of conversion factors as shown below:",
"title": "Factor-label method"
},
{
"paragraph_id": 6,
"text": "Each conversion factor is chosen based on the relationship between one of the original units and one of the desired units (or some intermediary unit), before being re-arranged to create a factor that cancels out the original unit. For example, as \"mile\" is the numerator in the original fraction and 1 m i = 1609.344 m {\\displaystyle \\mathrm {1~mi} =\\mathrm {1609.344~m} } , \"mile\" will need to be the denominator in the conversion factor. Dividing both sides of the equation by 1 mile yields 1 m i 1 m i = 1609.344 m 1 m i {\\displaystyle {\\frac {\\mathrm {1~mi} }{\\mathrm {1~mi} }}={\\frac {\\mathrm {1609.344~m} }{\\mathrm {1~mi} }}} , which when simplified results in the dimensionless 1 = 1609.344 m 1 m i {\\displaystyle 1={\\frac {\\mathrm {1609.344~m} }{\\mathrm {1~mi} }}} . Because of the identity property of multiplication, multiplying any quantity (physical or not) by the dimensionless 1 does not change that quantity. Once this and the conversion factor for seconds per hour have been multiplied by the original fraction to cancel out the units mile and hour, 10 miles per hour converts to 4.4704 metres per second.",
"title": "Factor-label method"
},
{
"paragraph_id": 7,
"text": "As a more complex example, the concentration of nitrogen oxides (NOx) in the flue gas from an industrial furnace can be converted to a mass flow rate expressed in grams per hour (g/h) of NOx by using the following information as shown below:",
"title": "Factor-label method"
},
{
"paragraph_id": 8,
"text": "After canceling out any dimensional units that appear both in the numerators and denominators of the fractions in the above equation, the NOx concentration of 10 ppmv converts to mass flow rate of 24.63 grams per hour.",
"title": "Factor-label method"
},
{
"paragraph_id": 9,
"text": "The factor-label method can also be used on any mathematical equation to check whether or not the dimensional units on the left hand side of the equation are the same as the dimensional units on the right hand side of the equation. Having the same units on both sides of an equation does not ensure that the equation is correct, but having different units on the two sides (when expressed in terms of base units) of an equation implies that the equation is wrong.",
"title": "Factor-label method"
},
{
"paragraph_id": 10,
"text": "For example, check the universal gas law equation of PV = nRT, when:",
"title": "Factor-label method"
},
{
"paragraph_id": 11,
"text": "As can be seen, when the dimensional units appearing in the numerator and denominator of the equation's right hand side are cancelled out, both sides of the equation have the same dimensional units. Dimensional analysis can be used as a tool to construct equations that relate non-associated physico-chemical properties. The equations may reveal hitherto unknown or overlooked properties of matter, in the form of left-over dimensions – dimensional adjusters – that can then be assigned physical significance. It is important to point out that such 'mathematical manipulation' is neither without prior precedent, nor without considerable scientific significance. Indeed, the Planck constant, a fundamental physical constant, was 'discovered' as a purely mathematical abstraction or representation that built on the Rayleigh–Jeans law for preventing the ultraviolet catastrophe. It was assigned and ascended to its quantum physical significance either in tandem or post mathematical dimensional adjustment – not earlier.",
"title": "Factor-label method"
},
{
"paragraph_id": 12,
"text": "The factor-label method can convert only unit quantities for which the units are in a linear relationship intersecting at 0. (Ratio scale in Stevens's typology) Most units fit this paradigm. An example for which it cannot be used is the conversion between degrees Celsius and kelvins (or degrees Fahrenheit). Between degrees Celsius and kelvins, there is a constant difference rather than a constant ratio, while between degrees Celsius and degrees Fahrenheit there is neither a constant difference nor a constant ratio. There is, however, an affine transform ( x ↦ a x + b {\\displaystyle x\\mapsto ax+b} , rather than a linear transform x ↦ a x {\\displaystyle x\\mapsto ax} ) between them.",
"title": "Factor-label method"
},
{
"paragraph_id": 13,
"text": "For example, the freezing point of water is 0 °C and 32 °F, and a 5 °C change is the same as a 9 °F change. Thus, to convert from units of Fahrenheit to units of Celsius, one subtracts 32 °F (the offset from the point of reference), divides by 9 °F and multiplies by 5 °C (scales by the ratio of units), and adds 0 °C (the offset from the point of reference). Reversing this yields the formula for obtaining a quantity in units of Celsius from units of Fahrenheit; one could have started with the equivalence between 100 °C and 212 °F, though this would yield the same formula at the end.",
"title": "Factor-label method"
},
{
"paragraph_id": 14,
"text": "Hence, to convert the numerical quantity value of a temperature T[F] in degrees Fahrenheit to a numerical quantity value T[C] in degrees Celsius, this formula may be used:",
"title": "Factor-label method"
},
{
"paragraph_id": 15,
"text": "To convert T[C] in degrees Celsius to T[F] in degrees Fahrenheit, this formula may be used:",
"title": "Factor-label method"
},
{
"paragraph_id": 16,
"text": "Starting with:",
"title": "Factor-label method"
},
{
"paragraph_id": 17,
"text": "replace the original unit [ Z ] i {\\displaystyle [Z]_{i}} with its meaning in terms of the desired unit [ Z ] j {\\displaystyle [Z]_{j}} , e.g. if [ Z ] i = c i j × [ Z ] j {\\displaystyle [Z]_{i}=c_{ij}\\times [Z]_{j}} , then:",
"title": "Factor-label method"
},
{
"paragraph_id": 18,
"text": "Now n i {\\displaystyle n_{i}} and c i j {\\displaystyle c_{ij}} are both numerical values, so just calculate their product.",
"title": "Factor-label method"
},
{
"paragraph_id": 19,
"text": "Or, which is just mathematically the same thing, multiply Z by unity, the product is still Z:",
"title": "Factor-label method"
},
{
"paragraph_id": 20,
"text": "For example, you have an expression for a physical value Z involving the unit feet per second ( [ Z ] i {\\displaystyle [Z]_{i}} ) and you want it in terms of the unit miles per hour ( [ Z ] j {\\displaystyle [Z]_{j}} ):",
"title": "Factor-label method"
},
{
"paragraph_id": 21,
"text": "Or as an example using the metric system, you have a value of fuel economy in the unit litres per 100 kilometres and you want it in terms of the unit microlitres per metre:",
"title": "Factor-label method"
},
{
"paragraph_id": 22,
"text": "In the cases where non-SI units are used, the numerical calculation of a formula can be done by first working out the pre-factor, and then plug in the numerical values of the given/known quantities.",
"title": "Calculation involving non-SI Units"
},
{
"paragraph_id": 23,
"text": "For example, in the study of Bose–Einstein condensate, atomic mass m is usually given in daltons, instead of kilograms, and chemical potential μ is often given in the Boltzmann constant times nanokelvin. The condensate's healing length is given by:",
"title": "Calculation involving non-SI Units"
},
{
"paragraph_id": 24,
"text": "For a Na condensate with chemical potential of (the Boltzmann constant times) 128 nK, the calculation of healing length (in micrometres) can be done in two steps:",
"title": "Calculation involving non-SI Units"
},
{
"paragraph_id": 25,
"text": "Assume that m = 1 Da , μ = k B ⋅ 1 nK , {\\displaystyle m=1\\,{\\text{Da}},\\mu =k_{\\text{B}}\\cdot 1\\,{\\text{nK}}\\,,} this gives",
"title": "Calculation involving non-SI Units"
},
{
"paragraph_id": 26,
"text": "which is our pre-factor.",
"title": "Calculation involving non-SI Units"
},
{
"paragraph_id": 27,
"text": "Now, make use of the fact that ξ ∝ 1 m μ {\\displaystyle \\xi \\propto {\\frac {1}{\\sqrt {m\\mu }}}} . With m = 23 Da , μ = 128 k B ⋅ nK {\\displaystyle m=23\\,{\\text{Da}},\\mu =128\\,k_{\\text{B}}\\cdot {\\text{nK}}} , ξ = 15.574 23 ⋅ 128 μm = 0.287 μm {\\displaystyle \\xi ={\\frac {15.574}{\\sqrt {23\\cdot 128}}}\\,{\\text{μm}}=0.287\\,{\\text{μm}}} .",
"title": "Calculation involving non-SI Units"
},
{
"paragraph_id": 28,
"text": "This method is especially useful for programming and/or making a worksheet, where input quantities are taking multiple different values; For example, with the pre-factor calculated above, it is very easy to see that the healing length of Yb with chemical potential 20.3 nK is ξ = 15.574 174 ⋅ 20.3 μm = 0.262 μm {\\displaystyle \\xi ={\\frac {15.574}{\\sqrt {174\\cdot 20.3}}}\\,{\\text{μm}}=0.262\\,{\\text{μm}}} .",
"title": "Calculation involving non-SI Units"
},
{
"paragraph_id": 29,
"text": "There are many conversion tools. They are found in the function libraries of applications such as spreadsheets databases, in calculators, and in macro packages and plugins for many other applications such as the mathematical, scientific and technical applications.",
"title": "Software tools"
},
{
"paragraph_id": 30,
"text": "There are many standalone applications that offer the thousands of the various units with conversions. For example, the free software movement offers a command line utility GNU units for Linux and Windows. The Unified Code for Units of Measure is also a popular option.",
"title": "Software tools"
}
] | Conversion of units is the conversion between different units of measurement for the same quantity, typically through multiplicative conversion factors which change the measured quantity value without changing its effects. Unit conversion is often easier within the metric or the SI than in others, due to the regular 10-base in all units and the prefixes that increase or decrease by 3 powers of 10 at a time. | 2001-10-02T20:34:43Z | 2023-12-22T17:13:02Z | [
"Template:Reflist",
"Template:Cite book",
"Template:In lang",
"Template:Systems of measurement",
"Template:Extlinks",
"Template:Wikibooks",
"Template:Curlie",
"Template:Nowrap",
"Template:Math",
"Template:Div col",
"Template:Div col end",
"Template:Wikivoyage",
"Template:Small",
"Template:Short description",
"Template:Clarify",
"Template:Further",
"Template:Cite web",
"Template:Ordered list",
"Template:Main listing",
"Template:UK SI",
"Template:SI units"
] | https://en.wikipedia.org/wiki/Conversion_of_units |
5,391 | City | A city is a human settlement of a notable size. The term "city" has different meanings around the world and in some places the settlement can be very small. Even where the term is limited to larger settlements, there is no fixed definition of the lower boundary for their size. In a more narrow sense, a city can be defined as a permanent and densely settled place with administratively defined boundaries whose members work primarily on non-agricultural tasks. Cities generally have extensive systems for housing, transportation, sanitation, utilities, land use, production of goods, and communication. Their density facilitates interaction between people, government organizations, and businesses, sometimes benefiting different parties in the process, such as improving the efficiency of goods and service distribution.
Historically, city dwellers have been a small proportion of humanity overall, but following two centuries of unprecedented and rapid urbanization, more than half of the world population now lives in cities, which has had profound consequences for global sustainability. Present-day cities usually form the core of larger metropolitan areas and urban areas—creating numerous commuters traveling toward city centres for employment, entertainment, and education. However, in a world of intensifying globalization, all cities are to varying degrees also connected globally beyond these regions. This increased influence means that cities also have significant influences on global issues, such as sustainable development, climate change, and global health. Because of these major influences on global issues, the international community has prioritized investment in sustainable cities through Sustainable Development Goal 11. Due to the efficiency of transportation and the smaller land consumption, dense cities hold the potential to have a smaller ecological footprint per inhabitant than more sparsely populated areas. Therefore, compact cities are often referred to as a crucial element in fighting climate change. However, this concentration can also have significant negative consequences, such as forming urban heat islands, concentrating pollution, and stressing water supplies and other resources.
Other important traits of cities besides population include the capital status and relative continued occupation of the city. For example, country capitals such as Athens, Beijing, Jakarta, Kuala Lumpur, London, Manila, Mexico City, Moscow, Nairobi, New Delhi, Paris, Rome, Seoul, Singapore, Tokyo, and Washington, D.C. reflect the identity and apex of their respective nations. Some historic capitals, such as Kyoto, Yogyakarta, and Xi'an, maintain their reflection of cultural identity even without modern capital status. Religious holy sites offer another example of capital status within a religion; examples include Jerusalem, Mecca, Varanasi, Ayodhya, Haridwar, and Prayagraj.
A city can be distinguished from other human settlements by its relatively great size, but also by its functions and its special symbolic status, which may be conferred by a central authority. The term can also refer either to the physical streets and buildings of the city or to the collection of people who dwell there and can be used in a general sense to mean urban rather than rural territory.
National censuses use a variety of definitions – invoking factors such as population, population density, number of dwellings, economic function, and infrastructure – to classify populations as urban. Typical working definitions for small-city populations start at around 100,000 people. Common population definitions for an urban area (city or town) range between 1,500 and 50,000 people, with most U.S. states using a minimum between 1,500 and 5,000 inhabitants. Some jurisdictions set no such minima. In the United Kingdom, city status is awarded by the Crown and then remains permanent. (Historically, the qualifying factor was the presence of a cathedral, resulting in some very small cities such as Wells, with a population of 12,000 as of 2018, and St Davids, with a population of 1,841 as of 2011.) According to the "functional definition", a city is not distinguished by size alone, but also by the role it plays within a larger political context. Cities serve as administrative, commercial, religious, and cultural hubs for their larger surrounding areas.
The presence of a literate elite is often associated with cities because of the cultural diversities present in a city. A typical city has professional administrators, regulations, and some form of taxation (food and other necessities or means to trade for them) to support the government workers. (This arrangement contrasts with the more typically horizontal relationships in a tribe or village accomplishing common goals through informal agreements between neighbors, or the leadership of a chief.) The governments may be based on heredity, religion, military power, work systems such as canal-building, food distribution, land-ownership, agriculture, commerce, manufacturing, finance, or a combination of these. Societies that live in cities are often called civilizations.
The degree of urbanization is a modern metric to help define what comprises a city: "a population of at least 50,000 inhabitants in contiguous dense grid cells (>1,500 inhabitants per square kilometer)". This metric was "devised over years by the European Commission, OECD, World Bank and others, and endorsed in March [2021] by the United Nations ... largely for the purpose of international statistical comparison".
The word city and the related civilization come from the Latin root civitas, originally meaning 'citizenship' or 'community member' and eventually coming to correspond with urbs, meaning 'city' in a more physical sense. The Roman civitas was closely linked with the Greek polis—another common root appearing in English words such as metropolis.
In toponymic terminology, names of individual cities and towns are called astionyms (from Ancient Greek ἄστυ 'city or town' and ὄνομα 'name').
Urban geography deals both with cities in their larger context and with their internal structure. Cities are estimated to cover about 3% of the land surface of the Earth.
Town siting has varied through history according to natural, technological, economic, and military contexts. Access to water has long been a major factor in city placement and growth, and despite exceptions enabled by the advent of rail transport in the nineteenth century, through the present most of the world's urban population lives near the coast or on a river.
Urban areas as a rule cannot produce their own food and therefore must develop some relationship with a hinterland that sustains them. Only in special cases such as mining towns which play a vital role in long-distance trade, are cities disconnected from the countryside which feeds them. Thus, centrality within a productive region influences siting, as economic forces would, in theory, favor the creation of marketplaces in optimal mutually reachable locations.
The vast majority of cities have a central area containing buildings with special economic, political, and religious significance. Archaeologists refer to this area by the Greek term temenos or if fortified as a citadel. These spaces historically reflect and amplify the city's centrality and importance to its wider sphere of influence. Today cities have a city center or downtown, sometimes coincident with a central business district.
Cities typically have public spaces where anyone can go. These include privately owned spaces open to the public as well as forms of public land such as public domain and the commons. Western philosophy since the time of the Greek agora has considered physical public space as the substrate of the symbolic public sphere. Public art adorns (or disfigures) public spaces. Parks and other natural sites within cities provide residents with relief from the hardness and regularity of typical built environments. Urban green spaces are another component of public space that provides the benefit of mitigating the urban heat island effect, especially in cities that are in warmer climates. These spaces prevent carbon imbalances, extreme habitat losses, electricity and water consumption, and human health risks.
The urban structure generally follows one or more basic patterns: geomorphic, radial, concentric, rectilinear, and curvilinear. The physical environment generally constrains the form in which a city is built. If located on a mountainside, urban structures may rely on terraces and winding roads. It may be adapted to its means of subsistence (e.g. agriculture or fishing). And it may be set up for optimal defense given the surrounding landscape. Beyond these "geomorphic" features, cities can develop internal patterns, due to natural growth or to city planning.
In a radial structure, main roads converge on a central point. This form could evolve from successive growth over a long time, with concentric traces of town walls and citadels marking older city boundaries. In more recent history, such forms were supplemented by ring roads moving traffic around the outskirts of a town. Dutch cities such as Amsterdam and Haarlem are structured as a central square surrounded by concentric canals marking every expansion. In cities such as Moscow, this pattern is still clearly visible.
A system of rectilinear city streets and land plots, known as the grid plan, has been used for millennia in Asia, Europe, and the Americas. The Indus Valley civilization built Mohenjo-Daro, Harappa, and other cities on a grid pattern, using ancient principles described by Kautilya, and aligned with the compass points. The ancient Greek city of Priene exemplifies a grid plan with specialized districts used across the Hellenistic Mediterranean.
The urban-type settlement extends far beyond the traditional boundaries of the city proper in a form of development sometimes described critically as urban sprawl. Decentralization and dispersal of city functions (commercial, industrial, residential, cultural, political) has transformed the very meaning of the term and has challenged geographers seeking to classify territories according to an urban-rural binary.
Metropolitan areas include suburbs and exurbs organized around the needs of commuters, and sometimes edge cities characterized by a degree of economic and political independence. (In the US these are grouped into metropolitan statistical areas for purposes of demography and marketing.) Some cities are now part of a continuous urban landscape called urban agglomeration, conurbation, or megalopolis (exemplified by the BosWash corridor of the Northeastern United States.)
The emergence of cities from proto-urban settlements, such as Çatalhöyük, is a non-linear development that demonstrates the varied experiences of early urbanization.
The cities of Jericho, Aleppo, Faiyum, Yerevan, Athens, Matera, Damascus, and Argos are among those laying claim to the longest continual inhabitation.
Cities, characterized by population density, symbolic function, and urban planning, have existed for thousands of years. In the conventional view, civilization and the city were both followed by the development of agriculture, which enabled the production of surplus food and thus a social division of labor (with concomitant social stratification) and trade. Early cities often featured granaries, sometimes within a temple. A minority viewpoint considers that cities may have arisen without agriculture, due to alternative means of subsistence (fishing), to use as communal seasonal shelters, to their value as bases for defensive and offensive military organization, or to their inherent economic function. Cities played a crucial role in the establishment of political power over an area, and ancient leaders such as Alexander the Great founded and created them with zeal.
Jericho and Çatalhöyük, dated to the eighth millennium BC, are among the earliest proto-cities known to archaeologists. However, the Mesopotamian city of Uruk from the mid-fourth millennium BC (ancient Iraq) is considered by most archaeologists to be the first true city, innovating many characteristics for cities to follow, with its name attributed to the Uruk period.
In the fourth and third millennium BC, complex civilizations flourished in the river valleys of Mesopotamia, India, China, and Egypt. Excavations in these areas have found the ruins of cities geared variously towards trade, politics, or religion. Some had large, dense populations, but others carried out urban activities in the realms of politics or religion without having large associated populations.
Among the early Old World cities, Mohenjo-Daro of the Indus Valley civilization in present-day Pakistan, existing from about 2600 BC, was one of the largest, with a population of 50,000 or more and a sophisticated sanitation system. China's planned cities were constructed according to sacred principles to act as celestial microcosms.
The Ancient Egyptian cities known physically by archaeologists are not extensive. They include (known by their Arab names) El Lahun, a workers' town associated with the pyramid of Senusret II, and the religious city Amarna built by Akhenaten and abandoned. These sites appear planned in a highly regimented and stratified fashion, with a minimalistic grid of rooms for the workers and increasingly more elaborate housing available for higher classes.
In Mesopotamia, the civilization of Sumer, followed by Assyria and Babylon, gave rise to numerous cities, governed by kings and fostered multiple languages written in cuneiform. The Phoenician trading empire, flourishing around the turn of the first millennium BC, encompassed numerous cities extending from Tyre, Cydon, and Byblos to Carthage and Cádiz.
In the following centuries, independent city-states of Greece, especially Athens, developed the polis, an association of male landowning citizens who collectively constituted the city. The agora, meaning "gathering place" or "assembly", was the center of the athletic, artistic, spiritual, and political life of the polis. Rome was the first city that surpassed one million inhabitants. Under the authority of its empire, Rome transformed and founded many cities (Colonia), and with them brought its principles of urban architecture, design, and society.
In the ancient Americas, early urban traditions developed in the Andes and Mesoamerica. In the Andes, the first urban centers developed in the Norte Chico civilization, Chavin and Moche cultures, followed by major cities in the Huari, Chimu, and Inca cultures. The Norte Chico civilization included as many as 30 major population centers in what is now the Norte Chico region of north-central coastal Peru. It is the oldest known civilization in the Americas, flourishing between the 30th and 18th centuries BC. Mesoamerica saw the rise of early urbanism in several cultural regions, beginning with the Olmec and spreading to the Preclassic Maya, the Zapotec of Oaxaca, and Teotihuacan in central Mexico. Later cultures such as the Aztec, Andean civilizations, Mayan, Mississippians, and Pueblo peoples drew on these earlier urban traditions. Many of their ancient cities continue to be inhabited: major metropolitan cities such as Mexico City stand in the same location as Tenochtitlan; ancient, continuously inhabited Pueblos lie near modern urban areas in New Mexico, such as Acoma Pueblo near the Albuquerque metropolitan area and Taos Pueblo near Taos; and others, like Lima, are located near ancient Peruvian sites such as Pachacamac.
Jenné-Jeno, located in present-day Mali and dating to the third century BC, lacked monumental architecture and a distinctive elite social class—but nevertheless had specialized production and relations with a hinterland. Pre-Arabic trade contacts probably existed between Jenné-Jeno and North Africa. Other early urban centers in sub-Saharan Africa, dated to around 500 AD, include Awdaghust, Kumbi-Saleh the ancient capital of Ghana, and Maranda a center located on a trade route between Egypt and Gao.
In the remnants of the Roman Empire, cities of late antiquity gained independence but soon lost population and importance. The locus of power in the West shifted to Constantinople and to the ascendant Islamic civilization with its major cities Baghdad, Cairo, and Córdoba. From the 9th through the end of the 12th century, Constantinople, the capital of the Eastern Roman Empire, was the largest and wealthiest city in Europe, with a population approaching 1 million. The Ottoman Empire gradually gained control over many cities in the Mediterranean area, including Constantinople in 1453.
In the Holy Roman Empire, beginning in the 12th century, free imperial cities such as Nuremberg, Strasbourg, Frankfurt, Basel, Zurich, and Nijmegen became a privileged elite among towns having won self-governance from their local lord or having been granted self-governance by the emperor and being placed under his immediate protection. By 1480, these cities, as far as still part of the empire, became part of the Imperial Estates governing the empire with the emperor through the Imperial Diet.
By the 13th and 14th centuries, some cities become powerful states, taking surrounding areas under their control or establishing extensive maritime empires. In Italy, medieval communes developed into city-states including the Republic of Venice and the Republic of Genoa. In Northern Europe, cities including Lübeck and Bruges formed the Hanseatic League for collective defense and commerce. Their power was later challenged and eclipsed by the Dutch commercial cities of Ghent, Ypres, and Amsterdam. Similar phenomena existed elsewhere, as in the case of Sakai, which enjoyed considerable autonomy in late medieval Japan.
In the first millennium AD, the Khmer capital of Angkor in Cambodia grew into the most extensive preindustrial settlement in the world by area, covering over 1,000 km² and possibly supporting up to one million people.
In the West, nation-states became the dominant unit of political organization following the Peace of Westphalia in the seventeenth century. Western Europe's larger capitals (London and Paris) benefited from the growth of commerce following the emergence of an Atlantic trade. However, most towns remained small.
During the Spanish colonization of the Americas, the old Roman city concept was extensively used. Cities were founded in the middle of the newly conquered territories and were bound to several laws regarding administration, finances, and urbanism.
The growth of the modern industry from the late 18th century onward led to massive urbanization and the rise of new great cities, first in Europe and then in other regions, as new opportunities brought huge numbers of migrants from rural communities into urban areas. England led the way as London became the capital of a world empire and cities across the country grew in locations strategic for manufacturing. In the United States from 1860 to 1910, the introduction of railroads reduced transportation costs, and large manufacturing centers began to emerge, fueling migration from rural to city areas.
Some industrialized cities were confronted with health challenges associated with overcrowding, occupational hazards of industry, contaminated water and air, poor sanitation, and communicable diseases such as typhoid and cholera. Factories and slums emerged as regular features of the urban landscape.
In the second half of the 20th century, deindustrialization (or "economic restructuring") in the West led to poverty, homelessness, and urban decay in formerly prosperous cities. America's "Steel Belt" became a "Rust Belt" and cities such as Detroit, Michigan, and Gary, Indiana began to shrink, contrary to the global trend of massive urban expansion. Such cities have shifted with varying success into the service economy and public-private partnerships, with concomitant gentrification, uneven revitalization efforts, and selective cultural development. Under the Great Leap Forward and subsequent five-year plans continuing today, China has undergone concomitant urbanization and industrialization and become the world's leading manufacturer.
Amidst these economic changes, high technology and instantaneous telecommunication enable select cities to become centers of the knowledge economy. A new smart city paradigm, supported by institutions such as the RAND Corporation and IBM, is bringing computerized surveillance, data analysis, and governance to bear on cities and city dwellers. Some companies are building brand-new master-planned cities from scratch on greenfield sites.
Urbanization is the process of migration from rural to urban areas, driven by various political, economic, and cultural factors. Until the 18th century, an equilibrium existed between the rural agricultural population and towns featuring markets and small-scale manufacturing. With the agricultural and industrial revolutions urban population began its unprecedented growth, both through migration and demographic expansion. In England, the proportion of the population living in cities jumped from 17% in 1801 to 72% in 1891. In 1900, 15% of the world's population lived in cities. The cultural appeal of cities also plays a role in attracting residents.
Urbanization rapidly spread across Europe and the Americas and since the 1950s has taken hold in Asia and Africa as well. The Population Division of the United Nations Department of Economic and Social Affairs reported in 2014 that for the first time, more than half of the world population lives in cities.
Latin America is the most urban continent, with four-fifths of its population living in cities, including one-fifth of the population said to live in shantytowns (favelas, poblaciones callampas, etc.). Batam, Indonesia, Mogadishu, Somalia, Xiamen, China, and Niamey, Niger, are considered among the world's fastest-growing cities, with annual growth rates of 5–8%. In general, the more developed countries of the "Global North" remain more urbanized than the less developed countries of the "Global South"—but the difference continues to shrink because urbanization is happening faster in the latter group. Asia is home to by far the greatest absolute number of city-dwellers: over two billion and counting. The UN predicts an additional 2.5 billion city dwellers (and 300 million fewer country dwellers) worldwide by 2050, with 90% of urban population expansion occurring in Asia and Africa.
Megacities, cities with populations in the multi-millions, have proliferated into the dozens, arising especially in Asia, Africa, and Latin America. Economic globalization fuels the growth of these cities, as new torrents of foreign capital arrange for rapid industrialization, as well as the relocation of major businesses from Europe and North America, attracting immigrants from near and far. A deep gulf divides the rich and poor in these cities, which usually contain a super-wealthy elite living in gated communities and large masses of people living in substandard housing with inadequate infrastructure and otherwise poor conditions.
Cities around the world have expanded physically as they grow in population, with increases in their surface extent, with the creation of high-rise buildings for residential and commercial use, and with development underground.
Urbanization can create rapid demand for water resources management, as formerly good sources of freshwater become overused and polluted, and the volume of sewage begins to exceed manageable levels.
The local government of cities takes different forms including prominently the municipality (especially in England, in the United States, India, and other British colonies; legally, the municipal corporation; municipio in Spain and Portugal, and, along with municipalidad, in most former parts of the Spanish and Portuguese empires) and the commune (in France and Chile; or comune in Italy).
The chief official of the city has the title of mayor. Whatever their true degree of political authority, the mayor typically acts as the figurehead or personification of their city.
Legal conflicts and issues arise more frequently in cities than elsewhere due to the bare fact of their greater density. Modern city governments thoroughly regulate everyday life in many dimensions, including public and personal health, transport, burial, resource use and extraction, recreation, and the nature and use of buildings. Technologies, techniques, and laws governing these areas—developed in cities—have become ubiquitous in many areas. Municipal officials may be appointed from a higher level of government or elected locally.
Cities typically provide municipal services such as education, through school systems; policing, through police departments; and firefighting, through fire departments; as well as the city's basic infrastructure. These are provided more or less routinely, in a more or less equal fashion. Responsibility for administration usually falls on the city government, but some services may be operated by a higher level of government, while others may be privately run. Armies may assume responsibility for policing cities in states of domestic turmoil such as America's King assassination riots of 1968.
The traditional basis for municipal finance is local property tax levied on real estate within the city. Local government can also collect revenue for services, or by leasing land that it owns. However, financing municipal services, as well as urban renewal and other development projects, is a perennial problem, which cities address through appeals to higher governments, arrangements with the private sector, and techniques such as privatization (selling services into the private sector), corporatization (formation of quasi-private municipally-owned corporations), and financialization (packaging city assets into tradeable financial public contracts and other related rights). This situation has become acute in deindustrialized cities and in cases where businesses and wealthier citizens have moved outside of city limits and therefore beyond the reach of taxation. Cities in search of ready cash increasingly resort to the municipal bond, essentially a loan with interest and a repayment date. City governments have also begun to use tax increment financing, in which a development project is financed by loans based on future tax revenues which it is expected to yield. Under these circumstances, creditors and consequently city governments place a high importance on city credit ratings.
Governance includes government but refers to a wider domain of social control functions implemented by many actors including non-governmental organizations. The impact of globalization and the role of multinational corporations in local governments worldwide have led to a shift in perspective on urban governance, away from the "urban regime theory" in which a coalition of local interests functionally govern, toward a theory of outside economic control, widely associated in academics with the philosophy of neoliberalism. In the neoliberal model of governance, public utilities are privatized, the industry is deregulated, and corporations gain the status of governing actors—as indicated by the power they wield in public-private partnerships and over business improvement districts, and in the expectation of self-regulation through corporate social responsibility. The biggest investors and real estate developers act as the city's de facto urban planners.
The related concept of good governance places more emphasis on the state, with the purpose of assessing urban governments for their suitability for development assistance. The concepts of governance and good governance are especially invoked in emergent megacities, where international organizations consider existing governments inadequate for their large populations.
Urban planning, the application of forethought to city design, involves optimizing land use, transportation, utilities, and other basic systems, in order to achieve certain objectives. Urban planners and scholars have proposed overlapping theories as ideals for how plans should be formed. Planning tools, beyond the original design of the city itself, include public capital investment in infrastructure and land-use controls such as zoning. The continuous process of comprehensive planning involves identifying general objectives as well as collecting data to evaluate progress and inform future decisions.
Government is legally the final authority on planning but in practice, the process involves both public and private elements. The legal principle of eminent domain is used by the government to divest citizens of their property in cases where its use is required for a project. Planning often involves tradeoffs—decisions in which some stand to gain and some to lose—and thus is closely connected to the prevailing political situation.
The history of urban planning dates to some of the earliest known cities, especially in the Indus Valley and Mesoamerican civilizations, which built their cities on grids and apparently zoned different areas for different purposes. The effects of planning, ubiquitous in today's world, can be seen most clearly in the layout of planned communities, fully designed prior to construction, often with consideration for interlocking physical, economic, and cultural systems.
Urban society is typically stratified. Spatially, cities are formally or informally segregated along ethnic, economic, and racial lines. People living relatively close together may live, work, and play in separate areas, and associate with different people, forming ethnic or lifestyle enclaves or, in areas of concentrated poverty, ghettoes. While in the US and elsewhere poverty became associated with the inner city, in France it has become associated with the banlieues, areas of urban development that surround the city proper. Meanwhile, across Europe and North America, the racially white majority is empirically the most segregated group. Suburbs in the West, and, increasingly, gated communities and other forms of "privatopia" around the world, allow local elites to self-segregate into secure and exclusive neighborhoods.
Landless urban workers, contrasted with peasants and known as the proletariat, form a growing stratum of society in the age of urbanization. In Marxist doctrine, the proletariat will inevitably revolt against the bourgeoisie as their ranks swell with disenfranchised and disaffected people lacking all stake in the status quo. The global urban proletariat of today, however, generally lacks the status of factory workers which in the nineteenth century provided access to the means of production.
Historically, cities rely on rural areas for intensive farming to yield surplus crops, in exchange for which they provide money, political administration, manufactured goods, and culture. Urban economics tends to analyze larger agglomerations, stretching beyond city limits, in order to reach a more complete understanding of the local labor market.
As hubs of trade, cities have long been home to retail commerce and consumption through the interface of shopping. In the 20th century, department stores using new techniques of advertising, public relations, decoration, and design, transformed urban shopping areas into fantasy worlds encouraging self-expression and escape through consumerism.
In general, the density of cities expedites commerce and facilitates knowledge spillovers, helping people and firms exchange information and generate new ideas. A thicker labor market allows for better skill matching between firms and individuals. Population density also enables the sharing of common infrastructure and production facilities; however, in very dense cities, increased crowding and waiting times may lead to some negative effects.
Although manufacturing fueled the growth of cities, many now rely on a tertiary or service economy. The services in question range from tourism, hospitality, entertainment, and housekeeping to grey-collar work in law, financial consulting, and administration.
According to a scaling model of cities developed by the physicist Geoffrey West, each doubling of a city's population is generally associated with an increase of roughly 15% in salaries per capita.
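Taken at face value, a fixed percentage gain per doubling describes a power law in population. The following sketch is purely illustrative and assumes only the 15%-per-doubling rule stated above; the reference population and reference salary used to calibrate it are hypothetical values, not figures from West's work.

```python
import math

# Exponent chosen so that doubling the population multiplies the per-capita
# figure by exactly 1.15 (i.e. a 15% increase per doubling).
ALPHA = math.log(1.15, 2)  # ~0.2016

def salary_per_capita(population, ref_population=100_000, ref_salary=30_000.0):
    """Per-capita salary implied by the 15%-per-doubling rule.

    ref_population and ref_salary are hypothetical calibration values."""
    return ref_salary * (population / ref_population) ** ALPHA

for pop in (100_000, 200_000, 400_000, 1_600_000):
    print(f"{pop:>9,} inhabitants -> {salary_per_capita(pop):,.0f} (arbitrary units)")
# Each doubling multiplies the figure by 1.15, so a 16-fold larger city
# (four doublings) shows roughly 1.15**4, or about 1.75 times, the per-capita salary.
```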
Cities are typically hubs for education and the arts, supporting universities, museums, temples, and other cultural institutions. They feature impressive displays of architecture ranging from small to enormous and ornate to brutal; skyscrapers, providing thousands of offices or homes within a small footprint, and visible from miles away, have become iconic urban features. Cultural elites tend to live in cities, bound together by shared cultural capital, and themselves play some role in governance. By virtue of their status as centers of culture and literacy, cities can be described as the locus of civilization, human history, and social change.
Density makes for effective mass communication and transmission of news, through heralds, printed proclamations, newspapers, and digital media. These communication networks, though still using cities as hubs, penetrate extensively into all populated areas. In the age of rapid communication and transportation, commentators have described urban culture as nearly ubiquitous or as no longer meaningful.
Today, a city's promotion of its cultural activities dovetails with place branding and city marketing, public diplomacy techniques used to inform development strategy; to attract businesses, investors, residents, and tourists; and to create a shared identity and sense of place within the metropolitan area. Inscriptions, plaques, and monuments on public display transmit a historical context for urban places. Some cities, such as Jerusalem, Mecca, and Rome, have indelible religious status and for hundreds of years have attracted pilgrims. Patriotic tourists visit Agra to see the Taj Mahal, or New York City to visit the World Trade Center. Elvis lovers visit Memphis to pay their respects at Graceland. Place brands (which include place satisfaction and place loyalty) have great economic value (comparable to the value of commodity brands) because of their influence on the decision-making process of people thinking about doing business in—"purchasing" (the brand of)—a city.
Bread and circuses, among other forms of cultural appeal, attract and entertain the masses. Sports also play a major role in city branding and local identity formation. Cities go to considerable lengths in competing to host the Olympic Games, which bring global attention and tourism. Paris, a city known for its cultural history, hosted the Summer Olympics in 2024.
Cities play a crucial strategic role in warfare due to their economic, demographic, symbolic, and political centrality. For the same reasons, they are targets in asymmetric warfare. Many cities throughout history were founded under military auspices, a great many have incorporated fortifications, and military principles continue to influence urban design. Indeed, war may have served as the social rationale and economic basis for the very earliest cities.
Powers engaged in geopolitical conflict have established fortified settlements as part of military strategies, as in the case of garrison towns, America's Strategic Hamlet Program during the Vietnam War, and Israeli settlements in Palestine. While occupying the Philippines, the US Army ordered local people to concentrate in cities and towns, in order to isolate committed insurgents and battle freely against them in the countryside.
During World War II, national governments on occasion declared certain cities open, effectively surrendering them to an advancing enemy in order to avoid damage and bloodshed. Urban warfare proved decisive, however, in the Battle of Stalingrad, where Soviet forces repulsed German occupiers, with extreme casualties and destruction. In an era of low-intensity conflict and rapid urbanization, cities have become sites of long-term conflict waged both by foreign occupiers and by local governments against insurgency. Such warfare, known as counterinsurgency, involves techniques of surveillance and psychological warfare as well as close combat, and functionally extends modern urban crime prevention, which already uses concepts such as defensible space.
Although capture is the more common objective, warfare has in some cases spelled complete destruction for a city. Mesopotamian tablets and ruins attest to such destruction, as does the Latin motto Carthago delenda est. Since the atomic bombings of Hiroshima and Nagasaki and throughout the Cold War, nuclear strategists have continued to contemplate the use of "counter-value" targeting: crippling an enemy by annihilating its valuable cities, rather than aiming primarily at its military forces.
Because of their high density and phenomena such as the urban heat island effect, changes in weather due to climate change are likely to greatly affect cities, exacerbating existing problems such as air pollution, water scarcity, and heat illness in metropolitan areas. Studies have shown that if body temperature exceeds 39 °C for a period of time, serious heat stroke may occur. Other extreme weather conditions intensified by climate change include floods, severe snowstorms, ice storms, heat waves, droughts, and hurricanes, which are often deadly and destructive. Studies have shown that heat waves have become roughly three times more likely and more intense since the 1960s. According to the World Health Organization, heatwaves claimed the lives of more than 166,000 people between 1998 and 2017. Moreover, because most cities have been built on rivers or coastal areas, they are frequently vulnerable to the subsequent effects of sea level rise, which cause flooding and erosion, and those effects are deeply connected with other urban environmental problems such as subsidence and aquifer depletion.
A report by the C40 Cities Climate Leadership Group described consumption-based emissions as having significantly more impact than production-based emissions within cities, estimating that 85% of the emissions associated with goods within a city are generated outside of that city. Climate change adaptation and mitigation investments in cities will be important in reducing the impacts of some of the largest contributors of greenhouse gas emissions: for example, increased density allows for the redistribution of land use for agriculture and reforestation, improved transportation efficiency, and greener construction (important largely because of cement's outsized role in climate change and because of improvements in sustainable construction practices and weatherization). More recently, increasing urbanization has also been proposed as a factor that can reduce the global rate of carbon emissions, primarily because urbanization tends to concentrate the technical capacity that can help drive sustainability. Lists of high-impact climate change solutions tend to include city-focused measures; Project Drawdown, for example, recommends several major urban investments, including improved bicycle infrastructure, building retrofitting, district heating, public transit, and walkable cities. Many cities are also attempting to reduce the urban heat island effect by painting roads with reflective white coatings; in Phoenix, coated roads measured roughly 12 °F cooler than uncoated roads.
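For clarity, a consumption-based inventory differs from a production-based (territorial) one by swapping the emissions embodied in exports for those embodied in imports. The sketch below illustrates that bookkeeping with entirely hypothetical figures; it is not drawn from the C40 report's data or methodology.

```python
def consumption_based(territorial, embodied_in_imports, embodied_in_exports):
    """Consumption-based emissions: emissions occurring inside the city boundary,
    minus those embodied in exported goods and services, plus those embodied in
    imports consumed by residents."""
    return territorial - embodied_in_exports + embodied_in_imports

# Hypothetical figures in million tonnes CO2-equivalent per year.
territorial = 20.0           # emitted within the city boundary
embodied_in_imports = 45.0   # emitted elsewhere to produce what residents consume
embodied_in_exports = 5.0    # emitted locally for goods consumed elsewhere

total = consumption_based(territorial, embodied_in_imports, embodied_in_exports)
share_outside = embodied_in_imports / total
print(f"consumption-based total: {total:.0f} MtCO2e")            # 60 MtCO2e
print(f"share generated outside the city: {share_outside:.0%}")  # 75%
```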
Urban infrastructure involves various physical networks and spaces necessary for transportation, water use, energy, recreation, and public functions. Infrastructure carries a high initial cost in fixed capital but lower marginal costs and thus positive economies of scale. Because of the higher barriers to entry, these networks have been classified as natural monopolies, meaning that economic logic favors control of each network by a single organization, public or private.
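The economics behind the natural-monopoly claim can be made concrete with a one-line cost model: a large fixed cost spread over more users drives the average cost per user down. The Python sketch below uses arbitrary illustrative numbers, not data for any real network.

```python
def average_cost(users, fixed_cost=10_000_000.0, marginal_cost=5.0):
    """Average cost per user for a network with a large fixed cost and a
    small constant marginal cost: AC(q) = F/q + c, which falls as q grows."""
    return fixed_cost / users + marginal_cost

for users in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{users:>9,} users -> {average_cost(users):>10,.2f} per user")
# A second, parallel network serving half the users would bear the same fixed
# cost over a smaller base, so a single operator is cheaper per user overall.
```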
Infrastructure in general plays a vital role in a city's capacity for economic activity and expansion, underpinning the very survival of the city's inhabitants, as well as technological, commercial, industrial, and social activities. Structurally, many infrastructure systems take the form of networks with redundant links and multiple pathways, so that the system as a whole continues to operate even if parts of it fail. The particulars of a city's infrastructure systems have historical path dependence because new development must build from what exists already.
Megaprojects such as the construction of airports, power plants, and railways require large upfront investments and thus tend to require funding from the national government or the private sector. Privatization may also extend to all levels of infrastructure construction and maintenance.
Urban infrastructure ideally serves all residents equally but in practice may prove uneven—with, in some cities, clear first-class and second-class alternatives.
Public utilities (literally, useful things with general availability) include basic and essential infrastructure networks, chiefly concerned with the supply of water, electricity, and telecommunications capability to the populace.
Sanitation, necessary for good health in crowded conditions, requires water supply and waste management as well as individual hygiene. Urban water systems principally include a water supply network and a network (the sewerage system) for sewage and stormwater. Historically, either local governments or private companies have administered urban water supply, with a tendency toward government water supply in the 20th century and a tendency toward private operation at the turn of the twenty-first century. The market for private water services is dominated by two French companies, Veolia Water (formerly Vivendi) and Engie (formerly Suez), said to hold 70% of all water contracts worldwide.
Modern urban life relies heavily on the energy transmitted through electricity for the operation of electric machines (from household appliances to industrial machines to now-ubiquitous electronic systems used in communications, business, and government) and for traffic lights, street lights, and indoor lighting. Cities rely to a lesser extent on hydrocarbon fuels such as gasoline and natural gas for transportation, heating, and cooking. Telecommunications infrastructure such as telephone lines and coaxial cables also traverse cities, forming dense networks for mass and point-to-point communications.
Because cities rely on specialization and an economic system based on wage labor, their inhabitants must have the ability to regularly travel between home, work, commerce, and entertainment. City dwellers travel by foot or by wheel on roads and walkways, or use special rapid transit systems based on underground, overground, and elevated rail. Cities also rely on long-distance transportation (truck, rail, and airplane) for economic connections with other cities and rural areas.
City streets historically were the domain of horses, their riders, and pedestrians, who only sometimes had sidewalks and special walking areas reserved for them. In the West, bicycles (or velocipedes), efficient human-powered machines for short- and medium-distance travel, enjoyed a period of popularity at the beginning of the twentieth century before the rise of automobiles; soon after, they gained a more lasting foothold in Asian and African cities under European influence. In industrializing, expanding, and electrifying Western cities, public transit systems, and especially streetcars, enabled urban expansion as new residential neighborhoods sprang up along transit lines and workers rode to and from work downtown.
Since the mid-20th century, cities have relied heavily on motor vehicle transportation, with major implications for their layout, environment, and aesthetics. (This transformation occurred most dramatically in the US—where corporate and governmental policies favored automobile transport systems—and to a lesser extent in Europe.) The rise of personal cars accompanied the expansion of urban economic areas into much larger metropolises, subsequently creating ubiquitous traffic issues with the accompanying construction of new highways, wider streets, and alternative walkways for pedestrians. Economic activity itself also became more decentralized as concentration became impractical and employers relocated to more car-friendly locations (including edge cities). However, severe traffic jams still occur regularly in cities around the world, as private car ownership and urbanization continue to increase, overwhelming existing urban street networks.
The urban bus system, the world's most common form of public transport, uses a network of scheduled routes to move people through the city, alongside cars, on the roads. Some cities have introduced bus rapid transit systems, which include exclusive bus lanes and other methods for prioritizing bus traffic over private cars. Many big American cities still operate conventional public transit by rail, as exemplified by the New York City Subway system. Rapid transit is widely used in Europe and has increased in Latin America and Asia.
Walking and cycling ("non-motorized transport") enjoy increasing favor (more pedestrian zones and bike lanes) in American and Asian urban transportation planning, under the influence of such trends as the Healthy Cities movement, the drive for sustainable development, and the idea of a carfree city. Techniques such as road space rationing and road use charges have been introduced to limit urban car traffic.
The housing of residents presents one of the major challenges every city must face. Adequate housing entails not only physical shelters but also the physical systems necessary to sustain life and economic activity.
Homeownership represents status and a modicum of economic security, compared to renting which may consume much of the income of low-wage urban workers. Homelessness, or lack of housing, is a challenge currently faced by millions of people in countries rich and poor. Because cities generally have higher population densities than rural areas, city dwellers are more likely to reside in apartments and less likely to live in a single-family home.
Urban ecosystems, influenced as they are by the density of human buildings and activities, differ considerably from those of their rural surroundings. Anthropogenic buildings and waste, as well as cultivation in gardens, create physical and chemical environments which have no equivalents in the wilderness, in some cases enabling exceptional biodiversity. They provide homes not only for immigrant humans but also for immigrant plants, bringing about interactions between species that never previously encountered each other. They introduce frequent disturbances (construction, walking) to plant and animal habitats, creating opportunities for recolonization and thus favoring young ecosystems with r-selected species dominant. On the whole, urban ecosystems are less complex and productive than others, due to the diminished absolute amount of biological interactions.
Typical urban fauna includes insects (especially ants), rodents (mice, rats), and birds, as well as cats and dogs (domesticated and feral). Large predators are scarce; however, in North America, coyotes and white-tailed deer are among the larger wild animals that roam some urban areas.
Cities generate considerable ecological footprints, locally and at longer distances, due to concentrated populations and technological activities. From one perspective, cities are not ecologically sustainable due to their resource needs. From another, proper management may be able to ameliorate a city's ill effects. Air pollution arises from various forms of combustion, including fireplaces, wood or coal-burning stoves, other heating systems, and internal combustion engines. Industrialized cities, and today third-world megacities, are notorious for veils of smog (industrial haze) that envelop them, posing a chronic threat to the health of their millions of inhabitants. Urban soil contains higher concentrations of heavy metals (especially lead, copper, and nickel) and has lower pH than soil in the comparable wilderness.
Modern cities are known for creating their own microclimates, due to concrete, asphalt, and other artificial surfaces, which heat up in sunlight and channel rainwater into underground ducts. The temperature in New York City exceeds nearby rural temperatures by an average of 2–3 °C, and at times differences of 5–10 °C have been recorded. This effect varies nonlinearly with population changes (independently of the city's physical size). Aerial particulates increase rainfall by 5–10%. Thus, urban areas experience unique climates, with earlier flowering and later leaf dropping than in the nearby countryside.
Poor and working-class people face disproportionate exposure to environmental risks (known as environmental racism when intersecting also with racial segregation). For example, within the urban microclimate, less-vegetated poor neighborhoods bear more of the heat (but have fewer means of coping with it).
One of the main methods of improving urban ecology is to include more urban green spaces in cities: parks, gardens, lawns, and trees. These areas improve the health and well-being of the human, animal, and plant populations of cities. Well-maintained urban trees can provide many social, ecological, and physical benefits to city residents.
A study published in the journal Scientific Reports in 2019 found that people who spent at least two hours per week in nature were 23 percent more likely to be satisfied with their life and 59 percent more likely to be in good health than those who had no exposure. The study used data from almost 20,000 people in the UK. Benefits increased with exposure of up to 300 minutes per week and applied to men and women of all ages, across different ethnicities and socioeconomic statuses, and even to those with long-term illnesses and disabilities. People who did not get at least two hours, even if they surpassed an hour per week, did not get the benefits. The study is the latest addition to a compelling body of evidence for the health benefits of nature, and many doctors already give nature prescriptions to their patients. The study did not count time spent in a person's own yard or garden as time in nature, but the majority of nature visits in the study took place within two miles of home. "Even visiting local urban green spaces seems to be a good thing," Dr. White said in a press release. "Two hours a week is hopefully a realistic target for many people, especially given that it can be spread over an entire week to get the benefit."
As the world becomes more closely linked through economics, politics, technology, and culture (a process called globalization), cities have come to play a leading role in transnational affairs, exceeding the limitations of international relations conducted by national governments. This phenomenon, resurgent today, can be traced back to the Silk Road, Phoenicia, and the Greek city-states, through the Hanseatic League and other alliances of cities. Today the information economy based on high-speed internet infrastructure enables instantaneous telecommunication around the world, effectively eliminating the distance between cities for the purposes of the international markets and other high-level elements of the world economy, as well as personal communications and mass media.
A global city, also known as a world city, is a prominent centre of trade, banking, finance, innovation, and markets. Saskia Sassen used the term "global city" in her 1991 work, The Global City: New York, London, Tokyo to refer to a city's power, status, and cosmopolitanism, rather than to its size. Following this view of cities, it is possible to rank the world's cities hierarchically. Global cities form the capstone of the global hierarchy, exerting command and control through their economic and political influence. Global cities may have reached their status due to early transition to post-industrialism or through inertia which has enabled them to maintain their dominance from the industrial era. This type of ranking exemplifies an emerging discourse in which cities, considered variations on the same ideal type, must compete with each other globally to achieve prosperity.
Critics of the notion point to the different realms of power and interchange. The term "global city" is heavily influenced by economic factors and, thus, may not account for places that are otherwise significant. Paul James, for example, argues that the term is "reductive and skewed" in its focus on financial systems.
Multinational corporations and banks make their headquarters in global cities and conduct much of their business within this context. American firms dominate the international markets for law and engineering and maintain branches in the biggest foreign global cities.
Large cities show a great divide between populations at the two ends of the financial spectrum. Regulations on immigration promote the exploitation of low- and high-skilled immigrant workers from poor areas. During employment, migrant workers may be subject to unfair working conditions, including excessive overtime, low wages, and unsafe workplaces.
Cities increasingly participate in world political activities independently of their enclosing nation-states. Early examples of this phenomenon are the sister city relationship and the promotion of multi-level governance within the European Union as a technique for European integration. Cities including Hamburg, Prague, Amsterdam, The Hague, and the City of London maintain their own embassies to the European Union in Brussels.
New urban dwellers are increasingly transmigrants, keeping one foot each (through telecommunications if not travel) in their old and their new homes.
Cities participate in global governance by various means including membership in global networks which transmit norms and regulations. At the general, global level, United Cities and Local Governments (UCLG) is a significant umbrella organization for cities; regionally and nationally, Eurocities, the Asian Network of Major Cities 21, the Federation of Canadian Municipalities, the National League of Cities, and the United States Conference of Mayors play similar roles. UCLG took responsibility for creating Agenda 21 for culture, a program for cultural policies promoting sustainable development, and has organized various conferences and reports for its furtherance.
Networks have become especially prevalent in the arena of environmentalism and specifically climate change following the adoption of Agenda 21. Environmental city networks include the C40 Cities Climate Leadership Group, the United Nations Global Compact Cities Programme, the Carbon Neutral Cities Alliance (CNCA), the Covenant of Mayors and the Compact of Mayors, ICLEI – Local Governments for Sustainability, and the Transition Towns network.
Cities with world political status serve as meeting places for advocacy groups, non-governmental organizations, lobbyists, educational institutions, intelligence agencies, military contractors, information technology firms, and other groups with a stake in world policymaking. They are consequently also sites for symbolic protest. South Africa has one of the highest rates of protest in the world; in Pretoria, a rally drew about 5,000 participants advocating for wage increases to keep pace with living costs.
The United Nations System has been involved in a series of events and declarations dealing with the development of cities during this period of rapid urbanization.
UN-Habitat coordinates the U.N. urban agenda, working with the UN Environmental Programme, the UN Development Programme, the Office of the High Commissioner for Human Rights, the World Health Organization, and the World Bank.
The World Bank, a U.N. specialized agency, has been a primary force in promoting the Habitat conferences, and since the first Habitat conference has used their declarations as a framework for issuing loans for urban infrastructure. The bank's structural adjustment programs contributed to urbanization in the Third World by creating incentives to move to cities. The World Bank and UN-Habitat in 1999 jointly established the Cities Alliance (based at the World Bank headquarters in Washington, D.C.) to guide policymaking, knowledge sharing, and grant distribution around the issue of urban poverty. (UN-Habitat plays an advisory role in evaluating the quality of a locality's governance.) The Bank's policies have tended to focus on bolstering real estate markets through credit and technical assistance.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) has increasingly focused on cities as key sites for influencing cultural governance. It has developed various city networks, including the International Coalition of Cities against Racism and the Creative Cities Network. UNESCO's capacity to select World Heritage Sites gives the organization significant influence over cultural capital, tourism, and historic preservation funding.
Cities figure prominently in traditional Western culture, appearing in the Bible in both evil and holy forms, symbolized by Babylon and Jerusalem. Cain and Nimrod are the first city builders in the Book of Genesis. In Sumerian mythology Gilgamesh built the walls of Uruk.
Cities can be perceived in terms of extremes or opposites: at once liberating and oppressive, wealthy and poor, organized and chaotic. The term anti-urbanism refers to various types of ideological opposition to cities, whether because of their culture or their political relationship with the country. Such opposition may result from identification of cities with oppression and the ruling elite. This and other political ideologies strongly influence narratives and themes in discourse about cities. In turn, cities symbolize their home societies.
Writers, painters, and filmmakers have produced innumerable works of art concerning the urban experience. Classical and medieval literature includes a genre of descriptiones which treat of city features and history. Modern authors such as Charles Dickens and James Joyce are famous for evocative descriptions of their home cities. Fritz Lang conceived the idea for his influential 1927 film Metropolis while visiting Times Square and marveling at the nighttime neon lighting. Other early cinematic representations of cities in the twentieth century generally depicted them as technologically efficient spaces with smoothly functioning systems of automobile transport. By the 1960s, however, traffic congestion began to appear in such films as The Fast Lady (1962) and Playtime (1967).
Literature, film, and other forms of popular culture have supplied visions of future cities both utopian and dystopian. The prospect of expanding, communicating, and increasingly interdependent world cities has given rise to images such as Nylonkong (New York, London, Hong Kong) and visions of a single world-encompassing ecumenopolis.
{
"paragraph_id": 0,
"text": "A city is a human settlement of a notable size. The term \"city\" has different meanings around the world and in some places the settlement can be very small. Even where the term is limited to larger settlements, there is no fixed definition of the lower boundary for their size. In a more narrow sense, a city can be defined as a permanent and densely settled place with administratively defined boundaries whose members work primarily on non-agricultural tasks. Cities generally have extensive systems for housing, transportation, sanitation, utilities, land use, production of goods, and communication. Their density facilitates interaction between people, government organizations, and businesses, sometimes benefiting different parties in the process, such as improving the efficiency of goods and service distribution.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Historically, city dwellers have been a small proportion of humanity overall, but following two centuries of unprecedented and rapid urbanization, more than half of the world population now lives in cities, which has had profound consequences for global sustainability. Present-day cities usually form the core of larger metropolitan areas and urban areas—creating numerous commuters traveling toward city centres for employment, entertainment, and education. However, in a world of intensifying globalization, all cities are to varying degrees also connected globally beyond these regions. This increased influence means that cities also have significant influences on global issues, such as sustainable development, climate change, and global health. Because of these major influences on global issues, the international community has prioritized investment in sustainable cities through Sustainable Development Goal 11. Due to the efficiency of transportation and the smaller land consumption, dense cities hold the potential to have a smaller ecological footprint per inhabitant than more sparsely populated areas. Therefore, compact cities are often referred to as a crucial element in fighting climate change. However, this concentration can also have significant negative consequences, such as forming urban heat islands, concentrating pollution, and stressing water supplies and other resources.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Other important traits of cities besides population include the capital status and relative continued occupation of the city. For example, country capitals such as Athens, Beijing, Jakarta, Kuala Lumpur, London, Manila, Mexico City, Moscow, Nairobi, New Delhi, Paris, Rome, Seoul, Singapore, Tokyo, and Washington, D.C. reflect the identity and apex of their respective nations. Some historic capitals, such as Kyoto, Yogyakarta, and Xi'an, maintain their reflection of cultural identity even without modern capital status. Religious holy sites offer another example of capital status within a religion; examples include Jerusalem, Mecca, Varanasi, Ayodhya, Haridwar, and Prayagraj.",
"title": ""
},
{
"paragraph_id": 3,
"text": "A city can be distinguished from other human settlements by its relatively great size, but also by its functions and its special symbolic status, which may be conferred by a central authority. The term can also refer either to the physical streets and buildings of the city or to the collection of people who dwell there and can be used in a general sense to mean urban rather than rural territory.",
"title": "Meaning"
},
{
"paragraph_id": 4,
"text": "National censuses use a variety of definitions – invoking factors such as population, population density, number of dwellings, economic function, and infrastructure – to classify populations as urban. Typical working definitions for small-city populations start at around 100,000 people. Common population definitions for an urban area (city or town) range between 1,500 and 50,000 people, with most U.S. states using a minimum between 1,500 and 5,000 inhabitants. Some jurisdictions set no such minima. In the United Kingdom, city status is awarded by the Crown and then remains permanent. (Historically, the qualifying factor was the presence of a cathedral, resulting in some very small cities such as Wells, with a population of 12,000 as of 2018, and St Davids, with a population of 1,841 as of 2011.) According to the \"functional definition\", a city is not distinguished by size alone, but also by the role it plays within a larger political context. Cities serve as administrative, commercial, religious, and cultural hubs for their larger surrounding areas.",
"title": "Meaning"
},
{
"paragraph_id": 5,
"text": "The presence of a literate elite is often associated with cities because of the cultural diversities present in a city. A typical city has professional administrators, regulations, and some form of taxation (food and other necessities or means to trade for them) to support the government workers. (This arrangement contrasts with the more typically horizontal relationships in a tribe or village accomplishing common goals through informal agreements between neighbors, or the leadership of a chief.) The governments may be based on heredity, religion, military power, work systems such as canal-building, food distribution, land-ownership, agriculture, commerce, manufacturing, finance, or a combination of these. Societies that live in cities are often called civilizations.",
"title": "Meaning"
},
{
"paragraph_id": 6,
"text": "The degree of urbanization is a modern metric to help define what comprises a city: \"a population of at least 50,000 inhabitants in contiguous dense grid cells (>1,500 inhabitants per square kilometer)\". This metric was \"devised over years by the European Commission, OECD, World Bank and others, and endorsed in March [2021] by the United Nations ... largely for the purpose of international statistical comparison\".",
"title": "Meaning"
},
{
"paragraph_id": 7,
"text": "The word city and the related civilization come from the Latin root civitas, originally meaning 'citizenship' or 'community member' and eventually coming to correspond with urbs, meaning 'city' in a more physical sense. The Roman civitas was closely linked with the Greek polis—another common root appearing in English words such as metropolis.",
"title": "Etymology"
},
{
"paragraph_id": 8,
"text": "In toponymic terminology, names of individual cities and towns are called astionyms (from Ancient Greek ἄστυ 'city or town' and ὄνομα 'name').",
"title": "Etymology"
},
{
"paragraph_id": 9,
"text": "Urban geography deals both with cities in their larger context and with their internal structure. Cities are estimated to cover about 3% of the land surface of the Earth.",
"title": "Geography"
},
{
"paragraph_id": 10,
"text": "Town siting has varied through history according to natural, technological, economic, and military contexts. Access to water has long been a major factor in city placement and growth, and despite exceptions enabled by the advent of rail transport in the nineteenth century, through the present most of the world's urban population lives near the coast or on a river.",
"title": "Geography"
},
{
"paragraph_id": 11,
"text": "Urban areas as a rule cannot produce their own food and therefore must develop some relationship with a hinterland that sustains them. Only in special cases such as mining towns which play a vital role in long-distance trade, are cities disconnected from the countryside which feeds them. Thus, centrality within a productive region influences siting, as economic forces would, in theory, favor the creation of marketplaces in optimal mutually reachable locations.",
"title": "Geography"
},
{
"paragraph_id": 12,
"text": "The vast majority of cities have a central area containing buildings with special economic, political, and religious significance. Archaeologists refer to this area by the Greek term temenos or if fortified as a citadel. These spaces historically reflect and amplify the city's centrality and importance to its wider sphere of influence. Today cities have a city center or downtown, sometimes coincident with a central business district.",
"title": "Geography"
},
{
"paragraph_id": 13,
"text": "Cities typically have public spaces where anyone can go. These include privately owned spaces open to the public as well as forms of public land such as public domain and the commons. Western philosophy since the time of the Greek agora has considered physical public space as the substrate of the symbolic public sphere. Public art adorns (or disfigures) public spaces. Parks and other natural sites within cities provide residents with relief from the hardness and regularity of typical built environments. Urban green spaces are another component of public space that provides the benefit of mitigating the urban heat island effect, especially in cities that are in warmer climates. These spaces prevent carbon imbalances, extreme habitat losses, electricity and water consumption, and human health risks.",
"title": "Geography"
},
{
"paragraph_id": 14,
"text": "The urban structure generally follows one or more basic patterns: geomorphic, radial, concentric, rectilinear, and curvilinear. The physical environment generally constrains the form in which a city is built. If located on a mountainside, urban structures may rely on terraces and winding roads. It may be adapted to its means of subsistence (e.g. agriculture or fishing). And it may be set up for optimal defense given the surrounding landscape. Beyond these \"geomorphic\" features, cities can develop internal patterns, due to natural growth or to city planning.",
"title": "Geography"
},
{
"paragraph_id": 15,
"text": "In a radial structure, main roads converge on a central point. This form could evolve from successive growth over a long time, with concentric traces of town walls and citadels marking older city boundaries. In more recent history, such forms were supplemented by ring roads moving traffic around the outskirts of a town. Dutch cities such as Amsterdam and Haarlem are structured as a central square surrounded by concentric canals marking every expansion. In cities such as Moscow, this pattern is still clearly visible.",
"title": "Geography"
},
{
"paragraph_id": 16,
"text": "A system of rectilinear city streets and land plots, known as the grid plan, has been used for millennia in Asia, Europe, and the Americas. The Indus Valley civilization built Mohenjo-Daro, Harappa, and other cities on a grid pattern, using ancient principles described by Kautilya, and aligned with the compass points. The ancient Greek city of Priene exemplifies a grid plan with specialized districts used across the Hellenistic Mediterranean.",
"title": "Geography"
},
{
"paragraph_id": 17,
"text": "The urban-type settlement extends far beyond the traditional boundaries of the city proper in a form of development sometimes described critically as urban sprawl. Decentralization and dispersal of city functions (commercial, industrial, residential, cultural, political) has transformed the very meaning of the term and has challenged geographers seeking to classify territories according to an urban-rural binary.",
"title": "Geography"
},
{
"paragraph_id": 18,
"text": "Metropolitan areas include suburbs and exurbs organized around the needs of commuters, and sometimes edge cities characterized by a degree of economic and political independence. (In the US these are grouped into metropolitan statistical areas for purposes of demography and marketing.) Some cities are now part of a continuous urban landscape called urban agglomeration, conurbation, or megalopolis (exemplified by the BosWash corridor of the Northeastern United States.)",
"title": "Geography"
},
{
"paragraph_id": 19,
"text": "The emergence of cities from proto-urban settlements, such as Çatalhöyük, is a non-linear development that demonstrates the varied experiences of early urbanization.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "The cities of Jericho, Aleppo, Faiyum, Yerevan, Athens, Matera, Damascus, and Argos are among those laying claim to the longest continual inhabitation.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Cities, characterized by population density, symbolic function, and urban planning, have existed for thousands of years. In the conventional view, civilization and the city were both followed by the development of agriculture, which enabled the production of surplus food and thus a social division of labor (with concomitant social stratification) and trade. Early cities often featured granaries, sometimes within a temple. A minority viewpoint considers that cities may have arisen without agriculture, due to alternative means of subsistence (fishing), to use as communal seasonal shelters, to their value as bases for defensive and offensive military organization, or to their inherent economic function. Cities played a crucial role in the establishment of political power over an area, and ancient leaders such as Alexander the Great founded and created them with zeal.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Jericho and Çatalhöyük, dated to the eighth millennium BC, are among the earliest proto-cities known to archaeologists. However, the Mesopotamian city of Uruk from the mid-fourth millennium BC (ancient Iraq) is considered by most archaeologists to be the first true city, innovating many characteristics for cities to follow, with its name attributed to the Uruk period.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "In the fourth and third millennium BC, complex civilizations flourished in the river valleys of Mesopotamia, India, China, and Egypt. Excavations in these areas have found the ruins of cities geared variously towards trade, politics, or religion. Some had large, dense populations, but others carried out urban activities in the realms of politics or religion without having large associated populations.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Among the early Old World cities, Mohenjo-Daro of the Indus Valley civilization in present-day Pakistan, existing from about 2600 BC, was one of the largest, with a population of 50,000 or more and a sophisticated sanitation system. China's planned cities were constructed according to sacred principles to act as celestial microcosms.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "The Ancient Egyptian cities known physically by archaeologists are not extensive. They include (known by their Arab names) El Lahun, a workers' town associated with the pyramid of Senusret II, and the religious city Amarna built by Akhenaten and abandoned. These sites appear planned in a highly regimented and stratified fashion, with a minimalistic grid of rooms for the workers and increasingly more elaborate housing available for higher classes.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "In Mesopotamia, the civilization of Sumer, followed by Assyria and Babylon, gave rise to numerous cities, governed by kings and fostered multiple languages written in cuneiform. The Phoenician trading empire, flourishing around the turn of the first millennium BC, encompassed numerous cities extending from Tyre, Cydon, and Byblos to Carthage and Cádiz.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "In the following centuries, independent city-states of Greece, especially Athens, developed the polis, an association of male landowning citizens who collectively constituted the city. The agora, meaning \"gathering place\" or \"assembly\", was the center of the athletic, artistic, spiritual, and political life of the polis. Rome was the first city that surpassed one million inhabitants. Under the authority of its empire, Rome transformed and founded many cities (Colonia), and with them brought its principles of urban architecture, design, and society.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "In the ancient Americas, early urban traditions developed in the Andes and Mesoamerica. In the Andes, the first urban centers developed in the Norte Chico civilization, Chavin and Moche cultures, followed by major cities in the Huari, Chimu, and Inca cultures. The Norte Chico civilization included as many as 30 major population centers in what is now the Norte Chico region of north-central coastal Peru. It is the oldest known civilization in the Americas, flourishing between the 30th and 18th centuries BC. Mesoamerica saw the rise of early urbanism in several cultural regions, beginning with the Olmec and spreading to the Preclassic Maya, the Zapotec of Oaxaca, and Teotihuacan in central Mexico. Later cultures such as the Aztec, Andean civilizations, Mayan, Mississippians, and Pueblo peoples drew on these earlier urban traditions. Many of their ancient cities continue to be inhabited, including major metropolitan cities such as Mexico City, in the same location as Tenochtitlan; while ancient continuously inhabited Pueblos are near modern urban areas in New Mexico, such as Acoma Pueblo near the Albuquerque metropolitan area and Taos Pueblo near Taos; while others like Lima are located nearby ancient Peruvian sites such as Pachacamac.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "Jenné-Jeno, located in present-day Mali and dating to the third century BC, lacked monumental architecture and a distinctive elite social class—but nevertheless had specialized production and relations with a hinterland. Pre-Arabic trade contacts probably existed between Jenné-Jeno and North Africa. Other early urban centers in sub-Saharan Africa, dated to around 500 AD, include Awdaghust, Kumbi-Saleh the ancient capital of Ghana, and Maranda a center located on a trade route between Egypt and Gao.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "In the remnants of the Roman Empire, cities of late antiquity gained independence but soon lost population and importance. The locus of power in the West shifted to Constantinople and to the ascendant Islamic civilization with its major cities Baghdad, Cairo, and Córdoba. From the 9th through the end of the 12th century, Constantinople, the capital of the Eastern Roman Empire, was the largest and wealthiest city in Europe, with a population approaching 1 million. The Ottoman Empire gradually gained control over many cities in the Mediterranean area, including Constantinople in 1453.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "In the Holy Roman Empire, beginning in the 12th century, free imperial cities such as Nuremberg, Strasbourg, Frankfurt, Basel, Zurich, and Nijmegen became a privileged elite among towns having won self-governance from their local lord or having been granted self-governance by the emperor and being placed under his immediate protection. By 1480, these cities, as far as still part of the empire, became part of the Imperial Estates governing the empire with the emperor through the Imperial Diet.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "By the 13th and 14th centuries, some cities become powerful states, taking surrounding areas under their control or establishing extensive maritime empires. In Italy, medieval communes developed into city-states including the Republic of Venice and the Republic of Genoa. In Northern Europe, cities including Lübeck and Bruges formed the Hanseatic League for collective defense and commerce. Their power was later challenged and eclipsed by the Dutch commercial cities of Ghent, Ypres, and Amsterdam. Similar phenomena existed elsewhere, as in the case of Sakai, which enjoyed considerable autonomy in late medieval Japan.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "In the first millennium AD, the Khmer capital of Angkor in Cambodia grew into the most extensive preindustrial settlement in the world by area, covering over 1,000 km and possibly supporting up to one million people.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "In the West, nation-states became the dominant unit of political organization following the Peace of Westphalia in the seventeenth century. Western Europe's larger capitals (London and Paris) benefited from the growth of commerce following the emergence of an Atlantic trade. However, most towns remained small.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "During the Spanish colonization of the Americas, the old Roman city concept was extensively used. Cities were founded in the middle of the newly conquered territories and were bound to several laws regarding administration, finances, and urbanism.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "The growth of the modern industry from the late 18th century onward led to massive urbanization and the rise of new great cities, first in Europe and then in other regions, as new opportunities brought huge numbers of migrants from rural communities into urban areas. England led the way as London became the capital of a world empire and cities across the country grew in locations strategic for manufacturing. In the United States from 1860 to 1910, the introduction of railroads reduced transportation costs, and large manufacturing centers began to emerge, fueling migration from rural to city areas.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "Some industrialized cities were confronted with health challenges associated with overcrowding, occupational hazards of industry, contaminated water and air, poor sanitation, and communicable diseases such as typhoid and cholera. Factories and slums emerged as regular features of the urban landscape.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "In the second half of the 20th century, deindustrialization (or \"economic restructuring\") in the West led to poverty, homelessness, and urban decay in formerly prosperous cities. America's \"Steel Belt\" became a \"Rust Belt\" and cities such as Detroit, Michigan, and Gary, Indiana began to shrink, contrary to the global trend of massive urban expansion. Such cities have shifted with varying success into the service economy and public-private partnerships, with concomitant gentrification, uneven revitalization efforts, and selective cultural development. Under the Great Leap Forward and subsequent five-year plans continuing today, China has undergone concomitant urbanization and industrialization and become the world's leading manufacturer.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "Amidst these economic changes, high technology and instantaneous telecommunication enable select cities to become centers of the knowledge economy. A new smart city paradigm, supported by institutions such as the RAND Corporation and IBM, is bringing computerized surveillance, data analysis, and governance to bear on cities and city dwellers. Some companies are building brand-new master-planned cities from scratch on greenfield sites.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "Urbanization is the process of migration from rural to urban areas, driven by various political, economic, and cultural factors. Until the 18th century, an equilibrium existed between the rural agricultural population and towns featuring markets and small-scale manufacturing. With the agricultural and industrial revolutions urban population began its unprecedented growth, both through migration and demographic expansion. In England, the proportion of the population living in cities jumped from 17% in 1801 to 72% in 1891. In 1900, 15% of the world's population lived in cities. The cultural appeal of cities also plays a role in attracting residents.",
"title": "Urbanization"
},
{
"paragraph_id": 41,
"text": "Urbanization rapidly spread across Europe and the Americas and since the 1950s has taken hold in Asia and Africa as well. The Population Division of the United Nations Department of Economic and Social Affairs reported in 2014 that for the first time, more than half of the world population lives in cities.",
"title": "Urbanization"
},
{
"paragraph_id": 42,
"text": "Latin America is the most urban continent, with four-fifths` of its population living in cities, including one-fifth of the population said to live in shantytowns (favelas, poblaciones callampas, etc.). Batam, Indonesia, Mogadishu, Somalia, Xiamen, China, and Niamey, Niger, are considered among the world's fastest-growing cities, with annual growth rates of 5–8%. In general, the more developed countries of the \"Global North\" remain more urbanized than the less developed countries of the \"Global South\"—but the difference continues to shrink because urbanization is happening faster in the latter group. Asia is home to by far the greatest absolute number of city-dwellers: over two billion and counting. The UN predicts an additional 2.5 billion city dwellers (and 300 million fewer country dwellers) worldwide by 2050, with 90% of urban population expansion occurring in Asia and Africa.",
"title": "Urbanization"
},
{
"paragraph_id": 43,
"text": "Megacities, cities with populations in the multi-millions, have proliferated into the dozens, arising especially in Asia, Africa, and Latin America. Economic globalization fuels the growth of these cities, as new torrents of foreign capital arrange for rapid industrialization, as well as the relocation of major businesses from Europe and North America, attracting immigrants from near and far. A deep gulf divides the rich and poor in these cities, with usually contain a super-wealthy elite living in gated communities and large masses of people living in substandard housing with inadequate infrastructure and otherwise poor conditions.",
"title": "Urbanization"
},
{
"paragraph_id": 44,
"text": "Cities around the world have expanded physically as they grow in population, with increases in their surface extent, with the creation of high-rise buildings for residential and commercial use, and with development underground.",
"title": "Urbanization"
},
{
"paragraph_id": 45,
"text": "Urbanization can create rapid demand for water resources management, as formerly good sources of freshwater become overused and polluted, and the volume of sewage begins to exceed manageable levels.",
"title": "Urbanization"
},
{
"paragraph_id": 46,
"text": "The local government of cities takes different forms including prominently the municipality (especially in England, in the United States, India, and other British colonies; legally, the municipal corporation; municipio in Spain and Portugal, and, along with municipalidad, in most former parts of the Spanish and Portuguese empires) and the commune (in France and Chile; or comune in Italy).",
"title": "Government"
},
{
"paragraph_id": 47,
"text": "The chief official of the city has the title of mayor. Whatever their true degree of political authority, the mayor typically acts as the figurehead or personification of their city.",
"title": "Government"
},
{
"paragraph_id": 48,
"text": "Legal conflicts and issues arise more frequently in cities than elsewhere due to the bare fact of their greater density. Modern city governments thoroughly regulate everyday life in many dimensions, including public and personal health, transport, burial, resource use and extraction, recreation, and the nature and use of buildings. Technologies, techniques, and laws governing these areas—developed in cities—have become ubiquitous in many areas. Municipal officials may be appointed from a higher level of government or elected locally.",
"title": "Government"
},
{
"paragraph_id": 49,
"text": "Cities typically provide municipal services such as education, through school systems; policing, through police departments; and firefighting, through fire departments; as well as the city's basic infrastructure. These are provided more or less routinely, in a more or less equal fashion. Responsibility for administration usually falls on the city government, but some services may be operated by a higher level of government, while others may be privately run. Armies may assume responsibility for policing cities in states of domestic turmoil such as America's King assassination riots of 1968.",
"title": "Government"
},
{
"paragraph_id": 50,
"text": "The traditional basis for municipal finance is local property tax levied on real estate within the city. Local government can also collect revenue for services, or by leasing land that it owns. However, financing municipal services, as well as urban renewal and other development projects, is a perennial problem, which cities address through appeals to higher governments, arrangements with the private sector, and techniques such as privatization (selling services into the private sector), corporatization (formation of quasi-private municipally-owned corporations), and financialization (packaging city assets into tradeable financial public contracts and other related rights). This situation has become acute in deindustrialized cities and in cases where businesses and wealthier citizens have moved outside of city limits and therefore beyond the reach of taxation. Cities in search of ready cash increasingly resort to the municipal bond, essentially a loan with interest and a repayment date. City governments have also begun to use tax increment financing, in which a development project is financed by loans based on future tax revenues which it is expected to yield. Under these circumstances, creditors and consequently city governments place a high importance on city credit ratings.",
"title": "Government"
},
{
"paragraph_id": 51,
"text": "Governance includes government but refers to a wider domain of social control functions implemented by many actors including non-governmental organizations. The impact of globalization and the role of multinational corporations in local governments worldwide, has led to a shift in perspective on urban governance, away from the \"urban regime theory\" in which a coalition of local interests functionally govern, toward a theory of outside economic control, widely associated in academics with the philosophy of neoliberalism. In the neoliberal model of governance, public utilities are privatized, the industry is deregulated, and corporations gain the status of governing actors—as indicated by the power they wield in public-private partnerships and over business improvement districts, and in the expectation of self-regulation through corporate social responsibility. The biggest investors and real estate developers act as the city's de facto urban planners.",
"title": "Government"
},
{
"paragraph_id": 52,
"text": "The related concept of good governance places more emphasis on the state, with the purpose of assessing urban governments for their suitability for development assistance. The concepts of governance and good governance are especially invoked in emergent megacities, where international organizations consider existing governments inadequate for their large populations.",
"title": "Government"
},
{
"paragraph_id": 53,
"text": "Urban planning, the application of forethought to city design, involves optimizing land use, transportation, utilities, and other basic systems, in order to achieve certain objectives. Urban planners and scholars have proposed overlapping theories as ideals for how plans should be formed. Planning tools, beyond the original design of the city itself, include public capital investment in infrastructure and land-use controls such as zoning. The continuous process of comprehensive planning involves identifying general objectives as well as collecting data to evaluate progress and inform future decisions.",
"title": "Government"
},
{
"paragraph_id": 54,
"text": "Government is legally the final authority on planning but in practice, the process involves both public and private elements. The legal principle of eminent domain is used by the government to divest citizens of their property in cases where its use is required for a project. Planning often involves tradeoffs—decisions in which some stand to gain and some to lose—and thus is closely connected to the prevailing political situation.",
"title": "Government"
},
{
"paragraph_id": 55,
"text": "The history of urban planning dates to some of the earliest known cities, especially in the Indus Valley and Mesoamerican civilizations, which built their cities on grids and apparently zoned different areas for different purposes. The effects of planning, ubiquitous in today's world, can be seen most clearly in the layout of planned communities, fully designed prior to construction, often with consideration for interlocking physical, economic, and cultural systems.",
"title": "Government"
},
{
"paragraph_id": 56,
"text": "Urban society is typically stratified. Spatially, cities are formally or informally segregated along ethnic, economic, and racial lines. People living relatively close together may live, work, and play in separate areas, and associate with different people, forming ethnic or lifestyle enclaves or, in areas of concentrated poverty, ghettoes. While in the US and elsewhere poverty became associated with the inner city, in France it has become associated with the banlieues, areas of urban development that surround the city proper. Meanwhile, across Europe and North America, the racially white majority is empirically the most segregated group. Suburbs in the West, and, increasingly, gated communities and other forms of \"privatopia\" around the world, allow local elites to self-segregate into secure and exclusive neighborhoods.",
"title": "Society"
},
{
"paragraph_id": 57,
"text": "Landless urban workers, contrasted with peasants and known as the proletariat, form a growing stratum of society in the age of urbanization. In Marxist doctrine, the proletariat will inevitably revolt against the bourgeoisie as their ranks swell with disenfranchised and disaffected people lacking all stake in the status quo. The global urban proletariat of today, however, generally lacks the status of factory workers which in the nineteenth century provided access to the means of production.",
"title": "Society"
},
{
"paragraph_id": 58,
"text": "Historically, cities rely on rural areas for intensive farming to yield surplus crops, in exchange for which they provide money, political administration, manufactured goods, and culture. Urban economics tends to analyze larger agglomerations, stretching beyond city limits, in order to reach a more complete understanding of the local labor market.",
"title": "Society"
},
{
"paragraph_id": 59,
"text": "As hubs of trade, cities have long been home to retail commerce and consumption through the interface of shopping. In the 20th century, department stores using new techniques of advertising, public relations, decoration, and design, transformed urban shopping areas into fantasy worlds encouraging self-expression and escape through consumerism.",
"title": "Society"
},
{
"paragraph_id": 60,
"text": "In general, the density of cities expedites commerce and facilitates knowledge spillovers, helping people and firms exchange information and generate new ideas. A thicker labor market allows for better skill matching between firms and individuals. Population density enables also sharing of common infrastructure and production facilities; however, in very dense cities, increased crowding and waiting times may lead to some negative effects.",
"title": "Society"
},
{
"paragraph_id": 61,
"text": "Although manufacturing fueled the growth of cities, many now rely on a tertiary or service economy. The services in question range from tourism, hospitality, entertainment, and housekeeping to grey-collar work in law, financial consulting, and administration.",
"title": "Society"
},
{
"paragraph_id": 62,
"text": "According to a scientific model of cities by Professor Geoffrey West, with the doubling of a city's size, salaries per capita will generally increase by 15%.",
"title": "Society"
},
{
"paragraph_id": 63,
"text": "Cities are typically hubs for education and the arts, supporting universities, museums, temples, and other cultural institutions. They feature impressive displays of architecture ranging from small to enormous and ornate to brutal; skyscrapers, providing thousands of offices or homes within a small footprint, and visible from miles away, have become iconic urban features. Cultural elites tend to live in cities, bound together by shared cultural capital, and themselves play some role in governance. By virtue of their status as centers of culture and literacy, cities can be described as the locus of civilization, human history, and social change.",
"title": "Society"
},
{
"paragraph_id": 64,
"text": "Density makes for effective mass communication and transmission of news, through heralds, printed proclamations, newspapers, and digital media. These communication networks, though still using cities as hubs, penetrate extensively into all populated areas. In the age of rapid communication and transportation, commentators have described urban culture as nearly ubiquitous or as no longer meaningful.",
"title": "Society"
},
{
"paragraph_id": 65,
"text": "Today, a city's promotion of its cultural activities dovetails with place branding and city marketing, public diplomacy techniques used to inform development strategy; attract businesses, investors, residents, and tourists; and to create shared identity and sense of place within the metropolitan area. Physical inscriptions, plaques, and monuments on display physically transmit a historical context for urban places. Some cities, such as Jerusalem, Mecca, and Rome have indelible religious status and for hundreds of years have attracted pilgrims. Patriotic tourists visit Agra to see the Taj Mahal, or New York City to visit the World Trade Center. Elvis lovers visit Memphis to pay their respects at Graceland. Place brands (which include place satisfaction and place loyalty) have great economic value (comparable to the value of commodity brands) because of their influence on the decision-making process of people thinking about doing business in—\"purchasing\" (the brand of)—a city.",
"title": "Society"
},
{
"paragraph_id": 66,
"text": "Bread and circuses among other forms of cultural appeal, attract and entertain the masses. Sports also play a major role in city branding and local identity formation. Cities go to considerable lengths in competing to host the Olympic Games, which bring global attention and tourism. Paris, a city known for its cultural history, is the site of the next Olympics in the summer of 2024.",
"title": "Society"
},
{
"paragraph_id": 67,
"text": "Cities play a crucial strategic role in warfare due to their economic, demographic, symbolic, and political centrality. For the same reasons, they are targets in asymmetric warfare. Many cities throughout history were founded under military auspices, a great many have incorporated fortifications, and military principles continue to influence urban design. Indeed, war may have served as the social rationale and economic basis for the very earliest cities.",
"title": "Society"
},
{
"paragraph_id": 68,
"text": "Powers engaged in geopolitical conflict have established fortified settlements as part of military strategies, as in the case of garrison towns, America's Strategic Hamlet Program during the Vietnam War, and Israeli settlements in Palestine. While occupying the Philippines, the US Army ordered local people to concentrate in cities and towns, in order to isolate committed insurgents and battle freely against them in the countryside.",
"title": "Society"
},
{
"paragraph_id": 69,
"text": "During World War II, national governments on occasion declared certain cities open, effectively surrendering them to an advancing enemy in order to avoid damage and bloodshed. Urban warfare proved decisive, however, in the Battle of Stalingrad, where Soviet forces repulsed German occupiers, with extreme casualties and destruction. In an era of low-intensity conflict and rapid urbanization, cities have become sites of long-term conflict waged both by foreign occupiers and by local governments against insurgency. Such warfare, known as counterinsurgency, involves techniques of surveillance and psychological warfare as well as close combat, and functionally extends modern urban crime prevention, which already uses concepts such as defensible space.",
"title": "Society"
},
{
"paragraph_id": 70,
"text": "Although capture is the more common objective, warfare has in some cases spelled complete destruction for a city. Mesopotamian tablets and ruins attest to such destruction, as does the Latin motto Carthago delenda est. Since the atomic bombings of Hiroshima and Nagasaki and throughout the Cold War, nuclear strategists continued to contemplate the use of \"counter-value\" targeting: crippling an enemy by annihilating its valuable cities, rather than aiming primarily at its military forces.",
"title": "Society"
},
{
"paragraph_id": 71,
"text": "Because of the high density and effects like the urban heat island affect, weather changes due to climate change are likely to greatly effect cities, exacerbating existing problems, such as air pollution, water scarcity, and heat illness in the metropolitan areas. Studies have shown that if body temperature exceeds 39 °C for a period of time, serious heat stroke may occur. Some of the other extreme weather conditions caused by climate change include extreme floods, deathly snowstorms, ice storms, heat waves, droughts, and hurricanes, which are often deathly and harmful. Studies have shown that heat waves are three times more likely to occur and have become more intense since the 1960s. According to World Health Organization, from 1998-2017, heatwaves cost the lives of over 166,000 people. Moreover, because most cities have been built on rivers or coastal areas, cities are frequently vulnerable to the subsequent effects of sea level rise, which cause flooding and erosion, and those effects are deeply connected with other urban environmental problems, like subsidence and aquifer depletion.",
"title": "Society"
},
{
"paragraph_id": 72,
"text": "A report by the C40 Cities Climate Leadership Group described consumption based emissions as having significantly more impact than production-based emissions within cities. The report estimates that 85% of the emissions associated with goods within a city is generated outside of that city. Climate change adaptation and mitigation investments in cities will be important in reducing the impacts of some of the largest contributors of greenhouse gas emissions: for example, increased density allows for redistribution of land use for agriculture and reforestation, improving transportation efficiencies, and greening construction (largely due to cement's outsized role in climate change and improvements in sustainable construction practices and weatherization). In the most recent past, increasing urbanization has also been proposed as a phenomenon that has a reducing effect on the global rate of carbon emission primarily because with urbanization comes technical prowess which can help drive sustainability. Lists of high impact climate change solutions tend to include city-focused solutions; for example, Project Drawdown recommends several major urban investments, including improved bicycle infrastructure, building retrofitting, district heating, public transit, and walkable cities as important solutions. There are many cities that are attempting to reduce the effect of urban heat islands by painting the roads white. Temperatures on the roads with the coat were ~12 F less than roads without in Phoenix.",
"title": "Society"
},
{
"paragraph_id": 73,
"text": "Urban infrastructure involves various physical networks and spaces necessary for transportation, water use, energy, recreation, and public functions. Infrastructure carries a high initial cost in fixed capital but lower marginal costs and thus positive economies of scale. Because of the higher barriers to entry, these networks have been classified as natural monopolies, meaning that economic logic favors control of each network by a single organization, public or private.",
"title": "Infrastructure"
},
{
"paragraph_id": 74,
"text": "Infrastructure in general plays a vital role in a city's capacity for economic activity and expansion, underpinning the very survival of the city's inhabitants, as well as technological, commercial, industrial, and social activities. Structurally, many infrastructure systems take the form of networks with redundant links and multiple pathways, so that the system as a whole continue to operate even if parts of it fail. The particulars of a city's infrastructure systems have historical path dependence because new development must build from what exists already.",
"title": "Infrastructure"
},
{
"paragraph_id": 75,
"text": "Megaprojects such as the construction of airports, power plants, and railways require large upfront investments and thus tend to require funding from the national government or the private sector. Privatization may also extend to all levels of infrastructure construction and maintenance.",
"title": "Infrastructure"
},
{
"paragraph_id": 76,
"text": "Urban infrastructure ideally serves all residents equally but in practice may prove uneven—with, in some cities, clear first-class and second-class alternatives.",
"title": "Infrastructure"
},
{
"paragraph_id": 77,
"text": "Public utilities (literally, useful things with general availability) include basic and essential infrastructure networks, chiefly concerned with the supply of water, electricity, and telecommunications capability to the populace.",
"title": "Infrastructure"
},
{
"paragraph_id": 78,
"text": "Sanitation, necessary for good health in crowded conditions, requires water supply and waste management as well as individual hygiene. Urban water systems include principally a water supply network and a network (sewerage system) for sewage and stormwater. Historically, either local governments or private companies have administered urban water supply, with a tendency toward government water supply in the 20th century and a tendency toward private operation at the turn of the twenty-first. The market for private water services is dominated by two French companies, Veolia Water (formerly Vivendi) and Engie (formerly Suez), said to hold 70% of all water contracts worldwide.",
"title": "Infrastructure"
},
{
"paragraph_id": 79,
"text": "Modern urban life relies heavily on the energy transmitted through electricity for the operation of electric machines (from household appliances to industrial machines to now-ubiquitous electronic systems used in communications, business, and government) and for traffic lights, street lights, and indoor lighting. Cities rely to a lesser extent on hydrocarbon fuels such as gasoline and natural gas for transportation, heating, and cooking. Telecommunications infrastructure such as telephone lines and coaxial cables also traverse cities, forming dense networks for mass and point-to-point communications.",
"title": "Infrastructure"
},
{
"paragraph_id": 80,
"text": "Because cities rely on specialization and an economic system based on wage labor, their inhabitants must have the ability to regularly travel between home, work, commerce, and entertainment. City dwellers travel by foot or by wheel on roads and walkways, or use special rapid transit systems based on underground, overground, and elevated rail. Cities also rely on long-distance transportation (truck, rail, and airplane) for economic connections with other cities and rural areas.",
"title": "Infrastructure"
},
{
"paragraph_id": 81,
"text": "City streets historically were the domain of horses and their riders and pedestrians, who only sometimes had sidewalks and special walking areas reserved for them. In the West, bicycles or (velocipedes), efficient human-powered machines for short- and medium-distance travel, enjoyed a period of popularity at the beginning of the twentieth century before the rise of automobiles. Soon after, they gained a more lasting foothold in Asian and African cities under European influence. In Western cities, industrializing, expanding, and electrifying public transit systems, and especially streetcars enabled urban expansion as new residential neighborhoods sprung up along transit lines and workers rode to and from work downtown.",
"title": "Infrastructure"
},
{
"paragraph_id": 82,
"text": "Since the mid-20th century, cities have relied heavily on motor vehicle transportation, with major implications for their layout, environment, and aesthetics. (This transformation occurred most dramatically in the US—where corporate and governmental policies favored automobile transport systems—and to a lesser extent in Europe.) The rise of personal cars accompanied the expansion of urban economic areas into much larger metropolises, subsequently creating ubiquitous traffic issues with the accompanying construction of new highways, wider streets, and alternative walkways for pedestrians. However, severe traffic jams still occur regularly in cities around the world, as private car ownership and urbanization continue to increase, overwhelming existing urban street networks.",
"title": "Infrastructure"
},
{
"paragraph_id": 83,
"text": "The urban bus system, the world's most common form of public transport, uses a network of scheduled routes to move people through the city, alongside cars, on the roads. The economic function itself also became more decentralized as concentration became impractical and employers relocated to more car-friendly locations (including edge cities). Some cities have introduced bus rapid transit systems which include exclusive bus lanes and other methods for prioritizing bus traffic over private cars. Many big American cities still operate conventional public transit by rail, as exemplified by the ever-popular New York City Subway system. Rapid transit is widely used in Europe and has increased in Latin America and Asia.",
"title": "Infrastructure"
},
{
"paragraph_id": 84,
"text": "Walking and cycling (\"non-motorized transport\") enjoy increasing favor (more pedestrian zones and bike lanes) in American and Asian urban transportation planning, under the influence of such trends as the Healthy Cities movement, the drive for sustainable development, and the idea of a carfree city. Techniques such as road space rationing and road use charges have been introduced to limit urban car traffic.",
"title": "Infrastructure"
},
{
"paragraph_id": 85,
"text": "The housing of residents presents one of the major challenges every city must face. Adequate housing entails not only physical shelters but also the physical systems necessary to sustain life and economic activity.",
"title": "Housing"
},
{
"paragraph_id": 86,
"text": "Homeownership represents status and a modicum of economic security, compared to renting which may consume much of the income of low-wage urban workers. Homelessness, or lack of housing, is a challenge currently faced by millions of people in countries rich and poor. Because cities generally have higher population densities than rural areas, city dwellers are more likely to reside in apartments and less likely to live in a single-family home.",
"title": "Housing"
},
{
"paragraph_id": 87,
"text": "Urban ecosystems, influenced as they are by the density of human buildings and activities, differ considerably from those of their rural surroundings. Anthropogenic buildings and waste, as well as cultivation in gardens, create physical and chemical environments which have no equivalents in the wilderness, in some cases enabling exceptional biodiversity. They provide homes not only for immigrant humans but also for immigrant plants, bringing about interactions between species that never previously encountered each other. They introduce frequent disturbances (construction, walking) to plant and animal habitats, creating opportunities for recolonization and thus favoring young ecosystems with r-selected species dominant. On the whole, urban ecosystems are less complex and productive than others, due to the diminished absolute amount of biological interactions.",
"title": "Ecology"
},
{
"paragraph_id": 88,
"text": "Typical urban fauna includes insects (especially ants), rodents (mice, rats), and birds, as well as cats and dogs (domesticated and feral). Large predators are scarce. However, in North America, large predators such as coyotes and white-tailed deer roam in urban wildlife",
"title": "Ecology"
},
{
"paragraph_id": 89,
"text": "Cities generate considerable ecological footprints, locally and at longer distances, due to concentrated populations and technological activities. From one perspective, cities are not ecologically sustainable due to their resource needs. From another, proper management may be able to ameliorate a city's ill effects. Air pollution arises from various forms of combustion, including fireplaces, wood or coal-burning stoves, other heating systems, and internal combustion engines. Industrialized cities, and today third-world megacities, are notorious for veils of smog (industrial haze) that envelop them, posing a chronic threat to the health of their millions of inhabitants. Urban soil contains higher concentrations of heavy metals (especially lead, copper, and nickel) and has lower pH than soil in the comparable wilderness.",
"title": "Ecology"
},
{
"paragraph_id": 90,
"text": "Modern cities are known for creating their own microclimates, due to concrete, asphalt, and other artificial surfaces, which heat up in sunlight and channel rainwater into underground ducts. The temperature in New York City exceeds nearby rural temperatures by an average of 2–3 °C and at times 5–10 °C differences have been recorded. This effect varies nonlinearly with population changes (independently of the city's physical size). Aerial particulates increase rainfall by 5–10%. Thus, urban areas experience unique climates, with earlier flowering and later leaf dropping than in nearby countries.",
"title": "Ecology"
},
{
"paragraph_id": 91,
"text": "Poor and working-class people face disproportionate exposure to environmental risks (known as environmental racism when intersecting also with racial segregation). For example, within the urban microclimate, less-vegetated poor neighborhoods bear more of the heat (but have fewer means of coping with it).",
"title": "Ecology"
},
{
"paragraph_id": 92,
"text": "One of the main methods of improving the urban ecology is including in the cities more urban green spaces: parks, gardens, lawns, and trees. These areas improve the health and well-being of the human, animal, and plant populations of the cities. Well-maintained urban trees can provide many social, ecological, and physical benefits to the residents of the city.",
"title": "Ecology"
},
{
"paragraph_id": 93,
"text": "A study published in Nature's Scientific Reports journal in 2019 found that people who spent at least two hours per week in nature were 23 percent more likely to be satisfied with their life and were 59 percent more likely to be in good health than those who had zero exposure. The study used data from almost 20,000 people in the UK. Benefits increased for up to 300 minutes of exposure. The benefits are applied to men and women of all ages, as well as across different ethnicities, socioeconomic statuses, and even those with long-term illnesses and disabilities. People who did not get at least two hours – even if they surpassed an hour per week – did not get the benefits. The study is the latest addition to a compelling body of evidence for the health benefits of nature. Many doctors already give nature prescriptions to their patients. The study didn't count time spent in a person's own yard or garden as time in nature, but the majority of nature visits in the study took place within two miles of home. \"Even visiting local urban green spaces seems to be a good thing,\" Dr. White said in a press release. \"Two hours a week is hopefully a realistic target for many people, especially given that it can be spread over an entire week to get the benefit.\"",
"title": "Ecology"
},
{
"paragraph_id": 94,
"text": "As the world becomes more closely linked through economics, politics, technology, and culture (a process called globalization), cities have come to play a leading role in transnational affairs, exceeding the limitations of international relations conducted by national governments. This phenomenon, resurgent today, can be traced back to the Silk Road, Phoenicia, and the Greek city-states, through the Hanseatic League and other alliances of cities. Today the information economy based on high-speed internet infrastructure enables instantaneous telecommunication around the world, effectively eliminating the distance between cities for the purposes of the international markets and other high-level elements of the world economy, as well as personal communications and mass media.",
"title": "World city system"
},
{
"paragraph_id": 95,
"text": "A global city, also known as a world city, is a prominent centre of trade, banking, finance, innovation, and markets. Saskia Sassen used the term \"global city\" in her 1991 work, The Global City: New York, London, Tokyo to refer to a city's power, status, and cosmopolitanism, rather than to its size. Following this view of cities, it is possible to rank the world's cities hierarchically. Global cities form the capstone of the global hierarchy, exerting command and control through their economic and political influence. Global cities may have reached their status due to early transition to post-industrialism or through inertia which has enabled them to maintain their dominance from the industrial era. This type of ranking exemplifies an emerging discourse in which cities, considered variations on the same ideal type, must compete with each other globally to achieve prosperity.",
"title": "World city system"
},
{
"paragraph_id": 96,
"text": "Critics of the notion point to the different realms of power and interchange. The term \"global city\" is heavily influenced by economic factors and, thus, may not account for places that are otherwise significant. Paul James, for example argues that the term is \"reductive and skewed\" in its focus on financial systems.",
"title": "World city system"
},
{
"paragraph_id": 97,
"text": "Multinational corporations and banks make their headquarters in global cities and conduct much of their business within this context. American firms dominate the international markets for law and engineering and maintain branches in the biggest foreign global cities.",
"title": "World city system"
},
{
"paragraph_id": 98,
"text": "Large cities have a great divide between populations of both ends of the financial spectrum. Regulations on immigration promote the exploitation of low- and high-skilled immigrant workers from poor areas. During employment, migrant workers may be subject to unfair working conditions, including working overtime, low wages, and lack of safety in workplaces.",
"title": "World city system"
},
{
"paragraph_id": 99,
"text": "Cities increasingly participate in world political activities independently of their enclosing nation-states. Early examples of this phenomenon are the sister city relationship and the promotion of multi-level governance within the European Union as a technique for European integration. Cities including Hamburg, Prague, Amsterdam, The Hague, and City of London maintain their own embassies to the European Union at Brussels.",
"title": "World city system"
},
{
"paragraph_id": 100,
"text": "New urban dwellers are increasingly transmigrants, keeping one foot each (through telecommunications if not travel) in their old and their new homes.",
"title": "World city system"
},
{
"paragraph_id": 101,
"text": "Cities participate in global governance by various means including membership in global networks which transmit norms and regulations. At the general, global level, United Cities and Local Governments (UCLG) is a significant umbrella organization for cities; regionally and nationally, Eurocities, Asian Network of Major Cities 21, the Federation of Canadian Municipalities the National League of Cities, and the United States Conference of Mayors play similar roles. UCLG took responsibility for creating Agenda 21 for culture, a program for cultural policies promoting sustainable development, and has organized various conferences and reports for its furtherance.",
"title": "World city system"
},
{
"paragraph_id": 102,
"text": "Networks have become especially prevalent in the arena of environmentalism and specifically climate change following the adoption of Agenda 21. Environmental city networks include the C40 Cities Climate Leadership Group, the United Nations Global Compact Cities Programme, the Carbon Neutral Cities Alliance (CNCA), the Covenant of Mayors and the Compact of Mayors, ICLEI – Local Governments for Sustainability, and the Transition Towns network.",
"title": "World city system"
},
{
"paragraph_id": 103,
"text": "Cities with world political status as meeting places for advocacy groups, non-governmental organizations, lobbyists, educational institutions, intelligence agencies, military contractors, information technology firms, and other groups with a stake in world policymaking. They are consequently also sites for symbolic protest. South Africa has one of the highest rate of protests in the world. Pretoria, a city in South Africa had a rally where 5 thousand people took part in order to advocate for increasing wages to afford living costs.",
"title": "World city system"
},
{
"paragraph_id": 104,
"text": "The United Nations System has been involved in a series of events and declarations dealing with the development of cities during this period of rapid urbanization.",
"title": "World city system"
},
{
"paragraph_id": 105,
"text": "UN-Habitat coordinates the U.N. urban agenda, working with the UN Environmental Programme, the UN Development Programme, the Office of the High Commissioner for Human Rights, the World Health Organization, and the World Bank.",
"title": "World city system"
},
{
"paragraph_id": 106,
"text": "The World Bank, a U.N. specialized agency, has been a primary force in promoting the Habitat conferences, and since the first Habitat conference has used their declarations as a framework for issuing loans for urban infrastructure. The bank's structural adjustment programs contributed to urbanization in the Third World by creating incentives to move to cities. The World Bank and UN-Habitat in 1999 jointly established the Cities Alliance (based at the World Bank headquarters in Washington, D.C.) to guide policymaking, knowledge sharing, and grant distribution around the issue of urban poverty. (UN-Habitat plays an advisory role in evaluating the quality of a locality's governance.) The Bank's policies have tended to focus on bolstering real estate markets through credit and technical assistance.",
"title": "World city system"
},
{
"paragraph_id": 107,
"text": "The United Nations Educational, Scientific and Cultural Organization, UNESCO has increasingly focused on cities as key sites for influencing cultural governance. It has developed various city networks including the International Coalition of Cities against Racism and the Creative Cities Network. UNESCO's capacity to select World Heritage Sites gives the organization significant influence over cultural capital, tourism, and historic preservation funding.",
"title": "World city system"
},
{
"paragraph_id": 108,
"text": "Cities figure prominently in traditional Western culture, appearing in the Bible in both evil and holy forms, symbolized by Babylon and Jerusalem. Cain and Nimrod are the first city builders in the Book of Genesis. In Sumerian mythology Gilgamesh built the walls of Uruk.",
"title": "Representation in culture"
},
{
"paragraph_id": 109,
"text": "Cities can be perceived in terms of extremes or opposites: at once liberating and oppressive, wealthy and poor, organized and chaotic. The name anti-urbanism refers to various types of ideological opposition to cities, whether because of their culture or their political relationship with the country. Such opposition may result from identification of cities with oppression and the ruling elite. This and other political ideologies strongly influence narratives and themes in discourse about cities. In turn, cities symbolize their home societies.",
"title": "Representation in culture"
},
{
"paragraph_id": 110,
"text": "Writers, painters, and filmmakers have produced innumerable works of art concerning the urban experience. Classical and medieval literature includes a genre of descriptiones which treat of city features and history. Modern authors such as Charles Dickens and James Joyce are famous for evocative descriptions of their home cities. Fritz Lang conceived the idea for his influential 1927 film Metropolis while visiting Times Square and marveling at the nighttime neon lighting. Other early cinematic representations of cities in the twentieth century generally depicted them as technologically efficient spaces with smoothly functioning systems of automobile transport. By the 1960s, however, traffic congestion began to appear in such films as The Fast Lady (1962) and Playtime (1967).",
"title": "Representation in culture"
},
{
"paragraph_id": 111,
"text": "Literature, film, and other forms of popular culture have supplied visions of future cities both utopian and dystopian. The prospect of expanding, communicating, and increasingly interdependent world cities has given rise to images such as Nylonkong (New York, London, Hong Kong) and visions of a single world-encompassing ecumenopolis.",
"title": "Representation in culture"
},
{
"paragraph_id": 112,
"text": "",
"title": "Gallery"
}
] | A city is a human settlement of a notable size. The term "city" has different meanings around the world and in some places the settlement can be very small. Even where the term is limited to larger settlements, there is no fixed definition of the lower boundary for their size. In a more narrow sense, a city can be defined as a permanent and densely settled place with administratively defined boundaries whose members work primarily on non-agricultural tasks. Cities generally have extensive systems for housing, transportation, sanitation, utilities, land use, production of goods, and communication. Their density facilitates interaction between people, government organizations, and businesses, sometimes benefiting different parties in the process, such as improving the efficiency of goods and service distribution. Historically, city dwellers have been a small proportion of humanity overall, but following two centuries of unprecedented and rapid urbanization, more than half of the world population now lives in cities, which has had profound consequences for global sustainability. Present-day cities usually form the core of larger metropolitan areas and urban areas—creating numerous commuters traveling toward city centres for employment, entertainment, and education. However, in a world of intensifying globalization, all cities are to varying degrees also connected globally beyond these regions. This increased influence means that cities also have significant influences on global issues, such as sustainable development, climate change, and global health. Because of these major influences on global issues, the international community has prioritized investment in sustainable cities through Sustainable Development Goal 11. Due to the efficiency of transportation and the smaller land consumption, dense cities hold the potential to have a smaller ecological footprint per inhabitant than more sparsely populated areas. Therefore, compact cities are often referred to as a crucial element in fighting climate change. However, this concentration can also have significant negative consequences, such as forming urban heat islands, concentrating pollution, and stressing water supplies and other resources. Other important traits of cities besides population include the capital status and relative continued occupation of the city. For example, country capitals such as Athens, Beijing, Jakarta, Kuala Lumpur, London, Manila, Mexico City, Moscow, Nairobi, New Delhi, Paris, Rome, Seoul, Singapore, Tokyo, and Washington, D.C. reflect the identity and apex of their respective nations. Some historic capitals, such as Kyoto, Yogyakarta, and Xi'an, maintain their reflection of cultural identity even without modern capital status. Religious holy sites offer another example of capital status within a religion; examples include Jerusalem, Mecca, Varanasi, Ayodhya, Haridwar, and Prayagraj. | 2001-11-13T18:34:20Z | 2023-12-30T07:05:53Z | [
"Template:Clarify",
"Template:Circa",
"Template:Gallery",
"Template:Portal-inline",
"Template:Notelist",
"Template:Doi",
"Template:Authority control",
"Template:Further",
"Template:See also",
"Template:Cite triumph",
"Template:Curlie",
"Template:Lang",
"Template:Webarchive",
"Template:ISBN",
"Template:Harv",
"Template:Wiktionary",
"Template:Cite web",
"Template:Cite encyclopedia",
"Template:Sister project links",
"Template:As of",
"Template:Use dmy dates",
"Template:Cite news",
"Template:\"'",
"Template:City topics",
"Template:Short description",
"Template:Sfn",
"Template:Main",
"Template:Efn",
"Template:Cite journal",
"Template:Other uses",
"Template:Reflist",
"Template:Cite book",
"Template:Refbegin",
"Template:Refend",
"Template:Terms for types of administrative territorial entities",
"Template:Excerpt",
"Template:Dead link",
"Template:Cite magazine",
"Template:Land-use planning",
"Template:Wide image"
] | https://en.wikipedia.org/wiki/City |
5,394 | Chervil | Chervil (/ˈtʃɜːrˌvɪl/; Anthriscus cerefolium), sometimes called French parsley or garden chervil (to distinguish it from similar plants also called chervil), is a delicate annual herb related to parsley. It was formerly called myrhis due to its volatile oil with an aroma similar to the resinous substance myrrh. It is commonly used to season mild-flavoured dishes and is a constituent of the French herb mixture fines herbes.
The name chervil is from Anglo-Norman, from Latin chaerephylla or choerephyllum, meaning "leaves of joy"; the Latin is formed as if from an Ancient Greek word χαιρέφυλλον (chairephyllon).
A member of the Apiaceae, chervil is native to the Caucasus but was spread by the Romans through most of Europe, where it is now naturalised. It is also grown frequently in the United States, where it sometimes escapes cultivation. Such escape can be recognized, however, as garden chervil is distinguished from all other Anthriscus species growing in North America (i.e., A. caucalis and A. sylvestris) by its having lanceolate-linear bracteoles and a fruit with a relatively long beak.
The plants grow to 40–70 cm (16–28 in), with tripinnate leaves that may be curly. The small white flowers form small umbels, 2.5–5 cm (1–2 in) across. The fruit is about 1 cm long, oblong-ovoid with a slender, ridged beak.
Chervil is used, particularly in France, to season poultry, seafood, young spring vegetables (such as carrots), soups, and sauces. More delicate than parsley, it has a faint taste of liquorice or aniseed.
Chervil is one of the four traditional French fines herbes, along with tarragon, chives, and parsley, which are essential to French cooking. Unlike the more pungent, robust herbs such as thyme and rosemary, which can take prolonged cooking, the fines herbes are added at the last minute, to salads, omelettes, and soups.
Essential oil obtained via water distillation of wild Turkish Anthriscus cerefolium was analyzed by gas chromatography–mass spectrometry, identifying four compounds: methyl chavicol (83.10%), 1-allyl-2,4-dimethoxybenzene (15.15%), undecane (1.75%), and β-pinene (<0.01%).
According to some, slugs are attracted to chervil and the plant is sometimes used to bait them.
Chervil has had various uses in folk medicine. It was claimed to be useful as a digestive aid, for lowering high blood pressure, and, infused with vinegar, for curing hiccups. Besides its digestive properties, it is used as a mild stimulant.
Chervil has also been implicated in "strimmer dermatitis", another name for phytophotodermatitis, due to spray from weed trimmers and similar forms of contact. Other plants in the family Apiaceae can have similar effects.
Transplanting chervil can be difficult, due to the long taproot. It prefers a cool and moist location; otherwise, it rapidly goes to seed (also known as bolting). It is usually grown as a cool-season crop, like lettuce, and should be planted in early spring and late fall or in a winter greenhouse. Regular harvesting of leaves also helps to prevent bolting. If plants bolt despite precautions, the plant can be periodically re-sown throughout the growing season, thus producing fresh plants as older plants bolt and go out of production.
Chervil grows to a height of 12 to 24 inches (30 to 60 cm), and a width of 6 to 12 inches (15 to 30 cm). | [
{
"paragraph_id": 0,
"text": "Chervil (/ˈtʃɜːrˌvɪl/; Anthriscus cerefolium), sometimes called French parsley or garden chervil (to distinguish it from similar plants also called chervil), is a delicate annual herb related to parsley. It was formerly called myrhis due to its volatile oil with an aroma similar to the resinous substance myrrh. It is commonly used to season mild-flavoured dishes and is a constituent of the French herb mixture fines herbes.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The name chervil is from Anglo-Norman, from Latin chaerephylla or choerephyllum, meaning \"leaves of joy\"; the Latin is formed, as from an Ancient Greek word χαιρέφυλλον (chairephyllon).",
"title": "Name"
},
{
"paragraph_id": 2,
"text": "A member of the Apiaceae, chervil is native to the Caucasus but was spread by the Romans through most of Europe, where it is now naturalised. It is also grown frequently in the United States, where it sometimes escapes cultivation. Such escape can be recognized, however, as garden chervil is distinguished from all other Anthriscus species growing in North America (i.e., A. caucalis and A. sylvestris) by its having lanceolate-linear bracteoles and a fruit with a relatively long beak.",
"title": "Biology"
},
{
"paragraph_id": 3,
"text": "The plants grow to 40–70 cm (16–28 in), with tripinnate leaves that may be curly. The small white flowers form small umbels, 2.5–5 cm (1–2 in) across. The fruit is about 1 cm long, oblong-ovoid with a slender, ridged beak.",
"title": "Biology"
},
{
"paragraph_id": 4,
"text": "Chervil is used, particularly in France, to season poultry, seafood, young spring vegetables (such as carrots), soups, and sauces. More delicate than parsley, it has a faint taste of liquorice or aniseed.",
"title": "Uses and impact"
},
{
"paragraph_id": 5,
"text": "Chervil is one of the four traditional French fines herbes, along with tarragon, chives, and parsley, which are essential to French cooking. Unlike the more pungent, robust herbs such as thyme and rosemary, which can take prolonged cooking, the fines herbes are added at the last minute, to salads, omelettes, and soups.",
"title": "Uses and impact"
},
{
"paragraph_id": 6,
"text": "Essential oil obtained via water distillation of wild Turkish Anthriscus cerefolium was analyzed by gas chromatography - mass spectrometry identifying 4 compounds: methyl chavicol (83.10%), 1-allyl-2,4-dimethoxybenzene (15.15%), undecane (1.75%) and β-pinene (<0.01%).",
"title": "Uses and impact"
},
{
"paragraph_id": 7,
"text": "According to some, slugs are attracted to chervil and the plant is sometimes used to bait them.",
"title": "Uses and impact"
},
{
"paragraph_id": 8,
"text": "Chervil has had various uses in folk medicine. It was claimed to be useful as a digestive aid, for lowering high blood pressure, and, infused with vinegar, for curing hiccups. Besides its digestive properties, it is used as a mild stimulant.",
"title": "Uses and impact"
},
{
"paragraph_id": 9,
"text": "Chervil has also been implicated in \"strimmer dermatitis\", another name for phytophotodermatitis, due to spray from weed trimmers and similar forms of contact. Other plants in the family Apiaceae can have similar effects.",
"title": "Uses and impact"
},
{
"paragraph_id": 10,
"text": "Transplanting chervil can be difficult, due to the long taproot. It prefers a cool and moist location; otherwise, it rapidly goes to seed (also known as bolting). It is usually grown as a cool-season crop, like lettuce, and should be planted in early spring and late fall or in a winter greenhouse. Regular harvesting of leaves also helps to prevent bolting. If plants bolt despite precautions, the plant can be periodically re-sown throughout the growing season, thus producing fresh plants as older plants bolt and go out of production.",
"title": "Cultivation"
},
{
"paragraph_id": 11,
"text": "Chervil grows to a height of 12 to 24 inches (30 to 60 cm), and a width of 6 to 12 inches (15 to 30 cm).",
"title": "Cultivation"
}
] | Chervil, sometimes called French parsley or garden chervil, is a delicate annual herb related to parsley. It was formerly called myrhis due to its volatile oil with an aroma similar to the resinous substance myrrh. It is commonly used to season mild-flavoured dishes and is a constituent of the French herb mixture fines herbes. | 2001-07-04T06:18:16Z | 2023-11-23T08:35:04Z | [
"Template:IPAc-en",
"Template:Lang",
"Template:Cite web",
"Template:Cite journal",
"Template:Edible Apiaceae",
"Template:Taxonbar",
"Template:Short description",
"Template:About",
"Template:Use dmy dates",
"Template:Reflist",
"Template:Cite book",
"Template:Herbs & spices",
"Template:Speciesbox",
"Template:Convert",
"Template:NIE Poster"
] | https://en.wikipedia.org/wiki/Chervil |
5,395 | Chives | Chives, scientific name Allium schoenoprasum, is a species of flowering plant in the family Amaryllidaceae that produces edible leaves and flowers. Their close relatives include the common onions, garlic, shallot, leek, scallion, and Chinese onion.
A perennial plant, it is widespread in nature across much of Europe, Asia, and North America.
A. schoenoprasum is the only species of Allium native to both the New and the Old Worlds.
Chives are a commonly used herb and can be found in grocery stores or grown in home gardens. In culinary use, the green stalks (scapes) and the unopened, immature flower buds are diced and used as an ingredient for omelettes, fish, potatoes, soups, and many other dishes. The edible flowers can be used in salads. Chives have insect-repelling properties that can be used in gardens to control pests.
The plant provides a great deal of nectar for pollinators. It was rated in the top 10 for most nectar production (nectar per unit cover per year) in a UK plants survey conducted by the AgriLand project which is supported by the UK Insect Pollinators Initiative.
Chives are a bulb-forming herbaceous perennial plant, growing to 30–50 cm (12–20 in) tall. The bulbs are slender, conical, 2–3 cm (3⁄4–1+1⁄4 in) long and 1 cm (1⁄2 in) broad, and grow in dense clusters from the roots. The scapes (or stems) are hollow and tubular, up to 50 cm (20 in) long and 2–3 mm (1⁄16–1⁄8 in) across, with a soft texture, although, prior to the emergence of a flower, they may appear stiffer than usual. The grass-like leaves, which are shorter than the scapes, are also hollow and tubular, or terete, (round in cross-section) which distinguishes it at a glance from garlic chives (Allium tuberosum).
The flowers are pale purple, and star-shaped with six petals, 1–2 cm (1⁄2–3⁄4 in) wide, and produced in a dense inflorescence of 10-30 together; before opening, the inflorescence is surrounded by a papery bract. The seeds are produced in a small, three-valved capsule, maturing in summer. The herb flowers from April to May in the southern parts of its habitat zones and in June in the northern parts.
Chives are the only species of Allium native to both the New and the Old Worlds. Sometimes, the plants found in North America are classified as A. schoenoprasum var. sibiricum, although this is disputed. Differences between specimens are significant. One example was found in northern Maine growing solitary, instead of in clumps, also exhibiting dingy grey flowers.
Although chives are repulsive to insects in general, due to their sulfur compounds, their flowers attract bees, and they are at times kept to increase desired insect life.
It was formally described by the Swedish botanist Carl Linnaeus in his seminal publication Species Plantarum in 1753.
The name of the species derives from the Greek σχοίνος, skhoínos (sedge or rush) and πράσον, práson (leek). Its English name, chives, derives from the French word cive, from cepa, the Latin word for onion. In the Middle Ages, it was known as 'rush leek'.
Some subspecies have been proposed, but are not accepted by Plants of the World Online, as of July 2021, which sinks them into the species:
Varieties have also been proposed, including A. schoenoprasum var. sibiricum. The Flora of North America notes that the species is very variable, and considers recognition of varieties as "unsound".
Chives are native to temperate areas of Europe, Asia and North America.
It is found in Asia within the Caucasus (in Armenia, Azerbaijan and Georgia), also in China, Iran, Iraq, Japan (within the islands of Hokkaido and Honshu), Kazakhstan, Kyrgyzstan, Mongolia, Pakistan, Russian Federation (within the krais of Kamchatka, Khabarovsk, and Primorye) Siberia and Turkey.
In middle Europe, it is found within Austria, the Czech Republic, Germany, the Netherlands, Poland and Switzerland. In northern Europe, in Denmark, Finland, Norway, Sweden and the United Kingdom. In southeastern Europe, within Bulgaria, Greece, Italy and Romania. It is also found in southwestern Europe, in France, Portugal and Spain.
In North America, it is found in Canada (within the provinces and territories of Alberta, British Columbia, Manitoba, Northwest Territories, Nova Scotia, New Brunswick, Newfoundland, Nunavut, Ontario, Prince Edward Island, Quebec, Saskatchewan and Yukon), and the United States (within the states of Alaska, Colorado, Connecticut, Idaho, Maine, Maryland, Massachusetts, Michigan, Minnesota, Montana, New Hampshire, New Jersey, New York, Ohio, Oregon, Pennsylvania, Rhode Island, Vermont, Washington, West Virginia, Wisconsin and Wyoming).
Chives are grown for their scapes and leaves, which are used for culinary purposes as a flavoring herb, and provide a somewhat milder onion-like flavor than those of other Allium species.
Chives have a wide variety of culinary uses, such as in traditional dishes in France, Sweden, and elsewhere. In his 1806 book Attempt at a Flora (Försök til en flora), Anders Jahan Retzius describes how chives are used with pancakes, soups, fish, and sandwiches. They are also an ingredient of the gräddfil sauce with the traditional herring dish served at Swedish midsummer celebrations. The flowers may also be used to garnish dishes.
In Poland and Germany, chives are served with quark. Chives are one of the fines herbes of French cuisine, the others being tarragon, chervil and parsley. Chives can be found fresh at most markets year-round, making them readily available; they can also be dry-frozen without much impairment to the taste, giving home growers the opportunity to store large quantities harvested from their own gardens.
Retzius also describes how farmers would plant chives between the rocks making up the borders of their flowerbeds, to keep the plants free from pests (such as Japanese beetles). The growing plant repels unwanted insect life, and the juice of the leaves can be used for the same purpose, as well as fighting fungal infections, mildew, and scab.
Chives are cultivated both for their culinary uses and for their ornamental value; the violet flowers are often used in ornamental dry bouquets. The flowers are also edible and are used in salads, or used to make blossom vinegars.
Chives thrive in well-drained soil, rich in organic matter, with a pH of 6-7 and full sun. They can be grown from seed and mature in summer, or early the following spring. Typically, chives need to be germinated at a temperature of 15 to 20 °C (60-70 °F) and kept moist. They can also be planted under a cloche or germinated indoors in cooler climates, then planted out later. After at least four weeks, the young shoots should be ready to be planted out. They are also easily propagated by division.
In cold regions, chives die back to the underground bulbs in winter, with the new leaves appearing in early spring. Chives starting to look old can be cut back to about 2–5 cm. When harvesting, the needed number of stalks should be cut to the base. During the growing season, the plant continually regrows leaves, allowing for a continuous harvest.
Chives are susceptible to damage by leek moth larvae, which bore into the leaves or bulbs of the plant.
Chives have been cultivated in Europe since the Middle Ages (from the fifth until the 15th centuries), although their usage dates back 5,000 years. They were sometimes referred to as "rush leeks".
It was mentioned in 80 A.D. by Marcus Valerius Martialis in his "Epigrams".
He who bears chives on his breath, Is safe from being kissed to death.
The Romans believed chives could relieve the pain from sunburn or a sore throat. They believed eating chives could increase blood pressure and act as a diuretic.
Romani have used chives in fortune telling. Bunches of dried chives hung around a house were believed to ward off disease and evil.
In the 19th century, Dutch farmers fed cattle on the herb to give a different taste to their milk. | [
{
"paragraph_id": 0,
"text": "Chives, scientific name Allium schoenoprasum, is a species of flowering plant in the family Amaryllidaceae that produces edible leaves and flowers. Their close relatives include the common onions, garlic, shallot, leek, scallion, and Chinese onion.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A perennial plant, it is widespread in nature across much of Europe, Asia, and North America.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A. schoenoprasum is the only species of Allium native to both the New and the Old Worlds.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Chives are a commonly used herb and can be found in grocery stores or grown in home gardens. In culinary use, the green stalks (scapes) and the unopened, immature flower buds are diced and used as an ingredient for omelettes, fish, potatoes, soups, and many other dishes. The edible flowers can be used in salads. Chives have insect-repelling properties that can be used in gardens to control pests.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The plant provides a great deal of nectar for pollinators. It was rated in the top 10 for most nectar production (nectar per unit cover per year) in a UK plants survey conducted by the AgriLand project which is supported by the UK Insect Pollinators Initiative.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Chives are a bulb-forming herbaceous perennial plant, growing to 30–50 cm (12–20 in) tall. The bulbs are slender, conical, 2–3 cm (3⁄4–1+1⁄4 in) long and 1 cm (1⁄2 in) broad, and grow in dense clusters from the roots. The scapes (or stems) are hollow and tubular, up to 50 cm (20 in) long and 2–3 mm (1⁄16–1⁄8 in) across, with a soft texture, although, prior to the emergence of a flower, they may appear stiffer than usual. The grass-like leaves, which are shorter than the scapes, are also hollow and tubular, or terete, (round in cross-section) which distinguishes it at a glance from garlic chives (Allium tuberosum).",
"title": "Description"
},
{
"paragraph_id": 6,
"text": "The flowers are pale purple, and star-shaped with six petals, 1–2 cm (1⁄2–3⁄4 in) wide, and produced in a dense inflorescence of 10-30 together; before opening, the inflorescence is surrounded by a papery bract. The seeds are produced in a small, three-valved capsule, maturing in summer. The herb flowers from April to May in the southern parts of its habitat zones and in June in the northern parts.",
"title": "Description"
},
{
"paragraph_id": 7,
"text": "Chives are the only species of Allium native to both the New and the Old Worlds. Sometimes, the plants found in North America are classified as A. schoenoprasum var. sibiricum, although this is disputed. Differences between specimens are significant. One example was found in northern Maine growing solitary, instead of in clumps, also exhibiting dingy grey flowers.",
"title": "Description"
},
{
"paragraph_id": 8,
"text": "Although chives are repulsive to insects in general, due to their sulfur compounds, their flowers attract bees, and they are at times kept to increase desired insect life.",
"title": "Description"
},
{
"paragraph_id": 9,
"text": "It was formally described by the Swedish botanist Carl Linnaeus in his seminal publication Species Plantarum in 1753.",
"title": "Taxonomy"
},
{
"paragraph_id": 10,
"text": "The name of the species derives from the Greek σχοίνος, skhoínos (sedge or rush) and πράσον, práson (leek). Its English name, chives, derives from the French word cive, from cepa, the Latin word for onion. In the Middle Ages, it was known as 'rush leek'.",
"title": "Taxonomy"
},
{
"paragraph_id": 11,
"text": "Some subspecies have been proposed, but are not accepted by Plants of the World Online, as of July 2021, which sinks them into the species:",
"title": "Taxonomy"
},
{
"paragraph_id": 12,
"text": "Varieties have also been proposed, including A. schoenoprasum var. sibiricum. The Flora of North America notes that the species is very variable, and considers recognition of varieties as \"unsound\".",
"title": "Taxonomy"
},
{
"paragraph_id": 13,
"text": "Chives are native to temperate areas of Europe, Asia and North America.",
"title": "Distribution and habitat"
},
{
"paragraph_id": 14,
"text": "It is found in Asia within the Caucasus (in Armenia, Azerbaijan and Georgia), also in China, Iran, Iraq, Japan (within the islands of Hokkaido and Honshu), Kazakhstan, Kyrgyzstan, Mongolia, Pakistan, Russian Federation (within the krais of Kamchatka, Khabarovsk, and Primorye) Siberia and Turkey.",
"title": "Distribution and habitat"
},
{
"paragraph_id": 15,
"text": "In middle Europe, it is found within Austria, the Czech Republic, Germany, the Netherlands, Poland and Switzerland. In northern Europe, in Denmark, Finland, Norway, Sweden and the United Kingdom. In southeastern Europe, within Bulgaria, Greece, Italy and Romania. It is also found in southwestern Europe, in France, Portugal and Spain.",
"title": "Distribution and habitat"
},
{
"paragraph_id": 16,
"text": "In North America, it is found in Canada (within the provinces and territories of Alberta, British Columbia, Manitoba, Northwest Territories, Nova Scotia, New Brunswick, Newfoundland, Nunavut, Ontario, Prince Edward Island, Quebec, Saskatchewan and Yukon), and the United States (within the states of Alaska, Colorado, Connecticut, Idaho, Maine, Maryland, Massachusetts, Michigan, Minnesota, Montana, New Hampshire, New Jersey, New York, Ohio, Oregon, Pennsylvania, Rhode Island, Vermont, Washington, West Virginia, Wisconsin and Wyoming).",
"title": "Distribution and habitat"
}
]
https://en.wikipedia.org/wiki/Chives
5,397 | Chris Morris (satirist) | Christopher J. Morris (born 15 June 1962) is an English comedian, radio presenter, actor, and filmmaker. Known for his deadpan, dark humour, surrealism, and controversial subject matter, he has been praised by the British Film Institute for his "uncompromising, moralistic drive".
In the early 1990s, Morris teamed up with his radio producer Armando Iannucci to create On the Hour, a satire of news programmes. This was expanded into a television spin off, The Day Today, which launched the career of comedian Steve Coogan and has since been hailed as one of the most important satirical shows of the 1990s. Morris further developed the satirical news format with Brass Eye, which lampooned celebrities whilst focusing on themes such as crime and drugs. For many, the apotheosis of Morris' career was a Brass Eye special, which dealt with the moral panic surrounding paedophilia. It quickly became one of the most complained-about programmes in British television history, leading the Daily Mail to describe him as "the most loathed man on TV".
Meanwhile, Morris' postmodern sketch comedy and ambient music radio show Blue Jam, which had seen controversy similar to Brass Eye, helped him to gain a cult following. Blue Jam was adapted into the TV series Jam, which some hailed as "the most radical and original television programme broadcast in years", and he went on to win the BAFTA Award for Best Short Film after expanding a Blue Jam sketch into My Wrongs 8245–8249 & 117, which starred Paddy Considine. This was followed by Nathan Barley, a sitcom written in collaboration with a then little-known Charlie Brooker that satirised hipsters, which had low ratings but found success upon its DVD release. Morris followed this by joining the cast of the sitcom The IT Crowd, his first project in which he did not have writing or producing input.
In 2010, Morris directed his first feature-length film, Four Lions, which satirised Islamic terrorism through a group of inept British Muslims. Reception of the film was largely positive, earning Morris his second BAFTA Film Award, this time for Outstanding Debut. Since 2012, he has directed four episodes of Iannucci's political comedy Veep and appeared onscreen in The Double and Stewart Lee's Comedy Vehicle. His second feature-length film, The Day Shall Come, was released in 2019.
Christopher J. Morris was born on 15 June 1962 in Colchester, Essex, the son of Rosemary Parrington and Paul Michael Morris. His father was a GP. Morris has a large red birthmark almost completely covering the left side of his face and neck, which he disguises with makeup when acting. He grew up in a Victorian farmhouse in the village of Buckden, Cambridgeshire, which he described as "very dull". He has two younger brothers, including theatre director Tom Morris. From an early age, he was a prankster and had a passion for radio. From the age of 10, he was educated at the independent Jesuit boarding school Stonyhurst College in Stonyhurst, Lancashire. He went to study zoology at the University of Bristol, where he gained a 2:1.
On graduating, Morris pursued a career as a musician in various bands, for which he played the bass guitar. He then went to work for Radio West, a local radio station in Bristol, before taking up a news traineeship with BBC Radio Cambridgeshire, where he took advantage of access to editing and recording equipment to create elaborate spoofs and parodies. He also spent time in early 1987 hosting a 2–4pm afternoon show and finally ended up presenting the Saturday morning show I.T.
In July 1987, he moved on to BBC Radio Bristol to present his own show, No Known Cure, broadcast on Saturday and Sunday mornings. The show was surreal and satirical, with odd interviews conducted with unsuspecting members of the public. He was fired from Bristol in 1990 after "talking over the news bulletins and making silly noises". In 1988, he also joined Greater London Radio (GLR) from its launch. He presented The Chris Morris Show on GLR until 1993, when the show was suspended after a sketch was broadcast involving a child "outing" celebrities.
In 1991, Morris joined Armando Iannucci's spoof news project On the Hour. Broadcast on BBC Radio 4, it saw him work alongside Iannucci, Steve Coogan, Stewart Lee, Richard Herring and Rebecca Front. In 1992, Morris hosted Danny Baker's Radio 5 Morning Edition show for a week whilst Baker was on holiday. In 1994, Morris began a weekly evening show, the Chris Morris Music Show, on BBC Radio 1 alongside Peter Baynham and 'man with a mobile phone' Paul Garner. In the shows, Morris perfected the spoof interview style that would become a central component of his Brass Eye programme. In the same year, Morris teamed up with Peter Cook (as Sir Arthur Streeb-Greebling) in a series of improvised conversations for BBC Radio 3 entitled Why Bother?.
"If you make a joke in an area which is for some reason, normally random, out of bounds, then you might find something out, you might put your finger on something."
Chris Morris
In 1994, a BBC Two television series based on On the Hour was broadcast under the name The Day Today. The Day Today made a star of Morris, and marked the television debut of Steve Coogan's Alan Partridge character. The programme ended on a high after just one series, with Morris winning the 1994 British Comedy Award for Best Newcomer for his lead role as the Paxmanesque news anchor.
In 1996, Morris appeared on the daytime programme The Time, The Place, posing as an academic, Thurston Lowe, in a discussion entitled "Are British Men Lousy Lovers?", but was found out when a producer alerted the show's host, John Stapleton.
In 1997, the black humour which had featured in On the Hour and The Day Today became more prominent in Brass Eye, another spoof of current affairs television documentaries, shown on Channel 4. All three series satirised and exaggerated issues expected of news shows. The second episode of Brass Eye, for example, satirised drugs and the political rhetoric surrounding them. To help convey the satire, Morris invented a fictional drug by the name of "cake". In the episode, British celebrities and politicians described the supposed symptoms in detail; David Amess mentioned the fictional drug in Parliament. In 2001, Morris satirised the moral panic regarding paedophilia in the most controversial episode of Brass Eye, "Paedogeddon". Channel 4 apologised for the episode after receiving criticism from tabloids and around 3,000 complaints from viewers, at the time the most ever received for a single episode of British television.
From 1997 to 1999, Morris created Blue Jam for BBC Radio 1, a surreal taboo-breaking radio show set to an ambient soundtrack. In 2000, this was followed by Jam, a television reworking. Morris released a 'remix' version of this, entitled Jaaaaam.
In 2002, Morris ventured into film, directing the short My Wrongs #8245–8249 & 117, adapted from a Blue Jam monologue about a man led astray by a sinister talking dog. It was the first film project of Warp Films, a branch of Warp Records. It won the 2002 BAFTA Award for Best Short Film. In 2005, Morris worked on a sitcom entitled Nathan Barley, based on the character created by Charlie Brooker for his website TVGoHome (Morris had contributed to TVGoHome on occasion, under the pseudonym 'Sid Peach'). Co-written by Brooker and Morris, the series was broadcast on Channel 4 in early 2005.
Morris appeared in The IT Crowd, a Channel 4 sitcom which focuses on the information technology department of the fictional company Reynholm Industries. The series was written and directed by Graham Linehan (with whom Morris collaborated on The Day Today, Brass Eye and Jam) and produced by Ash Atalla. Morris played Denholm Reynholm, the eccentric managing director of the company. This marked the first time Morris had acted in a substantial role in a project which he had not developed himself. Morris' character appeared to leave the series during episode two of the second series. His character made a brief return in the first episode of the third series.
In November 2007, Morris wrote an article for The Observer in response to Ronan Bennett's article published six days earlier in The Guardian. Bennett's article, "Shame on us", accused the novelist Martin Amis of racism. Morris' response, "The absurd world of Martin Amis", was also highly critical of Amis; although he did not accede to Bennett's accusation of racism, Morris likened Amis to the Muslim cleric Abu Hamza (who was jailed for inciting racial hatred in 2006), suggesting that both men employ "mock erudition, vitriol and decontextualised quotes from the Qu'ran" to incite hatred.
Morris served as script editor for the 2009 series Stewart Lee's Comedy Vehicle, working with former colleagues Stewart Lee, Kevin Eldon and Armando Iannucci. He maintained this role for the second (2011) and third series (2014), also appearing as a mock interviewer dubbed the "hostile interrogator" in the third and fourth series.
"I don't really see the point of comedy unless there's something underpinning it. I mean, what are you doing? Are you doing some kind of exotic display for the court, to be patted on the head by the court, or are you trying to change something?"
— Morris discussing the motives behind his comedy
Morris completed his debut feature film Four Lions in late 2009, a satire based on a group of Islamist terrorists in Sheffield. It premiered at the Sundance Film Festival in January 2010 and was short-listed for the festival's World Cinema Narrative prize. The film (working title Boilerhouse) was picked up by Film Four. Morris told The Sunday Times that the film sought to do for Islamic terrorism what Dad's Army, the classic BBC comedy, did for the Nazis by showing them as "scary but also ridiculous".
In 2012, Morris directed the seventh and penultimate episode of the first season of Veep, an Armando Iannucci-devised American version of The Thick of It. In 2013, he returned to direct two episodes for the second season of Veep, and a further episode for season three in 2014.
In 2013, Morris appeared briefly in Richard Ayoade's The Double, a black comedy film based on the Fyodor Dostoyevsky novella of the same name. Morris had previously worked with Ayoade on Nathan Barley and The IT Crowd.
In February 2014, Morris made a surprise appearance at the beginning of a Stewart Lee live show, introducing the comedian with fictional anecdotes about their work together. The following month, Morris appeared in the third series of Stewart Lee's Comedy Vehicle as a "hostile interrogator", a role previously occupied by Armando Iannucci.
In December 2014, it was announced that a short radio collaboration with Noel Fielding and Richard Ayoade would be broadcast on BBC Radio 6. According to Fielding, the work had been in progress since around 2006. However, in January 2015 it was decided, 'in consultation with [Morris]', that the project was not yet complete, and so the intended broadcast did not go ahead.
A statement released by Film4 in February 2016 made reference to funding what would be Morris' second feature film. In November 2017 it was reported that Morris had shot the movie, starring Anna Kendrick, in the Dominican Republic but the title was not made public. It was later reported in January 2018 that Jim Gaffigan and Rupert Friend had joined the cast of the still-untitled film, and that the plot would revolve around an FBI hostage situation gone wrong. The completed film, titled The Day Shall Come, had its world premiere at South by Southwest on 11 March 2019.
Morris often co-writes and performs incidental music for his television shows, notably with Jam and the 'extended remix' version, Jaaaaam. In the early 1990s Morris contributed a Pixies parody track entitled "Motherbanger" to a flexi-disc given away with an edition of Select music magazine. Morris supplied sketches for British band Saint Etienne's 1993 single "You're in a Bad Way" (the sketch 'Spongbake' appears at the end of the 4th track on the CD single).
In 2000, he collaborated by mail with Amon Tobin to create the track "Bad Sex", which was released as a B-side on the Tobin single "Slowly". British band Stereolab's song "Nothing to Do with Me" from their 2001 album Sound-Dust featured various lines from Chris Morris sketches as lyrics.
Ramsey Ess of Vulture described Morris' comedy style as "crass" and "shocking", but noted an "underlying morality" and integrity, as well as the fact that the humour remains Morris' priority.
In 2003, Morris was listed in The Observer as one of the 50 funniest acts in British comedy. In 2005, Channel 4 aired a show called The Comedian's Comedian in which foremost writers and performers of comedy ranked their 50 favourite acts. Morris was at number eleven. Morris won the BAFTA for outstanding debut with his film Four Lions. Adeel Akhtar and Nigel Lindsay collected the award in his absence. Lindsay stated that Morris had sent him a text message before they collected the award reading, 'Doused in petrol, Zippo at the ready'. In June 2012 Morris was placed at number 16 in the Top 100 People in UK Comedy.
In 2010, a biography, Disgusting Bliss: The Brass Eye of Chris Morris, was published. Written by Lucian Randall, the book depicted Morris as "brilliant but uncompromising", and a "frantic-minded perfectionist".
In November 2014, a three-hour retrospective of Morris' radio career was broadcast on BBC Radio 4 Extra under the title 'Raw Meat Radio', presented by Mary Anne Hobbs and featuring interviews with Armando Iannucci, Peter Baynham, Paul Garner, and others.
Morris won the Best TV Comedy Newcomer award from the British Comedy Awards in 1994 for his performance in The Day Today. He has won two BAFTA awards: the BAFTA Award for Best Short Film in 2002 for My Wrongs #8245–8249 & 117, and the BAFTA Award for Outstanding Debut by a British director, writer or producer in 2011 for Four Lions.
Morris and his wife, actress-turned-literary agent Jo Unwin, live in the Brixton district of London. The pair met in 1984 at the Edinburgh Festival, when he was playing bass guitar for the Cambridge Footlights Revue and she was in a comedy troupe called the Millies. They have two sons, Charles and Frederick, both of whom were born in Lambeth in south London.
Giving very few interviews and avoiding all social media, Morris has been described as a recluse.
https://en.wikipedia.org/wiki/Chris_Morris_(satirist)
5,399 | Colorado | Colorado (/ˌkɒləˈrædoʊ, -ˈrɑːdoʊ/ , other variants) is a state in the Mountain West subregion of the Western United States. It encompasses most of the Southern Rocky Mountains, as well as the northeastern portion of the Colorado Plateau and the western edge of the Great Plains. Colorado is the eighth most extensive and 21st most populous U.S. state. The United States Census Bureau estimated the population of Colorado at 5,839,926 as of July 1, 2022, a 1.15% increase since the 2020 United States census.
The region has been inhabited by Native Americans and their ancestors for at least 13,500 years and possibly much longer. The eastern edge of the Rocky Mountains was a major migration route for early peoples who spread throughout the Americas. In 1848, much of the region was annexed to the United States with the Treaty of Guadalupe Hidalgo. The Pike's Peak Gold Rush of 1858–1862 created an influx of settlers. On February 28, 1861, U.S. President James Buchanan signed an act creating the Territory of Colorado, and on August 1, 1876, President Ulysses S. Grant signed Proclamation 230 admitting Colorado to the Union as the 38th state. The Spanish adjective "colorado" means "colored red" or "ruddy". Colorado is nicknamed the "Centennial State" because it became a state one century (and four weeks) after the signing of the United States Declaration of Independence.
Colorado is bordered by Wyoming to the north, Nebraska to the northeast, Kansas to the east, Oklahoma to the southeast, New Mexico to the south, Utah to the west, and touches Arizona to the southwest at the Four Corners. Colorado is noted for its landscape of mountains, forests, high plains, mesas, canyons, plateaus, rivers, and desert lands. Colorado is one of the Mountain States and is often considered to be part of the southwestern United States. The high plains of Colorado may be considered a part of the midwestern United States.
Denver is the capital, the most populous city, and the center of the Front Range Urban Corridor. Colorado Springs is the second most populous city. Residents of the state are known as Coloradans, although the antiquated "Coloradoan" is occasionally used. Major parts of the economy include government and defense, mining, agriculture, tourism, and increasingly other kinds of manufacturing. With increasing temperatures and decreasing water availability, Colorado's agriculture, forestry, and tourism economies are expected to be heavily affected by climate change.
The region that is today the State of Colorado has been inhabited by Native Americans and their Paleoamerican ancestors for at least 13,500 years and possibly more than 37,000 years. The eastern edge of the Rocky Mountains was a major migration route that was important to the spread of early peoples throughout the Americas. The Lindenmeier site in Larimer County contains artifacts dating from approximately 8720 BCE. The Ancient Pueblo peoples lived in the valleys and mesas of the Colorado Plateau. The Ute Nation inhabited the mountain valleys of the Southern Rocky Mountains and the Western Rocky Mountains, even as far east as the Front Range of the present day. The Apache and the Comanche also inhabited the Eastern and Southeastern parts of the state. In the 17th century, the Arapaho and Cheyenne moved west from the Great Lakes region to hunt across the High Plains of Colorado and Wyoming.
The Spanish Empire claimed Colorado as part of its New Mexico province before U.S. involvement in the region. The U.S. acquired a territorial claim to the eastern Rocky Mountains with the Louisiana Purchase from France in 1803. This U.S. claim conflicted with the claim by Spain to the upper Arkansas River Basin as the exclusive trading zone of its colony of Santa Fe de Nuevo México. In 1806, Zebulon Pike led a U.S. Army reconnaissance expedition into the disputed region. Colonel Pike and his troops were arrested by Spanish cavalrymen in the San Luis Valley the following February, taken to Chihuahua, and expelled from Mexico the following July.
The U.S. relinquished its claim to all land south and west of the Arkansas River and south of the 42nd parallel north and west of the 100th meridian west as part of its purchase of Florida from Spain with the Adams–Onís Treaty of 1819. The treaty took effect on February 22, 1821. Having settled its border with Spain, the U.S. admitted the southeastern portion of the Territory of Missouri to the Union as the state of Missouri on August 10, 1821. The remainder of Missouri Territory, including what would become northeastern Colorado, became an unorganized territory and remained so for 33 years over the question of slavery. After 11 years of war, Spain finally recognized the independence of Mexico with the Treaty of Córdoba signed on August 24, 1821. Mexico eventually ratified the Adams–Onís Treaty in 1831. The Texian Revolt of 1835–36 fomented a dispute between the U.S. and Mexico which eventually erupted into the Mexican–American War in 1846. Mexico surrendered its northern territory to the U.S. with the Treaty of Guadalupe Hidalgo after the war in 1848; this included much of the western and southern areas of the current state of Colorado.
Most American settlers traveling overland west to the Oregon Country, the new goldfields of California, or the new Mormon settlements of the State of Deseret in the Salt Lake Valley, avoided the rugged Southern Rocky Mountains, and instead followed the North Platte River and Sweetwater River to South Pass (Wyoming), the lowest crossing of the Continental Divide between the Southern Rocky Mountains and the Central Rocky Mountains. In 1849, the Mormons of the Salt Lake Valley organized the extralegal State of Deseret, claiming the entire Great Basin and all lands drained by the rivers Green, Grand, and Colorado. The federal government of the U.S. flatly refused to recognize the new Mormon government, because it was theocratic and sanctioned plural marriage. Instead, the Compromise of 1850 divided the Mexican Cession and the northwestern claims of Texas into a new state and two new territories, the state of California, the Territory of New Mexico, and the Territory of Utah. On April 9, 1851, Mexican American settlers from the area of Taos settled the village of San Luis, then in the New Mexico Territory, later to become Colorado's first permanent Euro-American settlement.
In 1854, Senator Stephen A. Douglas persuaded the U.S. Congress to divide the unorganized territory east of the Continental Divide into two new organized territories, the Territory of Kansas and the Territory of Nebraska, and an unorganized southern region known as the Indian territory. Each new territory was to decide the fate of slavery within its boundaries, but this compromise merely served to fuel animosity between free soil and pro-slavery factions.
Gold seekers drawn by the Pike's Peak Gold Rush organized the Provisional Government of the Territory of Jefferson on August 24, 1859, but this new territory failed to secure approval from the Congress of the United States, which was embroiled in the debate over slavery. The election of Abraham Lincoln as President of the United States on November 6, 1860, led to the secession of nine southern slave states and the threat of civil war among the states. Seeking to augment the political power of the Union states, the Republican Party-dominated Congress quickly admitted the eastern portion of the Territory of Kansas into the Union as the free State of Kansas on January 29, 1861, leaving the western portion of the Kansas Territory, and its gold-mining areas, as unorganized territory.
Thirty days later, on February 28, 1861, outgoing U.S. President James Buchanan signed an Act of Congress organizing the free Territory of Colorado. The original boundaries of Colorado remain unchanged except for government survey amendments. The territory took its name from the Colorado River. In 1776, Spanish priest Silvestre Vélez de Escalante recorded that Native Americans in the area knew the river as el Rio Colorado for the red-brown silt that the river carried from the mountains. In 1859, a U.S. Army topographic expedition led by Captain John Macomb located the confluence of the Green River with the Grand River in what is now Canyonlands National Park in Utah. The Macomb party designated the confluence as the source of the Colorado River.
On April 12, 1861, South Carolina artillery opened fire on Fort Sumter to start the American Civil War. While many gold seekers held sympathies for the Confederacy, the vast majority remained fiercely loyal to the Union cause.
In 1862, a force of Texas cavalry invaded the Territory of New Mexico and captured Santa Fe on March 10. The object of this Western Campaign was to seize or disrupt the gold fields of Colorado and California and to seize ports on the Pacific Ocean for the Confederacy. A hastily organized force of Colorado volunteers force-marched from Denver City, Colorado Territory, to Glorieta Pass, New Mexico Territory, in an attempt to block the Texans. On March 28, the Coloradans and local New Mexico volunteers stopped the Texans at the Battle of Glorieta Pass, destroyed their cannon and supply wagons, and dispersed 500 of their horses and mules. The Texans were forced to retreat to Santa Fe. Having lost the supplies for their campaign and finding little support in New Mexico, the Texans abandoned Santa Fe and returned to San Antonio in defeat. The Confederacy made no further attempts to seize the Southwestern United States.
In 1864, Territorial Governor John Evans appointed the Reverend John Chivington as Colonel of the Colorado Volunteers with orders to protect white settlers from Cheyenne and Arapaho warriors who were accused of stealing cattle. Colonel Chivington ordered his troops to attack a band of Cheyenne and Arapaho encamped along Sand Creek. Chivington reported that his troops killed more than 500 warriors. The militia returned to Denver City in triumph, but several officers reported that the so-called battle was a blatant massacre of Indians at peace, that most of the dead were women and children, and that the bodies of the dead had been hideously mutilated and desecrated. Three U.S. Army inquiries condemned the action, and incoming President Andrew Johnson asked Governor Evans for his resignation, but none of the perpetrators was ever punished. This event is now known as the Sand Creek massacre.
In the midst and aftermath of the Civil War, many discouraged prospectors returned to their homes, but a few stayed and developed mines, mills, farms, ranches, roads, and towns in Colorado Territory. On September 14, 1864, James Huff discovered silver near Argentine Pass, the first of many silver strikes. In 1867, the Union Pacific Railroad laid its tracks west to Weir, now Julesburg, in the northeast corner of the Territory. The Union Pacific linked up with the Central Pacific Railroad at Promontory Summit, Utah, on May 10, 1869, to form the First transcontinental railroad. The Denver Pacific Railway reached Denver in June of the following year, and the Kansas Pacific arrived two months later to forge the second line across the continent. In 1872, rich veins of silver were discovered in the San Juan Mountains on the Ute Indian reservation in southwestern Colorado. The Ute people were removed from the San Juans the following year.
The United States Congress passed an enabling act on March 3, 1875, specifying the requirements for the Territory of Colorado to become a state. On August 1, 1876 (four weeks after the Centennial of the United States), U.S. President Ulysses S. Grant signed a proclamation admitting Colorado to the Union as the 38th state and earning it the moniker "Centennial State".
The discovery of a major silver lode near Leadville in 1878 triggered the Colorado Silver Boom. The Sherman Silver Purchase Act of 1890 invigorated silver mining, and Colorado's last, but greatest, gold strike at Cripple Creek a few months later lured a new generation of gold seekers. Colorado women were granted the right to vote on November 7, 1893, making Colorado the second state to grant universal suffrage and the first one by a popular vote (of Colorado men). The repeal of the Sherman Silver Purchase Act in 1893 led to a staggering collapse of the mining and agricultural economy of Colorado, but the state slowly and steadily recovered. Between the 1880s and 1930s, Denver's floriculture industry developed into a major industry in Colorado. This period became known locally as the Carnation Gold Rush.
Poor labor conditions and discontent among miners resulted in several major clashes between strikers and the Colorado National Guard, including the 1903–1904 Western Federation of Miners strike and the Colorado Coalfield War, the latter of which included the Ludlow massacre that killed a dozen women and children. Both the 1913–1914 Coalfield War and the Denver streetcar strike of 1920 resulted in federal troops intervening to end the violence. The 1927–28 Colorado coal strike ultimately won a dollar-a-day increase in wages, but during the strike the Columbine Mine massacre left six strikers dead after a confrontation with the Colorado Rangers. In a separate incident in Trinidad, the mayor was accused of deputizing members of the KKK against the striking workers. More than 5,000 Colorado miners, many of them immigrants, are estimated to have died in accidents since records were first formally collected following an 1884 accident in Crested Butte that killed 59.
In 1924, the Ku Klux Klan Colorado Realm achieved dominance in Colorado politics. At its peak membership, the Second Klan levied significant control over both the local and state Democratic and Republican parties, particularly in the governor's office and the city governments of Denver, Cañon City, and Durango. A particularly strong element of the Klan controlled the Denver Police. Cross burnings became semi-regular occurrences in cities such as Florence and Pueblo. The Klan targeted African-Americans, Catholics, Eastern European immigrants, and other non-White Protestant groups. Efforts by non-Klan lawmen and lawyers, including Philip Van Cise, led to a rapid decline in the organization's power, with membership waning significantly by the end of the 1920s.
Colorado became the first western state to host a major political convention when the Democratic Party met in Denver in 1908. By the U.S. census in 1930, the population of Colorado first exceeded one million residents. Colorado suffered greatly through the Great Depression and the Dust Bowl of the 1930s, but a major wave of immigration following World War II boosted Colorado's fortune. Tourism became a mainstay of the state economy, and high technology became an important economic engine. The United States Census Bureau estimated that the population of Colorado exceeded five million in 2009.
On September 11, 1957, a plutonium fire occurred at the Rocky Flats Plant, which resulted in the significant plutonium contamination of surrounding populated areas.
From the 1940s to the 1970s, many protest movements gained momentum in Colorado, predominantly in Denver. These included the Chicano Movement, a civil rights and social movement of Mexican Americans emphasizing a Chicano identity that is widely considered to have begun in Denver. The National Chicano Liberation Youth Conference was held in Colorado in March 1969.
In 1967, Colorado was the first state to loosen restrictions on abortion when governor John Love signed a law allowing abortions in cases of rape, incest, or threats to the woman's mental or physical health. Many states followed Colorado's lead in loosening abortion laws in the 1960s and 1970s.
Since the late 1990s, Colorado has been the site of multiple major mass shootings, including the Columbine High School massacre in 1999, in which two gunmen killed 12 students and one teacher before committing suicide; the attack made international news and has since spawned many copycat incidents. On July 20, 2012, a gunman killed 12 people in a movie theater in Aurora. The state responded with tighter restrictions on firearms, including a limit on magazine capacity. On March 22, 2021, a gunman killed 10 people, including a police officer, in a King Soopers supermarket in Boulder. In an instance of anti-LGBT violence, a gunman killed 5 people at a nightclub in Colorado Springs during the night of November 19–20, 2022.
Four warships of the U.S. Navy have been named the USS Colorado. The first USS Colorado was named for the Colorado River and served in the Civil War and later in the Asiatic Squadron, where it was attacked during the 1871 Korean Expedition. The three later ships were named in honor of the state, including an armored cruiser and the battleship USS Colorado, the latter of which was the lead ship of her class and served in World War II in the Pacific beginning in 1941. At the time of the attack on Pearl Harbor, the battleship USS Colorado was undergoing overhaul at the Puget Sound Navy Yard in Bremerton, Washington, and thus went unscathed. The most recent vessel to bear the name USS Colorado is the Virginia-class submarine USS Colorado (SSN-788), which was commissioned in 2018.
Colorado is notable for its diverse geography, which includes alpine mountains, high plains, deserts with huge sand dunes, and deep canyons. In 1861, the United States Congress defined the boundaries of the new Territory of Colorado exclusively by lines of latitude and longitude, stretching from 37°N to 41°N latitude, and from 102°02′48″W to 109°02′48″W longitude (25°W to 32°W from the Washington Meridian). After 162 years of government surveys, the borders of Colorado were officially defined by 697 boundary markers and 697 straight boundary lines. Colorado, Wyoming, and Utah are the only states that have their borders defined solely by straight boundary lines with no natural features. The southwest corner of Colorado is marked by the Four Corners Monument at 36°59′56″N, 109°2′43″W, where Colorado, New Mexico, Arizona, and Utah meet; it is the only place in the United States where four states meet.
Approximately half of Colorado is flat and rolling land. East of the Rocky Mountains are the Colorado Eastern Plains of the High Plains, the section of the Great Plains within Colorado at elevations ranging from roughly 3,350 to 7,500 feet (1,020 to 2,290 m). The Colorado plains are mostly prairies but also include deciduous forests, buttes, and canyons. Precipitation averages 15 to 25 inches (380 to 640 mm) annually.
Eastern Colorado is presently mainly farmland and rangeland, along with small farming villages and towns. Corn, wheat, hay, soybeans, and oats are all typical crops. Most villages and towns in this region boast both a water tower and a grain elevator. Irrigation water is available from both surface and subterranean sources. Surface water sources include the South Platte, the Arkansas River, and a few other streams. Subterranean water is generally accessed through artesian wells. Heavy usage of these wells for irrigation purposes caused underground water reserves to decline in the region. Eastern Colorado also hosts a considerable amount and range of livestock, such as cattle ranches and hog farms.
Roughly 70% of Colorado's population resides along the eastern edge of the Rocky Mountains in the Front Range Urban Corridor between Cheyenne, Wyoming, and Pueblo, Colorado. This region is partially protected from prevailing storms that blow in from the Pacific Ocean region by the high Rockies in the middle of Colorado. The "Front Range" includes Denver, Boulder, Fort Collins, Loveland, Castle Rock, Colorado Springs, Pueblo, Greeley, and other townships and municipalities in between. On the other side of the Rockies, the significant population centers in western Colorado (which is known as "The Western Slope") are the cities of Grand Junction, Durango, and Montrose.
To the west of the Great Plains of Colorado rises the eastern slope of the Rocky Mountains. Notable peaks of the Rocky Mountains include Longs Peak, Mount Blue Sky, Pikes Peak, and the Spanish Peaks near Walsenburg, in southern Colorado. This area drains to the east and the southeast, ultimately either via the Mississippi River or the Rio Grande into the Gulf of Mexico.
The Rocky Mountains within Colorado contain a total of 58 summits, 53 of them true peaks, that are 14,000 feet (4,267 m) or higher in elevation above sea level, known as fourteeners. These mountains are largely covered with trees such as conifers and aspens up to the tree line, at an elevation of about 12,000 feet (3,658 m) in southern Colorado to about 10,500 feet (3,200 m) in northern Colorado. Above this tree line, only alpine vegetation grows. Only small parts of the Colorado Rockies are snow-covered year-round.
Much of the alpine snow melts by mid-August except for a few snow-capped peaks and a few small glaciers. The Colorado Mineral Belt, stretching from the San Juan Mountains in the southwest to Boulder and Central City on the front range, contains most of the historic gold- and silver-mining districts of Colorado. Mount Elbert is the highest summit of the Rocky Mountains. The 30 highest major summits of the Rocky Mountains of North America are all within the state.
The summit of Mount Elbert at 14,440 feet (4,401.2 m) elevation in Lake County is the highest point in Colorado and the Rocky Mountains of North America. Colorado is the only U.S. state that lies entirely above 1,000 meters elevation. The point where the Arikaree River flows out of Yuma County, Colorado, and into Cheyenne County, Kansas, is the lowest in Colorado at 3,317 feet (1,011 m) elevation. This point, which is the highest low elevation point of any state, is higher than the high elevation points of 18 states and the District of Columbia.
The Continental Divide of the Americas extends along the crest of the Rocky Mountains. The area of Colorado to the west of the Continental Divide is called the Western Slope of Colorado. West of the Continental Divide, water flows to the southwest via the Colorado River and the Green River into the Gulf of California.
Within the interior of the Rocky Mountains are several large parks which are high broad basins. In the north, on the east side of the Continental Divide is the North Park of Colorado. The North Park is drained by the North Platte River, which flows north into Wyoming and Nebraska. Just to the south of North Park, but on the western side of the Continental Divide, is the Middle Park of Colorado, which is drained by the Colorado River. The South Park of Colorado is the region of the headwaters of the South Platte River.
In south-central Colorado is the large San Luis Valley, where the headwaters of the Rio Grande are located. The northern part of the valley is the San Luis Closed Basin, an endorheic basin that helped create the Great Sand Dunes. The valley sits between the Sangre de Cristo Mountains and the San Juan Mountains. The Rio Grande drains due south into New Mexico, Texas, and Mexico. Across the Sangre de Cristo Range to the east of the San Luis Valley lies the Wet Mountain Valley. These basins, particularly the San Luis Valley, lie along the Rio Grande Rift, a major geological formation of the Rocky Mountains, and its branches.
The Western Slope of Colorado includes the western face of the Rocky Mountains and all of the area to the western border. This area includes several terrains and climates from alpine mountains to arid deserts. The Western Slope includes many ski resort towns in the Rocky Mountains and towns west to Utah. It is less populous than the Front Range but includes a large number of national parks and monuments.
The northwestern corner of Colorado is a sparsely populated region, and it contains part of the noted Dinosaur National Monument, which not only is a paleontological area, but is also a scenic area of rocky hills, canyons, arid desert, and streambeds. Here, the Green River briefly crosses over into Colorado.
The Western Slope of Colorado is drained by the Colorado River and its tributaries (primarily the Gunnison River, Green River, and the San Juan River). The Colorado River flows through Glenwood Canyon, and then through an arid valley made up of desert from Rifle to Parachute, through the desert canyon of De Beque Canyon, and into the arid desert of Grand Valley, where the city of Grand Junction is located.
Also prominent is the Grand Mesa, which lies to the southeast of Grand Junction; the high San Juan Mountains, a rugged mountain range; and to the north and west of the San Juan Mountains, the Colorado Plateau.
Grand Junction, Colorado, at the confluence of the Colorado and Gunnison Rivers, is the largest city on the Western Slope. Grand Junction and Durango are the only major centers of television broadcasting west of the Continental Divide in Colorado, though most mountain resort communities publish daily newspapers. Grand Junction is located at the juncture of Interstate 70 and US 50, the only major highways in western Colorado. Grand Junction is also along the major railroad of the Western Slope, the Union Pacific. This railroad also provides the tracks for Amtrak's California Zephyr passenger train, which crosses the Rocky Mountains between Denver and Grand Junction.
The Western Slope includes multiple notable destinations in the Colorado Rocky Mountains, including Glenwood Springs, with its resort hot springs, and the ski resorts of Aspen, Breckenridge, Vail, Crested Butte, Steamboat Springs, and Telluride.
Higher education in and near the Western Slope can be found at Colorado Mesa University in Grand Junction, Western Colorado University in Gunnison, Fort Lewis College in Durango, and Colorado Mountain College in Glenwood Springs and Steamboat Springs.
The Four Corners Monument in the southwest corner of Colorado marks the common boundary of Colorado, New Mexico, Arizona, and Utah; the only such place in the United States.
The climate of Colorado is more complex than that of states outside the Mountain States region. Unlike most other states, southern Colorado is not always warmer than northern Colorado. Most of Colorado is made up of mountains, foothills, high plains, and desert lands. Mountains and surrounding valleys greatly affect the local climate. Northeast, east, and southeast Colorado are mostly high plains, while northern Colorado is a mix of high plains, foothills, and mountains. Northwest and west Colorado are predominantly mountainous, with some desert lands mixed in. Southwest and southern Colorado are a complex mixture of desert and mountain areas.
The climate of the Eastern Plains is semi-arid (Köppen climate classification: BSk) with low humidity and moderate precipitation, usually from 15 to 25 inches (380 to 640 millimeters) annually, although many areas near the rivers have a semi-humid climate. The area is known for its abundant sunshine and cool, clear nights, which give this area a great average diurnal temperature range. The difference between the highs of the days and the lows of the nights can be considerable, as warmth dissipates to space during clear nights, the heat radiation not being trapped by clouds. The Front Range urban corridor, where most of the population of Colorado resides, lies in a pronounced precipitation shadow as a result of being on the lee side of the Rocky Mountains.
In summer, this area can have many days above 95 °F (35 °C) and often 100 °F (38 °C). On the plains, the winter lows usually range from 25 to −10 °F (−4 to −23 °C). About 75% of the precipitation falls within the growing season, from April to September, but this area is very prone to droughts. Most of the precipitation comes from thunderstorms, which can be severe, and from major snowstorms that occur in the winter and early spring. Otherwise, winters tend to be mostly dry and cold.
In much of the region, March is the snowiest month. April and May are normally the rainiest months, while April is the wettest month overall. The Front Range cities closer to the mountains tend to be warmer in the winter due to Chinook winds, which warm the area, sometimes bringing temperatures of 70 °F (21 °C) or higher in the winter. The average July temperature is 55 °F (13 °C) in the morning and 90 °F (32 °C) in the afternoon. The average January temperature is 18 °F (−8 °C) in the morning and 48 °F (9 °C) in the afternoon, although the variation between consecutive days can be 40 °F (22 °C).
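The January and July figures above are absolute temperatures, while the 40 °F day-to-day swing is a temperature difference, and the two convert to Celsius differently. A minimal Python sketch (not part of the original text) showing both conversions:

```python
# Fahrenheit-Celsius conversions for the figures above (plain Python, no libraries).
def f_to_c(temp_f: float) -> float:
    """Convert an absolute temperature."""
    return (temp_f - 32.0) * 5.0 / 9.0

def f_span_to_c(span_f: float) -> float:
    """Convert a temperature difference; the 32-degree offset does not apply."""
    return span_f * 5.0 / 9.0

print(round(f_to_c(90), 1))       # 32.2 -> a 90 F July afternoon is about 32 C
print(round(f_to_c(18), 1))       # -7.8 -> an 18 F January morning is about -8 C
print(round(f_span_to_c(40), 1))  # 22.2 -> a 40 F day-to-day swing is about 22 C
```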
Just west of the plains and into the foothills, there is a wide variety of climate types. Locations merely a few miles apart can experience entirely different weather depending on the topography. Most valleys have a semi-arid climate, not unlike the eastern plains, which transitions to an alpine climate at the highest elevations. Microclimates also exist in local areas that run nearly the entire spectrum of climates, including subtropical highland (Cfb/Cwb), humid subtropical (Cfa), humid continental (Dfa/Dfb), Mediterranean (Csa/Csb) and subarctic (Dfc).
Extreme weather changes are common in Colorado, although a significant portion of the extreme weather occurs in the least populated areas of the state. Thunderstorms are common east of the Continental Divide in the spring and summer, yet are usually brief. Hail is a common sight in the mountains east of the Divide and across the eastern Plains, especially the northeast part of the state. Hail is the most commonly reported warm-season severe weather hazard, and occasionally causes human injuries, as well as significant property damage. The eastern Plains are subject to some of the biggest hail storms in North America. Notable examples are the severe hailstorms that hit Denver on July 11, 1990, and May 8, 2017, the latter being the costliest ever in the state.
The Eastern Plains are part of the extreme western portion of Tornado Alley; some damaging tornadoes in the Eastern Plains include the 1990 Limon F3 tornado and the 2008 Windsor EF3 tornado, which devastated a small town. Portions of the eastern Plains see especially frequent tornadoes, both those spawned from mesocyclones in supercell thunderstorms and from less intense landspouts, such as within the Denver convergence vorticity zone (DCVZ).
The Plains are also susceptible to occasional floods and particularly severe flash floods, which are caused both by thunderstorms and by the rapid melting of snow in the mountains during warm weather. Notable examples include the 1965 Denver Flood, the Big Thompson River flooding of 1976 and the 2013 Colorado floods. Hot weather is common during summers in Denver. The city's record in 1901 for the number of consecutive days above 90 °F (32 °C) was broken during the summer of 2008. The new record of 24 consecutive days surpassed the previous record by almost a week.
Much of Colorado is very dry, averaging only 17 inches (430 millimeters) of precipitation per year statewide. The state rarely experiences a time when some portion is not in some degree of drought. The lack of precipitation contributes to the severity of wildfires in the state, such as the Hayman Fire of 2002. Other notable fires include the Fourmile Canyon Fire of 2010, the Waldo Canyon Fire and High Park Fire of June 2012, and the Black Forest Fire of June 2013. Even these fires were exceeded in severity by the Pine Gulch Fire, Cameron Peak Fire, and East Troublesome Fire in 2020, which are the three largest fires in Colorado history (see 2020 Colorado wildfires). The Marshall Fire, which started on December 30, 2021, while not the largest in state history, was the most destructive ever in terms of property loss (see Marshall Fire).
However, some of the mountainous regions of Colorado receive a huge amount of moisture from winter snowfalls. The spring melts of these snows often cause great waterflows in the Yampa River, the Colorado River, the Rio Grande, the Arkansas River, the North Platte River, and the South Platte River.
Water flowing out of the Colorado Rocky Mountains is a very significant source of water for the farms, towns, and cities of the southwest states of New Mexico, Arizona, Utah, and Nevada, as well as Midwestern states such as Nebraska and Kansas, and the southern states of Oklahoma and Texas. A significant amount of water is also diverted for use in California; occasionally (formerly naturally and consistently), the flow of water reaches northern Mexico.
Climate change in Colorado encompasses the effects of climate change, attributed to man-made increases in atmospheric carbon dioxide, in the U.S. state of Colorado.
In 2019, The Denver Post reported that "[i]ndividuals living in southeastern Colorado are more vulnerable to potential health effects from climate change than residents in other parts of the state". The United States Environmental Protection Agency has reported more broadly on the effects of climate change across the state.
The highest official ambient air temperature ever recorded in Colorado was 115 °F (46.1 °C) on July 20, 2019, at John Martin Dam. The lowest official air temperature was −61 °F (−51.7 °C) on February 1, 1985, at Maybell.
Despite its mountainous terrain, Colorado is relatively quiet seismically. The U.S. National Earthquake Information Center is located in Golden.
On August 22, 2011, a 5.3 magnitude earthquake occurred 9 miles (14 km) west-southwest of the city of Trinidad. There were no casualties and only a small amount of damage was reported. It was the second-largest earthquake in Colorado's history. A magnitude 5.7 earthquake was recorded in 1973.
In the early morning hours of August 24, 2018, four minor earthquakes rattled Colorado, ranging from magnitude 2.9 to 4.3.
Colorado has recorded 525 earthquakes since 1973, a majority of which range from 2 to 3.5 on the Richter scale.
Gray wolves (Canis lupus) were extirpated from Colorado by trapping and poisoning in the 1930s, and the last wild wolf in the state was shot in 1945. A wolf pack recolonized Moffat County in northwestern Colorado in 2019. Cattle farmers have expressed concern that a returning wolf population could threaten their herds. Coloradans voted to reintroduce gray wolves in 2020, with the state committing to a plan to have a population in the state by 2022 and permitting non-lethal methods of driving off wolves attacking livestock and pets.
While there is fossil evidence of Harrington's mountain goat in Colorado between at least 800,000 years ago and its extinction with other megafauna roughly 11,000 years ago, the mountain goat is not native to Colorado; it was introduced to the state between 1947 and 1972. Despite being an artificially introduced species, the state declared mountain goats a native species in 1993. In 2013, 2014, and 2019, an unknown illness killed nearly all mountain goat kids, leading to a Colorado Parks and Wildlife investigation.
The native population of pronghorn in Colorado has varied wildly over the last century, reaching a low of only 15,000 individuals during the 1960s. However, conservation efforts succeeded in bringing the stable population back up to roughly 66,000 by 2013. The population was estimated to have reached 85,000 by 2019 and has had increasingly frequent run-ins with the expanding suburban housing along the eastern Front Range. State wildlife officials suggested that landowners would need to modify fencing to allow the greater number of pronghorns to move unabated through the newly developed land. Pronghorns are most readily found in the northern and eastern portions of the state, with some populations also in the western San Juan Mountains.
Common wildlife found in the mountains of Colorado include mule deer, southwestern red squirrel, golden-mantled ground squirrel, yellow-bellied marmot, moose, American pika, and red fox, all at exceptionally high numbers, though moose are not native to the state. The foothills include deer, fox squirrel, desert cottontail, mountain cottontail, and coyote. The prairies are home to black-tailed prairie dog, the endangered swift fox, American badger, and white-tailed jackrabbit.
The State of Colorado is divided into 64 counties. Two of these counties, the City and County of Broomfield and the City and County of Denver, have consolidated city and county governments. Counties are important units of government in Colorado since there are no civil townships or other minor civil divisions.
The most populous county in Colorado is El Paso County, the home of the City of Colorado Springs. The second most populous county is the City and County of Denver, the state capital. Five of the 64 counties now have more than 500,000 residents, while 12 have fewer than 5,000 residents. The ten most populous Colorado counties are all located in the Front Range Urban Corridor. Mesa County is the most populous county on the Colorado Western Slope.
Colorado has 272 active incorporated municipalities, comprising 197 towns, 73 cities, and two consolidated city and county governments. At the 2020 United States census, 4,299,942 of the 5,773,714 Colorado residents (74.47%) lived in one of these 272 municipalities. Another 714,417 residents (12.37%) lived in one of the 210 census-designated places, while the remaining 759,355 residents (13.15%) lived in the many rural and mountainous areas of the state.
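The residence shares quoted above follow directly from the 2020 census counts. A small Python sketch, using only the figures given in this paragraph, that reproduces the percentages:

```python
# Check of the 2020 census residence shares quoted above (figures from the text).
total = 5_773_714
municipal = 4_299_942   # residents of the 272 incorporated municipalities
cdp = 714_417           # residents of the 210 census-designated places

rural = total - municipal - cdp  # remaining rural and mountainous areas
for label, count in (("municipal", municipal), ("CDP", cdp), ("rural", rural)):
    print(f"{label}: {count:,} ({count / total:.2%})")
# municipal: 4,299,942 (74.47%)  CDP: 714,417 (12.37%)  rural: 759,355 (13.15%)
```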
Colorado municipalities operate under one of five types of municipal governing authority. Colorado currently has two consolidated city and county governments, 61 home rule cities, 12 statutory cities, 35 home rule towns, 161 statutory towns, and one territorial charter municipality.
The most populous municipality is the City and County of Denver. Colorado has 12 municipalities with more than 100,000 residents, and 17 with fewer than 100 residents. The 16 most populous Colorado municipalities are all located in the Front Range Urban Corridor. The City of Grand Junction is the most populous municipality on the Colorado Western Slope. The Town of Carbonate has had no year-round population since the 1890 census due to its severe winter weather and difficult access.
In addition to its 272 municipalities, Colorado has 210 unincorporated census-designated places (CDPs) and many other small communities. The most populous unincorporated community in Colorado is Highlands Ranch south of Denver. The seven most populous CDPs are located in the Front Range Urban Corridor. The Clifton CDP is the most populous CDP on the Colorado Western Slope.
Colorado has more than 4,000 special districts, most with property tax authority. These districts may provide schools, law enforcement, fire protection, water, sewage, drainage, irrigation, transportation, recreation, infrastructure, cultural facilities, business support, redevelopment, or other services.
Some of these districts have the authority to levy sales tax as well as property tax and use fees. This has led to a hodgepodge of sales tax and property tax rates in Colorado. There are some street intersections in Colorado with a different sales tax rate on each corner, sometimes substantially different.
Some of the more notable Colorado districts are:
Most recently on March 6, 2020, the Office of Management and Budget defined 21 statistical areas for Colorado comprising four combined statistical areas, seven metropolitan statistical areas, and ten micropolitan statistical areas.
The most populous of the seven metropolitan statistical areas in Colorado is the 10-county Denver-Aurora-Lakewood, CO Metropolitan Statistical Area with a population of 2,963,821 at the 2020 United States census, an increase of +15.29% since the 2010 census.
The more extensive 12-county Denver-Aurora, CO Combined Statistical Area had a population of 3,623,560 at the 2020 census, an increase of +17.23% since the 2010 census.
The most populous extended metropolitan region in Rocky Mountain Region is the 18-county Front Range Urban Corridor along the northeast face of the Southern Rocky Mountains. This region with Denver at its center had a population of 5,055,344 at the 2020 census, an increase of +16.65% since the 2010 census.
The United States Census Bureau estimated the population of Colorado on July 1, 2022, at 5,839,926, a 1.15% increase since the 2020 United States census.
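The growth figures in this section are simple percent changes between census counts and later estimates. A short Python sketch, assuming only the numbers quoted above, that verifies the statewide 1.15% increase and backs out the implied 2010 baseline for the Denver metropolitan statistical area:

```python
# Percent-change arithmetic behind the figures quoted above (all inputs from the text).
def pct_increase(new: int, old: int) -> float:
    return (new - old) / old * 100.0

census_2020 = 5_773_714      # statewide 2020 census count
estimate_2022 = 5_839_926    # Census Bureau estimate for July 1, 2022
print(f"{pct_increase(estimate_2022, census_2020):.2f}%")  # 1.15%

# Backing out the implied 2010 count for the Denver-Aurora-Lakewood MSA
# from its 2020 count and the quoted +15.29% increase.
msa_2020 = 2_963_821
print(f"{msa_2020 / 1.1529:,.0f}")  # roughly 2,570,752
```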
Coloradan Hispanics and Latinos (of any race and heritage) made up 20.7% of the population. According to the 2000 census, the largest ancestry groups in Colorado are German (22%) including those of Swiss and Austrian descent, Mexican (18%), Irish (12%), and English (12%). Persons reporting German ancestry are especially numerous in the Front Range, the Rockies (west-central counties), and Eastern parts/High Plains.
Colorado has a high proportion of Hispanic, mostly Mexican-American, citizens in Metropolitan Denver, Colorado Springs, as well as the smaller cities of Greeley and Pueblo, and elsewhere. Southern, Southwestern, and Southeastern Colorado have a large number of Hispanos, the descendants of the early settlers of colonial Spanish origin. In 1940, the U.S. Census Bureau reported Colorado's population as 8.2% Hispanic and 90.3% non-Hispanic White. The Hispanic population of Colorado has continued to grow quickly over the past decades. By 2019, Hispanics made up 22% of Colorado's population, and Non-Hispanic Whites made up 70%. Spoken English in Colorado has many Spanish idioms.
Colorado also has some large African-American communities located in Denver, in the neighborhoods of Montbello, Five Points, Whittier, and many other East Denver areas. The state has sizable numbers of Asian-Americans of Mongolian, Chinese, Filipino, Korean, Southeast Asian, and Japanese descent. The highest population of Asian Americans can be found on the south and southeast side of Denver, as well as some on Denver's southwest side. The Denver metropolitan area is considered more liberal and diverse than much of the state when it comes to political issues and environmental concerns.
The population of Native Americans in the state is small. Native Americans are concentrated in metropolitan Denver and the southwestern corner of Colorado, where there are two Ute reservations.
The majority of Colorado's immigrants are from Mexico, India, China, Vietnam, Korea, Germany and Canada.
There were a total of 70,331 births in Colorado in 2006, a birth rate of 14.6 per thousand. In 2007, non-Hispanic Whites were involved in 59.1% of all births. Some 14.06% of those births involved a non-Hispanic White person and someone of a different race, most often a couple including one Hispanic parent. Births in which at least one Hispanic person was involved accounted for 43% of the births in Colorado. As of the 2010 census, Colorado has the seventh-highest percentage of Hispanics (20.7%) in the U.S., behind New Mexico (46.3%), California (37.6%), Texas (37.6%), Arizona (29.6%), Nevada (26.5%), and Florida (22.5%). Per the 2000 census, the Hispanic population is estimated to be 918,899, or approximately 20% of the state's total population. Colorado has the 5th-largest population of Mexican-Americans, behind California, Texas, Arizona, and Illinois. In percentages, Colorado has the 6th-highest percentage of Mexican-Americans, behind New Mexico, California, Texas, Arizona, and Nevada.
In 2011, 46% of Colorado's population younger than the age of one were minorities, meaning that they had at least one parent who was not non-Hispanic White.
Note: In birth statistics, Hispanics are counted both by their ethnicity and by their race, so category totals exceed the overall number of births.
In 2017, Colorado recorded the second-lowest fertility rate in the United States outside of New England, after Oregon, at 1.63 children per woman. Significant contributing factors to the decline in pregnancies were the Title X Family Planning Program and an intrauterine device grant from Warren Buffett's family.
English, the official language of the state, is the language most commonly spoken in Colorado. One Native American language still spoken in Colorado is the Colorado River Numic language, also known as the Ute dialect.
The most common non-English language spoken in the state is Spanish.
Religious self-identification, per Public Religion Research Institute's 2022 American Values Survey
Major religious affiliations of the people of Colorado as of 2014 were 64% Christian, including 44% Protestant, 16% Roman Catholic, 3% Mormon, and 1% Eastern Orthodox. Other religious breakdowns according to the Pew Research Center were 1% Jewish, 1% Muslim, 1% Buddhist and 4% other. The religiously unaffiliated made up 29% of the population. In 2020, according to the Public Religion Research Institute, Christians made up 66% of the population. Judaism was also reported to have increased in this separate study, forming 2% of the religious landscape, while the religiously unaffiliated were reported to form 28% of the population. In 2022, the same organization reported 61% was Christian (39% Protestant, 19% Catholic, 2% Mormon, 1% Eastern Orthodox), 2% New Age, 1% Jewish, 1% Hindu, and 34% religiously unaffiliated.
According to the Association of Religion Data Archives, the largest Christian denominations by the number of adherents in 2010 were the Catholic Church with 811,630; multi-denominational Evangelical Protestants with 229,981; and the Church of Jesus Christ of Latter-day Saints with 151,433. In 2020, the Association of Religion Data Archives determined the largest Christian denominations were Catholics (873,236), non/multi/inter-denominational Protestants (406,798), and Mormons (150,509). Among the state's non-Christian population, the 2020 study counted 12,500 Hindus, 7,101 Hindu Yogis, and 17,369 Buddhists.
Our Lady of Guadalupe Catholic Church was the first permanent Catholic parish in modern-day Colorado and was constructed by Spanish colonists from New Mexico in modern-day Conejos. Latin Church Catholics are served by three dioceses: the Archdiocese of Denver and the Dioceses of Colorado Springs and Pueblo.
The first permanent settlement by members of the Church of Jesus Christ of Latter-day Saints in Colorado arrived from Mississippi and initially camped along the Arkansas River just east of the present-day site of Pueblo.
Colorado is generally considered among the healthiest states by behavioral and healthcare researchers. Among the positive contributing factors are the state's well-known outdoor recreation opportunities and initiatives. However, health metrics are stratified, with wealthier counties such as Douglas and Pitkin performing significantly better than southern, less wealthy counties such as Huerfano and Las Animas.
According to several studies, Coloradans have the lowest rates of obesity of any state in the US. As of 2018, 24% of the population was considered medically obese, and while the lowest in the nation, the percentage had increased from 17% in 2004.
According to a report in the Journal of the American Medical Association, residents of Colorado had a 2014 life expectancy of 80.21 years, the longest of any U.S. state.
According to HUD's 2022 Annual Homeless Assessment Report, there were an estimated 10,397 homeless people in Colorado.
The total state product in 2015 was $318.6 billion. Median Annual Household Income in 2016 was $70,666, 8th in the nation. Per capita personal income in 2010 was $51,940, ranking Colorado 11th in the nation. The state's economy broadened from its mid-19th-century roots in mining when irrigated agriculture developed, and by the late 19th century, raising livestock had become important. Early industry was based on the extraction and processing of minerals and agricultural products. Current agricultural products are cattle, wheat, dairy products, corn, and hay.
The federal government operates several federal facilities in the state, including NORAD (North American Aerospace Defense Command), the United States Air Force Academy, Schriever Air Force Base located approximately 10 miles (16 kilometers) east of Peterson Air Force Base, and Fort Carson, both located in Colorado Springs within El Paso County; NOAA, the National Renewable Energy Laboratory (NREL) in Golden, and the National Institute of Standards and Technology in Boulder; the U.S. Geological Survey and other government agencies at the Denver Federal Center near Lakewood; the Denver Mint, Buckley Space Force Base, the Tenth Circuit Court of Appeals, and the Byron G. Rogers Federal Building and United States Courthouse in Denver; and a federal Supermax prison and other federal prisons near Cañon City. In addition to these and other federal agencies, Colorado has abundant National Forest land and four National Parks that contribute to federal ownership of 24,615,788 acres (99,617 km²) of land in Colorado, or 37% of the total area of the state. In the second half of the 20th century, the industrial and service sectors expanded greatly. The state's economy is diversified and is notable for its concentration on scientific research and high-technology industries. Other industries include food processing, transportation equipment, machinery, chemical products, and the extraction of metals such as gold (see Gold mining in Colorado), silver, and molybdenum. Colorado now also has the largest annual production of beer of any state. Denver is an important financial center.
The state's diverse geography and majestic mountains attract millions of tourists every year, including 85.2 million in 2018. Tourism contributes greatly to Colorado's economy, with tourists generating $22.3 billion in 2018.
Several nationally known brand names have originated in Colorado factories and laboratories. From Denver came the forerunner of telecommunications giant Qwest in 1879, Samsonite luggage in 1910, Gates belts and hoses in 1911, and Russell Stover Candies in 1923. Kuner canned vegetables began in Brighton in 1864. From Golden came Coors beer in 1873, CoorsTek industrial ceramics in 1920, and Jolly Rancher candy in 1949. CF&I railroad rails, wire, nails, and pipe debuted in Pueblo in 1892. Holly Sugar was first milled from beets in Holly in 1905, and later moved its headquarters to Colorado Springs. The present-day Swift packed meat of Greeley evolved from Monfort of Colorado, Inc., established in 1930. Estes model rockets were launched in Penrose in 1958. Fort Collins has been the home of Woodward Governor Company's motor controllers (governors) since 1870, and Waterpik dental water jets and showerheads since 1962. Celestial Seasonings herbal teas have been made in Boulder since 1969. Rocky Mountain Chocolate Factory made its first candy in Durango in 1981.
Colorado has a flat 4.63% income tax, regardless of income level. On November 3, 2020, voters authorized an initiative to lower that income tax rate to 4.55 percent. Unlike most states, which calculate taxes based on federal adjusted gross income, Colorado taxes are based on taxable income—income after federal exemptions and federal itemized (or standard) deductions. Colorado's state sales tax is 2.9% on retail sales. When state revenues exceed state constitutional limits, according to Colorado's Taxpayer Bill of Rights legislation, full-year Colorado residents can claim a sales tax refund on their individual state income tax return. Many counties and cities charge their own rates, in addition to the base state rate. There are also certain county and special district taxes that may apply.
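Because both the income tax and the state sales tax are flat rates, the arithmetic is a single multiplication; the subtlety is that the income tax applies to Colorado taxable income (after federal deductions) and that local sales-tax add-ons vary by jurisdiction. A minimal Python sketch of the two calculations; the 4% local rate and the $60,000 taxable-income figure are illustrative assumptions, not values from the text:

```python
# Sketch of Colorado's two flat tax rates as described above.
# The local sales-tax rate and the income figure are illustrative assumptions.
STATE_INCOME_TAX_RATE = 0.0455  # flat rate after the November 2020 ballot initiative
STATE_SALES_TAX_RATE = 0.029    # state portion only; counties and cities add their own

def colorado_income_tax(taxable_income: float) -> float:
    """Tax on Colorado taxable income (income after federal deductions)."""
    return taxable_income * STATE_INCOME_TAX_RATE

def colorado_sales_tax(purchase: float, local_rate: float = 0.0) -> float:
    """State sales tax plus an optional local add-on rate."""
    return purchase * (STATE_SALES_TAX_RATE + local_rate)

print(round(colorado_income_tax(60_000), 2))               # 2730.0 on $60,000 of taxable income
print(round(colorado_sales_tax(100, local_rate=0.04), 2))  # 6.9 on a $100 purchase (2.9% + 4%)
```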
Real estate and personal business property are taxable in Colorado. The state's senior property tax exemption was temporarily suspended by the Colorado Legislature in 2003. The tax break was scheduled to return for the assessment year 2006, payable in 2007.
As of December 2018, the state's unemployment rate was 4.2%.
The West Virginia teachers' strike in 2018 inspired teachers in other states, including Colorado, to take similar action.
Corn is grown in the Eastern Plains of Colorado. Arid conditions and drought negatively impacted yields in 2020 and 2022.
Colorado has significant hydrocarbon resources. According to the Energy Information Administration, Colorado hosts seven of the largest natural gas fields in the United States, and two of the largest oil fields. Conventional and unconventional natural gas output from several Colorado basins typically accounts for more than five percent of annual U.S. natural gas production. Colorado's oil shale deposits hold an estimated 1 trillion barrels (160 km³) of oil, nearly as much oil as the entire world's proven oil reserves. Substantial deposits of bituminous, subbituminous, and lignite coal are found in the state.
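The parenthetical volume can be checked with a standard unit conversion (one U.S. oil barrel is about 0.159 cubic meters, a constant not taken from the text). A short Python sketch of the arithmetic:

```python
# Order-of-magnitude check of the oil-shale volume quoted above.
BARREL_M3 = 0.158987   # volume of one U.S. oil barrel (42 US gallons) in cubic meters
barrels = 1e12         # "an estimated 1 trillion barrels"

volume_km3 = barrels * BARREL_M3 / 1e9  # 1 km^3 = 1e9 m^3
print(f"{volume_km3:.0f} km^3")         # 159 km^3, i.e. roughly 160 km^3
```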
Uranium mining in Colorado goes back to 1872, when pitchblende ore was taken from gold mines near Central City, Colorado. When Colorado and Utah dominated radium mining from 1910 to 1922, uranium and vanadium were byproducts (giving towns like the present-day Superfund site Uravan their names). During the 1940s, certain communities, including Naturita and Paradox, earned the moniker of "yellowcake towns" from their relationship with uranium mining. Not counting byproduct uranium from phosphate, Colorado is considered to have the third-largest uranium reserves of any U.S. state, behind Wyoming and New Mexico. Uranium price increases from 2001 to 2007 prompted several companies to revive uranium mining in Colorado, but price drops and financing problems in late 2008 forced them to cancel or scale back those projects. As of 2016, there were no major uranium mining operations in the state, though plans existed to restart production.
Colorado's high Rocky Mountain ridges and eastern plains offer wind power potential, and geologic activity in the mountain areas provides the potential for geothermal power development. Much of the state is sunny and could produce solar power. Major rivers flowing from the Rocky Mountains offer hydroelectric power resources.
Several film productions have been shot on location in Colorado, especially prominent Westerns like True Grit, The Searchers, and Butch Cassidy and the Sundance Kid. Several historic military forts, railways with trains still operating, and mining ghost towns have been used and transformed for historical accuracy in well-known films. There are also several scenic highways and mountain passes that helped to feature the open road in films such as Vanishing Point, Bingo and Starman. Some Colorado landmarks have been featured in films, such as The Stanley Hotel in Dumb and Dumber and The Shining and the Sculptured House in Sleeper. In 2015, Furious 7 was slated to film driving sequences on the Pikes Peak Highway in Colorado. The adult animated TV series South Park takes place in its titular town in central Colorado. The TV series Good Luck Charlie was set, but not filmed, in Denver. The Colorado Office of Film and Television has noted that more than 400 films have been shot in Colorado.
There are also several established film festivals in Colorado, including Aspen Shortsfest, Boulder International Film Festival, Castle Rock Film Festival, Denver Film Festival, Festivus Film Festival, Mile High Horror Film Festival, Moondance International Film Festival, Mountainfilm in Telluride, Rocky Mountain Women's Film Festival, and Telluride Film Festival.
Many notable writers have lived or spent extended periods in Colorado. Beat Generation writers Jack Kerouac and Neal Cassady lived in and around Denver for several years each. Irish playwright Oscar Wilde visited Colorado on his tour of the United States in 1882, writing in his 1906 Impressions of America that Leadville was "the richest city in the world. It has also got the reputation of being the roughest, and every man carries a revolver."
Colorado is known for its Southwest and Rocky Mountain cuisine, with Mexican restaurants found throughout the state.
Boulder was named America's Foodiest Town 2010 by Bon Appétit. Boulder, and Colorado in general, is home to several national food and beverage companies, top-tier restaurants and farmers' markets. Boulder also has more Master Sommeliers per capita than any other city, including San Francisco and New York. Denver is known for steak, but now has a diverse culinary scene with many restaurants.
Polidori Sausage, a brand of pork products now available in supermarkets, originated in Colorado in the early 20th century.
The Food & Wine Classic is held annually each June in Aspen. Aspen also has a reputation as the culinary capital of the Rocky Mountain region.
Colorado wines include award-winning varietals that have attracted favorable notice from outside the state. With wines made from traditional Vitis vinifera grapes along with wines made from cherries, peaches, plums, and honey, Colorado wines have won top national and international awards for their quality. Colorado's grape growing regions contain the highest elevation vineyards in the United States, with most viticulture in the state practiced between 4,000 and 7,000 feet (1,219 and 2,134 m) above sea level. The mountain climate ensures warm summer days and cool nights. Colorado is home to two designated American Viticultural Areas of the Grand Valley AVA and the West Elks AVA, where most of the vineyards in the state are located. However, an increasing number of wineries are located along the Front Range. In 2018, Wine Enthusiast Magazine named Colorado's Grand Valley AVA in Mesa County, Colorado, as one of the Top Ten wine travel destinations in the world.
Colorado is home to many nationally praised microbreweries, including New Belgium Brewing Company, Odell Brewing Company, Great Divide Brewing Company, and Bristol Brewing Company. The area of northern Colorado near and between the cities of Denver, Boulder, and Fort Collins is known as the "Napa Valley of Beer" due to its high density of craft breweries.
Colorado is open to cannabis (marijuana) tourism. With the adoption of Amendment 64 in 2012, Colorado became the first state in the union to legalize marijuana for medicinal (2000), industrial (referring to hemp, 2012), and recreational (2012) use. Colorado's marijuana industry sold $1.31 billion worth of marijuana in 2016 and $1.26 billion in the first three quarters of 2017. The state generated tax, fee, and license revenue of $194 million in 2016 on legal marijuana sales. Colorado regulates hemp as any part of the plant with less than 0.3% THC.
On April 4, 2014, Senate Bill 14–184 addressing oversight of Colorado's industrial hemp program was first introduced, ultimately being signed into law by Governor John Hickenlooper on May 31, 2014.
On November 7, 2000, 54% of Colorado voters passed Amendment 20, which amends the Colorado State constitution to allow the medical use of marijuana. A patient's medical use of marijuana, within the following limits, is lawful:
Currently, Colorado has listed "eight medical conditions for which patients can use marijuana—cancer, glaucoma, HIV/AIDS, muscle spasms, seizures, severe pain, severe nausea and cachexia, or dramatic weight loss and muscle atrophy". While governor, John Hickenlooper allocated about half of the state's $13 million "Medical Marijuana Program Cash Fund" to medical research in the 2014 budget. By 2018, the Medical Marijuana Program Cash Fund was the "largest pool of pot money in the state" and was used to fund programs including research into pediatric applications for controlling autism symptoms.
On November 6, 2012, voters amended the state constitution to protect "personal use" of marijuana for adults, establishing a framework to regulate marijuana in a manner similar to alcohol. The first recreational marijuana shops in Colorado, and by extension the United States, opened their doors on January 1, 2014.
Colorado has five major professional sports leagues, all based in the Denver metropolitan area. Colorado is the least populous state with a franchise in each of the major professional sports leagues.
The Colorado Springs Snow Sox professional baseball team is based in Colorado Springs. The team is a member of the Pecos League, an independent baseball league which is not affiliated with Major or Minor League Baseball.
The Pikes Peak International Hill Climb is a major hill climbing motor race held on the Pikes Peak Highway.
The Cherry Hills Country Club has hosted several professional golf tournaments, including the U.S. Open, U.S. Senior Open, U.S. Women's Open, PGA Championship and BMW Championship.
The following universities and colleges participate in the National Collegiate Athletic Association Division I. The most popular college sports program is the University of Colorado Buffaloes, who used to play in the Big 12 but now play in the Pac-12. They have won the 1957 and 1991 Orange Bowls, the 1995 Fiesta Bowl, and the 1996 Cotton Bowl Classic.
Colorado's primary mode of transportation (in terms of passengers) is its highway system. Interstate 25 (I-25) is the primary north–south highway in the state, connecting Pueblo, Colorado Springs, Denver, and Fort Collins, and extending north to Wyoming and south to New Mexico. I-70 is the primary east–west corridor. It connects Grand Junction and the mountain communities with Denver and enters Utah and Kansas. The state is home to a network of US and Colorado highways that provide access to all principal areas of the state. Many smaller communities are connected to this network only via county roads.
Denver International Airport (DIA) is the third-busiest airport in the world by passenger traffic. DIA handles by far the largest volume of commercial air traffic in Colorado and is the busiest U.S. hub airport between Chicago and the Pacific coast, making Denver the most important airport for connecting passenger traffic in the western United States.
Public transportation bus services are offered both within and between cities, including the Denver metro area's RTD services. The Regional Transportation District (RTD) operates the popular RTD Bus & Rail transit system in the Denver metropolitan area. As of January 2013, the RTD rail system had 170 light-rail vehicles serving 47 miles (76 km) of track. In addition to local public transit, intercity bus service is provided by Burlington Trailways, Bustang, Express Arrow, and Greyhound Lines.
Amtrak operates two passenger rail lines in Colorado, the California Zephyr and the Southwest Chief. Colorado's contribution to world railroad history was forged principally by the Denver and Rio Grande Western Railroad, which began in 1870 and wrote the book on mountain railroading. In 1988, the "Rio Grande" acquired the Southern Pacific Railroad, and the combined company was operated under the Southern Pacific name by their joint owner Philip Anschutz. On September 11, 1996, Anschutz sold the combined company to the Union Pacific Railroad, creating the largest railroad network in the United States. The Anschutz sale was partly in response to the earlier merger of Burlington Northern and Santa Fe, which formed the large Burlington Northern and Santa Fe Railway (BNSF), Union Pacific's principal competitor in western U.S. railroading. Both Union Pacific and BNSF have extensive freight operations in Colorado.
Colorado's freight railroad network consists of 2,688 miles of Class I trackage. It is integral to the U.S. economy, being a critical artery for the movement of energy, agriculture, mining, and industrial commodities as well as general freight and manufactured products between the East and Midwest and the Pacific coast states.
In August 2014, Colorado began to issue driver licenses to aliens not lawfully in the United States who lived in Colorado. In September 2014, KCNC reported that 524 non-citizens were issued Colorado driver licenses that are normally issued to U.S. citizens living in Colorado.
The first institution of higher education in the Colorado Territory was the Colorado Seminary, opened on November 16, 1864, by the Methodist Episcopal Church. The seminary closed in 1867 but reopened in 1880 as the University of Denver. In 1870, Bishop George Maxwell Randall of the Episcopal Church's Missionary District of Colorado and Parts Adjacent opened the first of what became the Colorado University Schools, which would include the Territorial School of Mines, opened in 1873 and sold to the Colorado Territory in 1874. These schools were initially run by the Episcopal Church. An 1861 territorial act called for the creation of a public university in Boulder, though it would not be until 1876 that the University of Colorado was founded. The 1876 act also renamed the Territorial School of Mines as the Colorado School of Mines. An 1870 territorial act created the Agricultural College of Colorado, which opened in 1879. The college was renamed the Colorado State College of Agriculture and Mechanic Arts in 1935 and became Colorado State University in 1957.
The first Catholic college in Colorado was the Jesuit Sacred Heart College, which was founded in New Mexico in 1877, moved to Morrison in 1884, and to Denver in 1887. The college was renamed Regis College in 1921 and Regis University in 1991. On April 1, 1924, armed students patrolled the campus after a burning cross was found, the climax of tensions between Regis College and the locally-powerful Ku Klux Klan.
Following a 1950 assessment by the Service Academy Board, it was determined that the U.S. Military and Naval Academies needed to be supplemented with a third school that would provide commissioned officers for the newly independent Air Force. On April 1, 1954, President Dwight Eisenhower signed legislation providing for the creation of a U.S. Air Force Academy. Later that year, Colorado Springs was selected to host the new institution. From its establishment in 1955 until its facilities in Colorado Springs were completed and opened in 1958, the Air Force Academy operated out of Lowry Air Force Base in Denver. With the opening of the Colorado Springs facility, the cadets moved to the new campus, though not in the full-kit march that some urban and campus legends suggest. The first class of Space Force officers from the Air Force Academy was commissioned on April 18, 2020.
The major military installations in Colorado include:
Former military posts in Colorado include:
Like the federal government and all other U.S. states, Colorado's state constitution provides for three branches of government: the legislative, the executive, and the judicial branches.
The Governor of Colorado heads the state's executive branch. The current governor is Jared Polis, a Democrat. Colorado's other statewide elected executive officers are the Lieutenant Governor of Colorado (elected on a ticket with the Governor), Secretary of State of Colorado, Colorado State Treasurer, and Attorney General of Colorado, all of whom serve four-year terms.
The Colorado Supreme Court, composed of seven justices, is the state's highest court. The Colorado Court of Appeals, with 22 judges, sits in divisions of three judges each. Colorado is divided into 22 judicial districts, each of which has a district court and a county court with limited jurisdiction. The state also has specialized water courts, which sit in seven distinct divisions around the state and which decide matters relating to water rights and the use and administration of water.
The state legislative body is the Colorado General Assembly, which is made up of two houses – the House of Representatives and the Senate. The House has 65 members and the Senate has 35. As of 2023, the Democratic Party holds a 23 to 12 majority in the Senate and a 46 to 19 majority in the House.
Most Coloradans are native to other states (nearly 60% according to the 2000 census). This is illustrated by the fact that the state did not have a native-born governor from 1975 (when John David Vanderhoof left office) until 2007, when Bill Ritter took office; his election the previous year marked the first electoral victory for a native-born Coloradan in a gubernatorial race since 1958. Vanderhoof had ascended from the lieutenant governorship when John Arthur Love was given a position in Richard Nixon's administration in 1973.
Tax is collected by the Colorado Department of Revenue.
Colorado was once considered a swing state, but it has become a relatively safe blue state in both state and federal elections. In presidential elections, 2020 was the first time since 1984 that the state was won by a double-digit margin, and Colorado has backed the winning candidate in 9 of the last 11 elections. Coloradans have elected 17 Democrats and 12 Republicans to the governorship in the last 100 years.
In presidential politics, Colorado was considered a reliably Republican state during the post-World War II era, voting for the Democratic candidate only in 1948, 1964, and 1992. However, it became a competitive swing state in the 1990s. Since the mid-2000s, it has swung heavily to the Democrats, voting for Barack Obama in 2008 and 2012, Hillary Clinton in 2016, and Joe Biden in 2020.
Colorado politics exhibits a contrast between conservative cities such as Colorado Springs and Grand Junction, and liberal cities such as Boulder and Denver. Democrats are strongest in metropolitan Denver, the college towns of Fort Collins and Boulder, southern Colorado (including Pueblo), and several western ski resort counties. The Republicans are strongest in the Eastern Plains, Colorado Springs, Greeley, and far Western Colorado near Grand Junction.
Colorado is represented by two members of the United States Senate:
Colorado is represented by eight members of the United States House of Representatives:
In a 2020 study, Colorado was ranked as the seventh easiest state for citizens to vote in.
Colorado was the first state in the union to enact, by voter referendum, a law extending suffrage to women. That initiative was approved by the state's voters on November 7, 1893.
On the November 8, 1932, ballot, Colorado approved the repeal of alcohol prohibition more than a year before the Twenty-first Amendment to the United States Constitution was ratified.
Colorado has banned, via C.R.S. section 12-6-302, the sale of motor vehicles on Sunday since at least 1953.
In 1972 Colorado voters rejected a referendum proposal to fund the 1976 Winter Olympics, which had been scheduled to be held in the state. Denver had been chosen by the International Olympic Committee as the host city on May 12, 1970.
In 1992, by a margin of 53 to 47 percent, Colorado voters approved an amendment to the state constitution (Amendment 2) that would have prevented any city, town, or county in the state from taking any legislative, executive, or judicial action to recognize gay or bisexual people as a protected class. In 1996, in a 6–3 ruling in Romer v. Evans, the U.S. Supreme Court struck down the amendment, holding that it violated the Equal Protection Clause of the Fourteenth Amendment.
In 2006, voters passed Amendment 43, which banned gay marriage in Colorado. That initiative was nullified by the U.S. Supreme Court's 2015 decision in Obergefell v. Hodges.
In 2012, voters amended the state constitution to protect the "personal use" of marijuana by adults and to establish a framework for regulating cannabis like alcohol. The first recreational marijuana shops in Colorado, and by extension the United States, opened their doors on January 1, 2014.
On December 19, 2023, the Colorado Supreme Court ruled that Donald Trump was disqualified from appearing on the state's 2024 presidential primary ballot under Section 3 of the Fourteenth Amendment, in part due to his alleged incitement of the January 6 insurrection; the U.S. Supreme Court reversed that decision in March 2024.
The two Native American reservations remaining in Colorado are the Southern Ute Indian Reservation (1873; Ute dialect: Kapuuta-wa Moghwachi Núuchi-u) and Ute Mountain Ute Indian Reservation (1940; Ute dialect: Wʉgama Núuchi). The two abolished Indian reservations in Colorado were the Cheyenne and Arapaho Indian Reservation (1851–1870) and Ute Indian Reservation (1855–1873).
Colorado is home to 4 national parks, 9 national monuments, 3 national historic sites, 2 national recreation areas, 4 national historic trails, 1 national scenic trail, 11 national forests, 2 national grasslands, 44 national wildernesses, 3 national conservation areas, 8 national wildlife refuges, 3 national heritage areas, 26 national historic landmarks, 16 national natural landmarks, more than 1,500 listings on the National Register of Historic Places, 1 wild and scenic river, 42 state parks, 307 state wildlife areas, 93 state natural areas, 28 national recreation trails, 6 regional trails, and numerous other scenic, historic, and recreational areas.
39°N 105°W (State of Colorado) | [
{
"paragraph_id": 0,
"text": "Colorado (/ˌkɒləˈrædoʊ, -ˈrɑːdoʊ/ , other variants) is a state in the Mountain West subregion of the Western United States. It encompasses most of the Southern Rocky Mountains, as well as the northeastern portion of the Colorado Plateau and the western edge of the Great Plains. Colorado is the eighth most extensive and 21st most populous U.S. state. The United States Census Bureau estimated the population of Colorado at 5,839,926 as of July 1, 2022, a 1.15% increase since the 2020 United States census.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The region has been inhabited by Native Americans and their ancestors for at least 13,500 years and possibly much longer. The eastern edge of the Rocky Mountains was a major migration route for early peoples who spread throughout the Americas. In 1848, much of the region was annexed to the United States with the Treaty of Guadalupe Hidalgo. The Pike's Peak Gold Rush of 1858–1862 created an influx of settlers. On February 28, 1861, U.S. President James Buchanan signed an act creating the Territory of Colorado, and on August 1, 1876, President Ulysses S. Grant signed Proclamation 230 admitting Colorado to the Union as the 38th state. The Spanish adjective \"colorado\" means \"colored red\" or \"ruddy\". Colorado is nicknamed the \"Centennial State\" because it became a state one century (and four weeks) after the signing of the United States Declaration of Independence.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Colorado is bordered by Wyoming to the north, Nebraska to the northeast, Kansas to the east, Oklahoma to the southeast, New Mexico to the south, Utah to the west, and touches Arizona to the southwest at the Four Corners. Colorado is noted for its landscape of mountains, forests, high plains, mesas, canyons, plateaus, rivers, and desert lands. Colorado is one of the Mountain States and is often considered to be part of the southwestern United States. The high plains of Colorado may be considered a part of the midwestern United States.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Denver is the capital, the most populous city, and the center of the Front Range Urban Corridor. Colorado Springs is the second most populous city. Residents of the state are known as Coloradans, although the antiquated \"Coloradoan\" is occasionally used. Major parts of the economy include government and defense, mining, agriculture, tourism, and increasingly other kinds of manufacturing. With increasing temperatures and decreasing water availability, Colorado's agriculture, forestry, and tourism economies are expected to be heavily affected by climate change.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The region that is today the State of Colorado has been inhabited by Native Americans and their Paleoamerican ancestors for at least 13,500 years and possibly more than 37,000 years. The eastern edge of the Rocky Mountains was a major migration route that was important to the spread of early peoples throughout the Americas. The Lindenmeier site in Larimer County contains artifacts dating from approximately 8720 BCE. The Ancient Pueblo peoples lived in the valleys and mesas of the Colorado Plateau. The Ute Nation inhabited the mountain valleys of the Southern Rocky Mountains and the Western Rocky Mountains, even as far east as the Front Range of the present day. The Apache and the Comanche also inhabited the Eastern and Southeastern parts of the state. In the 17th century, the Arapaho and Cheyenne moved west from the Great Lakes region to hunt across the High Plains of Colorado and Wyoming.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The Spanish Empire claimed Colorado as part of its New Mexico province before U.S. involvement in the region. The U.S. acquired a territorial claim to the eastern Rocky Mountains with the Louisiana Purchase from France in 1803. This U.S. claim conflicted with the claim by Spain to the upper Arkansas River Basin as the exclusive trading zone of its colony of Santa Fe de Nuevo México. In 1806, Zebulon Pike led a U.S. Army reconnaissance expedition into the disputed region. Colonel Pike and his troops were arrested by Spanish cavalrymen in the San Luis Valley the following February, taken to Chihuahua, and expelled from Mexico the following July.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The U.S. relinquished its claim to all land south and west of the Arkansas River and south of 42nd parallel north and west of the 100th meridian west as part of its purchase of Florida from Spain with the Adams-Onís Treaty of 1819. The treaty took effect on February 22, 1821. Having settled its border with Spain, the U.S. admitted the southeastern portion of the Territory of Missouri to the Union as the state of Missouri on August 10, 1821. The remainder of Missouri Territory, including what would become northeastern Colorado, became an unorganized territory and remained so for 33 years over the question of slavery. After 11 years of war, Spain finally recognized the independence of Mexico with the Treaty of Córdoba signed on August 24, 1821. Mexico eventually ratified the Adams–Onís Treaty in 1831. The Texian Revolt of 1835–36 fomented a dispute between the U.S. and Mexico which eventually erupted into the Mexican–American War in 1846. Mexico surrendered its northern territory to the U.S. with the Treaty of Guadalupe Hidalgo after the war in 1848; this included much of the western and southern areas of the current state of Colorado.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Most American settlers traveling overland west to the Oregon Country, the new goldfields of California, or the new Mormon settlements of the State of Deseret in the Salt Lake Valley, avoided the rugged Southern Rocky Mountains, and instead followed the North Platte River and Sweetwater River to South Pass (Wyoming), the lowest crossing of the Continental Divide between the Southern Rocky Mountains and the Central Rocky Mountains. In 1849, the Mormons of the Salt Lake Valley organized the extralegal State of Deseret, claiming the entire Great Basin and all lands drained by the rivers Green, Grand, and Colorado. The federal government of the U.S. flatly refused to recognize the new Mormon government, because it was theocratic and sanctioned plural marriage. Instead, the Compromise of 1850 divided the Mexican Cession and the northwestern claims of Texas into a new state and two new territories, the state of California, the Territory of New Mexico, and the Territory of Utah. On April 9, 1851, Mexican American settlers from the area of Taos settled the village of San Luis, then in the New Mexico Territory, later to become Colorado's first permanent Euro-American settlement.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In 1854, Senator Stephen A. Douglas persuaded the U.S. Congress to divide the unorganized territory east of the Continental Divide into two new organized territories, the Territory of Kansas and the Territory of Nebraska, and an unorganized southern region known as the Indian territory. Each new territory was to decide the fate of slavery within its boundaries, but this compromise merely served to fuel animosity between free soil and pro-slavery factions.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The gold seekers organized the Provisional Government of the Territory of Jefferson on August 24, 1859, but this new territory failed to secure approval from the Congress of the United States embroiled in the debate over slavery. The election of Abraham Lincoln for the President of the United States on November 6, 1860, led to the secession of nine southern slave states and the threat of civil war among the states. Seeking to augment the political power of the Union states, the Republican Party-dominated Congress quickly admitted the eastern portion of the Territory of Kansas into the Union as the free State of Kansas on January 29, 1861, leaving the western portion of the Kansas Territory, and its gold-mining areas, as unorganized territory.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Thirty days later on February 28, 1861, outgoing U.S. President James Buchanan signed an Act of Congress organizing the free Territory of Colorado. The original boundaries of Colorado remain unchanged except for government survey amendments. In 1776, Spanish priest Silvestre Vélez de Escalante recorded that Native Americans in the area knew the river as el Rio Colorado for the red-brown silt that the river carried from the mountains. In 1859, a U.S. Army topographic expedition led by Captain John Macomb located the confluence of the Green River with the Grand River in what is now Canyonlands National Park in Utah. The Macomb party designated the confluence as the source of the Colorado River.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "On April 12, 1861, South Carolina artillery opened fire on Fort Sumter to start the American Civil War. While many gold seekers held sympathies for the Confederacy, the vast majority remained fiercely loyal to the Union cause.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In 1862, a force of Texas cavalry invaded the Territory of New Mexico and captured Santa Fe on March 10. The object of this Western Campaign was to seize or disrupt the gold fields of Colorado and California and to seize ports on the Pacific Ocean for the Confederacy. A hastily organized force of Colorado volunteers force-marched from Denver City, Colorado Territory, to Glorieta Pass, New Mexico Territory, in an attempt to block the Texans. On March 28, the Coloradans and local New Mexico volunteers stopped the Texans at the Battle of Glorieta Pass, destroyed their cannon and supply wagons, and dispersed 500 of their horses and mules. The Texans were forced to retreat to Santa Fe. Having lost the supplies for their campaign and finding little support in New Mexico, the Texans abandoned Santa Fe and returned to San Antonio in defeat. The Confederacy made no further attempts to seize the Southwestern United States.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In 1864, Territorial Governor John Evans appointed the Reverend John Chivington as Colonel of the Colorado Volunteers with orders to protect white settlers from Cheyenne and Arapaho warriors who were accused of stealing cattle. Colonel Chivington ordered his troops to attack a band of Cheyenne and Arapaho encamped along Sand Creek. Chivington reported that his troops killed more than 500 warriors. The militia returned to Denver City in triumph, but several officers reported that the so-called battle was a blatant massacre of Indians at peace, that most of the dead were women and children, and that the bodies of the dead had been hideously mutilated and desecrated. Three U.S. Army inquiries condemned the action, and incoming President Andrew Johnson asked Governor Evans for his resignation, but none of the perpetrators was ever punished. This event is now known as the Sand Creek massacre.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In the midst and aftermath of the Civil War, many discouraged prospectors returned to their homes, but a few stayed and developed mines, mills, farms, ranches, roads, and towns in Colorado Territory. On September 14, 1864, James Huff discovered silver near Argentine Pass, the first of many silver strikes. In 1867, the Union Pacific Railroad laid its tracks west to Weir, now Julesburg, in the northeast corner of the Territory. The Union Pacific linked up with the Central Pacific Railroad at Promontory Summit, Utah, on May 10, 1869, to form the First transcontinental railroad. The Denver Pacific Railway reached Denver in June of the following year, and the Kansas Pacific arrived two months later to forge the second line across the continent. In 1872, rich veins of silver were discovered in the San Juan Mountains on the Ute Indian reservation in southwestern Colorado. The Ute people were removed from the San Juans the following year.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The United States Congress passed an enabling act on March 3, 1875, specifying the requirements for the Territory of Colorado to become a state. On August 1, 1876 (four weeks after the Centennial of the United States), U.S. President Ulysses S. Grant signed a proclamation admitting Colorado to the Union as the 38th state and earning it the moniker \"Centennial State\".",
"title": "History"
},
{
"paragraph_id": 16,
"text": "The discovery of a major silver lode near Leadville in 1878 triggered the Colorado Silver Boom. The Sherman Silver Purchase Act of 1890 invigorated silver mining, and Colorado's last, but greatest, gold strike at Cripple Creek a few months later lured a new generation of gold seekers. Colorado women were granted the right to vote on November 7, 1893, making Colorado the second state to grant universal suffrage and the first one by a popular vote (of Colorado men). The repeal of the Sherman Silver Purchase Act in 1893 led to a staggering collapse of the mining and agricultural economy of Colorado, but the state slowly and steadily recovered. Between the 1880s and 1930s, Denver's floriculture industry developed into a major industry in Colorado. This period became known locally as the Carnation Gold Rush.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Poor labor conditions and discontent among miners resulted in several major clashes between strikers and the Colorado National Guard, including the 1903–1904 Western Federation of Miners Strike and Colorado Coalfield War, the latter of which included the Ludlow massacre that killed a dozen women and children. Both the 1913–1914 Coalfield War and the Denver streetcar strike of 1920 resulted in federal troops intervening to end the violence. In 1927, the 1927-28 Colorado coal strike occurred and was ultimately successful in winning a dollar a day increase in wages. During it however the Columbine Mine massacre resulted in six dead strikers following a confrontation with Colorado Rangers. In a separate incident in Trinidad the mayor was accused of deputizing members of the KKK against the striking workers. More than 5,000 Colorado miners—many immigrants—are estimated to have died in accidents since records were first formally collected following an 1884 accident in Crested Butte that killed 59.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In 1924, the Ku Klux Klan Colorado Realm achieved dominance in Colorado politics. With peak membership levels, the Second Klan levied significant control over both the local and state Democrat and Republican parties, particularly in the governor's office and city governments of Denver, Cañon City, and Durango. A particularly strong element of the Klan controlled the Denver Police. Cross burnings became semi-regular occurrences in cities such as Florence and Pueblo. The Klan targeted African-Americans, Catholics, Eastern European immigrants, and other non-White Protestant groups. Efforts by non-Klan lawmen and lawyers including Philip Van Cise lead to a rapid decline in the organization's power, with membership waning significantly by the end of the 1920s.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Colorado became the first western state to host a major political convention when the Democratic Party met in Denver in 1908. By the U.S. census in 1930, the population of Colorado first exceeded one million residents. Colorado suffered greatly through the Great Depression and the Dust Bowl of the 1930s, but a major wave of immigration following World War II boosted Colorado's fortune. Tourism became a mainstay of the state economy, and high technology became an important economic engine. The United States Census Bureau estimated that the population of Colorado exceeded five million in 2009.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "On September 11, 1957, a plutonium fire occurred at the Rocky Flats Plant, which resulted in the significant plutonium contamination of surrounding populated areas.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "From the 1940s and 1970s, many protest movements gained momentum in Colorado, predominantly in Denver. This included the Chicano Movement, a civil rights, and social movement of Mexican Americans emphasizing a Chicano identity that is widely considered to have begun in Denver. The National Chicano Liberation Youth Conference was held in Colorado in March 1969.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "In 1967, Colorado was the first state to loosen restrictions on abortion when governor John Love signed a law allowing abortions in cases of rape, incest, or threats to the woman's mental or physical health. Many states followed Colorado's lead in loosening abortion laws in the 1960s and 1970s.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Since the late 1990s, Colorado has been the site of multiple major mass shootings, including the infamous Columbine High School massacre in 1999 which made international news, where two gunmen killed 12 students and one teacher, before committing suicide. The incident has since spawned many copycat incidents. On July 20, 2012, a gunman killed 12 people in a movie theater in Aurora. The state responded with tighter restrictions on firearms, including introducing a limit on magazine capacity. On March 22, 2021, a gunman killed 10 people, including a police officer, in a King Soopers supermarket in Boulder. In an instance of anti-LGBT violence, a gunman killed 5 people at a nightclub in Colorado Springs during the night of November 19–20, 2022.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Four warships of the U.S. Navy have been named the USS Colorado. The first USS Colorado was named for the Colorado River and served in the Civil War and later the Asiatic Squadron, where it was attacked during the 1871 Korean Expedition. The later three ships were named in honor of the state, including an armored cruiser and the battleship USS Colorado, the latter of which was the lead ship of her class and served in World War II in the Pacific beginning in 1941. At the time of the attack on Pearl Harbor, the battleship USS Colorado was located at the naval base in San Diego, California, and thus went unscathed. The most recent vessel to bear the name USS Colorado is Virginia-class submarine USS Colorado (SSN-788), which was commissioned in 2018.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Colorado is notable for its diverse geography, which includes alpine mountains, high plains, deserts with huge sand dunes, and deep canyons. In 1861, the United States Congress defined the boundaries of the new Territory of Colorado exclusively by lines of latitude and longitude, stretching from 37°N to 41°N latitude, and from 102°02′48″W to 109°02′48″W longitude (25°W to 32°W from the Washington Meridian). After 162 years of government surveys, the borders of Colorado were officially defined by 697 boundary markers and 697 straight boundary lines. Colorado, Wyoming, and Utah are the only states that have their borders defined solely by straight boundary lines with no natural features. The southwest corner of Colorado is the Four Corners Monument at 36°59′56″N, 109°2′43″W. The Four Corners Monument, located at the place where Colorado, New Mexico, Arizona, and Utah meet, is the only place in the United States where four states meet.",
"title": "Geography"
},
{
"paragraph_id": 26,
"text": "Approximately half of Colorado is flat and rolling land. East of the Rocky Mountains are the Colorado Eastern Plains of the High Plains, the section of the Great Plains within Colorado at elevations ranging from roughly 3,350 to 7,500 feet (1,020 to 2,290 m). The Colorado plains are mostly prairies but also include deciduous forests, buttes, and canyons. Precipitation averages 15 to 25 inches (380 to 640 mm) annually.",
"title": "Geography"
},
{
"paragraph_id": 27,
"text": "Eastern Colorado is presently mainly farmland and rangeland, along with small farming villages and towns. Corn, wheat, hay, soybeans, and oats are all typical crops. Most villages and towns in this region boast both a water tower and a grain elevator. Irrigation water is available from both surface and subterranean sources. Surface water sources include the South Platte, the Arkansas River, and a few other streams. Subterranean water is generally accessed through artesian wells. Heavy usage of these wells for irrigation purposes caused underground water reserves to decline in the region. Eastern Colorado also hosts a considerable amount and range of livestock, such as cattle ranches and hog farms.",
"title": "Geography"
},
{
"paragraph_id": 28,
"text": "Roughly 70% of Colorado's population resides along the eastern edge of the Rocky Mountains in the Front Range Urban Corridor between Cheyenne, Wyoming, and Pueblo, Colorado. This region is partially protected from prevailing storms that blow in from the Pacific Ocean region by the high Rockies in the middle of Colorado. The \"Front Range\" includes Denver, Boulder, Fort Collins, Loveland, Castle Rock, Colorado Springs, Pueblo, Greeley, and other townships and municipalities in between. On the other side of the Rockies, the significant population centers in western Colorado (which is known as \"The Western Slope\") are the cities of Grand Junction, Durango, and Montrose.",
"title": "Geography"
},
{
"paragraph_id": 29,
"text": "To the west of the Great Plains of Colorado rises the eastern slope of the Rocky Mountains. Notable peaks of the Rocky Mountains include Longs Peak, Mount Blue Sky, Pikes Peak, and the Spanish Peaks near Walsenburg, in southern Colorado. This area drains to the east and the southeast, ultimately either via the Mississippi River or the Rio Grande into the Gulf of Mexico.",
"title": "Geography"
},
{
"paragraph_id": 30,
"text": "The Rocky Mountains within Colorado contain 53 true peaks with a total of 58 that are 14,000 feet (4,267 m) or higher in elevation above sea level, known as fourteeners. These mountains are largely covered with trees such as conifers and aspens up to the tree line, at an elevation of about 12,000 feet (3,658 m) in southern Colorado to about 10,500 feet (3,200 m) in northern Colorado. Above this tree line, only alpine vegetation grows. Only small parts of the Colorado Rockies are snow-covered year-round.",
"title": "Geography"
},
{
"paragraph_id": 31,
"text": "Much of the alpine snow melts by mid-August except for a few snow-capped peaks and a few small glaciers. The Colorado Mineral Belt, stretching from the San Juan Mountains in the southwest to Boulder and Central City on the front range, contains most of the historic gold- and silver-mining districts of Colorado. Mount Elbert is the highest summit of the Rocky Mountains. The 30 highest major summits of the Rocky Mountains of North America are all within the state.",
"title": "Geography"
},
{
"paragraph_id": 32,
"text": "The summit of Mount Elbert at 14,440 feet (4,401.2 m) elevation in Lake County is the highest point in Colorado and the Rocky Mountains of North America. Colorado is the only U.S. state that lies entirely above 1,000 meters elevation. The point where the Arikaree River flows out of Yuma County, Colorado, and into Cheyenne County, Kansas, is the lowest in Colorado at 3,317 feet (1,011 m) elevation. This point, which is the highest low elevation point of any state, is higher than the high elevation points of 18 states and the District of Columbia.",
"title": "Geography"
},
{
"paragraph_id": 33,
"text": "The Continental Divide of the Americas extends along the crest of the Rocky Mountains. The area of Colorado to the west of the Continental Divide is called the Western Slope of Colorado. West of the Continental Divide, water flows to the southwest via the Colorado River and the Green River into the Gulf of California.",
"title": "Geography"
},
{
"paragraph_id": 34,
"text": "Within the interior of the Rocky Mountains are several large parks which are high broad basins. In the north, on the east side of the Continental Divide is the North Park of Colorado. The North Park is drained by the North Platte River, which flows north into Wyoming and Nebraska. Just to the south of North Park, but on the western side of the Continental Divide, is the Middle Park of Colorado, which is drained by the Colorado River. The South Park of Colorado is the region of the headwaters of the South Platte River.",
"title": "Geography"
},
{
"paragraph_id": 35,
"text": "In south-central Colorado is the large San Luis Valley, where the headwaters of the Rio Grande are located. The northern part of the valley is the San Luis Closed Basin, an endorheic basin that helped created the Great Sand Dunes. The valley sits between the Sangre De Cristo Mountains and San Juan Mountains. The Rio Grande drains due south into New Mexico, Texas, and Mexico. Across the Sangre de Cristo Range to the east of the San Luis Valley lies the Wet Mountain Valley. These basins, particularly the San Luis Valley, lie along the Rio Grande Rift, a major geological formation of the Rocky Mountains, and its branches.",
"title": "Geography"
},
{
"paragraph_id": 36,
"text": "The Western Slope of Colorado includes the western face of the Rocky Mountains and all of the area to the western border. This area includes several terrains and climates from alpine mountains to arid deserts. The Western Slope includes many ski resort towns in the Rocky Mountains and towns west to Utah. It is less populous than the Front Range but includes a large number of national parks and monuments.",
"title": "Geography"
},
{
"paragraph_id": 37,
"text": "The northwestern corner of Colorado is a sparsely populated region, and it contains part of the noted Dinosaur National Monument, which not only is a paleontological area, but is also a scenic area of rocky hills, canyons, arid desert, and streambeds. Here, the Green River briefly crosses over into Colorado.",
"title": "Geography"
},
{
"paragraph_id": 38,
"text": "The Western Slope of Colorado is drained by the Colorado River and its tributaries (primarily the Gunnison River, Green River, and the San Juan River). The Colorado River flows through Glenwood Canyon, and then through an arid valley made up of desert from Rifle to Parachute, through the desert canyon of De Beque Canyon, and into the arid desert of Grand Valley, where the city of Grand Junction is located.",
"title": "Geography"
},
{
"paragraph_id": 39,
"text": "Also prominent is the Grand Mesa, which lies to the southeast of Grand Junction; the high San Juan Mountains, a rugged mountain range; and to the north and west of the San Juan Mountains, the Colorado Plateau.",
"title": "Geography"
},
{
"paragraph_id": 40,
"text": "Grand Junction, Colorado, at the confluence of the Colorado and Gunnison Rivers, is the largest city on the Western Slope. Grand Junction and Durango are the only major centers of television broadcasting west of the Continental Divide in Colorado, though most mountain resort communities publish daily newspapers. Grand Junction is located at the juncture of Interstate 70 and US 50, the only major highways in western Colorado. Grand Junction is also along the major railroad of the Western Slope, the Union Pacific. This railroad also provides the tracks for Amtrak's California Zephyr passenger train, which crosses the Rocky Mountains between Denver and Grand Junction.",
"title": "Geography"
},
{
"paragraph_id": 41,
"text": "The Western Slope includes multiple notable destinations in the Colorado Rocky Mountains, including Glenwood Springs, with its resort hot springs, and the ski resorts of Aspen, Breckenridge, Vail, Crested Butte, Steamboat Springs, and Telluride.",
"title": "Geography"
},
{
"paragraph_id": 42,
"text": "Higher education in and near the Western Slope can be found at Colorado Mesa University in Grand Junction, Western Colorado University in Gunnison, Fort Lewis College in Durango, and Colorado Mountain College in Glenwood Springs and Steamboat Springs.",
"title": "Geography"
},
{
"paragraph_id": 43,
"text": "The Four Corners Monument in the southwest corner of Colorado marks the common boundary of Colorado, New Mexico, Arizona, and Utah; the only such place in the United States.",
"title": "Geography"
},
{
"paragraph_id": 44,
"text": "The climate of Colorado is more complex than states outside of the Mountain States region. Unlike most other states, southern Colorado is not always warmer than northern Colorado. Most of Colorado is made up of mountains, foothills, high plains, and desert lands. Mountains and surrounding valleys greatly affect the local climate. Northeast, east, and southeast Colorado are mostly the high plains, while Northern Colorado is a mix of high plains, foothills, and mountains. Northwest and west Colorado are predominantly mountainous, with some desert lands mixed in. Southwest and southern Colorado are a complex mixture of desert and mountain areas.",
"title": "Climate"
},
{
"paragraph_id": 45,
"text": "The climate of the Eastern Plains is semi-arid (Köppen climate classification: BSk) with low humidity and moderate precipitation, usually from 15 to 25 inches (380 to 640 millimeters) annually, although many areas near the rivers are semi-humid climate. The area is known for its abundant sunshine and cool, clear nights, which give this area a great average diurnal temperature range. The difference between the highs of the days and the lows of the nights can be considerable as warmth dissipates to space during clear nights, the heat radiation not being trapped by clouds. The Front Range urban corridor, where most of the population of Colorado resides, lies in a pronounced precipitation shadow as a result of being on the lee side of the Rocky Mountains.",
"title": "Climate"
},
{
"paragraph_id": 46,
"text": "In summer, this area can have many days above 95 °F (35 °C) and often 100 °F (38 °C). On the plains, the winter lows usually range from 25 to −10 °F (−4 to −23 °C). About 75% of the precipitation falls within the growing season, from April to September, but this area is very prone to droughts. Most of the precipitation comes from thunderstorms, which can be severe, and from major snowstorms that occur in the winter and early spring. Otherwise, winters tend to be mostly dry and cold.",
"title": "Climate"
},
{
"paragraph_id": 47,
"text": "In much of the region, March is the snowiest month. April and May are normally the rainiest months, while April is the wettest month overall. The Front Range cities closer to the mountains tend to be warmer in the winter due to Chinook winds which warms the area, sometimes bringing temperatures of 70 °F (21 °C) or higher in the winter. The average July temperature is 55 °F (13 °C) in the morning and 90 °F (32 °C) in the afternoon. The average January temperature is 18 °F (−8 °C) in the morning and 48 °F (9 °C) in the afternoon, although variation between consecutive days can be 40 °F (-40 °C).",
"title": "Climate"
},
{
"paragraph_id": 48,
"text": "Just west of the plains and into the foothills, there is a wide variety of climate types. Locations merely a few miles apart can experience entirely different weather depending on the topography. Most valleys have a semi-arid climate, not unlike the eastern plains, which transitions to an alpine climate at the highest elevations. Microclimates also exist in local areas that run nearly the entire spectrum of climates, including subtropical highland (Cfb/Cwb), humid subtropical (Cfa), humid continental (Dfa/Dfb), Mediterranean (Csa/Csb) and subarctic (Dfc).",
"title": "Climate"
},
{
"paragraph_id": 49,
"text": "Extreme weather changes are common in Colorado, although a significant portion of the extreme weather occurs in the least populated areas of the state. Thunderstorms are common east of the Continental Divide in the spring and summer, yet are usually brief. Hail is a common sight in the mountains east of the Divide and across the eastern Plains, especially the northeast part of the state. Hail is the most commonly reported warm-season severe weather hazard, and occasionally causes human injuries, as well as significant property damage. The eastern Plains are subject to some of the biggest hail storms in North America. Notable examples are the severe hailstorms that hit Denver on July 11, 1990, and May 8, 2017, the latter being the costliest ever in the state.",
"title": "Climate"
},
{
"paragraph_id": 50,
"text": "The Eastern Plains are part of the extreme western portion of Tornado Alley; some damaging tornadoes in the Eastern Plains include the 1990 Limon F3 tornado and the 2008 Windsor EF3 tornado, which devastated a small town. Portions of the eastern Plains see especially frequent tornadoes, both those spawned from mesocyclones in supercell thunderstorms and from less intense landspouts, such as within the Denver convergence vorticity zone (DCVZ).",
"title": "Climate"
},
{
"paragraph_id": 51,
"text": "The Plains are also susceptible to occasional floods and particularly severe flash floods, which are caused both by thunderstorms and by the rapid melting of snow in the mountains during warm weather. Notable examples include the 1965 Denver Flood, the Big Thompson River flooding of 1976 and the 2013 Colorado floods. Hot weather is common during summers in Denver. The city's record in 1901 for the number of consecutive days above 90 °F (32 °C) was broken during the summer of 2008. The new record of 24 consecutive days surpassed the previous record by almost a week.",
"title": "Climate"
},
{
"paragraph_id": 52,
"text": "Much of Colorado is very dry, with the state averaging only 17 inches (430 millimeters) of precipitation per year statewide. The state rarely experiences a time when some portion is not in some degree of drought. The lack of precipitation contributes to the severity of wildfires in the state, such as the Hayman Fire of 2002. Other notable fires include the Fourmile Canyon Fire of 2010, the Waldo Canyon Fire and High Park Fire of June 2012, and the Black Forest Fire of June 2013. Even these fires were exceeded in severity by the Pine Gulch Fire, Cameron Peak Fire, and East Troublesome Fire in 2020, all being the three largest fires in Colorado history (see 2020 Colorado wildfires). And the Marshall Fire which started on December 30, 2021, while not the largest in state history, was the most destructive ever in terms of property loss (see Marshall Fire).",
"title": "Climate"
},
{
"paragraph_id": 53,
"text": "However, some of the mountainous regions of Colorado receive a huge amount of moisture from winter snowfalls. The spring melts of these snows often cause great waterflows in the Yampa River, the Colorado River, the Rio Grande, the Arkansas River, the North Platte River, and the South Platte River.",
"title": "Climate"
},
{
"paragraph_id": 54,
"text": "Water flowing out of the Colorado Rocky Mountains is a very significant source of water for the farms, towns, and cities of the southwest states of New Mexico, Arizona, Utah, and Nevada, as well as the Midwest, such as Nebraska and Kansas, and the southern states of Oklahoma and Texas. A significant amount of water is also diverted for use in California; occasionally (formerly naturally and consistently), the flow of water reaches northern Mexico.",
"title": "Climate"
},
{
"paragraph_id": 55,
"text": "Climate change in Colorado encompasses the effects of climate change, attributed to man-made increases in atmospheric carbon dioxide, in the U.S. state of Colorado.",
"title": "Climate"
},
{
"paragraph_id": 56,
"text": "In 2019 The Denver Post reported that \"[i]ndividuals living in southeastern Colorado are more vulnerable to potential health effects from climate change than residents in other parts of the state\". The United States Environmental Protection Agency has more broadly reported:",
"title": "Climate"
},
{
"paragraph_id": 57,
"text": "The highest official ambient air temperature ever recorded in Colorado was 115 °F (46.1 °C) on July 20, 2019, at John Martin Dam. The lowest official air temperature was −61 °F (−51.7 °C) on February 1, 1985, at Maybell.",
"title": "Climate"
},
{
"paragraph_id": 58,
"text": "Despite its mountainous terrain, Colorado is relatively quiet seismically. The U.S. National Earthquake Information Center is located in Golden.",
"title": "Climate"
},
{
"paragraph_id": 59,
"text": "On August 22, 2011, a 5.3 magnitude earthquake occurred 9 miles (14 km) west-southwest of the city of Trinidad. There were no casualties and only a small amount of damage was reported. It was the second-largest earthquake in Colorado's history. A magnitude 5.7 earthquake was recorded in 1973.",
"title": "Climate"
},
{
"paragraph_id": 60,
"text": "In the early morning hours of August 24, 2018, four minor earthquakes rattled Colorado, ranging from magnitude 2.9 to 4.3.",
"title": "Climate"
},
{
"paragraph_id": 61,
"text": "Colorado has recorded 525 earthquakes since 1973, a majority of which range 2 to 3.5 on the Richter scale.",
"title": "Climate"
},
{
"paragraph_id": 62,
"text": "A process of extirpation by trapping and poisoning of the gray wolf (Canis lupus) from Colorado in the 1930s saw the last wild wolf in the state shot in 1945. A wolf pack recolonized Moffat County, Colorado in northwestern Colorado in 2019. Cattle farmers have expressed concern that a returning wolf population potentially threatens their herds. Coloradans voted to reintroduce gray wolves in 2020, with the state committing to a plan to have a population in the state by 2022 and permitting non-lethal methods of driving off wolves attacking livestock and pets.",
"title": "Fauna"
},
{
"paragraph_id": 63,
"text": "While there is fossil evidence of Harrington's mountain goat in Colorado between at least 800,000 years ago and its extinction with megafauna roughly 11,000 years ago, the mountain goat is not native to Colorado but was instead introduced to the state over time during the interval between 1947 and 1972. Despite being an artificially-introduced species, the state declared mountain goats a native species in 1993. In 2013, 2014, and 2019, an unknown illness killed nearly all mountain goat kids, leading to a Colorado Parks and Wildlife investigation.",
"title": "Fauna"
},
{
"paragraph_id": 64,
"text": "The native population of pronghorn in Colorado has varied wildly over the last century, reaching a low of only 15,000 individuals during the 1960s. However, conservation efforts succeeded in bringing the stable population back up to roughly 66,000 by 2013. The population was estimated to have reached 85,000 by 2019 and had increasingly more run-ins with the increased suburban housing along the eastern Front Range. State wildlife officials suggested that landowners would need to modify fencing to allow the greater number of pronghorns to move unabated through the newly developed land. Pronghorns are most readily found in the northern and eastern portions of the state, with some populations also in the western San Juan Mountains.",
"title": "Fauna"
},
{
"paragraph_id": 65,
"text": "Common wildlife found in the mountains of Colorado include mule deer, southwestern red squirrel, golden-mantled ground squirrel, yellow-bellied marmot, moose, American pika, and red fox, all at exceptionally high numbers, though moose are not native to the state. The foothills include deer, fox squirrel, desert cottontail, mountain cottontail, and coyote. The prairies are home to black-tailed prairie dog, the endangered swift fox, American badger, and white-tailed jackrabbit.",
"title": "Fauna"
},
{
"paragraph_id": 66,
"text": "The State of Colorado is divided into 64 counties. Two of these counties, the City and County of Broomfield and the City and County of Denver, have consolidated city and county governments. Counties are important units of government in Colorado since there are no civil townships or other minor civil divisions.",
"title": "Counties"
},
{
"paragraph_id": 67,
"text": "The most populous county in Colorado is El Paso County, the home of the City of Colorado Springs. The second most populous county is the City and County of Denver, the state capital. Five of the 64 counties now have more than 500,000 residents, while 12 have fewer than 5,000 residents. The ten most populous Colorado counties are all located in the Front Range Urban Corridor. Mesa County is the most populous county on the Colorado Western Slope.",
"title": "Counties"
},
{
"paragraph_id": 68,
"text": "Colorado has 272 active incorporated municipalities, comprising 197 towns, 73 cities, and two consolidated city and county governments. At the 2020 United States census, 4,299,942 of the 5,773,714 Colorado residents (74.47%) lived in one of these 272 municipalities. Another 714,417 residents (12.37%) lived in one of the 210 census-designated places, while the remaining 759,355 residents (13.15%) lived in the many rural and mountainous areas of the state.",
"title": "Counties"
},
{
"paragraph_id": 69,
"text": "Colorado municipalities operate under one of five types of municipal governing authority. Colorado currently has two consolidated city and county governments, 61 home rule cities, 12 statutory cities, 35 home rule towns, 161 statutory towns, and one territorial charter municipality.",
"title": "Counties"
},
{
"paragraph_id": 70,
"text": "The most populous municipality is the City and County of Denver. Colorado has 12 municipalities with more than 100,000 residents, and 17 with fewer than 100 residents. The 16 most populous Colorado municipalities are all located in the Front Range Urban Corridor. The City of Grand Junction is the most populous municipality on the Colorado Western Slope. The Town of Carbonate has had no year-round population since the 1890 census due to its severe winter weather and difficult access.",
"title": "Counties"
},
{
"paragraph_id": 71,
"text": "In addition to its 272 municipalities, Colorado has 210 unincorporated census-designated places (CDPs) and many other small communities. The most populous unincorporated community in Colorado is Highlands Ranch south of Denver. The seven most populous CDPs are located in the Front Range Urban Corridor. The Clifton CDP is the most populous CDP on the Colorado Western Slope.",
"title": "Counties"
},
{
"paragraph_id": 72,
"text": "Colorado has more than 4,000 special districts, most with property tax authority. These districts may provide schools, law enforcement, fire protection, water, sewage, drainage, irrigation, transportation, recreation, infrastructure, cultural facilities, business support, redevelopment, or other services.",
"title": "Counties"
},
{
"paragraph_id": 73,
"text": "Some of these districts have the authority to levy sales tax as well as property tax and use fees. This has led to a hodgepodge of sales tax and property tax rates in Colorado. There are some street intersections in Colorado with a different sales tax rate on each corner, sometimes substantially different.",
"title": "Counties"
},
{
"paragraph_id": 74,
"text": "Some of the more notable Colorado districts are:",
"title": "Counties"
},
{
"paragraph_id": 75,
"text": "Most recently on March 6, 2020, the Office of Management and Budget defined 21 statistical areas for Colorado comprising four combined statistical areas, seven metropolitan statistical areas, and ten micropolitan statistical areas.",
"title": "Statistical areas"
},
{
"paragraph_id": 76,
"text": "The most populous of the seven metropolitan statistical areas in Colorado is the 10-county Denver-Aurora-Lakewood, CO Metropolitan Statistical Area with a population of 2,963,821 at the 2020 United States census, an increase of +15.29% since the 2010 census.",
"title": "Statistical areas"
},
{
"paragraph_id": 77,
"text": "The more extensive 12-county Denver-Aurora, CO Combined Statistical Area had a population of 3,623,560 at the 2020 census, an increase of +17.23% since the 2010 census.",
"title": "Statistical areas"
},
{
"paragraph_id": 78,
"text": "The most populous extended metropolitan region in Rocky Mountain Region is the 18-county Front Range Urban Corridor along the northeast face of the Southern Rocky Mountains. This region with Denver at its center had a population of 5,055,344 at the 2020 census, an increase of +16.65% since the 2010 census.",
"title": "Statistical areas"
},
{
"paragraph_id": 79,
"text": "The United States Census Bureau estimated the population of Colorado on July 1, 2022, at 5,839,926, a 1.15% increase since the 2020 United States census.",
"title": "Demographics"
},
{
"paragraph_id": 80,
"text": "Coloradan Hispanics and Latinos (of any race and heritage) made up 20.7% of the population. According to the 2000 census, the largest ancestry groups in Colorado are German (22%) including those of Swiss and Austrian descent, Mexican (18%), Irish (12%), and English (12%). Persons reporting German ancestry are especially numerous in the Front Range, the Rockies (west-central counties), and Eastern parts/High Plains.",
"title": "Demographics"
},
{
"paragraph_id": 81,
"text": "Colorado has a high proportion of Hispanic, mostly Mexican-American, citizens in Metropolitan Denver, Colorado Springs, as well as the smaller cities of Greeley and Pueblo, and elsewhere. Southern, Southwestern, and Southeastern Colorado have a large number of Hispanos, the descendants of the early settlers of colonial Spanish origin. In 1940, the U.S. Census Bureau reported Colorado's population as 8.2% Hispanic and 90.3% non-Hispanic White. The Hispanic population of Colorado has continued to grow quickly over the past decades. By 2019, Hispanics made up 22% of Colorado's population, and Non-Hispanic Whites made up 70%. Spoken English in Colorado has many Spanish idioms.",
"title": "Demographics"
},
{
"paragraph_id": 82,
"text": "Colorado also has some large African-American communities located in Denver, in the neighborhoods of Montbello, Five Points, Whittier, and many other East Denver areas. The state has sizable numbers of Asian-Americans of Mongolian, Chinese, Filipino, Korean, Southeast Asian, and Japanese descent. The highest population of Asian Americans can be found on the south and southeast side of Denver, as well as some on Denver's southwest side. The Denver metropolitan area is considered more liberal and diverse than much of the state when it comes to political issues and environmental concerns.",
"title": "Demographics"
},
{
"paragraph_id": 83,
"text": "The population of Native Americans in the state is small. Native Americans are concentrated in metropolitan Denver and the southwestern corner of Colorado, where there are two Ute reservations.",
"title": "Demographics"
},
{
"paragraph_id": 84,
"text": "The majority of Colorado's immigrants are from Mexico, India, China, Vietnam, Korea, Germany and Canada.",
"title": "Demographics"
},
{
"paragraph_id": 85,
"text": "There were a total of 70,331 births in Colorado in 2006. (Birth rate of 14.6 per thousand.) In 2007, non-Hispanic Whites were involved in 59.1% of all births. Some 14.06% of those births involved a non-Hispanic White person and someone of a different race, most often with a couple including one Hispanic. A birth where at least one Hispanic person was involved counted for 43% of the births in Colorado. As of the 2010 census, Colorado has the seventh highest percentage of Hispanics (20.7%) in the U.S. behind New Mexico (46.3%), California (37.6%), Texas (37.6%), Arizona (29.6%), Nevada (26.5%), and Florida (22.5%). Per the 2000 census, the Hispanic population is estimated to be 918,899, or approximately 20% of the state's total population. Colorado has the 5th-largest population of Mexican-Americans, behind California, Texas, Arizona, and Illinois. In percentages, Colorado has the 6th-highest percentage of Mexican-Americans, behind New Mexico, California, Texas, Arizona, and Nevada.",
"title": "Demographics"
},
{
"paragraph_id": 86,
"text": "In 2011, 46% of Colorado's population younger than the age of one were minorities, meaning that they had at least one parent who was not non-Hispanic White.",
"title": "Demographics"
},
{
"paragraph_id": 87,
"text": "Note: Births in table do not add up, because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number.",
"title": "Demographics"
},
{
"paragraph_id": 88,
"text": "In 2017, Colorado recorded the second-lowest fertility rate in the United States outside of New England, after Oregon, at 1.63 children per woman. Significant, contributing factors to the decline in pregnancies were the Title X Family Planning Program and an intrauterine device grant from Warren Buffett's family.",
"title": "Demographics"
},
{
"paragraph_id": 89,
"text": "English, the official language of the state, is the most commonly spoken in Colorado. One Native American language still spoken in Colorado is the Colorado River Numic language also known as the Ute dialect.",
"title": "Demographics"
},
{
"paragraph_id": 90,
"text": "The most common non-English language spoken in the state is Spanish.",
"title": "Demographics"
},
{
"paragraph_id": 91,
"text": "Religious self-identification, per Public Religion Research Institute's 2022 American Values Survey",
"title": "Demographics"
},
{
"paragraph_id": 92,
"text": "Major religious affiliations of the people of Colorado as of 2014 were 64% Christian, of whom there are 44% Protestant, 16% Roman Catholic, 3% Mormon, and 1% Eastern Orthodox. Other religious breakdowns according to the Pew Research Center were 1% Jewish, 1% Muslim, 1% Buddhist and 4% other. The religiously unaffiliated made up 29% of the population. In 2020, according to the Public Religion Research Institute, Christianity was 66% of the population. Judaism was also reported to have increased in this separate study, forming 2% of the religious landscape, while the religiously unaffiliated were reported to form 28% of the population in this separate study. In 2022, the same organization reported 61% was Christian (39% Protestant, 19% Catholic, 2% Mormon, 1% Eastern Orthodox), 2% New Age, 1% Jewish, 1% Hindu, and 34% religiously unaffiliated.",
"title": "Demographics"
},
{
"paragraph_id": 93,
"text": "According to the Association of Religion Data Archives, the largest Christian denominations by the number of adherents in 2010 were the Catholic Church with 811,630; multi-denominational Evangelical Protestants with 229,981; and the Church of Jesus Christ of Latter-day Saints with 151,433. In 2020, the Association of Religion Data Archives determined the largest Christian denominations were Catholics (873,236), non/multi/inter-denominational Protestants (406,798), and Mormons (150,509). Throughout its non-Christian population, there were 12,500 Hindus, 7,101 Hindu Yogis, and 17,369 Buddhists at the 2020 study.",
"title": "Demographics"
},
{
"paragraph_id": 94,
"text": "Our Lady of Guadalupe Catholic Church was the first permanent Catholic parish in modern-day Colorado and was constructed by Spanish colonists from New Mexico in modern-day Conejos. Latin Church Catholics are served by three dioceses: the Archdiocese of Denver and the Dioceses of Colorado Springs and Pueblo.",
"title": "Demographics"
},
{
"paragraph_id": 95,
"text": "The first permanent settlement by members of the Church of Jesus Christ of Latter-day Saints in Colorado arrived from Mississippi and initially camped along the Arkansas River just east of the present-day site of Pueblo.",
"title": "Demographics"
},
{
"paragraph_id": 96,
"text": "Colorado is generally considered among the healthiest states by behavioral and healthcare researchers. Among the positive contributing factors is the state's well-known outdoor recreation opportunities and initiatives. However, there is a stratification of health metrics with wealthier counties such as Douglas and Pitkin performing significantly better relative to southern, less wealthy counties such as Huerfano and Las Animas.",
"title": "Demographics"
},
{
"paragraph_id": 97,
"text": "According to several studies, Coloradans have the lowest rates of obesity of any state in the US. As of 2018, 24% of the population was considered medically obese, and while the lowest in the nation, the percentage had increased from 17% in 2004.",
"title": "Demographics"
},
{
"paragraph_id": 98,
"text": "According to a report in the Journal of the American Medical Association, residents of Colorado had a 2014 life expectancy of 80.21 years, the longest of any U.S. state.",
"title": "Demographics"
},
{
"paragraph_id": 99,
"text": "According to HUD's 2022 Annual Homeless Assessment Report, there were an estimated 10,397 homeless people in Colorado.",
"title": "Demographics"
},
{
"paragraph_id": 100,
"text": "The total state product in 2015 was $318.6 billion. Median Annual Household Income in 2016 was $70,666, 8th in the nation. Per capita personal income in 2010 was $51,940, ranking Colorado 11th in the nation. The state's economy broadened from its mid-19th-century roots in mining when irrigated agriculture developed, and by the late 19th century, raising livestock had become important. Early industry was based on the extraction and processing of minerals and agricultural products. Current agricultural products are cattle, wheat, dairy products, corn, and hay.",
"title": "Economy"
},
{
"paragraph_id": 101,
"text": "The federal government operates several federal facilities in the state, including NORAD (North American Aerospace Defense Command), United States Air Force Academy, Schriever Air Force Base located approximately 10 miles (16 kilometers) east of Peterson Air Force Base, and Fort Carson, both located in Colorado Springs within El Paso County; NOAA, the National Renewable Energy Laboratory (NREL) in Golden, and the National Institute of Standards and Technology in Boulder; U.S. Geological Survey and other government agencies at the Denver Federal Center near Lakewood; the Denver Mint, Buckley Space Force Base, the Tenth Circuit Court of Appeals, and the Byron G. Rogers Federal Building and United States Courthouse in Denver; and a federal Supermax Prison and other federal prisons near Cañon City. In addition to these and other federal agencies, Colorado has abundant National Forest land and four National Parks that contribute to federal ownership of 24,615,788 acres (99,617 km) of land in Colorado, or 37% of the total area of the state. In the second half of the 20th century, the industrial and service sectors expanded greatly. The state's economy is diversified and is notable for its concentration on scientific research and high-technology industries. Other industries include food processing, transportation equipment, machinery, chemical products, the extraction of metals such as gold (see Gold mining in Colorado), silver, and molybdenum. Colorado now also has the largest annual production of beer in any state. Denver is an important financial center.",
"title": "Economy"
},
{
"paragraph_id": 102,
"text": "The state's diverse geography and majestic mountains attract millions of tourists every year, including 85.2 million in 2018. Tourism contributes greatly to Colorado's economy, with tourists generating $22.3 billion in 2018.",
"title": "Economy"
},
{
"paragraph_id": 103,
"text": "Several nationally known brand names have originated in Colorado factories and laboratories. From Denver came the forerunner of telecommunications giant Qwest in 1879, Samsonite luggage in 1910, Gates belts and hoses in 1911, and Russell Stover Candies in 1923. Kuner canned vegetables began in Brighton in 1864. From Golden came Coors beer in 1873, CoorsTek industrial ceramics in 1920, and Jolly Rancher candy in 1949. CF&I railroad rails, wire, nails, and pipe debuted in Pueblo in 1892. Holly Sugar was first milled from beets in Holly in 1905, and later moved its headquarters to Colorado Springs. The present-day Swift packed meat of Greeley evolved from Monfort of Colorado, Inc., established in 1930. Estes model rockets were launched in Penrose in 1958. Fort Collins has been the home of Woodward Governor Company's motor controllers (governors) since 1870, and Waterpik dental water jets and showerheads since 1962. Celestial Seasonings herbal teas have been made in Boulder since 1969. Rocky Mountain Chocolate Factory made its first candy in Durango in 1981.",
"title": "Economy"
},
{
"paragraph_id": 104,
"text": "Colorado has a flat 4.63% income tax, regardless of income level. On November 3, 2020, voters authorized an initiative to lower that income tax rate to 4.55 percent. Unlike most states, which calculate taxes based on federal adjusted gross income, Colorado taxes are based on taxable income—income after federal exemptions and federal itemized (or standard) deductions. Colorado's state sales tax is 2.9% on retail sales. When state revenues exceed state constitutional limits, according to Colorado's Taxpayer Bill of Rights legislation, full-year Colorado residents can claim a sales tax refund on their individual state income tax return. Many counties and cities charge their own rates, in addition to the base state rate. There are also certain county and special district taxes that may apply.",
"title": "Economy"
},
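A minimal illustrative sketch, in Python, of the flat-rate structure described in the preceding paragraph. The 4.55% income tax rate and the 2.9% state sales tax rate come from the paragraph above; the taxable-income figure and the 3.0% local sales tax rate below are hypothetical example values chosen only for illustration, not figures from this article.

# Sketch only: applies Colorado's flat income tax and combines the state
# sales tax with a hypothetical local rate. Example values are made up.
STATE_INCOME_TAX_RATE = 0.0455   # flat rate approved by voters in 2020
STATE_SALES_TAX_RATE = 0.029     # state retail sales tax

def state_income_tax(taxable_income):
    # Flat tax: the same rate applies regardless of income level.
    return taxable_income * STATE_INCOME_TAX_RATE

def total_sales_tax(purchase, local_rate):
    # State base rate plus whatever rate the county or city adds.
    return purchase * (STATE_SALES_TAX_RATE + local_rate)

if __name__ == "__main__":
    income = 70000.0      # hypothetical Colorado taxable income
    local_rate = 0.030    # hypothetical combined county/city rate
    print("State income tax: $%.2f" % state_income_tax(income))
    print("Sales tax on a $100 purchase: $%.2f" % total_sales_tax(100.0, local_rate))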
{
"paragraph_id": 105,
"text": "Real estate and personal business property are taxable in Colorado. The state's senior property tax exemption was temporarily suspended by the Colorado Legislature in 2003. The tax break was scheduled to return for the assessment year 2006, payable in 2007.",
"title": "Economy"
},
{
"paragraph_id": 106,
"text": "As of December 2018, the state's unemployment rate was 4.2%.",
"title": "Economy"
},
{
"paragraph_id": 107,
"text": "The West Virginia teachers' strike in 2018 inspired teachers in other states, including Colorado, to take similar action.",
"title": "Economy"
},
{
"paragraph_id": 108,
"text": "Corn is grown in the Eastern Plains of Colorado. Arid conditions and drought negatively impacted yields in 2020 and 2022.",
"title": "Economy"
},
{
"paragraph_id": 109,
"text": "Colorado has significant hydrocarbon resources. According to the Energy Information Administration, Colorado hosts seven of the largest natural gas fields in the United States, and two of the largest oil fields. Conventional and unconventional natural gas output from several Colorado basins typically accounts for more than five percent of annual U.S. natural gas production. Colorado's oil shale deposits hold an estimated 1 trillion barrels (160 km) of oil—nearly as much oil as the entire world's proven oil reserves. Substantial deposits of bituminous, subbituminous, and lignite coal are found in the state.",
"title": "Economy"
},
{
"paragraph_id": 110,
"text": "Uranium mining in Colorado goes back to 1872, when pitchblende ore was taken from gold mines near Central City, Colorado. Not counting byproduct uranium from phosphate, Colorado is considered to have the third-largest uranium reserves of any U.S. state, behind Wyoming and New Mexico. When Colorado and Utah dominated radium mining from 1910 to 1922, uranium and vanadium were the byproducts (giving towns like present-day Superfund site Uravan their names). Uranium price increases from 2001 to 2007 prompted several companies to revive uranium mining in Colorado. During the 1940s, certain communities–including Naturita and Paradox–earned the moniker of \"yellowcake towns\" from their relationship with uranium mining. Price drops and financing problems in late 2008 forced these companies to cancel or scale back the uranium-mining project. As of 2016, there were no major uranium mining operations in the state, though plans existed to restart production.",
"title": "Economy"
},
{
"paragraph_id": 111,
"text": "Colorado's high Rocky Mountain ridges and eastern plains offer wind power potential, and geologic activity in the mountain areas provides the potential for geothermal power development. Much of the state is sunny and could produce solar power. Major rivers flowing from the Rocky Mountains offer hydroelectric power resources.",
"title": "Economy"
},
{
"paragraph_id": 112,
"text": "Several film productions have been shot on location in Colorado, especially prominent Westerns like True Grit, The Searchers, and Butch Cassidy and the Sundance Kid. Several historic military forts, railways with trains still operating, and mining ghost towns have been used and transformed for historical accuracy in well-known films. There are also several scenic highways and mountain passes that helped to feature the open road in films such as Vanishing Point, Bingo and Starman. Some Colorado landmarks have been featured in films, such as The Stanley Hotel in Dumb and Dumber and The Shining and the Sculptured House in Sleeper. In 2015, Furious 7 was to film driving sequences on Pikes Peak Highway in Colorado. The TV adult-animated series South Park takes place in central Colorado in the titular town. Additionally, The TV series Good Luck Charlie was set, but not filmed, in Denver, Colorado. The Colorado Office of Film and Television has noted that more than 400 films have been shot in Colorado.",
"title": "Culture"
},
{
"paragraph_id": 113,
"text": "There are also several established film festivals in Colorado, including Aspen Shortsfest, Boulder International Film Festival, Castle Rock Film Festival, Denver Film Festival, Festivus Film Festival, Mile High Horror Film Festival, Moondance International Film Festival, Mountainfilm in Telluride, Rocky Mountain Women's Film Festival, and Telluride Film Festival.",
"title": "Culture"
},
{
"paragraph_id": 114,
"text": "Many notable writers have lived or spent extended periods in Colorado. Beat Generation writers Jack Kerouac and Neal Cassady lived in and around Denver for several years each. Irish playwright Oscar Wilde visited Colorado on his tour of the United States in 1882, writing in his 1906 Impressions of America that Leadville was \"the richest city in the world. It has also got the reputation of being the roughest, and every man carries a revolver.\"",
"title": "Culture"
},
{
"paragraph_id": 115,
"text": "Colorado is known for its Southwest and Rocky Mountain cuisine, with Mexican restaurants found throughout the state.",
"title": "Culture"
},
{
"paragraph_id": 116,
"text": "Boulder was named America's Foodiest Town 2010 by Bon Appétit. Boulder, and Colorado in general, is home to several national food and beverage companies, top-tier restaurants and farmers' markets. Boulder also has more Master Sommeliers per capita than any other city, including San Francisco and New York. Denver is known for steak, but now has a diverse culinary scene with many restaurants.",
"title": "Culture"
},
{
"paragraph_id": 117,
"text": "Polidori Sausage is a brand of pork products available in supermarkets, which originated in Colorado, in the early 20th century.",
"title": "Culture"
},
{
"paragraph_id": 118,
"text": "The Food & Wine Classic is held annually each June in Aspen. Aspen also has a reputation as the culinary capital of the Rocky Mountain region.",
"title": "Culture"
},
{
"paragraph_id": 119,
"text": "Colorado wines include award-winning varietals that have attracted favorable notice from outside the state. With wines made from traditional Vitis vinifera grapes along with wines made from cherries, peaches, plums, and honey, Colorado wines have won top national and international awards for their quality. Colorado's grape growing regions contain the highest elevation vineyards in the United States, with most viticulture in the state practiced between 4,000 and 7,000 feet (1,219 and 2,134 m) above sea level. The mountain climate ensures warm summer days and cool nights. Colorado is home to two designated American Viticultural Areas of the Grand Valley AVA and the West Elks AVA, where most of the vineyards in the state are located. However, an increasing number of wineries are located along the Front Range. In 2018, Wine Enthusiast Magazine named Colorado's Grand Valley AVA in Mesa County, Colorado, as one of the Top Ten wine travel destinations in the world.",
"title": "Culture"
},
{
"paragraph_id": 120,
"text": "Colorado is home to many nationally praised microbreweries, including New Belgium Brewing Company, Odell Brewing Company, Great Divide Brewing Company, and Bristol Brewing Company. The area of northern Colorado near and between the cities of Denver, Boulder, and Fort Collins is known as the \"Napa Valley of Beer\" due to its high density of craft breweries.",
"title": "Culture"
},
{
"paragraph_id": 121,
"text": "Colorado is open to cannabis (marijuana) tourism. With the adoption of the 64th state amendment in 2012, Colorado became the first state in the union to legalize marijuana for medicinal (2000), industrial (referring to hemp, 2012), and recreational (2012) use. Colorado's marijuana industry sold $1.31 billion worth of marijuana in 2016 and $1.26 billion in the first three-quarters of 2017. The state generated tax, fee, and license revenue of $194 million in 2016 on legal marijuana sales. Colorado regulates hemp as any part of the plant with less than 0.3% THC.",
"title": "Culture"
},
{
"paragraph_id": 122,
"text": "On April 4, 2014, Senate Bill 14–184 addressing oversight of Colorado's industrial hemp program was first introduced, ultimately being signed into law by Governor John Hickenlooper on May 31, 2014.",
"title": "Culture"
},
{
"paragraph_id": 123,
"text": "On November 7, 2000, 54% of Colorado voters passed Amendment 20, which amends the Colorado State constitution to allow the medical use of marijuana. A patient's medical use of marijuana, within the following limits, is lawful:",
"title": "Culture"
},
{
"paragraph_id": 124,
"text": "Currently, Colorado has listed \"eight medical conditions for which patients can use marijuana—cancer, glaucoma, HIV/AIDS, muscle spasms, seizures, severe pain, severe nausea and cachexia, or dramatic weight loss and muscle atrophy\". While governor, John Hickenlooper allocated about half of the state's $13 million \"Medical Marijuana Program Cash Fund\" to medical research in the 2014 budget. By 2018, the Medical Marijuana Program Cash Fund was the \"largest pool of pot money in the state\" and was used to fund programs including research into pediatric applications for controlling autism symptoms.",
"title": "Culture"
},
{
"paragraph_id": 125,
"text": "On November 6, 2012, voters amended the state constitution to protect \"personal use\" of marijuana for adults, establishing a framework to regulate marijuana in a manner similar to alcohol. The first recreational marijuana shops in Colorado, and by extension the United States, opened their doors on January 1, 2014.",
"title": "Culture"
},
{
"paragraph_id": 126,
"text": "Colorado has five major professional sports leagues, all based in the Denver metropolitan area. Colorado is the least populous state with a franchise in each of the major professional sports leagues.",
"title": "Culture"
},
{
"paragraph_id": 127,
"text": "The Colorado Springs Snow Sox professional baseball team is based in Colorado Springs. The team is a member of the Pecos League, an independent baseball league which is not affiliated with Major or Minor League Baseball.",
"title": "Culture"
},
{
"paragraph_id": 128,
"text": "The Pikes Peak International Hill Climb is a major hill climbing motor race held on the Pikes Peak Highway.",
"title": "Culture"
},
{
"paragraph_id": 129,
"text": "The Cherry Hills Country Club has hosted several professional golf tournaments, including the U.S. Open, U.S. Senior Open, U.S. Women's Open, PGA Championship and BMW Championship.",
"title": "Culture"
},
{
"paragraph_id": 130,
"text": "The following universities and colleges participate in the National Collegiate Athletic Association Division I. The most popular college sports program is the University of Colorado Buffaloes, who used to play in the Big-12 but now play in the Pac-12. They have won the 1957 and 1991 Orange Bowl, 1995 Fiesta Bowl, and 1996 Cotton Bowl Classic.",
"title": "Culture"
},
{
"paragraph_id": 131,
"text": "Colorado's primary mode of transportation (in terms of passengers) is its highway system. Interstate 25 (I-25) is the primary north–south highway in the state, connecting Pueblo, Colorado Springs, Denver, and Fort Collins, and extending north to Wyoming and south to New Mexico. I-70 is the primary east–west corridor. It connects Grand Junction and the mountain communities with Denver and enters Utah and Kansas. The state is home to a network of US and Colorado highways that provide access to all principal areas of the state. Many smaller communities are connected to this network only via county roads.",
"title": "Transportation"
},
{
"paragraph_id": 132,
"text": "Denver International Airport (DIA) is the third-busiest domestic U.S. and international airport in the world by passenger traffic. DIA handles by far the largest volume of commercial air traffic in Colorado and is the busiest U.S. hub airport between Chicago and the Pacific coast, making Denver the most important airport for connecting passenger traffic in the western United States.",
"title": "Transportation"
},
{
"paragraph_id": 133,
"text": "Public transportation bus services are offered both intra-city and inter-city—including the Denver metro area's RTD services. The Regional Transportation District (RTD) operates the popular RTD Bus & Rail transit system in the Denver Metropolitan Area. As of January 2013 the RTD rail system had 170 light-rail vehicles, serving 47 miles (76 km) of track. In addition to local public transit, intercity bus service is provided by Burlington Trailways, Bustang, Express Arrow, and Greyhound Lines.",
"title": "Transportation"
},
{
"paragraph_id": 134,
"text": "Amtrak operates two passenger rail lines in Colorado, the California Zephyr and Southwest Chief. Colorado's contribution to world railroad history was forged principally by the Denver and Rio Grande Western Railroad which began in 1870 and wrote the book on mountain railroading. In 1988 the \"Rio Grande\" was acquired, but was merged into, the Southern Pacific Railroad by their joint owner Philip Anschutz. On September 11, 1996, Anschutz sold the combined company to the Union Pacific Railroad, creating the largest railroad network in the United States. The Anschutz sale was partly in response to the earlier merger of Burlington Northern and Santa Fe which formed the large Burlington Northern and Santa Fe Railway (BNSF), Union Pacific's principal competitor in western U.S. railroading. Both Union Pacific and BNSF have extensive freight operations in Colorado.",
"title": "Transportation"
},
{
"paragraph_id": 135,
"text": "Colorado's freight railroad network consists of 2,688 miles of Class I trackage. It is integral to the U.S. economy, being a critical artery for the movement of energy, agriculture, mining, and industrial commodities as well as general freight and manufactured products between the East and Midwest and the Pacific coast states.",
"title": "Transportation"
},
{
"paragraph_id": 136,
"text": "In August 2014, Colorado began to issue driver licenses to aliens not lawfully in the United States who lived in Colorado. In September 2014, KCNC reported that 524 non-citizens were issued Colorado driver licenses that are normally issued to U.S. citizens living in Colorado.",
"title": "Transportation"
},
{
"paragraph_id": 137,
"text": "The first institution of higher education in the Colorado Territory was the Colorado Seminary, opened on November 16, 1864, by the Methodist Episcopal Church. The seminary closed in 1867 but reopened in 1880 as the University of Denver. In 1870, the Bishop George Maxwell Randall of the Episcopal Church's Missionary District of Colorado and Parts Adjacent opened the first of what become the Colorado University Schools which would include the Territorial School of Mines opened in 1873 and sold to the Colorado Territory in 1874. These schools were initially run by the Episcopal Church. An 1861 territorial act called for the creation of a public university in Boulder, though it would not be until 1876 that the University of Colorado was founded. The 1876 act also renamed Territorial School of Mines as the Colorado School of Mines. An 1870 territorial act created the Agricultural College of Colorado which opened in 1879. The college was renamed the Colorado State College of Agriculture and Mechanic Arts in 1935, and became Colorado State University in 1957.",
"title": "Education"
},
{
"paragraph_id": 138,
"text": "The first Catholic college in Colorado was the Jesuit Sacred Heart College, which was founded in New Mexico in 1877, moved to Morrison in 1884, and to Denver in 1887. The college was renamed Regis College in 1921 and Regis University in 1991. On April 1, 1924, armed students patrolled the campus after a burning cross was found, the climax of tensions between Regis College and the locally-powerful Ku Klux Klan.",
"title": "Education"
},
{
"paragraph_id": 139,
"text": "Following a 1950 assessment by the Service Academy Board, it was determined that there was a need to supplement the U.S. Military and Naval Academies with a third school that would provide commissioned officers for the newly independent Air Force. On April 1, 1954, President Dwight Eisenhower signed a law that moved for the creation of a U.S. Air Force Academy. Later that year, Colorado Springs was selected to host the new institution. From its establishment in 1955, until the construction of appropriate facilities in Colorado Springs was completed and opened in 1958, the Air Force Academy operated out of Lowry Air Force Base in Denver. With the opening of the Colorado Springs facility, the cadets moved to the new campus, though not in the full-kit march that some urban and campus legends suggest. The first class of Space Force officers from the Air Force Academy commissioned on April 18, 2020.",
"title": "Education"
},
{
"paragraph_id": 140,
"text": "The major military installations in Colorado include:",
"title": "Military installations"
},
{
"paragraph_id": 141,
"text": "Former military posts in Colorado include:",
"title": "Military installations"
},
{
"paragraph_id": 142,
"text": "Like the federal government and all other U.S. states, Colorado's state constitution provides for three branches of government: the legislative, the executive, and the judicial branches.",
"title": "Government"
},
{
"paragraph_id": 143,
"text": "The Governor of Colorado heads the state's executive branch. The current governor is Jared Polis, a Democrat. Colorado's other statewide elected executive officers are the Lieutenant Governor of Colorado (elected on a ticket with the Governor), Secretary of State of Colorado, Colorado State Treasurer, and Attorney General of Colorado, all of whom serve four-year terms.",
"title": "Government"
},
{
"paragraph_id": 144,
"text": "The seven-member Colorado Supreme Court is the state's highest court, with seven justices. The Colorado Court of Appeals, with 22 judges, sits in divisions of three judges each. Colorado is divided into 22 judicial districts, each of which has a district court and a county court with limited jurisdiction. The state also has specialized water courts, which sit in seven distinct divisions around the state and which decide matters relating to water rights and the use and administration of water.",
"title": "Government"
},
{
"paragraph_id": 145,
"text": "The state legislative body is the Colorado General Assembly, which is made up of two houses – the House of Representatives and the Senate. The House has 65 members and the Senate has 35. As of 2023, the Democratic Party holds a 23 to 12 majority in the Senate and a 46 to 19 majority in the House.",
"title": "Government"
},
{
"paragraph_id": 146,
"text": "Most Coloradans are native to other states (nearly 60% according to the 2000 census), and this is illustrated by the fact that the state did not have a native-born governor from 1975 (when John David Vanderhoof left office) until 2007, when Bill Ritter took office; his election the previous year marked the first electoral victory for a native-born Coloradan in a gubernatorial race since 1958 (Vanderhoof had ascended from the Lieutenant Governorship when John Arthur Love was given a position in Richard Nixon's administration in 1973).",
"title": "Government"
},
{
"paragraph_id": 147,
"text": "Tax is collected by the Colorado Department of Revenue.",
"title": "Government"
},
{
"paragraph_id": 148,
"text": "Colorado was once considered a swing state, but has become a relatively safe blue state in both state and federal elections. In presidential elections, it had not been won until 2020 by double digits since 1984 and has backed the winning candidate in 9 of the last 11 elections. Coloradans have elected 17 Democrats and 12 Republicans to the governorship in the last 100 years.",
"title": "Government"
},
{
"paragraph_id": 149,
"text": "In presidential politics, Colorado was considered a reliably Republican state during the post-World War II era, voting for the Democratic candidate only in 1948, 1964, and 1992. However, it became a competitive swing state in the 1990s. Since the mid-2000s, it has swung heavily to the Democrats, voting for Barack Obama in 2008 and 2012, Hillary Clinton in 2016, and Joe Biden in 2020.",
"title": "Government"
},
{
"paragraph_id": 150,
"text": "Colorado politics exhibits a contrast between conservative cities such as Colorado Springs and Grand Junction, and liberal cities such as Boulder and Denver. Democrats are strongest in metropolitan Denver, the college towns of Fort Collins and Boulder, southern Colorado (including Pueblo), and several western ski resort counties. The Republicans are strongest in the Eastern Plains, Colorado Springs, Greeley, and far Western Colorado near Grand Junction.",
"title": "Government"
},
{
"paragraph_id": 151,
"text": "Colorado is represented by two members of the United States Senate:",
"title": "Government"
},
{
"paragraph_id": 152,
"text": "Colorado is represented by eight members of the United States House of Representatives:",
"title": "Government"
},
{
"paragraph_id": 153,
"text": "In a 2020 study, Colorado was ranked as the seventh easiest state for citizens to vote in.",
"title": "Government"
},
{
"paragraph_id": 154,
"text": "Colorado was the first state in the union to enact, by voter referendum, a law extending suffrage to women. That initiative was approved by the state's voters on November 7, 1893.",
"title": "Government"
},
{
"paragraph_id": 155,
"text": "On the November 8, 1932, ballot, Colorado approved the repeal of alcohol prohibition more than a year before the Twenty-first Amendment to the United States Constitution was ratified.",
"title": "Government"
},
{
"paragraph_id": 156,
"text": "Colorado has banned, via C.R.S. section 12-6-302, the sale of motor vehicles on Sunday since at least 1953.",
"title": "Government"
},
{
"paragraph_id": 157,
"text": "In 1972 Colorado voters rejected a referendum proposal to fund the 1976 Winter Olympics, which had been scheduled to be held in the state. Denver had been chosen by the International Olympic Committee as the host city on May 12, 1970.",
"title": "Government"
},
{
"paragraph_id": 158,
"text": "In 1992, by a margin of 53 to 47 percent, Colorado voters approved an amendment to the state constitution (Amendment 2) that would have prevented any city, town, or county in the state from taking any legislative, executive or judicial action to recognize homosexuals or bisexuals as a protected class. In 1996, in a 6–3 ruling in Romer v. Evans, the U.S. Supreme Court found that preventing protected status based upon homosexuality or bisexuality did not satisfy the Equal Protection Clause.",
"title": "Government"
},
{
"paragraph_id": 159,
"text": "In 2006, voters passed Amendment 43, which banned gay marriage in Colorado. That initiative was nullified by the U.S. Supreme Court's 2015 decision in Obergefell v. Hodges.",
"title": "Government"
},
{
"paragraph_id": 160,
"text": "In 2012, voters amended the state constitution protecting the \"personal use\" of marijuana for adults, establishing a framework to regulate cannabis like alcohol. The first recreational marijuana shops in Colorado, and by extension the United States, opened their doors on January 1, 2014.",
"title": "Government"
},
{
"paragraph_id": 161,
"text": "On December 19, 2023, the Colorado Supreme Court ruled that Donald Trump was disqualified from the 2024 United States Presidential Election in part due to his alleged incitement of the January 6 insurrection.",
"title": "Government"
},
{
"paragraph_id": 162,
"text": "The two Native American reservations remaining in Colorado are the Southern Ute Indian Reservation (1873; Ute dialect: Kapuuta-wa Moghwachi Núuchi-u) and Ute Mountain Ute Indian Reservation (1940; Ute dialect: Wʉgama Núuchi). The two abolished Indian reservations in Colorado were the Cheyenne and Arapaho Indian Reservation (1851–1870) and Ute Indian Reservation (1855–1873).",
"title": "Native American reservations"
},
{
"paragraph_id": 163,
"text": "Colorado is home to 4 national parks, 9 national monuments, 3 national historic sites, 2 national recreation areas, 4 national historic trails, 1 national scenic trail, 11 national forests, 2 national grasslands, 44 national wildernesses, 3 national conservation areas, 8 national wildlife refuges, 3 national heritage areas, 26 national historic landmarks, 16 national natural landmarks, more than 1,500 National Register of Historic Places, 1 wild and scenic river, 42 state parks, 307 state wildlife areas, 93 state natural areas, 28 national recreation trails, 6 regional trails, and numerous other scenic, historic, and recreational areas.",
"title": "Protected areas"
},
{
"paragraph_id": 164,
"text": "39°N 105°W / 39°N 105°W / 39; -105 (State of Colorado)",
"title": "External links"
}
] | Colorado is a state in the Mountain West subregion of the Western United States. It encompasses most of the Southern Rocky Mountains, as well as the northeastern portion of the Colorado Plateau and the western edge of the Great Plains. Colorado is the eighth most extensive and 21st most populous U.S. state. The United States Census Bureau estimated the population of Colorado at 5,839,926 as of July 1, 2022, a 1.15% increase since the 2020 United States census. The region has been inhabited by Native Americans and their ancestors for at least 13,500 years and possibly much longer. The eastern edge of the Rocky Mountains was a major migration route for early peoples who spread throughout the Americas. In 1848, much of the region was annexed to the United States with the Treaty of Guadalupe Hidalgo. The Pike's Peak Gold Rush of 1858–1862 created an influx of settlers. On February 28, 1861, U.S. President James Buchanan signed an act creating the Territory of Colorado, and on August 1, 1876, President Ulysses S. Grant signed Proclamation 230 admitting Colorado to the Union as the 38th state. The Spanish adjective "colorado" means "colored red" or "ruddy". Colorado is nicknamed the "Centennial State" because it became a state one century after the signing of the United States Declaration of Independence. Colorado is bordered by Wyoming to the north, Nebraska to the northeast, Kansas to the east, Oklahoma to the southeast, New Mexico to the south, Utah to the west, and touches Arizona to the southwest at the Four Corners. Colorado is noted for its landscape of mountains, forests, high plains, mesas, canyons, plateaus, rivers, and desert lands. Colorado is one of the Mountain States and is often considered to be part of the southwestern United States. The high plains of Colorado may be considered a part of the midwestern United States. Denver is the capital, the most populous city, and the center of the Front Range Urban Corridor. Colorado Springs is the second most populous city. Residents of the state are known as Coloradans, although the antiquated "Coloradoan" is occasionally used. Major parts of the economy include government and defense, mining, agriculture, tourism, and increasingly other kinds of manufacturing. With increasing temperatures and decreasing water availability, Colorado's agriculture, forestry, and tourism economies are expected to be heavily affected by climate change. | 2001-10-12T20:56:49Z | 2023-12-24T22:00:48Z | [
"Template:Dts",
"Template:Reflist",
"Template:Cite book",
"Template:Short description",
"Template:Efn",
"Template:Refend",
"Template:Bartable",
"Template:Cite web",
"Template:Cite encyclopedia",
"Template:Failed verification",
"Template:Convert",
"Template:GeoGroup",
"Template:Authority control",
"Template:ISBN",
"Template:United States political divisions",
"Template:IPAc-en",
"Template:Ussc",
"Template:Cbignore",
"Template:Cite iucn",
"Template:For timeline",
"Template:Notelist",
"Template:Clear",
"Template:Ntsh",
"Template:Party color cell",
"Template:Cite report",
"Template:Further",
"Template:US Census population",
"Template:Pie chart",
"Template:Webarchive",
"Template:Cite magazine",
"Template:Excerpt",
"Template:Weather box",
"Template:Cite journal",
"Template:Curlie",
"Template:Panorama",
"Template:Osmrelation-inline",
"Template:Age",
"Template:Portal",
"Template:Refbegin",
"Template:Infobox U.S. state",
"Template:Main",
"Template:Cite AV media",
"Template:Cite news",
"Template:As of",
"Template:Collapsible list",
"Template:Citation",
"Template:See also",
"Template:Sister project links",
"Template:Colorado",
"Template:Coord",
"Template:For-multi",
"Template:Nts",
"Template:Cite ngs"
] | https://en.wikipedia.org/wiki/Colorado |
5,401 | Carboniferous | The Carboniferous (/ˌkɑːrbəˈnɪfərəs/ KAR-bə-NIF-ər-əs) is a geologic period and system of the Paleozoic that spans 60 million years from the end of the Devonian Period 358.9 million years ago (mya), to the beginning of the Permian Period, 298.9 mya. In North America, the Carboniferous is often treated as two separate geological periods, the earlier Mississippian and the later Pennsylvanian.
The name Carboniferous means "coal-bearing", from the Latin carbō ("coal") and ferō ("bear, carry"), and refers to the many coal beds formed globally during that time. The first of the modern "system" names, it was coined by geologists William Conybeare and William Phillips in 1822, based on a study of the British rock succession.
The Carboniferous is the period during which both terrestrial animal life and land plant life became well established. Stegocephalia (four-limbed vertebrates including true tetrapods), whose forerunners (tetrapodomorphs) had evolved from lobe-finned fish during the preceding Devonian period, became pentadactylous during the Carboniferous. The period is sometimes called the Age of Amphibians due to the diversification of early amphibians such as the temnospondyls, which became dominant land vertebrates, as well as the first appearance of amniotes including synapsids (the clade to which modern mammals belong) and sauropsids (which include modern reptiles and birds) during the late Carboniferous. Due to the raised atmospheric oxygen level, land arthropods such as arachnids (e.g. trigonotarbids and Pulmonoscorpius), myriapods (e.g. Arthropleura) and insects (e.g. Meganeura) also underwent a major evolutionary radiation during the late Carboniferous. Vast swaths of forests and swamps covered the land, which eventually became the coal beds characteristic of the Carboniferous stratigraphy evident today.
The latter half of the period experienced glaciations, low sea level, and mountain building as the continents collided to form Pangaea. A minor marine and terrestrial extinction event, the Carboniferous rainforest collapse, occurred at the end of the period, caused by climate change.
The development of a Carboniferous chronostratigraphic timescale began in the late 18th century. The term "Carboniferous" was first used as an adjective by Irish geologist Richard Kirwan in 1799, and later used in a heading entitled "Coal-measures or Carboniferous Strata" by John Farey Sr. in 1811. Four units were originally ascribed to the Carboniferous: in ascending order, the Old Red Sandstone, Carboniferous Limestone, Millstone Grit and the Coal Measures. These four units were placed into a formalised Carboniferous unit by William Conybeare and William Phillips in 1822, and then into the Carboniferous System by Phillips in 1835. The Old Red Sandstone was later considered Devonian in age.
The similarity in successions between the British Isles and Western Europe led to the development of a common European timescale, with the Carboniferous System divided into the lower Dinantian, dominated by carbonate deposition, and the upper Silesian, with mainly siliciclastic deposition. The Dinantian was divided into the Tournaisian and Viséan stages, and the Silesian into the Namurian, Westphalian and Stephanian stages. The Tournaisian is the same length as the International Commission on Stratigraphy (ICS) stage, but the Viséan is longer, extending into the lower Serpukhovian. North American geologists recognised a similar stratigraphy, but divided it into two systems rather than one: the lower carbonate-rich sequence of the Mississippian System and the upper siliciclastic and coal-rich sequence of the Pennsylvanian. The United States Geological Survey officially recognised these two systems in 1953. In Russia, in the 1840s, British and Russian geologists divided the Carboniferous into the Lower, Middle and Upper series based on Russian sequences. In the 1890s these became the Dinantian, Moscovian and Uralian stages. The Serpukhovian was proposed as part of the Lower Carboniferous, and the Upper Carboniferous was divided into the Moscovian and Gzhelian. The Bashkirian was added in 1934.
In 1975, the ICS formally ratified the Carboniferous System, with the Mississippian and Pennsylvanian subsystems from the North American timescale, the Tournaisian and Visean stages from the Western European and the Serpukhovian, Bashkirian, Moscovian, Kasimovian and Gzhelian from the Russian. With the formal ratification of the Carboniferous System, the Dinantian, Silesian, Namurian, Westphalian and Stephanian became redundant terms, although the latter three are still in common use in Western Europe.
The Carboniferous is divided into two subsystems, the Mississippian and Pennsylvanian, each of which is divided into three series (Lower, Middle and Upper); together they comprise seven stages. The Tournaisian, Visean and Serpukhovian stages equate to the Lower, Middle and Upper series of the Mississippian respectively. The Bashkirian and Moscovian stages equate to the Lower and Middle Pennsylvanian respectively, and the Kasimovian and Gzhelian stages together form the Upper Pennsylvanian.
Stages can be defined globally or regionally. For global stratigraphic correlation, the ICS ratify global stages based on a Global Boundary Stratotype Section and Point (GSSP) from a single formation (a stratotype) identifying the lower boundary of the stage. Only the boundaries of the Carboniferous System and three of the stage bases are defined by global stratotype sections and points because of the complexity of the geology. The ICS subdivisions from youngest to oldest are as follows:
The Mississippian was proposed by Alexander Winchell in 1870, named after the extensive exposure of Lower Carboniferous limestone in the upper Mississippi valley. During the Mississippian, there was a marine connection between the Paleo-Tethys and Panthalassa through the Rheic Ocean resulting in the near worldwide distribution of marine faunas and so allowing widespread correlations using marine biostratigraphy. However, there are few Mississippian volcanic rocks and so obtaining radiometric dates is difficult.
The Tournaisian Stage is named after the Belgian city of Tournai. It was introduced in scientific literature by Belgian geologist André Dumont in 1832. The GSSP for the base of the Carboniferous System, Mississippian Subsystem and Tournaisian Stage is located at the La Serre section in Montagne Noire, southern France. It is defined by the first appearance of the conodont Siphonodella sulcata within the evolutionary lineage from Siphonodella praesulcata to Siphonodella sulcata. This was ratified by the ICS in 1990. However, in 2006 further study revealed the presence of Siphonodella sulcata below the boundary, and the presence of Siphonodella praesulcata and Siphonodella sulcata together above a local unconformity. This means the evolution from one species to the other, which defines the boundary, is not seen at the La Serre site, making precise correlation difficult.
The Viséan Stage was introduced by André Dumont in 1832 and is named after the city of Visé, Liège Province, Belgium. In 1967, the base of the Visean was officially defined as the first black limestone in the Leffe facies at the Bastion Section in the Dinant Basin. These changes are now thought to be ecologically driven rather than due to evolutionary change, and so this has not been used as the location for the GSSP. Instead, the GSSP for the base of the Visean is located in Bed 83 of the sequence of dark grey limestones and shales at the Pengchong section, Guangxi, southern China. It is defined by the first appearance of the fusulinid Eoparastaffella simplex in the evolutionary lineage Eoparastaffella ovalis – Eoparastaffella simplex and was ratified in 2009.
The Serpukhovian Stage was proposed in 1890 by Russian stratigrapher Sergei Nikitin. It is named after the city of Serpukhov, near Moscow. The Serpukhovian Stage currently lacks a defined GSSP. The Visean-Serpukhovian boundary coincides with a major period of glaciation. The resulting sea level fall and climatic changes led to the loss of connections between marine basins and endemism of marine fauna across the Russian margin. This means changes in biota are environmental rather than evolutionary making wider correlation difficult. Work is underway in the Urals and Nashui, Guizhou Province, southwestern China for a suitable site for the GSSP with the proposed definition for the base of the Serpukhovian as the first appearance of conodont Lochriea ziegleri.
The Pennsylvanian was proposed by J. J. Stevenson in 1888, named after the widespread coal-rich strata found across the state of Pennsylvania. The closure of the Rheic Ocean and formation of Pangea during the Pennsylvanian, together with widespread glaciation across Gondwana, led to major climate and sea level changes, which restricted marine fauna to particular geographic areas thereby reducing widespread biostratigraphic correlations. Extensive volcanic events associated with the assembling of Pangea mean more radiometric dating is possible relative to the Mississippian.
The Bashkirian Stage was proposed by Russian stratigrapher Sofia Semikhatova in 1934. It was named after Bashkiria, the then Russian name of the republic of Bashkortostan in the southern Ural Mountains of Russia. The GSSP for the base of the Pennsylvanian Subsystem and Bashkirian Stage is located at Arrow Canyon in Nevada, US and was ratified in 1996. It is defined by the first appearance of the conodont Declinognathodus noduliferus. Arrow Canyon lay in a shallow, tropical seaway which stretched from Southern California to Alaska. The boundary is within a cyclothem sequence of transgressive limestones and fine sandstones, and regressive mudstones and brecciated limestones.
The Moscovian Stage is named after shallow marine limestones and colourful clays found around Moscow, Russia. It was first introduced by Sergei Nikitin in 1890. The Moscovian currently lacks a defined GSSP. The fusulinid Aljutovella aljutovica can be used to define the base of the Moscovian across the northern and eastern margins of Pangea; however, it is restricted in geographic area, which means it cannot be used for global correlations. The first appearance of the conodonts Declinognathodus donetzianus or Idiognathoides postsulcatus has been proposed as a boundary-marking event, and potential sites in the Urals and Nashui, Guizhou Province, southwestern China are being considered.
The Kasimovian is the first stage in the Upper Pennsylvanian. It is named after the Russian city of Kasimov, and was originally included as part of Nikitin's 1890 definition of the Moscovian. It was first recognised as a distinct unit by A.P. Ivanov in 1926, who named it the "Tiguliferina" Horizon after a type of brachiopod. The boundary covers a period of globally low sea level, which has resulted in disconformities within many sequences of this age. This has created difficulties in finding suitable marine fauna that can be used to correlate boundaries worldwide. The Kasimovian currently lacks a defined GSSP and potential sites in the southern Urals, southwest USA and Nashui, Guizhou Province, southwestern China are being considered.
The Gzhelian Stage is the second stage in the Upper Pennsylvanian. It is named after the Russian village of Gzhel, near Ramenskoye, not far from Moscow. The name and type locality were defined by Sergei Nikitin in 1890. The restricted geographic distribution of fauna is again a problem in defining the Kasimovian-Gzhelian boundary and the base of the Gzhelian currently lacks a defined GSSP. The first appearance of the fusulinid Rauserites rossicus and Rauserites stuckenbergi can be used in the Boreal Sea and Paleo-Tethyan regions but not eastern Pangea or Panthalassa margins. Potential sites in the Urals and Nashui, Guizhou Province, southwestern China for the GSSP are being considered.
The GSSP for the base of the Permian is located in the Aidaralash River valley near Aqtöbe, Kazakhstan and was ratified in 1996. The beginning of the stage is defined by the first appearance of the conodont Streptognathodus isolatus.
A cyclothem is a succession of non-marine and marine sedimentary rocks, deposited during a single sedimentary cycle, with an erosional surface at its base. Whilst individual cyclothems are often only metres to a few tens of metres thick, cyclothem sequences can be many hundreds to thousands of metres thick, and contain tens to hundreds of individual cyclothems. Cyclothems were deposited along continental shelves where the very gentle gradient of the shelves meant even small changes in sea level led to large advances or retreats of the sea. Cyclothem lithologies vary from mudrock and carbonate-dominated to coarse siliciclastic sediment-dominated sequences depending on the paleo-topography, climate and supply of sediments to the shelf.
The main period of cyclothem deposition occurred during the Late Paleozoic Ice Age (LPIA) from the Late Mississippian to Early Permian, when the waxing and waning of ice sheets led to rapid changes in eustatic sea level. The growth of ice sheets led global sea levels to fall as water was locked away in glaciers. Falling sea levels exposed large tracts of the continental shelves, across which river systems eroded channels and valleys and vegetation broke down the surface to form soils. The non-marine sediments deposited on this erosional surface form the base of the cyclothem. As sea levels began to rise, the rivers flowed through increasingly water-logged landscapes of swamps and lakes. Peat mires developed in these wet and oxygen-poor conditions, leading to coal formation. With continuing sea level rise, coastlines migrated landward and deltas, lagoons and estuaries developed; their sediments were deposited over the peat mires. As fully marine conditions were established, limestones succeeded these marginal marine deposits. The limestones were in turn overlain by deep water black shales as maximum sea levels were reached. Ideally, this sequence would be reversed as sea levels began to fall again; however, sea level falls tend to be protracted whilst sea level rises are rapid, because ice sheets grow slowly but melt quickly. Therefore, the majority of the time represented by a cyclothem passed during falling sea levels, when rates of erosion were high and little or no sediment was deposited. Erosion during sea level falls could also result in the full or partial removal of previous cyclothem sequences. Individual cyclothems are generally less than 10 m thick because the speed at which sea level rose gave only limited time for sediments to accumulate.
During the Pennsylvanian, cyclothems were deposited in shallow, epicontinental seas across the tropical regions of Laurussia (present day western and central US, Europe, Russia and central Asia) and the North and South China cratons. The rapid sea-level fluctuations they represent correlate with the glacial cycles of the Late Paleozoic Ice Age. The advance and retreat of ice sheets across Gondwana followed a 100 kyr Milankovitch cycle, and so each cyclothem represents a cycle of sea level fall and rise over a 100 kyr period.
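As a rough consistency check, a back-of-envelope sketch (not a statement from the article) shows that the ~100 kyr cyclicity implies on the order of 140 glacial-interglacial cycles between the onset of cyclothem deposition around 313 million years ago and the end of the Carboniferous at 298.9 million years ago, consistent with the "tens to hundreds of individual cyclothems" noted above. The figures below are taken from the surrounding text; the Python sketch is illustrative only.

# Back-of-envelope estimate: how many ~100 kyr cyclothem cycles fit between
# the onset of cyclothem deposition (~313 Ma) and the end of the
# Carboniferous (298.9 Ma). Values are taken from the surrounding text.
ONSET_MA = 313.0            # approximate start of cyclothem deposition
END_CARBONIFEROUS_MA = 298.9
CYCLE_LENGTH_MYR = 0.1      # ~100 kyr Milankovitch cycle

span_myr = ONSET_MA - END_CARBONIFEROUS_MA
approx_cycles = round(span_myr / CYCLE_LENGTH_MYR)
print("Span: %.1f Myr -> roughly %d glacial-interglacial cycles" % (span_myr, approx_cycles))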
The Carboniferous coal beds provided much of the fuel for power generation during the Industrial Revolution and are still of great economic importance.
The large coal deposits of the Carboniferous owe their existence primarily to two factors. The first is the appearance of wood tissue and bark-bearing trees. The evolution of the wood fiber lignin and the bark-sealing, waxy substance suberin variously opposed decay organisms so effectively that dead materials accumulated long enough to fossilise on a large scale. The second factor was the lower sea levels that occurred during the Carboniferous as compared to the preceding Devonian Period, which fostered the development of extensive lowland swamps and forests. Based on a genetic analysis of basidiomycete fungi, some researchers have proposed that large quantities of wood were buried during this period because animals and decomposing bacteria and fungi had not yet evolved enzymes that could effectively digest the resistant phenolic lignin polymers and waxy suberin polymers. They suggest that fungi able to break those substances down effectively became dominant only towards the end of the period, making subsequent coal formation much rarer. This delayed fungal evolution hypothesis has been challenged by other researchers, who conclude that tectonic and climatic conditions during the formation of Pangaea, which created water-filled basins alongside developing mountain ranges, produced widespread humid, tropical conditions and the burial of massive quantities of organic matter, and that these conditions were responsible for the high rate of coal formation. They note that large amounts of coal were also formed during the Mesozoic and Cenozoic, well after lignin-digesting fungi had become well established, and that fungal degradation of lignin had likely already evolved by the end of the Devonian, even if the specific enzymes used by basidiomycetes had not.
During the Carboniferous, there was an increased rate of tectonic plate movement as the supercontinent of Pangea assembled. The continents themselves formed a near circle around the opening Paleo-Tethys Ocean, with the massive Panthalassic Ocean beyond. The largest continent, Gondwana (modern day Africa, Arabia, South America, India, Madagascar, West Australia and East Antarctica), covered the south polar region. To its northwest was Laurussia (modern day North America, Greenland, Scandinavia, and much of Western Europe). These two continents slowly collided to form the core of Pangea. To the north of Laurussia lay Siberia and Amuria (central Mongolia). To the east of Siberia, Kazakhstania, North China and South China formed the northern margin of the Paleo-Tethys, with Annamia (Mainland Southeast Asia) lying to the south.
An Early Carboniferous global marine transgression resulted in the widespread deposition of limestones in the warm, shallow seas of equatorial regions. Sea levels then dropped as the Late Paleozoic Ice Age (LPIA) intensified in the Pennsylvanian, exposing large areas of continental shelf. As glaciers waxed and waned, repeated rises and falls in sea levels produced a distinctive pattern of terrestrial and marine sediments known as cyclothems. These consist of river channel and delta deposits with peat mires, followed by estuarine, coastal and offshore marine deposits as river deltas and wetlands built out across the continental shelves, only to be drowned as sea levels rose again.
Today the Variscan-Alleghanian-Ouachita Orogen stretches over 10,000 km from the present-day Gulf of Mexico in the west to Turkey in the east. It formed between the Middle Devonian and Early Permian as a series of continental collisions between Laurussia, Gondwana and the Armorican Terrane Assemblage (much of modern day Central and Western Europe including Iberia) as the Rheic Ocean closed and Pangea formed.
The Armorican terranes rifted away from Gondwana during the Late Ordovician. As they drifted northwards the Rheic Ocean closed in front of them and they began to collide with southeastern Laurussia in the Middle Devonian. The resulting Variscan Orogeny involved a complex series of oblique collisions with associated metamorphism, igneous activity, and large-scale deformation between these terranes and Laurussia, which continued into the Carboniferous.
During the mid Carboniferous, the South American sector of Gondwana collided obliquely with Laurussia’s southern margin resulting in the Ouachita Orogeny. The major strike-slip faulting that occurred between Laurussia and Gondwana extended eastwards into the Appalachian Mountains where early deformation in the Alleghanian Orogeny was predominantly strike-slip. As the West African sector of Gondwana collided with Laurussia, during the Late Pennsylvanian, deformation along the Alleghanian orogen became northwesterly-directed compression.
The Ural Orogen is a north-south trending fold and thrust belt that forms the western edge of the Central Asian Orogenic Belt. The Uralian Orogeny began in the Late Devonian and continued, with some hiatuses, into the Jurassic. From the Late Devonian to Early Carboniferous, the Magnitogorsk island arc, which lay between Kazakhstania and Laurussia in the Palaeo-Uralian Ocean, collided with the passive margin of northeastern Laurussia (Baltica craton). The suture zone between the former island arc complex and the continental margin formed the Main Uralian Fault, a major structure that runs for more than 2,000 km along the orogen. Accretion of the island arc was complete by the Tournaisian, but subduction of the Paleo-Ural Ocean between Kazakhstania and Laurussia continued until the Bashkirian when the ocean finally closed and continental collision began. Significant strike-slip movement along this zone indicates the collision was oblique. Deformation continued into the Permian and during the Late Carboniferous and Permian the region was extensively intruded by granites.
The Laurussian continent was formed by the collision between Laurentia, Baltica and Avalonia during the Devonian. At the beginning of the Carboniferous it lay at low latitude in the southern hemisphere and drifted north during the Carboniferous, crossing the equator during the mid-to-Late Carboniferous and reaching low latitudes in the northern hemisphere by the end of the Carboniferous. The Variscan-Appalachian-Ouachita mountain ranges drew in moist air from the Paleo-Tethys resulting in heavy precipitation and a tropical wetland environment. Extensive coal deposits developed within the cyclothem sequences that dominated the Pennsylvanian sedimentary basins associated with the growing orogenic belt.
Whilst the southern and southeastern margins of Laurussia were dominated by the Variscan-Alleghanian-Ouachita Orogeny and the northeasterly margin by the Uralian Orogeny, subduction of the Panthalassic oceanic plate along its western margin resulted in the Antler Orogeny in the Late Devonian to early Mississippian. Further north along the margin, slab roll-back, beginning in the early Mississippian, led to the rifting of the Yukon-Tanana terrane and the opening of the Slide Mountain Ocean. Along the northern margin of Laurussia, orogenic collapse of the Late Devonian to early Mississippian Ellesmerian or Innuitian Orogeny led to the development of the Sverdrup Basin.
Much of Gondwana lay in the southern polar region during the Carboniferous. As the plate moved, the South Pole drifted from southern Africa in the Early Carboniferous to East Antarctica by the end of the period. Glacial deposits are widespread across Gondwana and indicate multiple ice centres and long distance movement of ice.
The northern to northeastern margin of Gondwana (Northeast Africa, Arabia, India and northeastern West Australia) was a passive margin along the southern edge of the Paleo-Tethys, with cyclothem deposition including, during more temperate intervals, coal swamps in Western Australia. The Mexican terranes along the northwestern Gondwanan margin were affected by the subduction of the Rheic Ocean. However, they lay to the west of the Ouachita Orogeny and were not impacted by continental collision, but instead became part of the active margin of the Pacific. The Moroccan margin was affected by periods of widespread dextral strike-slip deformation, magmatism and metamorphism associated with the Variscan Orogeny.
Towards the end of the Carboniferous, extension and rifting across the northern margin of Gondwana would lead to the breaking away of the Cimmerian Terrane (parts of present-day Turkey, Iran, Afghanistan, Pakistan, Tibet, China, Myanmar, Thailand and Malaysia) during the early Permian and the opening of the Neo-Tethys Ocean.
Along the southeastern and southern margin of Gondwana (eastern Australia and Antarctica), northward subduction of Panthalassa continued. Changes in the relative motion of the plates resulted in the Early Carboniferous Kanimblan Orogeny. Continental arc magmatism continued into the Late Carboniferous and extended round to connect with the developing proto-Andean subduction zone along the western South American margin of Gondwana.
Shallow seas covered much of the Siberian craton in the Early Carboniferous. These retreated as sea levels fell in the Pennsylvanian and as the continent drifted north into more temperate zones extensive coal deposits formed in the Kuznetsk Basin.
The northwest to eastern margins of Siberia were passive margins along the Mongol-Okhotsk Ocean on the far side of which lay Amuria. From the mid Carboniferous, subduction zones with associated magmatic arcs developed along both margins of the ocean.
The southwestern margin of Siberia was the site of a long-lasting and complex accretionary orogen. The Devonian to Early Carboniferous Siberian and South Chinese Altai accretionary complexes developed above an east-dipping subduction zone, whilst further south, the Zharma-Saur arc formed along the northeastern margin of Kazakhstania. By the Late Carboniferous, all these complexes had accreted to the Siberian craton, as shown by the intrusion of post-orogenic granites across the region. As Kazakhstania had already accreted to Laurussia, Siberia was effectively part of Pangea by 310 Ma, although major transcurrent movements continued between it and Laurussia into the Permian.
The Kazakhstanian microcontinent is composed of a series of Devonian and older accretionary complexes. It was strongly deformed during the Carboniferous as its western margin collided with Laurussia during the Uralian Orogen and its northeastern margin collided with Siberia. Continuing transcurrent motion between Laurussia and Siberia led the formerly elongate microcontinent to bend into an orocline.
During the Carboniferous, the Tarim craton lay along the northwestern edge of North China. Subduction along the Kazakhstanian margin of the Turkestan Ocean resulted in collision between northern Tarim and Kazakhstania during the mid Carboniferous as the ocean closed. The South Tian Shan fold and thrust belt, which extends over 2000 km from Uzbekistan to Northwest China, is the remains of this accretionary complex and forms the suture between Kazakhstania and Tarim. A continental magmatic arc above a south-dipping subduction zone lay along the northern North China margin, consuming the Paleoasian Ocean. Northward subduction of the Paleo-Tethys beneath the southern margins of North China and Tarim continued during the Carboniferous, with the South Qinling block accreted to North China during the mid to Late Carboniferous.
No sediments are preserved from the Early Carboniferous in North China. However, bauxite deposits immediately above the regional mid Carboniferous unconformity indicate warm tropical conditions and are overlain by cyclothems including extensive coals.
South China and Annamia (Mainland Southeast Asia) rifted from Gondwana during the Devonian. During the Carboniferous, they were separated from each other and from North China by the Paleoasian Ocean, with the Paleo-Tethys to the southwest and Panthalassa to the northeast. Cyclothem sediments with coal and evaporites were deposited across the passive margins that surrounded both continents. Offshore of eastern South China, the proto-Japanese islands lay above a subduction zone consuming the Panthalassic Ocean.
Average global temperatures in the Early Carboniferous were high: approximately 20 °C (68 °F). However, cooling during the Middle Carboniferous reduced average global temperatures to about 12 °C (54 °F). Atmospheric carbon dioxide levels fell during the Carboniferous from roughly 8 times the current level at the beginning of the period to a level similar to today's at its end. The Carboniferous is considered part of the Late Palaeozoic Ice Age, which began in the latest Devonian with the formation of small glaciers in Gondwana. The climate warmed during the Tournaisian before cooling again; another warm interval occurred during the Viséan, but cooling resumed during the early Serpukhovian. At the beginning of the Pennsylvanian, around 323 million years ago, glaciers began to form around the South Pole and grew to cover a vast area of Gondwana, extending from the southern reaches of the Amazon basin across large areas of southern Africa, as well as most of Australia and Antarctica. Cyclothems, which began around 313 million years ago and continued into the following Permian, indicate that the size of the glaciers was controlled by Milankovitch cycles, akin to recent ice ages, with glacial periods and interglacials. Deep ocean temperatures during this time were cold due to the influx of cold bottom waters generated by seasonal melting of the ice cap.
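To put the stated eightfold fall in CO2 into rough climatic terms, the sketch below applies the widely used logarithmic approximation for CO2 radiative forcing, ΔF ≈ 5.35 ln(C/C0) W/m². The coefficient, the choice of a ~280 ppm baseline and the eightfold multiplier are illustrative assumptions, not figures given in the text.

```python
import math

def co2_forcing_change(c_new_ppm: float, c_ref_ppm: float) -> float:
    """Approximate change in radiative forcing (W/m^2) between two CO2
    concentrations, using the simplified expression dF = 5.35 * ln(C_new/C_ref)."""
    return 5.35 * math.log(c_new_ppm / c_ref_ppm)

# Illustrative values only: ~280 ppm as a "present-day" (pre-industrial) baseline,
# and eight times that for the earliest Carboniferous.
baseline_ppm = 280.0
early_carboniferous_ppm = 8 * baseline_ppm

print(round(co2_forcing_change(early_carboniferous_ppm, baseline_ppm), 1))  # ~11.1 W/m^2 above baseline
print(round(co2_forcing_change(baseline_ppm, early_carboniferous_ppm), 1))  # ~-11.1 W/m^2 fall across the period
```

On this crude basis, the CO2 decline described above corresponds to a forcing drop of roughly 11 W/m², consistent in sign with the shift from greenhouse to icehouse conditions, though the real climate response also depended on palaeogeography, weathering and ice-albedo feedbacks.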
Although it is often asserted that Carboniferous atmospheric oxygen concentrations were significantly higher than today, at around 30% of the total atmosphere, estimates of prehistoric atmospheric oxygen concentrations are highly uncertain, and other estimates suggest that the amount of oxygen was actually lower than in today's atmosphere.
The cooling and drying of the climate led to the Carboniferous Rainforest Collapse (CRC) during the late Carboniferous. Tropical rainforests fragmented and then were eventually devastated by climate change.
As the continents assembled to form Pangea, the growth of the Variscan-Alleghanian-Ouachita mountains led to increased weathering and carbonate sedimentation on the ocean floor, whilst the distribution of continents across the paleo-tropics meant vast areas of land were available for the spread of tropical rainforests. Together these two factors significantly increased CO2 drawdown from the atmosphere, lowering global temperatures, increasing ocean pH and triggering the Late Paleozoic Ice Age. The growth of the supercontinent also changed seafloor spreading rates and led to a decrease in the length and volume of mid-ocean ridge systems.
During the Early Carboniferous, the Mg/Ca ratio in seawater began to rise, and by the mid-Mississippian aragonite seas had replaced calcite seas. The concentration of calcium in seawater is largely controlled by ocean pH, and as pH increased the calcium concentration was reduced. At the same time, the increase in weathering increased the amount of magnesium entering the marine environment. As magnesium is removed from seawater and calcium added along mid-ocean ridges, where seawater reacts with the newly formed lithosphere, the reduction in the length of mid-ocean ridge systems increased the Mg/Ca ratio further. The Mg/Ca ratio of the seas also affects the ability of organisms to biomineralize. The Carboniferous aragonite seas favoured organisms that secreted aragonite, and the dominant reef builders of the time were aragonitic sponges and corals.
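A minimal sketch of the usual rule of thumb linking seawater Mg/Ca to the favoured carbonate polymorph is given below. The threshold molar ratio of about 2, and the example values fed to it, are commonly quoted illustrative figures and are not taken from the text.

```python
def favoured_sea_state(mg_ca_molar: float, threshold: float = 2.0) -> str:
    """Classify seawater as a 'calcite sea' or an 'aragonite sea' from its molar
    Mg/Ca ratio. Low Mg/Ca favours low-Mg calcite; high Mg/Ca favours aragonite
    and high-Mg calcite. The ~2 threshold is an assumed, commonly cited value."""
    return "aragonite sea" if mg_ca_molar > threshold else "calcite sea"

# Illustrative trajectory of rising Mg/Ca through the Mississippian:
for mg_ca in (1.0, 1.5, 2.5, 3.5):
    print(mg_ca, favoured_sea_state(mg_ca))
```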
The strontium isotopic composition (⁸⁷Sr/⁸⁶Sr) of seawater represents a mix of strontium derived from continental weathering, which is rich in ⁸⁷Sr, and from mantle sources such as mid-ocean ridges, which are relatively depleted in ⁸⁷Sr. ⁸⁷Sr/⁸⁶Sr ratios above 0.7075 indicate continental weathering is the main source of Sr, whilst ratios below this indicate mantle-derived sources are the principal contributor.
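As an illustration of how such a ratio can be read as a mixture of the two sources, the sketch below applies simple two-endmember mixing. The endmember ratios (c. 0.703 for mantle-derived Sr, c. 0.711 for continental runoff) and the assumption that both inputs carry similar Sr concentrations are illustrative simplifications, not values from the text.

```python
def continental_sr_fraction(sr_seawater: float,
                            sr_mantle: float = 0.703,
                            sr_continental: float = 0.711) -> float:
    """Fraction of seawater Sr supplied by continental weathering under simple
    two-endmember mixing. Assumes both sources deliver Sr at similar
    concentrations; real mixing is weighted by the Sr content of each input."""
    return (sr_seawater - sr_mantle) / (sr_continental - sr_mantle)

# Applied to the Tournaisian value of c. 0.70840 quoted below:
print(round(continental_sr_fraction(0.70840), 2))  # ~0.68, i.e. weathering-dominated
```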
⁸⁷Sr/⁸⁶Sr values varied through the Carboniferous, although they remained above 0.7075, indicating that continental weathering dominated as the source of Sr throughout. The ⁸⁷Sr/⁸⁶Sr ratio during the Tournaisian was c. 0.70840; it decreased through the Visean to 0.70771, increased from the Serpukhovian to the lowermost Gzhelian, where it plateaued at 0.70827, and then decreased again to 0.70814 at the Carboniferous-Permian boundary. These variations reflect the changing influence of weathering and sediment supply to the oceans from the growing Variscan-Alleghanian-Ouachita mountain belt. By the Serpukhovian, basement rocks such as granite had been uplifted and exposed to weathering. The decline towards the end of the Carboniferous is interpreted as a decrease in continental weathering due to the more arid conditions.
Unlike Mg/Ca and ⁸⁷Sr/⁸⁶Sr ratios, which are consistent across the world's oceans at any one time, the δ¹⁸O and δ¹³C values preserved in the fossil record can be affected by regional factors. Carboniferous δ¹⁸O and δ¹³C records show regional differences between the South China open-water setting and the epicontinental seas of Laurussia. These differences are due to variations in seawater salinity and evaporation in epicontinental seas relative to more open waters. However, large-scale trends can still be determined. δ¹³C rose rapidly from c. 0 to 1‰ (parts per thousand) to c. 5 to 7‰ in the earliest Mississippian and remained high (c. 3–6‰) for the duration of the Late Paleozoic Ice Age, into the earliest Permian. Similarly, from the Early Mississippian there was a long-term increase in δ¹⁸O values as the climate cooled.
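For reference, δ values express the ratio of the heavy to the light isotope in a sample relative to the same ratio in a standard, reported in parts per thousand (‰); for carbon, for example:

\[
\delta^{13}\mathrm{C} = \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{standard}}} - 1 \right) \times 1000\ \text{‰}
\]

δ¹⁸O is defined analogously using the ¹⁸O/¹⁶O ratio; more positive δ¹⁸O in marine carbonate generally records colder conditions and greater ice volume.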
Both δ¹³C and δ¹⁸O records show significant global isotope changes (known as excursions) during the Carboniferous. The mid-Tournaisian positive δ¹³C and δ¹⁸O excursions lasted between 6 and 10 million years and were accompanied by a c. 6‰ positive excursion in organic matter δ¹⁵N values, a negative excursion in carbonate δ²³⁸U and a positive excursion in carbonate-associated sulphate δ³⁴S. These changes in seawater geochemistry are interpreted as reflecting a decrease in atmospheric CO2, driven by increased organic matter burial and widespread ocean anoxia, which triggered climate cooling and the onset of glaciation.
The Mississippian-Pennsylvanian boundary positive δ¹⁸O excursion occurred at the same time as global sea-level falls and widespread glacial deposits across southern Gondwana, indicating climate cooling and ice build-up. The rise in ⁸⁷Sr/⁸⁶Sr just before the δ¹⁸O excursion suggests that climate cooling in this case was due to increased continental weathering of the growing Variscan-Alleghanian-Ouachita mountains, and to the influence of the orogeny on precipitation and surface water flow, rather than to increased burial of organic matter. δ¹³C values show more regional variation, and it is unclear whether there is a positive δ¹³C excursion or a readjustment from previously lower values.
During the earliest Kasimovian there was a short (<1 Myr), intense glacial period, which came to a sudden end as atmospheric CO2 concentrations rapidly rose. The Kasimovian saw a steady increase in arid conditions across tropical regions and a major reduction in the extent of tropical rainforests, as shown by the widespread loss of coal deposits from this time. The resulting reduction in productivity and burial of organic matter led to increasing atmospheric CO2 levels, recorded by a negative δ¹³C excursion and an accompanying, but smaller, decrease in δ¹⁸O values.
Early Carboniferous land plants, some of which were preserved in coal balls, were very similar to those of the preceding Late Devonian, but new groups also appeared at this time. The main Early Carboniferous plants were the Equisetales (horse-tails), Sphenophyllales (scrambling plants), Lycopodiales (club mosses), Lepidodendrales (scale trees), Filicales (ferns), Medullosales (informally included in the "seed ferns", an assemblage of a number of early gymnosperm groups) and the Cordaitales. These continued to dominate throughout the period, but during the late Carboniferous several other groups appeared: the Cycadophyta (cycads), the Callistophytales (another group of "seed ferns"), and the Voltziales.
The Carboniferous lycophytes of the order Lepidodendrales, which are cousins (but not ancestors) of the tiny club mosses of today, were huge trees with trunks 30 meters high and up to 1.5 meters in diameter. These included Lepidodendron (with its cone called Lepidostrobus), Anabathra, Lepidophloios and Sigillaria. The roots of several of these forms are known as Stigmaria. Unlike present-day trees, their secondary growth took place in the cortex rather than the xylem; the cortex also provided stability. The Cladoxylopsids were large trees that were ancestors of ferns, first arising in the Carboniferous.
The fronds of some Carboniferous ferns are almost identical with those of living species. Probably many species were epiphytic. Fossil ferns and "seed ferns" include Pecopteris, Cyclopteris, Neuropteris, Alethopteris, and Sphenopteris; Megaphyton and Caulopteris were tree ferns.
The Equisetales included the common giant form Calamites, with a trunk diameter of 30 to 60 cm (12 to 24 in) and a height of up to 20 m (66 ft). Sphenophyllum was a slender climbing plant with whorls of leaves, which was probably related both to the calamites and the lycopods.
Cordaites, a tall plant (6 to over 30 meters) with strap-like leaves, was related to the cycads and conifers; its catkin-like reproductive organs, which bore ovules/seeds, are called Cardiocarpus. These plants are thought to have lived in swamps. True coniferous trees (Walchia, of the order Voltziales) appeared later in the Carboniferous and preferred higher, drier ground.
In the oceans, the main marine invertebrate groups were the Foraminifera, corals, Bryozoa, Ostracoda, brachiopods, ammonoids, hederelloids, microconchids and echinoderms (especially crinoids). The diversity of brachiopods and fusulinid foraminiferans surged beginning in the Visean and continued through the end of the Carboniferous, although cephalopod and nektonic conodont diversity declined. This evolutionary radiation is known as the Carboniferous-Earliest Permian Biodiversification Event. For the first time, foraminifera took a prominent part in marine faunas. The large spindle-shaped genus Fusulina and its relatives were abundant in what is now Russia, China, Japan and North America; other important genera include Valvulina, Endothyra, Archaediscus, and Saccammina (the latter common in Britain and Belgium). Some Carboniferous genera are still extant. The first true priapulids appeared during this period.
The microscopic shells of radiolarians are found in cherts of this age in the Culm of Devon and Cornwall, and in Russia, Germany and elsewhere. Sponges are known from spicules and anchor ropes, and include various forms such as the Calcispongea Cotyliscus and Girtycoelia, the demosponge Chaetetes, and the genus of unusual colonial glass sponges Titusvillia.
Both reef-building and solitary corals diversified and flourished; these included rugose (for example, Caninia, Corwenia, Neozaphrentis), heterocoral, and tabulate (for example, Chladochonus, Michelinia) forms. Conularids were well represented by Conularia.
Bryozoa are abundant in some regions; the fenestellids include Fenestella, Polypora, and Archimedes, so named because it is in the shape of an Archimedean screw. Brachiopods are also abundant; they include productids, some of which reached a very large size for brachiopods and had very thick shells (for example, the 30 cm (12 in)-wide Gigantoproductus), while others like Chonetes were more conservative in form. Athyridids, spiriferids, rhynchonellids, and terebratulids are also very common. Inarticulate forms include Discina and Crania. Some species and genera had a very wide distribution with only minor variations.
Annelids such as Serpulites are common fossils in some horizons. Among the mollusca, the bivalves continue to increase in numbers and importance. Typical genera include Aviculopecten, Posidonomya, Nucula, Carbonicola, Edmondia, and Modiola. Gastropods are also numerous, including the genera Murchisonia, Euomphalus, Naticopsis. Nautiloid cephalopods are represented by tightly coiled nautilids, with straight-shelled and curved-shelled forms becoming increasingly rare. Goniatite ammonoids such as Aenigmatoceras are common.
Trilobites are rarer than in previous periods, on a steady trend towards extinction, represented only by the proetid group. Ostracoda, a class of crustaceans, were abundant as representatives of the meiobenthos; genera included Amphissites, Bairdia, Beyrichiopsis, Cavellina, Coryellina, Cribroconcha, Hollinella, Kirkbya, Knoxiella, and Libumella.
Crinoids were highly numerous during the Carboniferous, though they suffered a gradual decline in diversity during the middle Mississippian. Dense submarine thickets of long-stemmed crinoids appear to have flourished in shallow seas, and their remains were consolidated into thick beds of rock. Prominent genera include Cyathocrinus, Woodocrinus, and Actinocrinus. Echinoids such as Archaeocidaris and Palaeechinus were also present. The blastoids, which included the Pentreinitidae and Codasteridae and superficially resembled crinoids in the possession of long stalks attached to the seabed, attain their maximum development at this time.
Freshwater Carboniferous invertebrates include various bivalve molluscs that lived in brackish or fresh water, such as Anthraconaia, Naiadites, and Carbonicola; diverse crustaceans such as Candona, Carbonita, Darwinula, Estheria, Acanthocaris, Dithyrocaris, and Anthrapalaemon.
The eurypterids were also diverse, and are represented by such genera as Adelophthalmus, Megarachne (originally misinterpreted as a giant spider, hence its name) and the specialised very large Hibbertopterus. Many of these were amphibious.
Frequently a temporary return of marine conditions resulted in marine or brackish water genera such as Lingula, Orbiculoidea, and Productus being found in the thin beds known as marine bands.
Fossil remains of air-breathing insects, myriapods and arachnids are known from the Carboniferous, and their diversity shows that these arthropods were both well-developed and numerous. Some arthropods grew to large sizes, with the up to 2.6-meter-long (8.5 ft) millipede-like Arthropleura being the largest-known land invertebrate of all time. Among the insect groups are the huge predatory Protodonata (griffinflies), which included Meganeura, a giant dragonfly-like insect with a wingspan of c. 75 cm (30 in), the largest flying insect ever to roam the planet. Further groups are the Syntonopterodea (relatives of present-day mayflies), the abundant and often large sap-sucking Palaeodictyopteroidea, the diverse herbivorous Protorthoptera, and numerous basal Dictyoptera (ancestors of cockroaches). Many insects have been obtained from the coalfields of Saarbrücken and Commentry, and from the hollow trunks of fossil trees in Nova Scotia. Some British coalfields have yielded good specimens: Archaeoptilus, from the Derbyshire coalfield, had a large wing of which a 4.3 cm (1.7 in) part is preserved, and some specimens (Brodia) still exhibit traces of brilliant wing colors. In the Nova Scotian tree trunks, land snails (Archaeozonites, Dendropupa) have been found.
Many fish inhabited the Carboniferous seas, predominantly elasmobranchs (sharks and their relatives). These included some, like Psammodus, with crushing pavement-like teeth adapted for grinding the shells of brachiopods, crustaceans, and other marine organisms. Other groups of elasmobranchs, like the ctenacanthiforms, grew to large sizes, with some genera like Saivodus reaching around 6–9 meters (20–30 feet). Other fish had piercing teeth, such as the Symmoriida; some, the petalodonts, had peculiar cycloid cutting teeth. Most of the other cartilaginous fish were marine, but others, like the Xenacanthida and several genera such as Bandringa, invaded the fresh waters of the coal swamps. Among the bony fish, the Palaeonisciformes found in coastal waters also appear to have migrated to rivers. Sarcopterygian fish were also prominent, and one group, the rhizodonts, reached very large size.
Most species of Carboniferous marine fish have been described largely from teeth, fin spines and dermal ossicles, with smaller freshwater fish preserved whole.
Freshwater fish were abundant, and include the genera Ctenodus, Uronemus, Acanthodes, Cheirodus, and Gyracanthus.
Chondrichthyes (especially holocephalans like the stethacanthids) underwent a major evolutionary radiation during the Carboniferous. It is believed that this radiation occurred because the decline of the placoderms at the end of the Devonian left many environmental niches unoccupied, allowing new organisms to evolve and fill them. As a result, Carboniferous holocephalans assumed a wide variety of bizarre shapes, including Stethacanthus, which possessed a flat brush-like dorsal fin with a patch of denticles on its top. Stethacanthus's unusual fin may have been used in mating rituals. Other groups, like the eugeneodonts, filled the niches left by large predatory placoderms. These fish were unusual in possessing only a single row of teeth in their upper or lower jaws, in the form of elaborate tooth whorls. The first members of the Helicoprionidae, a family of eugeneodonts characterized by a single circular tooth whorl in the lower jaw, appeared during the Early Carboniferous. Perhaps the most bizarre radiation of holocephalans at this time was that of the iniopterygiforms, an order of holocephalans that greatly resembled modern-day flying fish and may likewise have "flown" through the water with their massive, elongated pectoral fins. They were further characterized by their large eye sockets, club-like structures on their tails, and spines on the tips of their fins.
Carboniferous amphibians were diverse and common by the middle of the period, more so than they are today; some were as long as 6 meters, and those fully terrestrial as adults had scaly skin. They included a number of basal tetrapod groups classified in early books under the Labyrinthodontia. These had long bodies, a head covered with bony plates and generally weak or undeveloped limbs. The largest were over 2 meters long. They were accompanied by an assemblage of smaller amphibians included under the Lepospondyli, often only about 15 cm (6 in) long. Some Carboniferous amphibians were aquatic and lived in rivers (Loxomma, Eogyrinus, Proterogyrinus); others may have been semi-aquatic (Ophiderpeton, Amphibamus, Hyloplesion) or terrestrial (Dendrerpeton, Tuditanus, Anthracosaurus).
The Carboniferous Rainforest Collapse slowed the evolution of amphibians, which could not survive as well in the cooler, drier conditions. Amniotes, however, prospered due to specific key adaptations. One of the greatest evolutionary innovations of the Carboniferous was the amniote egg, which allowed eggs to be laid in a dry environment, along with keratinized scales and claws, allowing for the further exploitation of the land by certain tetrapods. These included the earliest sauropsid reptiles (Hylonomus) and the earliest known synapsid (Archaeothyris). Synapsids quickly became huge and diversified in the Permian, only for their dominance to be interrupted during the Mesozoic Era. Sauropsids (reptiles, and also, later, birds) also diversified but remained small until the Mesozoic, during which they dominated the land, as well as the water and sky, only for their dominance to end during the Cenozoic Era.
Reptiles underwent a major evolutionary radiation in response to the drier climate that preceded the rainforest collapse. By the end of the Carboniferous Period, amniotes had already diversified into a number of groups, including several families of synapsid pelycosaurs, protorothyridids, captorhinids, saurians and araeoscelids.
As plants and animals were growing in size and abundance in this time (for example, Lepidodendron), land fungi diversified further. Marine fungi still occupied the oceans. All modern classes of fungi were present in the Late Carboniferous (Pennsylvanian Epoch).
During the Carboniferous, animals and bacteria had great difficulty processing the lignin and cellulose that made up the gigantic trees of the period; microbes that could break them down had not yet evolved. After the trees died, they simply piled up on the ground, occasionally becoming part of long-running wildfires after a lightning strike, while others very slowly degraded into coal. White rot fungi were the first organisms able to process these materials and break them down in any reasonable quantity and timescale. Thus, some have proposed that fungi helped bring the Carboniferous period of massive coal formation to an end by stopping the accumulation of undegraded plant matter, although this idea remains highly controversial.
The first 15 million years of the Carboniferous have a very limited terrestrial fossil record. This gap in the fossil record is called Romer's gap, after the American palaeontologist Alfred Romer. While it has long been debated whether the gap reflects poor fossil preservation or a real event, recent work indicates that it coincided with a drop in atmospheric oxygen levels, suggesting some sort of ecological collapse. The gap saw the demise of the Devonian fish-like ichthyostegalian labyrinthodonts and the rise of the more advanced temnospondyl and reptiliomorph amphibians that typify the Carboniferous terrestrial vertebrate fauna.
Before the end of the Carboniferous Period, an extinction event occurred. On land this event is referred to as the Carboniferous Rainforest Collapse (CRC). Vast tropical rainforests collapsed suddenly as the climate changed from hot and humid to cool and arid. This was likely caused by intense glaciation and a drop in sea levels.
The new climatic conditions were not favorable to the growth of rainforest and the animals within them. Rainforests shrank into isolated islands, surrounded by seasonally dry habitats. Towering lycopsid forests with a heterogeneous mixture of vegetation were replaced by much less diverse tree-fern dominated flora.
Amphibians, the dominant vertebrates at the time, fared poorly through this event, with large losses in biodiversity; reptiles continued to diversify thanks to key adaptations that let them survive in the drier habitat, particularly the hard-shelled egg and scales, both of which retain water better than their amphibian counterparts.
{
"paragraph_id": 0,
"text": "The Carboniferous (/ˌkɑːrbəˈnɪfərəs/ KAR-bə-NIF-ər-əs) is a geologic period and system of the Paleozoic that spans 60 million years from the end of the Devonian Period 358.9 million years ago (mya), to the beginning of the Permian Period, 298.9 mya. In North America, the Carboniferous is often treated as two separate geological periods, the earlier Mississippian and the later Pennsylvanian.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The name Carboniferous means \"coal-bearing\", from the Latin carbō (\"coal\") and ferō (\"bear, carry\"), and refers to the many coal beds formed globally during that time. The first of the modern \"system\" names, it was coined by geologists William Conybeare and William Phillips in 1822, based on a study of the British rock succession.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Carboniferous is the period during which both terrestrial animal and land plant life was well established. Stegocephalia (four-limbed vertebrates including true tetrapods), whose forerunners (tetrapodomorphs) had evolved from lobe-finned fish during the preceding Devonian period, became pentadactylous during the Carboniferous. The period is sometimes called the Age of Amphibians due to the diversification of early amphibians such as the temnospondyls, which became dominant land vertebrates, as well as the first appearance of amniotes including synapsids (the clade to which modern mammals belong) and sauropsids (which include modern reptiles and birds) during the late Carboniferous. Due to the raised atmospheric oxygen level, land arthropods such as arachnids (e.g. trigonotarbids and Pulmonoscorpius), myriapods (e.g. Arthropleura) and insects (e.g. Meganeura) also underwent a major evolutionary radiation during the late Carboniferous. Vast swaths of forests and swamps covered the land, which eventually became the coal beds characteristic of the Carboniferous stratigraphy evident today.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The later half of the period experienced glaciations, low sea level, and mountain building as the continents collided to form Pangaea. A minor marine and terrestrial extinction event, the Carboniferous rainforest collapse, occurred at the end of the period, caused by climate change.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The development of a Carboniferous chronostratigraphic timescale began in the late 18th century. The term \"Carboniferous\" was first used as an adjective by Irish geologist Richard Kirwan in 1799, and later used in a heading entitled \"Coal-measures or Carboniferous Strata\" by John Farey Sr. in 1811. Four units were originally ascribed to the Carboniferous, in ascending order, the Old Red Sandstone, Carboniferous Limestone, Millstone Grit and the Coal Measures. These four units were placed into a formalised Carboniferous unit by William Conybeare and William Phillips in 1822, and then into the Carboniferous System by Phillips in 1835. The Old Red Sandstone was later considered Devonian in age.",
"title": "Etymology and history"
},
{
"paragraph_id": 5,
"text": "The similarity in successions between the British Isles and Western Europe led to the development of a common European timescale with the Carboniferous System divided into the lower Dinantian, dominated by carbonate deposition and the upper Silesian with mainly siliciclastic deposition. The Dinantian was divided into the Tournaisian and Viséan stages. The Silesian into the Namurian, Westphalian and Stephanian stages. The Tournaisian is the same length as the International Commission on Stratigraphy (ICS) stage, but the Viséan is longer, extending into the lower Serpukhovian. North American geologists recognised a similar stratigraphy, but divided it into two systems rather than one. These are the lower carbonate-rich sequence of the Mississippian System and the upper siliciclastic and coal-rich sequence of the Pennsylvanian. The United States Geological Survey officially recognised these two systems in 1953. In Russia, in the 1840’s British and Russian geologists divided the Carboniferous into the Lower, Middle and Upper series based on Russian sequences. In the 1890’s these became the Dinantian, Moscovian and Uralian stages. The Serpukivian was proposed as part of the Lower Carboniferous, and the Upper Carboniferous was divided into the Moscovian and Gzhelian. The Bashkirian was added in 1934.",
"title": "Etymology and history"
},
{
"paragraph_id": 6,
"text": "In 1975, the ICS formally ratified the Carboniferous System, with the Mississippian and Pennsylvanian subsystems from the North American timescale, the Tournaisian and Visean stages from the Western European and the Serpukhovian, Bashkirian, Moscovian, Kasimovian and Gzhelian from the Russian. With the formal ratification of the Carboniferous System, the Dinantian, Silesian, Namurian, Westphalian and Stephanian became redundant terms, although the latter three are still in common use in Western Europe.",
"title": "Etymology and history"
},
{
"paragraph_id": 7,
"text": "The Carboniferous is divided into two subsystems; the Mississippian and Pennsylvanian. These are divided into three series and seven stages. The Tournaisian, Visean and Serpukhovian stages equate to the Lower, Middle and Upper series of the Mississippian respectively. The Bashkirian and Moscovian stages, the Lower and Middle Pennsylvanian and the Kasimovian and Gzhelian stages the Upper Pennsylvanian.",
"title": "Geology"
},
{
"paragraph_id": 8,
"text": "Stages can be defined globally or regionally. For global stratigraphic correlation, the ICS ratify global stages based on a Global Boundary Stratotype Section and Point (GSSP) from a single formation (a stratotype) identifying the lower boundary of the stage. Only the boundaries of the Carboniferous System and three of the stage bases are defined by global stratotype sections and points because of the complexity of the geology. The ICS subdivisions from youngest to oldest are as follows:",
"title": "Geology"
},
{
"paragraph_id": 9,
"text": "The Mississippian was proposed by Alexander Winchell in 1870 named after the extensive exposure of Lower Carboniferous limestone in the upper Mississippi valley. During the Mississippian, there was a marine connection between the Paleo-Tethys and Panthalassa through the Rheic Ocean resulting in the near worldwide distribution of marine faunas and so allowing widespread correlations using marine biostratigraphy. However, there are few Mississippian volcanic rocks and so obtaining radiometric dates is difficult.",
"title": "Geology"
},
{
"paragraph_id": 10,
"text": "The Tournaisian Stage is named after the Belgian city of Tournai. It was introduced in scientific literature by Belgian geologist André Dumont in 1832. The GSSP for the base of the Carboniferous System, Mississippian Subsystem and Tournaisian Stage is located at the La Serre section in Montagne Noire, southern France. It is defined by the first appearance of the conodont Siphonodella sulcata within the evolutionary lineage from Siphonodella praesulcata to Siphonodella sulcata. This was ratified by the ICS in 1990. However, in 2006 further study revealed the presence of Siphonodella sulcata below the boundary, and the presence of Siphonodella praesulcata and Siphonodella sulcata together above a local unconformity. This means the evolution of one species to the other, the definition of the boundary, is not seen at the La Serre site making precise correlation difficult.",
"title": "Geology"
},
{
"paragraph_id": 11,
"text": "The Viséan Stage was introduced by André Dumont in 1832 and is named after the city of Visé, Liège Province, Belgium. In 1967, the base of the Visean was officially defined as the first black limestone in the Leffe facies at the Bastion Section in the Dinant Basin. These changes are now thought to be ecologically driven rather than due to evolutionary change, and so this has not been used as the location for the GSSP. Instead, the GSSP for the base of the Visean is located in Bed 83 of the sequence of dark grey limestones and shales at the Pengchong section, Guangxi, southern China. It is defined by the first appearance of the fusulinid Eoparastaffella simplex in the evolutionary lineage Eoparastaffella ovalis – Eoparastaffella simplex and was ratified in 2009.",
"title": "Geology"
},
{
"paragraph_id": 12,
"text": "The Serpukhovian Stage was proposed in 1890 by Russian stratigrapher Sergei Nikitin. It is named after the city of Serpukhov, near Moscow. The Serpukhovian Stage currently lacks a defined GSSP. The Visean-Serpukhovian boundary coincides with a major period of glaciation. The resulting sea level fall and climatic changes led to the loss of connections between marine basins and endemism of marine fauna across the Russian margin. This means changes in biota are environmental rather than evolutionary making wider correlation difficult. Work is underway in the Urals and Nashui, Guizhou Province, southwestern China for a suitable site for the GSSP with the proposed definition for the base of the Serpukhovian as the first appearance of conodont Lochriea ziegleri.",
"title": "Geology"
},
{
"paragraph_id": 13,
"text": "The Pennsylvanian was proposed by J.J.Stevenson in 1888, named after the widespread coal-rich strata found across the state of Pennsylvania. The closure of the Rheic Ocean and formation of Pangea during the Pennsylvanian, together with widespread glaciation across Gondwana led to major climate and sea level changes, which restricted marine fauna to particular geographic areas thereby reducing widespread biostratigraphic correlations. Extensive volcanic events associated with the assembling of Pangea means more radiometric dating is possible relative to the Mississippian.",
"title": "Geology"
},
{
"paragraph_id": 14,
"text": "The Bashkirian Stage was proposed by Russian stratigrapher Sofia Semikhatova in 1934. It was named after Bashkiria, the then Russian name of the republic of Bashkortostan in the southern Ural Mountains of Russia. The GSSP for the base of the Pennsylvanian Subsystem and Bashkirian Stage is located at Arrow Canyon in Nevada, US and was ratified in 1996. It is defined by the first appearance of the conodont Declinognathodus noduliferus. Arrow Canyon lay in a shallow, tropical seaway which stretched from Southern California to Alaska. The boundary is within a cyclothem sequence of transgressive limestones and fine sandstones, and regressive mudstones and brecciated limestones.",
"title": "Geology"
},
{
"paragraph_id": 15,
"text": "The Moscovian Stage is named after shallow marine limestones and colourful clays found around Moscow, Russia. It was first introduced by Sergei Nikitin in 1890. The Moscovian currently lacks a defined GSSP. The fusulinid Aljutovella aljutovica can be used to define the base of the Moscovian across the northern and eastern margins of Pangea, however, it is restricted in geographic area, which means it cannot be used for global correlations. The first appearance of the conodonts Declinognathodus donetzianus or Idiognathoides postsulcatus have been proposed as a boundary marking species and potential sites in the Urals and Nashui, Guizhou Province, southwestern China are being considered.",
"title": "Geology"
},
{
"paragraph_id": 16,
"text": "The Kasimovian is the first stage in the Upper Pennsylvanian. It is named after the Russian city of Kasimov, and was originally included as part of Nikitin's 1890 definition of the Moscovian. It was first recognised as a distinct unit by A.P. Ivanov in 1926, who named it the \"Tiguliferina\" Horizon after a type of brachiopod. The boundary covers of period of globally low sea level, which has resulted in disconformities within many sequences of this age. This has created difficulties in finding suitable marine fauna that can used to correlate boundaries worldwide. The Kasimovian currently lacks a defined GSSP and potential sites in the southern Urals, southwest USA and Nashui, Guizhou Province, southwestern China are being considered.",
"title": "Geology"
},
{
"paragraph_id": 17,
"text": "The Gzhelian Stage is the second stage in the Upper Pennsylvanian. It is named after the Russian village of Gzhel, near Ramenskoye, not far from Moscow. The name and type locality were defined by Sergei Nikitin in 1890. The restricted geographic distribution of fauna is again a problem in defining the Kasimovian-Gzhelian boundary and the base of the Gzhelian currently lacks a defined GSSP. The first appearance of the fusulinid Rauserites rossicus and Rauserites stuckenbergi can be used in the Boreal Sea and Paleo-Tethyan regions but not eastern Pangea or Panthalassa margins. Potential sites in the Urals and Nashui, Guizhou Province, southwestern China for the GSSP are being considered.",
"title": "Geology"
},
{
"paragraph_id": 18,
"text": "The GSSP for the base of the Permian is located in the Aidaralash River valley near Aqtöbe, Kazakhstan and was ratified in 1996. The beginning of the stage is defined by the first appearance of the conodont Streptognathodus postfusus.",
"title": "Geology"
},
{
"paragraph_id": 19,
"text": "A cyclothem is a succession of non-marine and marine sedimentary rocks, deposited during a single sedimentary cycle, with an erosional surface at its base. Whilst individual cyclothems are often only metres to a few tens of metres thick, cyclothem sequences can be many hundreds to thousands of metres thick, and contain tens to hundreds of individual cyclothems. Cyclothems were deposited along continental shelves where the very gentle gradient of the shelves meant even small changes in sea level led to large advances or retreats of the sea. Cyclothem lithologies vary from mudrock and carbonate-dominated to coarse siliciclastic sediment-dominated sequences depending on the paleo-topography, climate and supply of sediments to the shelf.",
"title": "Geology"
},
{
"paragraph_id": 20,
"text": "The main period of cyclothem deposition occurred during the Late Paleozoic Ice Age (LPIA) from the Late Mississippian to Early Permian, when the waxing and waning of ice sheets led to rapid changes in eustatic sea level. The growth of ice sheets led global sea levels to fall as water was lock away in glaciers. Falling sea levels exposed large tracts of the continental shelves across which river systems eroded channels and valleys and vegetation broke down the surface to form soils. The non-marine sediments deposited on this erosional surface form the base of the cyclothem. As sea levels began to rise, the rivers flowed through increasingly water-logged landscapes of swamps and lakes. Peat mires developed in these wet and oxygen-poor conditions, leading to coal formation. With continuing sea level rise, coastlines migrated landward and deltas, lagoons and esturaries developed; their sediments deposited over the peat mires. As fully marine conditions were established, limestones succeeded these marginal marine deposits. The limestones were in turn overlain by deep water black shales as maximum sea levels were reached. Ideally, this sequence would be reversed as sea levels began to fall again, however, sea level falls tend to be protracted, whilst sea level rises are rapid - ice sheets grow slowly, but melt quickly. Therefore, the majority of a cyclothem sequence occurred during falling sea levels, when rates of erosion were high, meaning they were often periods of non-deposition. Erosion during sea level falls could also result in the full or partial removal of previous cyclothem sequences. Individual cyclothems are generally less than 10 m thick because the speed at which sea level rose gave only limited time for sediments to accumulate.",
"title": "Geology"
},
{
"paragraph_id": 21,
"text": "During the Pennsylvanian, cyclothems were deposited in shallow, epicontinental seas across the tropical regions of Laurussia (present day western and central US, Europe, Russia and central Asia) and the North and South China cratons. The rapid sea levels fluctuations they represent correlate with the glacial cycles of the Late Paleozoic Ice Age. The advance and retreat of ice sheets across Gondwana followed a 100 kyr Milankovitch cycle and so each cyclothem represents a cycle of sea level fall and rise over a 100 kyr period.",
"title": "Geology"
},
{
"paragraph_id": 22,
"text": "The Carboniferous coal beds provided much of the fuel for power generation during the Industrial Revolution and are still of great economic importance.",
"title": "Geology"
},
{
"paragraph_id": 23,
"text": "The large coal deposits of the Carboniferous owe their existence primarily to two factors. The first is the appearance of wood tissue and bark-bearing trees. The evolution of the wood fiber lignin and the bark-sealing, waxy substance suberin variously opposed decay organisms so effectively that dead materials accumulated long enough to fossilise on a large scale. The second factor was the lower sea levels that occurred during the Carboniferous as compared to the preceding Devonian Period. This fostered the development of extensive lowland swamps and forests. Based on a genetic analysis of basidiomycetes, it is proposed that large quantities of wood were buried during this period because animals and decomposing bacteria and fungi had not yet evolved enzymes that could effectively digest the resistant phenolic lignin polymers and waxy suberin polymers. They suggest fungi that could break those substances down effectively became dominant only towards the end of the period, making subsequent coal formation much rarer. The delayed fungal evolution hypothesis has been challenged by other researchers, who conclude that tectonic and climatic conditions during the formation of Pangaea, which created water filled basins alongside developing mountain ranges, resulted in the development of widespread humid, tropical conditions and the burial of massive quantities of organic matter, were responsible for the high rate of coal formation, with large amounts of coal also being formed during the Mesozoic and Cenozoic well after lignin digesting fungi had become well established, and that fungal degradation of lignin had likely already evolved by the end of the Devonian, even if the specific enzymes used by basidiomycetes had not.",
"title": "Geology"
},
{
"paragraph_id": 24,
"text": "During the Carboniferous, there was an increased rate in tectonic plate movements as the supercontinent of Pangea assembled. The continents themselves formed a near circle around the opening Paleo-Tethys Ocean, with the massive Panthalassic Ocean beyond. The largest continent, Gondwana (modern day Africa, Arabia, South America, India, Madagascar, West Australia and East Antarctica), covered the south polar region. To its northwest was Laurussia (modern day North America, Greenland, Scandinavia, and much of Western Europe). These two continents slowly collided to form the core of Pangea. To the north of Laurussia lay Siberia and Amuria (central Mongolia). To the east of Siberia, Kazakhstania, North China and South China formed the northern margin of the Paleo-Tethys, with Annamia (Mainland Southeast Asia) laying to the south.",
"title": "Palaeogeography"
},
{
"paragraph_id": 25,
"text": "An Early Carboniferous global marine transgression resulted in the widespread deposition of limestones in the warm, shallow seas of equatorial regions. Sea levels then dropped as the Late Paleozoic Ice Age (LPIA) intensified in the Pennsylvanian, exposing large areas of continental shelf. As glaciers waxed and waned repeated rises and falls in sea levels produced a distinctive pattern of terrestrial and marine sediments known as cyclothems. These consist of river channel and delta deposits with peat mires, followed by estuarine, coastal and offshore marine deposits as river deltas and wetlands built out across the continental shelves, only to be drowned as sea levels rose again.",
"title": "Palaeogeography"
},
{
"paragraph_id": 26,
"text": "Today the Variscan-Alleghanian-Ouachita Orogen stretches over 10,000 km from the present day Gulf of Mexico in the east to Turkey in the west. It formed between the Middle Devonian and Early Permian as a series of continental collisions between Laurussia, Gondwana and the Armorican Terrane Assemblage (much of modern day Central and Western Europe including Iberia) as the Rheic Ocean closed and Pangea formed.",
"title": "Palaeogeography"
},
{
"paragraph_id": 27,
"text": "The Armorican terranes rifted away from Gondwana during the Late Ordovician. As they drifted northwards the Rheic Ocean closed in front of them and they began to collide with southeastern Laurussia in the Middle Devonian. The resulting Variscan Orogeny involved a complex series of oblique collisions with associated metamorphism, igneous activity, and large-scale deformation between these terranes and Laurussia, which continued into the Carboniferous.",
"title": "Palaeogeography"
},
{
"paragraph_id": 28,
"text": "During the mid Carboniferous, the South American sector of Gondwana collided obliquely with Laurussia’s southern margin resulting in the Ouachita Orogeny. The major strike-slip faulting that occurred between Laurussia and Gondwana extended eastwards into the Appalachian Mountains where early deformation in the Alleghanian Orogeny was predominantly strike-slip. As the West African sector of Gondwana collided with Laurussia, during the Late Pennsylvanian, deformation along the Alleghanian orogen became northwesterly-directed compression.",
"title": "Palaeogeography"
},
{
"paragraph_id": 29,
"text": "The Ural Orogen is a north-south trending fold and thrust belt that forms the western edge of the Central Asian Orogenic Belt. The Uralian Orogeny began in the Late Devonian and continued, with some hiatuses, into the Jurassic. From the Late Devonian to Early Carboniferous, the Magnitogorsk island arc, which lay between Kazakhstania and Laurussia in the Palaeo-Uralian Ocean, collided with the passive margin of northeastern Laurussia (Baltica craton). The suture zone between the former island arc complex and the continental margin formed the Main Uralian Fault, a major structure that runs for more than 2000 km along the orogen.(6) Accretion of the island arc was complete by the Tournaisian, but subduction of the Paleo-Ural Ocean between Kazakhstania and Laurussia continued until the Bashkirian when the ocean finally closed and continental collision began. Significant strike-slip movement along this zone indicates the collision was oblique. Deformation continued into the Permian and during the Late Carboniferous and Permian the region was extensively intruded by granites.",
"title": "Palaeogeography"
},
{
"paragraph_id": 30,
"text": "The Laurussian continent was formed by the collision between Laurentia, Baltica and Avalonia during the Devonian. At the beginning of the Carboniferous it lay at low latitude in the southern hemisphere and drifted north during the Carboniferous, crossing the equator during the mid-to-Late Carboniferous and reaching low latitudes in the northern hemisphere by the end of the Carboniferous. The Variscan-Appalachian-Ouachita mountain ranges drew in moist air from the Paleo-Tethys resulting in heavy precipitation and a tropical wetland environment. Extensive coaldeposits developed within the cyclothem sequences that dominated the Pennsylvanian sedimentary basins associated with the growing orogenic belt.",
"title": "Palaeogeography"
},
{
"paragraph_id": 31,
"text": "Whilst the southern and southeastern margins of Laurussia were dominated by the Variscan-Alleghanian-Ouachita Orogeny and the northeasterly margin by the Uralian Orogeny, subduction of the Panthalassic oceanic plate along its western margin resulted in the Antler Orogeny in the Late Devonian to early Mississippian. Further north along the margin, slab roll-back, beginning in the early Mississippian, led to the rifting of the Yukon-Tanana terrane and the opening of the Slide Mountain Ocean. Along the northern margin of Laurussia, orogenic collapse of the Late Devonian to early Mississippian Ellesmerian or Innuitian Orogeny led to the development of the Sverdrup Basin.",
"title": "Palaeogeography"
},
{
"paragraph_id": 32,
"text": "Much of Gondwana lay in the southern polar region during the Carboniferous. As the plate moved, the South Pole drifted from southern Africa in the Early Carboniferous to East Antarctica by the end of the period. Glacial deposits are widespread across Gondwana and indicate multiple ice centres and long distance movement of ice.",
"title": "Palaeogeography"
},
{
"paragraph_id": 33,
"text": "The northern to northeastern margin of Gondwana (Northeast Africa, Arabia, India and northeastern West Australia) was a passive margin along the southern edge of the Paleo-Tethys with cyclothem deposition including, during more temperate intervals, coal swamps in Western Australia. The Mexican terranes along the northwestern Gondwanan margin, were affected by the subduction of the Rheic Ocean. However, they lay to west of the Ouachita Orogeny and were not impacted by continental collision, but became part of the active margin of the Pacific. The Moroccan margin was affected by periods of widespread dextral strike-slip deformation, magmatism and metamorphism associated with the Variscan Orogeny.",
"title": "Palaeogeography"
},
{
"paragraph_id": 34,
"text": "Towards the end of the Carboniferous, extension and rifting across the northern margin of Gondwana would led to the breaking away of the Cimmerian Terrane (parts of present-day Turkey, Iran, Afghanistan, Pakistan, Tibet, China, Myanmar, Thailand and Malaysia) during the early Permian and the opening of the Neo-Tethys Ocean.",
"title": "Palaeogeography"
},
{
"paragraph_id": 35,
"text": "Along the southeastern and southern margin of Gondwana (eastern Australia and Antarctica), northward subduction of Panthalassa continued. Changes in the relative motion of the plates resulted in the Early Carboniferous Kanimblan Orogeny. Continental arc magmatism continued into the Late Carboniferous and extended round to connect with the developing proto-Andean subduction zone along the western South American margin of Gondwana.",
"title": "Palaeogeography"
},
{
"paragraph_id": 36,
"text": "Shallow seas covered much of the Siberian craton in the Early Carboniferous. These retreated as sea levels fell in the Pennsylvanian and as the continent drifted north into more temperate zones extensive coal deposits formed in the Kuznetsk Basin.",
"title": "Palaeogeography"
},
{
"paragraph_id": 37,
"text": "The northwest to eastern margins of Siberia were passive margins along the Mongol-Okhotsk Ocean on the far side of which lay Amuria. From the mid Carboniferous, subduction zones with associated magmatic arcs developed along both margins of the ocean.",
"title": "Palaeogeography"
},
{
"paragraph_id": 38,
"text": "The southwestern margin of Siberia was the site of the long lasting and complex accretionary orogen. The Devonian to Early Carboniferous Siberian and South Chinese Altai accretionary complexes developed above an east-dipping subduction zone, whilst further south, the Zharma-Saur arc formed along the northeastern margin of Kazakhstania. By the Late Carboniferous, all these complexes had accreted to the Siberian craton as shown by the intrusion of post-orogenic granites across the region. As Kazakhstania had already accreted to Laurussia, Siberia was effectively part of Pangea by 310Ma, although major transcurrent movements continued between it and Laurussia into the Permian.",
"title": "Palaeogeography"
},
{
"paragraph_id": 39,
"text": "The Kazakhstanian microcontinent is composed of a series of Devonian and older accretionary complexes. It was strongly deformed during the Carboniferous as its western margin collided with Laurussia during the Uralian Orogen and its northeastern margin collided with Siberia. Continuing transcurrent motion between Laurussia and Siberia led the formerly elongate microcontinent to bend into an orocline.",
"title": "Palaeogeography"
},
{
"paragraph_id": 40,
"text": "During the Carboniferous, the Tarim craton lay along the northwestern edge of North China. Subduction along the Kazakhstanian margin of the Turkestan Ocean resulted in collision between northern Tarim and Kazakhstania during the mid Carboniferous as the ocean closed. The South Tian Shan fold and thrust belt, which extends over 2000 km from Uzbekistan to Northwest China, is the remains of this accretionary complex and forms the suture between Kazakhstania and Tarim. A continental magmatic arc above a south-dipping subduction zone lay along the northern North China margin, consuming the Paleoasian Ocean. Northward subduction of the Paleo-Tethys beneath the southern margins of North China and Tarim continued during the Carboniferous, with the South Qinling block accreted to North China during the mid to Late Carboniferous.",
"title": "Palaeogeography"
},
{
"paragraph_id": 41,
"text": "No sediments are preserved from the Early Carboniferous in North China. However, bauxite deposits immediately above the regional mid Carboniferous unconformity indicate warm tropical conditions and are overlain by cyclothems including extensive coals.",
"title": "Palaeogeography"
},
{
"paragraph_id": 42,
"text": "South China and Annamia (Mainland Southeast Asia) rifted from Gondwana during the Devonian. During the Carboniferous, they were separated from each other and North China by the Paleoasian Ocean with the Paleo-Tethys to the southwest and Panthalassa to the northeast. Cyclothem sediments with coal and evaporites were deposited across the passive margins that surrounded both continents. Offshore eastern South China the proto-Japanese islands lay above a subduction zone consuming the Panthalassic Ocean.",
"title": "Palaeogeography"
},
{
"paragraph_id": 43,
"text": "Average global temperatures in the Early Carboniferous Period were high: approximately 20 °C (68 °F). However, cooling during the Middle Carboniferous reduced average global temperatures to about 12 °C (54 °F). Atmospheric carbon dioxide levels fell during the Carboniferous Period from roughly 8 times the current level in the beginning, to a level similar to today's at the end. The Carboniferous is considered part of the Late Palaeozoic Ice Age, which began in the latest Devonian with the formation of small glaciers in Gondwana. During the Tournaisian the climate warmed, before cooling, there was another warm interval during the Viséan, but cooling began again during the early Serpukhovian. At the beginning of the Pennsylvanian around 323 million years ago, glaciers began to form around the South Pole, which grew to cover a vast area of Gondwana. This area extended from the southern reaches of the Amazon basin and covered large areas of southern Africa, as well as most of Australia and Antarctica. Cyclothems, which began around 313 million years ago, and continue into the following Permian indicate that the size of the glaciers were controlled by Milankovitch cycles akin to recent ice ages, with glacial periods and interglacials. Deep ocean temperatures during this time were cold due to the influx of cold bottom waters generated by seasonal melting of the ice cap.",
"title": "Climate"
},
{
"paragraph_id": 44,
"text": "Although it is often asserted that Carboniferous atmospheric oxygen concentrations were signficiantly higher than today, at around 30% of total atmospheric concentration, prehistoric atmospheric oxygen concentration estimates are highly uncertain, with other estimates suggesting that the amount of oxygen was actually lower than that present in todays atmosphere.",
"title": "Climate"
},
{
"paragraph_id": 45,
"text": "The cooling and drying of the climate led to the Carboniferous Rainforest Collapse (CRC) during the late Carboniferous. Tropical rainforests fragmented and then were eventually devastated by climate change.",
"title": "Climate"
},
{
"paragraph_id": 46,
"text": "As the continents assembled to form Pangea, the growth of the Variscan-Alleghanian-Ouachita mountains led to increased weathering and carbonate sedimentation on the ocean floor, whilst the distribution of continents across the paleo-tropics meant vast areas of land were available for the spread of tropical rainforests. Together these two factors significantly increased CO2 drawdown from the atmosphere, lowering global temperatures, increasing ocean pH and triggering the Late Paleozoic Ice Age. The growth of the supercontinent also changed seafloor spreading rates and led to a decrease in the length and volume of mid-ocean ridge systems.",
"title": "Geochemistry"
},
{
"paragraph_id": 47,
"text": "During the Early Carboniferous, the Mg/Ca ratio in seawater began to rise and by the mid-Mississippian aragonite seas had replaced calcite seas. The concentration of calcium in seawater is largely controlled by ocean pH, and as this increased the calcium concentration was reduced. At the same time, the increase in weathering, increased the amount of magnesium entering the marine environment. As magnesium is removed from seawater and calcium added along mid-ocean ridges where seawater reacts with the newly formed lithosphere, the reduction in length of mid-ocean ridge systems increased the Mg/Ca ratio further. The Mg/Ca ratio of the seas also affects the ability of organisms to biomineralize. The Carboniferous aragonite seas favoured those that secreted aragonite and the dominant reef builders of the time were aragonitic sponges and corals.",
"title": "Geochemistry"
},
{
"paragraph_id": 48,
"text": "The strontium isotopic composition (Sr/Sr) of seawater represents a mix of strontium derived from continental weathering which is rich in Sr and from mantle sources e.g. mid-ocean ridges, which are relatively depleted in Sr. Sr/Sr ratios above 0.7075 indicate continental weathering is the main source of Sr, whilst ratios below indicate mantle-derived sources are the principal contributor.",
"title": "Geochemistry"
},
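As a rough illustrative calculation, not part of the source text, the seawater ratio can be treated as a two-endmember mixture of continental (riverine) and mantle (hydrothermal) strontium; the endmember values used below are typical modern literature figures and are assumptions rather than numbers given in the article:

\[
\left(\tfrac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{\mathrm{seawater}} \approx f\,R_{\mathrm{cont}} + (1-f)\,R_{\mathrm{mantle}},
\qquad R_{\mathrm{cont}} \approx 0.711,\quad R_{\mathrm{mantle}} \approx 0.703
\]

Under these assumed endmembers, a Tournaisian value of about 0.7084 gives f ≈ (0.7084 − 0.703)/(0.711 − 0.703) ≈ 0.68, i.e. roughly two-thirds of the dissolved strontium flux coming from continental weathering, which is consistent with reading values above about 0.7075 as weathering-dominated.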
{
"paragraph_id": 49,
"text": "Sr/Sr values varied through the Carboniferous, although they remained above 0.775, indicating continental weathering dominated as the source of Sr throughout. The Sr/Sr during the Tournaisian was c. 0.70840, it decreased through the Visean to 0.70771 before increasing during the Serpukhovian to the lowermost Gzhelian where it plateaued at 0.70827, before decreasing again to 0.70814 at the Carboniferous-Permian boundary. These variations reflect the changing influence of weathering and sediment supply to the oceans of the growing Variscan-Alleghanian-Ouachita mountain belt. By the Serpukhovian basement rocks, such as granite, had been uplifted and exposed to weathering. The decline towards the end of the Carboniferous is interpreted as a decrease in continental weathering due to the more arid conditions.",
"title": "Geochemistry"
},
{
"paragraph_id": 50,
"text": "Unlike Mg/Ca and Sr/Sr isotope ratios, which are consistent across the world's oceans at any one time, δO and δC preserved in the fossil record can be affected by regional factors. Carboniferous δO and δC records show regional differences between the South China open-water setting and the epicontinental seas of Laurussia. These differences are due to variations in seawater salinity and evaporation between epicontinental seas relative to the more open waters. However, large scale trends can still be determined. δC rose rapidly from c. 0 to 1‰ (parts per thousand) to c. 5 to 7‰ in the earliest Mississippian and remained high for the duration of the Late Paleozoic Ice Age (c. 3–6‰) into the earliest Permian. Similarly from the Early Mississippian there was a long-term increase in δO values as the climate cooled.",
"title": "Geochemistry"
},
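For readers unfamiliar with the ‰ figures quoted above, the δ values follow standard geochemical delta notation (a general convention, not something defined in the article): an isotope ratio in a sample is compared with that of a reference standard and the deviation is expressed in parts per thousand:

\[
\delta^{13}\mathrm{C} = \left(\frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{standard}}} - 1\right) \times 1000\ \text{‰}
\]

δ¹⁸O is defined analogously from the ¹⁸O/¹⁶O ratio. Broadly, more positive δ¹³C reflects greater burial of ¹²C-rich organic carbon, and more positive marine δ¹⁸O reflects cooler water and/or greater ice volume, which is why the excursions discussed below are read as records of organic-carbon burial and glaciation.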
{
"paragraph_id": 51,
"text": "Both δC and δO records show significant global isotope changes (known as excursions) during the Carboniferous. The mid-Tournaisian positive δC and δO excursions lasted between 6 and 10 million years and were also accompanied by c. 6‰ positive excursion in organic matter δN values, a negative excursion in carbonate δU and a positive excursion in carbonate-associated sulphate δS. These changes in seawater geochemistry are interpreted as a decrease in atmospheric CO2 due to increased organic matter burial and widespread ocean anoxia triggering climate cooling and onset of glaciation.",
"title": "Geochemistry"
},
{
"paragraph_id": 52,
"text": "The Mississippian-Pennsylvanian boundary positive δO excursion occurred at the same time as global sea-level falls and widespread glacial deposits across southern Gondwana, indicating climate cooling and ice build-up. The rise in Sr/Sr just before the δO excursion suggests climate cooling in this case was due to increased continental weathering of the growing Variscan-Alleghanian-Ouachita mountains and the influence of the orogeny on precipitation and surface water flow rather than increased burial of organic matter. δC values show more regional variation and it is unclear whether there is a positive δC excursion or a readjustment from previous lower values.",
"title": "Geochemistry"
},
{
"paragraph_id": 53,
"text": "During the earliest Kasimovian there was a short (<1myr), intense glacial period, which came to a sudden end as atmospheric CO2 concentrations rapidly rose. The Kasimovian saw a steady increase in arid conditions across tropical regions and a major reduction in the extent of tropical rainforests, as shown by the widespread loss of coal deposits from this time. The resulting reduction in productivity and burial of organic matter led to increasing atmospheric CO2 levels, which were recorded by a negative δC excursion and an accompanying, but smaller decrease in δO values.",
"title": "Geochemistry"
},
{
"paragraph_id": 54,
"text": "Early Carboniferous land plants, some of which were preserved in coal balls, were very similar to those of the preceding Late Devonian, but new groups also appeared at this time. The main Early Carboniferous plants were the Equisetales (horse-tails), Sphenophyllales (scrambling plants), Lycopodiales (club mosses), Lepidodendrales (scale trees), Filicales (ferns), Medullosales (informally included in the \"seed ferns\", an assemblage of a number of early gymnosperm groups) and the Cordaitales. These continued to dominate throughout the period, but during late Carboniferous, several other groups, Cycadophyta (cycads), the Callistophytales (another group of \"seed ferns\"), and the Voltziales, appeared.",
"title": "Life"
},
{
"paragraph_id": 55,
"text": "The Carboniferous lycophytes of the order Lepidodendrales, which are cousins (but not ancestors) of the tiny club-moss of today, were huge trees with trunks 30 meters high and up to 1.5 meters in diameter. These included Lepidodendron (with its cone called Lepidostrobus), Anabathra, Lepidophloios and Sigillaria. The roots of several of these forms are known as Stigmaria. Unlike present-day trees, their secondary growth took place in the cortex, which also provided stability, instead of the xylem. The Cladoxylopsids were large trees, that were ancestors of ferns, first arising in the Carboniferous.",
"title": "Life"
},
{
"paragraph_id": 56,
"text": "The fronds of some Carboniferous ferns are almost identical with those of living species. Probably many species were epiphytic. Fossil ferns and \"seed ferns\" include Pecopteris, Cyclopteris, Neuropteris, Alethopteris, and Sphenopteris; Megaphyton and Caulopteris were tree ferns.",
"title": "Life"
},
{
"paragraph_id": 57,
"text": "The Equisetales included the common giant form Calamites, with a trunk diameter of 30 to 60 cm (24 in) and a height of up to 20 m (66 ft). Sphenophyllum was a slender climbing plant with whorls of leaves, which was probably related both to the calamites and the lycopods.",
"title": "Life"
},
{
"paragraph_id": 58,
"text": "Cordaites, a tall plant (6 to over 30 meters) with strap-like leaves, was related to the cycads and conifers; the catkin-like reproductive organs, which bore ovules/seeds, is called Cardiocarpus. These plants were thought to live in swamps. True coniferous trees (Walchia, of the order Voltziales) appear later in the Carboniferous, and preferred higher drier ground.",
"title": "Life"
},
{
"paragraph_id": 59,
"text": "In the oceans the marine invertebrate groups are the Foraminifera, corals, Bryozoa, Ostracoda, brachiopods, ammonoids, hederelloids, microconchids and echinoderms (especially crinoids). The diversity of brachiopods and fusilinid foraminiferans, surged beginning in the Visean, continuing through the end of the Carboniferous, although cephalopod and nektonic conodont diversity declined. This evolutionary radiation was known as the Carboniferous-Earliest Permian Biodiversification Event. For the first time foraminifera take a prominent part in the marine faunas. The large spindle-shaped genus Fusulina and its relatives were abundant in what is now Russia, China, Japan, North America; other important genera include Valvulina, Endothyra, Archaediscus, and Saccammina (the latter common in Britain and Belgium). Some Carboniferous genera are still extant. The first true priapulids appeared during this period.",
"title": "Life"
},
{
"paragraph_id": 60,
"text": "The microscopic shells of radiolarians are found in cherts of this age in the Culm of Devon and Cornwall, and in Russia, Germany and elsewhere. Sponges are known from spicules and anchor ropes, and include various forms such as the Calcispongea Cotyliscus and Girtycoelia, the demosponge Chaetetes, and the genus of unusual colonial glass sponges Titusvillia.",
"title": "Life"
},
{
"paragraph_id": 61,
"text": "Both reef-building and solitary corals diversify and flourish; these include both rugose (for example, Caninia, Corwenia, Neozaphrentis), heterocorals, and tabulate (for example, Chladochonus, Michelinia) forms. Conularids were well represented by Conularia",
"title": "Life"
},
{
"paragraph_id": 62,
"text": "Bryozoa are abundant in some regions; the fenestellids including Fenestella, Polypora, and Archimedes, so named because it is in the shape of an Archimedean screw. Brachiopods are also abundant; they include productids, some of which reached very large for brachiopods size and had very thick shells (for example, the 30 cm (12 in)-wide Gigantoproductus), while others like Chonetes were more conservative in form. Athyridids, spiriferids, rhynchonellids, and terebratulids are also very common. Inarticulate forms include Discina and Crania. Some species and genera had a very wide distribution with only minor variations.",
"title": "Life"
},
{
"paragraph_id": 63,
"text": "Annelids such as Serpulites are common fossils in some horizons. Among the mollusca, the bivalves continue to increase in numbers and importance. Typical genera include Aviculopecten, Posidonomya, Nucula, Carbonicola, Edmondia, and Modiola. Gastropods are also numerous, including the genera Murchisonia, Euomphalus, Naticopsis. Nautiloid cephalopods are represented by tightly coiled nautilids, with straight-shelled and curved-shelled forms becoming increasingly rare. Goniatite ammonoids such as Aenigmatoceras are common.",
"title": "Life"
},
{
"paragraph_id": 64,
"text": "Trilobites are rarer than in previous periods, on a steady trend towards extinction, represented only by the proetid group. Ostracoda, a class of crustaceans, were abundant as representatives of the meiobenthos; genera included Amphissites, Bairdia, Beyrichiopsis, Cavellina, Coryellina, Cribroconcha, Hollinella, Kirkbya, Knoxiella, and Libumella.",
"title": "Life"
},
{
"paragraph_id": 65,
"text": "Crinoids were highly numerous during the Carboniferous, though they suffered a gradual decline in diversity during the middle Mississippian. Dense submarine thickets of long-stemmed crinoids appear to have flourished in shallow seas, and their remains were consolidated into thick beds of rock. Prominent genera include Cyathocrinus, Woodocrinus, and Actinocrinus. Echinoids such as Archaeocidaris and Palaeechinus were also present. The blastoids, which included the Pentreinitidae and Codasteridae and superficially resembled crinoids in the possession of long stalks attached to the seabed, attain their maximum development at this time.",
"title": "Life"
},
{
"paragraph_id": 66,
"text": "Freshwater Carboniferous invertebrates include various bivalve molluscs that lived in brackish or fresh water, such as Anthraconaia, Naiadites, and Carbonicola; diverse crustaceans such as Candona, Carbonita, Darwinula, Estheria, Acanthocaris, Dithyrocaris, and Anthrapalaemon.",
"title": "Life"
},
{
"paragraph_id": 67,
"text": "The eurypterids were also diverse, and are represented by such genera as Adelophthalmus, Megarachne (originally misinterpreted as a giant spider, hence its name) and the specialised very large Hibbertopterus. Many of these were amphibious.",
"title": "Life"
},
{
"paragraph_id": 68,
"text": "Frequently a temporary return of marine conditions resulted in marine or brackish water genera such as Lingula, Orbiculoidea, and Productus being found in the thin beds known as marine bands.",
"title": "Life"
},
{
"paragraph_id": 69,
"text": "Fossil remains of air-breathing insects, myriapods and arachnids are known from the Carboniferous. Their diversity when they do appear, however, shows that these arthropods were both well-developed and numerous. Some arthropods grew to large sizes with the up to 2.6-meter-long (8.5 ft) millipede-like Arthropleura being the largest-known land invertebrate of all time. Among the insect groups are the huge predatory Protodonata (griffinflies), among which was Meganeura, a giant dragonfly-like insect and with a wingspan of ca. 75 cm (30 in)—the largest flying insect ever to roam the planet. Further groups are the Syntonopterodea (relatives of present-day mayflies), the abundant and often large sap-sucking Palaeodictyopteroidea, the diverse herbivorous Protorthoptera, and numerous basal Dictyoptera (ancestors of cockroaches). Many insects have been obtained from the coalfields of Saarbrücken and Commentry, and from the hollow trunks of fossil trees in Nova Scotia. Some British coalfields have yielded good specimens: Archaeoptilus, from the Derbyshire coalfield, had a large wing with 4.3 cm (2 in) preserved part, and some specimens (Brodia) still exhibit traces of brilliant wing colors. In the Nova Scotian tree trunks land snails (Archaeozonites, Dendropupa) have been found.",
"title": "Life"
},
{
"paragraph_id": 70,
"text": "Many fish inhabited the Carboniferous seas; predominantly Elasmobranchs (sharks and their relatives). These included some, like Psammodus, with crushing pavement-like teeth adapted for grinding the shells of brachiopods, crustaceans, and other marine organisms. Other groups of elasmobranchs, like the ctenacanthiformes grew to large sizes, with some genera like Saivodus reaching around 6-9 meters (20-30 feet). Other fish had piercing teeth, such as the Symmoriida; some, the petalodonts, had peculiar cycloid cutting teeth. Most of the other cartilaginous fish were marine, but others like the Xenacanthida, and several genera like Bandringa invaded fresh waters of the coal swamps. Among the bony fish, the Palaeonisciformes found in coastal waters also appear to have migrated to rivers. Sarcopterygian fish were also prominent, and one group, the Rhizodonts, reached very large size.",
"title": "Life"
},
{
"paragraph_id": 71,
"text": "Most species of Carboniferous marine fish have been described largely from teeth, fin spines and dermal ossicles, with smaller freshwater fish preserved whole.",
"title": "Life"
},
{
"paragraph_id": 72,
"text": "Freshwater fish were abundant, and include the genera Ctenodus, Uronemus, Acanthodes, Cheirodus, and Gyracanthus.",
"title": "Life"
},
{
"paragraph_id": 73,
"text": "Chondrichthyes (especially holocephalans like the Stethacanthids) underwent a major evolutionary radiation during the Carboniferous. It is believed that this evolutionary radiation occurred because the decline of the placoderms at the end of the Devonian Period caused many environmental niches to become unoccupied and allowed new organisms to evolve and fill these niches. As a result of the evolutionary radiation Carboniferous holocephalans assumed a wide variety of bizarre shapes including Stethacanthus which possessed a flat brush-like dorsal fin with a patch of denticles on its top. Stethacanthus's unusual fin may have been used in mating rituals. Other groups like the eugeneodonts filled in the niches left by large predatory placoderms. These fish were unique as they only possessed one row of teeth in their upper or lower jaws in the form of elaborate tooth whorls. The first members of the helicoprionidae, a family eugeneodonts that were characterized by the presence of one circular tooth whorl in the lower jaw, appeared during the lower Carboniferous. Perhaps the most bizarre radiation of holocephalans at this time was that of the iniopterygiformes, an order of holocephalans that greatly resembled modern day flying fish that could have also \"flown\" in the water with their massive, elongated pectoral fins. They were further characterized by their large eye sockets, club-like structures on their tails, and spines on the tips of their fins.",
"title": "Life"
},
{
"paragraph_id": 74,
"text": "Carboniferous amphibians were diverse and common by the middle of the period, more so than they are today; some were as long as 6 meters, and those fully terrestrial as adults had scaly skin. They included a number of basal tetrapod groups classified in early books under the Labyrinthodontia. These had long bodies, a head covered with bony plates and generally weak or undeveloped limbs. The largest were over 2 meters long. They were accompanied by an assemblage of smaller amphibians included under the Lepospondyli, often only about 15 cm (6 in) long. Some Carboniferous amphibians were aquatic and lived in rivers (Loxomma, Eogyrinus, Proterogyrinus); others may have been semi-aquatic (Ophiderpeton, Amphibamus, Hyloplesion) or terrestrial (Dendrerpeton, Tuditanus, Anthracosaurus).",
"title": "Life"
},
{
"paragraph_id": 75,
"text": "The Carboniferous Rainforest Collapse slowed the evolution of amphibians who could not survive as well in the cooler, drier conditions. Amniotes, however, prospered due to specific key adaptations. One of the greatest evolutionary innovations of the Carboniferous was the amniote egg, which allowed the laying of eggs in a dry environment, as well as keratinized scales and claws, allowing for the further exploitation of the land by certain tetrapods. These included the earliest sauropsid reptiles (Hylonomus), and the earliest known synapsid (Archaeothyris). Synapsids quickly became huge and diversified in the Permian, only for their dominance to stop during the Mesozoic Era. Sauropsids (reptiles, and also, later, birds) also diversified but remained small until the Mesozoic, during which they dominated the land, as well as the water and sky, only for their dominance to stop during the Cenozoic Era.",
"title": "Life"
},
{
"paragraph_id": 76,
"text": "Reptiles underwent a major evolutionary radiation in response to the drier climate that preceded the rainforest collapse. By the end of the Carboniferous Period, amniotes had already diversified into a number of groups, including several families of synapsid pelycosaurs, protorothyridids, captorhinids, saurians and araeoscelids.",
"title": "Life"
},
{
"paragraph_id": 77,
"text": "As plants and animals were growing in size and abundance in this time (for example, Lepidodendron), land fungi diversified further. Marine fungi still occupied the oceans. All modern classes of fungi were present in the Late Carboniferous (Pennsylvanian Epoch).",
"title": "Life"
},
{
"paragraph_id": 78,
"text": "During the Carboniferous, animals and bacteria had great difficulty with processing the lignin and cellulose that made up the gigantic trees of the period. Microbes had not evolved that could process them. The trees, after they died, simply piled up on the ground, occasionally becoming part of long-running wildfires after a lightning strike, with others very slowly degrading into coal. White rot fungus were the first organisms to be able to process these and break them down in any reasonable quantity and timescale. Thus, some have proposed that fungi helped end the Carboniferous Period, stopping accumulation of undegraded plant matter, although this idea remains highly controversial.",
"title": "Life"
},
{
"paragraph_id": 79,
"text": "The first 15 million years of the Carboniferous had very limited terrestrial fossils. This gap in the fossil record is called Romer's gap after the American palaentologist Alfred Romer. While it has long been debated whether the gap is a result of fossilisation or relates to an actual event, recent work indicates the gap period saw a drop in atmospheric oxygen levels, indicating some sort of ecological collapse. The gap saw the demise of the Devonian fish-like ichthyostegalian labyrinthodonts, and the rise of the more advanced temnospondyl and reptiliomorphan amphibians that so typify the Carboniferous terrestrial vertebrate fauna.",
"title": "Extinction events"
},
{
"paragraph_id": 80,
"text": "Before the end of the Carboniferous Period, an extinction event occurred. On land this event is referred to as the Carboniferous Rainforest Collapse (CRC). Vast tropical rainforests collapsed suddenly as the climate changed from hot and humid to cool and arid. This was likely caused by intense glaciation and a drop in sea levels.",
"title": "Extinction events"
},
{
"paragraph_id": 81,
"text": "The new climatic conditions were not favorable to the growth of rainforest and the animals within them. Rainforests shrank into isolated islands, surrounded by seasonally dry habitats. Towering lycopsid forests with a heterogeneous mixture of vegetation were replaced by much less diverse tree-fern dominated flora.",
"title": "Extinction events"
},
{
"paragraph_id": 82,
"text": "Amphibians, the dominant vertebrates at the time, fared poorly through this event with large losses in biodiversity; reptiles continued to diversify due to key adaptations that let them survive in the drier habitat, specifically the hard-shelled egg and scales, both of which retain water better than their amphibian counterparts.",
"title": "Extinction events"
}
] | The Carboniferous is a geologic period and system of the Paleozoic that spans 60 million years from the end of the Devonian Period 358.9 million years ago (mya), to the beginning of the Permian Period, 298.9 mya. In North America, the Carboniferous is often treated as two separate geological periods, the earlier Mississippian and the later Pennsylvanian. The name Carboniferous means "coal-bearing", from the Latin carbō ("coal") and ferō, and refers to the many coal beds formed globally during that time. The first of the modern "system" names, it was coined by geologists William Conybeare and William Phillips in 1822, based on a study of the British rock succession. Carboniferous is the period during which both terrestrial animal and land plant life was well established. Stegocephalia, whose forerunners (tetrapodomorphs) had evolved from lobe-finned fish during the preceding Devonian period, became pentadactylous during the Carboniferous. The period is sometimes called the Age of Amphibians due to the diversification of early amphibians such as the temnospondyls, which became dominant land vertebrates, as well as the first appearance of amniotes including synapsids and sauropsids during the late Carboniferous. Due to the raised atmospheric oxygen level, land arthropods such as arachnids, myriapods and insects also underwent a major evolutionary radiation during the late Carboniferous. Vast swaths of forests and swamps covered the land, which eventually became the coal beds characteristic of the Carboniferous stratigraphy evident today. The later half of the period experienced glaciations, low sea level, and mountain building as the continents collided to form Pangaea. A minor marine and terrestrial extinction event, the Carboniferous rainforest collapse, occurred at the end of the period, caused by climate change. | 2001-04-14T21:00:10Z | 2023-12-28T12:17:27Z | [
"Template:IPAc-en",
"Template:Wiktlat",
"Template:Main",
"Template:Cite journal",
"Template:Carboniferous footer",
"Template:Period start",
"Template:'s",
"Template:Reflist",
"Template:Citation",
"Template:Wikisource portal",
"Template:Infobox geologic timespan",
"Template:Respell",
"Template:Clear",
"Template:Cite encyclopedia",
"Template:Short description",
"Template:Cite web",
"Template:Cite book",
"Template:Authority control",
"Template:For",
"Template:AmCyc Poster",
"Template:EB1911",
"Template:Commons category",
"Template:Period end",
"Template:Convert",
"Template:Cvt",
"Template:Sfn",
"Template:Citation needed",
"Template:Geological history"
] | https://en.wikipedia.org/wiki/Carboniferous |
5,403 | Comoros | The Comoros, officially the Union of the Comoros, is an archipelagic country made up of three islands in Southeastern Africa, located at the northern end of the Mozambique Channel in the Indian Ocean. Its capital and largest city is Moroni. The religion of the majority of the population, and the official state religion, is Sunni Islam. Comoros proclaimed its independence from France on 6 July 1975. A member of the Arab League, it is the only country in the Arab world which is entirely in the Southern Hemisphere. It is a member state of the African Union, the Organisation internationale de la Francophonie, the Organisation of Islamic Co-operation, and the Indian Ocean Commission. The country has three official languages: Shikomori, French and Arabic.
The sovereign state consists of three major islands and numerous smaller islands, all of the volcanic Comoro Islands with the exception of Mayotte. Mayotte voted against independence from France in a referendum in 1974, and continues to be administered by France as an overseas department. France has vetoed United Nations Security Council resolutions that would affirm Comorian sovereignty over the island. Mayotte became an overseas department and a region of France in 2011 following a referendum which was passed overwhelmingly.
At 1,659 km² (641 sq mi), the Comoros is the third-smallest African country by area. In 2019, its population was estimated to be 850,886.
The Comoros were likely first settled by Austronesian/Malagasy peoples, Bantu speakers from East Africa, and seafaring Arab traders. It became part of the French colonial empire during the 19th century, before its independence in 1975. It has experienced more than 20 coups or attempted coups, with various heads of state assassinated. Along with this constant political instability, it has one of the worst levels of income inequality of any nation, and ranks in the lowest quartile on the Human Development Index. As of 2008, about half the population lived below the international poverty line of US$1.25 a day.
The name "Comoros" derives from the Arabic word قمر qamar ("moon").
According to mythology, a jinni (spirit) dropped a jewel, which formed a great circular inferno. This became the Karthala volcano, which created the island of Ngazidja (Grande Comore). King Solomon is also said to have visited the island accompanied by his queen Bilqis.
The first attested human inhabitants of the Comoro Islands are now thought to have been Austronesian settlers travelling by boat from islands in Southeast Asia. These people arrived in the area no later than the eighth century AD, the date of the earliest known archaeological site, found on Mayotte, although settlement beginning as late as the first century has been postulated.
Subsequent settlers came from the east coast of Africa, the Arabian Peninsula and the Persian Gulf, the Malay Archipelago, and Madagascar. Bantu-speaking settlers were present on the islands from the beginnings of settlement [dates?], probably brought to the islands as slaves.
Development of the Comoros is divided into phases. The earliest reliably recorded phase is the Dembeni phase (eighth to tenth centuries), during which there were several small settlements on each island. From the eleventh to the fifteenth centuries, trade with the island of Madagascar and merchants from the Swahili coast and the Middle East flourished, more villages were founded and existing villages grew. Many Comorians can trace their genealogies to ancestors from the Arabian peninsula, particularly Hadhramaut, who arrived during this period.
According to legend, in 632, upon hearing of Islam, islanders are said to have dispatched an emissary, Mtswa-Mwindza, to Mecca—but by the time he arrived there, the Islamic prophet Muhammad had died. Nonetheless, after a stay in Mecca, he returned to Ngazidja, where he built a mosque in his home town of Ntsaweni, and led the gradual conversion of the islanders to Islam.
In 933, the Comoros was referred to by Omani sailors as the Perfume Islands.
Among the earliest accounts of East Africa, the works of Al-Masudi describe early Islamic trade routes, and how the coast and islands were frequently visited by Muslims including Persian and Arab merchants and sailors in search of coral, ambergris, ivory, tortoiseshell, gold and slaves. They also brought Islam to the people of the Zanj including the Comoros. As the importance of the Comoros grew along the East African coast, both small and large mosques were constructed. The Comoros are part of the Swahili cultural and economic complex and the islands became a major hub of trade and an important location in a network of trading towns that included Kilwa, in present-day Tanzania, Sofala (an outlet for Zimbabwean gold), in Mozambique, and Mombasa in Kenya.
The Portuguese arrived in the Indian Ocean at the end of the 15th century and the first Portuguese visit to the islands seems to have been that of Vasco da Gama's second fleet in 1503. For much of the 16th century the islands provided provisions to the Portuguese fort at Mozambique and although there was no formal attempt by the Portuguese crown to take possession, a number of Portuguese traders settled and married local women.
By the end of the 16th century local rulers on the African mainland were beginning to push back and, with the support of the Omani Sultan Saif bin Sultan they began to defeat the Dutch and the Portuguese. One of his successors, Said bin Sultan, increased Omani Arab influence in the region, moving his administration to nearby Zanzibar, which came under Omani rule. Nevertheless, the Comoros remained independent, and although the three smaller islands were usually politically unified, the largest island, Ngazidja, was divided into a number of autonomous kingdoms (ntsi).
The islands were well placed to meet the needs of Europeans, initially supplying the Portuguese in Mozambique, then ships, particularly the English, on the route to India, and, later, slaves to the plantation islands in the Mascarenes.
In the last decade of the 18th century, Malagasy warriors, mostly Betsimisaraka and Sakalava, started raiding the Comoros for slaves and the islands were devastated as crops were destroyed and the people were slaughtered, taken into captivity or fled to the African mainland: it is said that by the time the raids finally ended in the second decade of the 19th century only one man remained on Mwali. The islands were repopulated by slaves from the mainland, who were traded to the French in Mayotte and the Mascarenes. On the Comoros, it was estimated in 1865 that as much as 40% of the population consisted of slaves.
France first established colonial rule in the Comoros by taking possession of Mayotte in 1841 when the Sakalava usurper sultan Andriantsoly [fr] (also known as Tsy Levalo) signed the Treaty of April 1841, which ceded the island to the French authorities. After its annexation, France attempted to convert Mayotte into a sugar plantation colony.
Meanwhile, Ndzwani (or Johanna as it was known to the British) continued to serve as a way station for English merchants sailing to India and the Far East, as well as American whalers, although the British gradually abandoned it following their possession of Mauritius in 1814, and by the time the Suez Canal opened in 1869 there was no longer any significant supply trade at Ndzwani. Local commodities exported by the Comoros were, in addition to slaves, coconuts, timber, cattle and tortoiseshell. British and American settlers, as well as the island's sultan, established a plantation-based economy that used about one-third of the land for export crops. In addition to sugar on Mayotte, ylang-ylang and other perfume plants, vanilla, cloves, coffee, cocoa beans, and sisal were introduced.
In 1886, Mwali was placed under French protection by its Sultan Mardjani Abdou Cheikh. That same year, Sultan Said Ali of Bambao, one of the sultanates on Ngazidja, placed the island under French protection in exchange for French support of his claim to the entire island, which he retained until his abdication in 1910. In 1908 the four islands were unified under a single administration (Colonie de Mayotte et dépendances) and placed under the authority of the French colonial Governor-General of Madagascar. In 1909, Sultan Said Muhamed of Ndzwani abdicated in favour of French rule and in 1912 the protectorates were abolished and the islands administered as a single colony. Two years later the colony was abolished and the islands became a province of the colony of Madagascar.
Agreement was reached with France in 1973 for the Comoros to become independent in 1978, despite the deputies of Mayotte voting for increased integration with France. A referendum was held on all four of the islands. Three voted for independence by large margins, while Mayotte voted against. On 6 July 1975, however, the Comorian parliament passed a unilateral resolution declaring independence. Ahmed Abdallah proclaimed the independence of the Comorian State (État comorien; دولة القمر) and became its first president. France did not recognise the new state until 31 December, and retained control of Mayotte.
The next 30 years were a period of political turmoil. On 3 August 1975, less than one month after independence, president Ahmed Abdallah was removed from office in an armed coup and replaced with United National Front of the Comoros (FNUK) member Said Mohamed Jaffar. Months later, in January 1976, Jaffar was ousted in favour of his Minister of Defence Ali Soilihi.
The population of Mayotte voted against independence from France in three referendums during this period. The first, held on all the islands on 22 December 1974, won 63.8% support for maintaining ties with France on Mayotte; the second, held in February 1976, confirmed that vote with an overwhelming 99.4%, while the third, in April 1976, confirmed that the people of Mayotte wished to remain a French territory. The three remaining islands, ruled by President Soilihi, instituted a number of socialist and isolationist policies that soon strained relations with France. On 13 May 1978, Bob Denard, once again commissioned by the French intelligence service (SDECE), returned to overthrow President Soilihi and reinstate Abdallah with the support of the French, Rhodesian and South African governments. Ali Soilihi was captured and executed a few weeks later.
In contrast to Soilihi, Abdallah's presidency was marked by authoritarian rule and increased adherence to traditional Islam and the country was renamed the Federal Islamic Republic of the Comoros (République Fédérale Islamique des Comores; جمهورية القمر الإتحادية الإسلامية). Bob Denard served as Abdallah's first advisor; nicknamed the "Viceroy of the Comoros," he was sometimes considered the real strongman of the regime. Very close to South Africa, which financed his "presidential guard," he allowed Paris to circumvent the international embargo on the apartheid regime via Moroni. He also set up from the archipelago a permanent mercenary corps, called upon to intervene at the request of Paris or Pretoria in conflicts in Africa. Abdallah continued as president until 1989 when, fearing a probable coup, he signed a decree ordering the Presidential Guard, led by Bob Denard, to disarm the armed forces. Shortly after the signing of the decree, Abdallah was allegedly shot dead in his office by a disgruntled military officer, though later sources claim an antitank missile was launched into his bedroom and killed him. Although Denard was also injured, it is suspected that Abdallah's killer was a soldier under his command.
A few days later, Bob Denard was evacuated to South Africa by French paratroopers. Said Mohamed Djohar, Soilihi's older half-brother, then became president, and served until September 1995, when Bob Denard returned and attempted another coup. This time France intervened with paratroopers and forced Denard to surrender. The French removed Djohar to Reunion, and the Paris-backed Mohamed Taki Abdoulkarim became president by election. He led the country from 1996, during a time of labour crises, government suppression, and secessionist conflicts, until his death in November 1998. He was succeeded by Interim President Tadjidine Ben Said Massounde.
The islands of Ndzwani and Mwali declared their independence from the Comoros in 1997, in an attempt to restore French rule. But France rejected their request, leading to bloody confrontations between federal troops and rebels. In April 1999, Colonel Azali Assoumani, Army Chief of Staff, seized power in a bloodless coup, overthrowing the Interim President Massounde, citing weak leadership in the face of the crisis. This was the Comoros' 18th coup, or attempted coup d'état since independence in 1975.
Azali failed to consolidate power and reestablish control over the islands, which was the subject of international criticism. The African Union, under the auspices of President Thabo Mbeki of South Africa, imposed sanctions on Ndzwani to help broker negotiations and effect reconciliation. Under the terms of the Fomboni Accords, signed in December 2001 by the leaders of all three islands, the official name of the country was changed to the Union of the Comoros; the new state was to be highly decentralised and the central union government would devolve most powers to the new island governments, each led by a president. The Union president, although elected by national elections, would be chosen in rotation from each of the islands every five years.
Azali stepped down in 2002 to run in the democratic election of the President of the Comoros, which he won. Under ongoing international pressure, as a military ruler who had originally come to power by force, and was not always democratic while in office, Azali led the Comoros through constitutional changes that enabled new elections. A Loi des compétences law was passed in early 2005 that defines the responsibilities of each governmental body, and is in the process of implementation. The elections in 2006 were won by Ahmed Abdallah Mohamed Sambi, a Sunni Muslim cleric nicknamed the "Ayatollah" for his time spent studying Islam in Iran. Azali honoured the election results, thus allowing the first peaceful and democratic exchange of power for the archipelago.
Colonel Mohammed Bacar, a French-trained former gendarme elected President of Ndzwani in 2001, refused to step down at the end of his five-year mandate. He staged a vote in June 2007 to confirm his leadership that was rejected as illegal by the Comoros federal government and the African Union. On 25 March 2008 hundreds of soldiers from the African Union and the Comoros seized rebel-held Ndzwani, in a move generally welcomed by the population: there have been reports of hundreds, if not thousands, of people tortured during Bacar's tenure. Some rebels were killed or injured, but there are no official figures. At least 11 civilians were wounded. Some officials were imprisoned. Bacar fled in a speedboat to Mayotte to seek asylum. Anti-French protests followed in the Comoros (see 2008 invasion of Anjouan). Bacar was eventually granted asylum in Benin.
Since independence from France, the Comoros experienced more than 20 coups or attempted coups.
Following elections in late 2010, former Vice-president Ikililou Dhoinine was inaugurated as president on 26 May 2011. A member of the ruling party, Dhoinine was supported in the election by the incumbent President Ahmed Abdallah Mohamed Sambi. Dhoinine, a pharmacist by training, is the first President of the Comoros from the island of Mwali. Following the 2016 elections, Azali Assoumani, from Ngazidja, became president for a third term. In 2018 Azali held a referendum on constitutional reform that would permit a president to serve two terms. The amendments passed, although the vote was widely contested and boycotted by the opposition, and in April 2019, amid widespread opposition, Azali was re-elected president to serve the first of potentially two five-year terms.
In January 2020, the legislative elections in the Comoros were dominated by President Azali Assoumani's party, the Convention for the Renewal of the Comoros (CRC), which won 17 of the 24 seats in parliament, an overwhelming majority that strengthened his hold on power.
In 2021, Comoros signed and ratified the Treaty on the Prohibition of Nuclear Weapons, making it a nuclear-weapon-free state, and in 2023 Comoros was invited as a non-member guest to the G7 summit in Hiroshima.
On 18 February 2023 the Comoros assumed the presidency of the African Union.
The Comoros is formed by Ngazidja (Grande Comore), Mwali (Mohéli) and Ndzwani (Anjouan), three major islands in the Comoros Archipelago, as well as many minor islets. The islands are officially known by their Comorian language names, though international sources still use their French names (given in parentheses above). The capital and largest city, Moroni, is located on Ngazidja. The archipelago is situated in the Indian Ocean, in the Mozambique Channel, between the African coast (nearest to Mozambique and Tanzania) and Madagascar, with no land borders.
At 1,659 km² (641 sq mi), it is one of the smallest countries in the world. The Comoros also has claim to 320 km² (120 sq mi) of territorial seas. The interiors of the islands vary from steep mountains to low hills.
The areas and populations (at the 2017 Census) of the main islands are as follows:
Ngazidja is the largest of the Comoros Archipelago, with an area of 1,024 km². It is also the most recent island, and therefore has rocky soil. The island's two volcanoes, Karthala (active) and La Grille (dormant), and the lack of good harbours are distinctive characteristics of its terrain. Mwali, with its capital at Fomboni, is the smallest of the four major islands. Ndzwani, whose capital is Mutsamudu, has a distinctive triangular shape caused by three mountain chains – Shisiwani, Nioumakele and Jimilime – emanating from a central peak, Mount Ntingui [fr] (1,575 m or 5,167 ft).
The islands of the Comoros Archipelago were formed by volcanic activity. Mount Karthala, an active shield volcano located on Ngazidja, is the country's highest point, at 2,361 metres (7,746 feet). It contains the Comoros' largest patch of disappearing rainforest. Karthala is currently one of the most active volcanoes in the world, with a minor eruption in May 2006, and prior eruptions as recently as April 2005 and 1991. In the 2005 eruption, which lasted from 17 to 19 April, 40,000 citizens were evacuated, and the crater lake in the volcano's three-by-four-kilometre (2-by-2½-mile) caldera was destroyed.
The Comoros also lays claim to the Îles Éparses or Îles éparses de l'océan indien (Scattered Islands in the Indian Ocean) – Glorioso Islands, comprising Grande Glorieuse, Île du Lys, Wreck Rock, South Rock, Verte Rocks [fr] (three islets) and three unnamed islets – one of France's overseas districts. The Glorioso Islands were administered by the colonial Comoros before 1975, and are therefore sometimes considered part of the Comoros Archipelago. Banc du Geyser, a former island in the Comoros Archipelago, now submerged, is geographically located in the Îles Éparses, but was annexed by Madagascar in 1976 as an unclaimed territory. The Comoros and France each still view the Banc du Geyser as part of the Glorioso Islands and, thus, part of its particular exclusive economic zone.
The climate is generally tropical and mild, and the two major seasons are distinguishable by their raininess. The temperature reaches an average of 29–30 °C (84–86 °F) in March, the hottest month in the rainy season (called kashkazi/kaskazi [meaning north monsoon], which runs from November to April), and an average low of 19 °C (66 °F) in the cool, dry season (kusi (meaning south monsoon), which proceeds from May to October). The islands are rarely subject to cyclones.
The Comoros constitute an ecoregion in their own right, Comoros forests. It had a 2018 Forest Landscape Integrity Index mean score of 7.69/10, ranking it 33rd globally out of 172 countries.
In December 1952 a specimen of the West Indian Ocean coelacanth fish was re-discovered off the Comoros coast. The 66 million-year-old species was thought to have been long extinct until its first recorded appearance in 1938 off the South African coast. Between 1938 and 1975, 84 specimens were caught and recorded.
There are six national parks in the Comoros – Karthala, Coelacanth, and Mitsamiouli Ndroudi on Grande Comore, Mount Ntringui and Shisiwani on Anjouan, and Mohéli National Park on Mohéli. Karthala and Mount Ntringui national parks cover the highest peaks on the respective islands, and Coelacanth, Mitsamiouli Ndroudi, and Shisiwani are marine national parks that protect the islands' coastal waters and fringing reefs. Mohéli National Park includes both terrestrial and marine areas.
Politics of the Comoros takes place in a framework of a federal presidential republic, whereby the President of the Comoros is both head of state and head of government, and of a multi-party system. The Constitution of the Union of the Comoros was ratified by referendum on 23 December 2001, and the islands' constitutions and executives were elected in the following months. It had previously been considered a military dictatorship, and the transfer of power from Azali Assoumani to Ahmed Abdallah Mohamed Sambi in May 2006 was a watershed moment as it was the first peaceful transfer in Comorian history.
Executive power is exercised by the government. Federal legislative power is vested in both the government and parliament. The preamble of the constitution guarantees an Islamic inspiration in governance, a commitment to human rights and several specific enumerated rights, democracy, and "a common destiny" for all Comorians. Each of the islands (according to Title II of the Constitution) has a great amount of autonomy in the Union, including having its own constitution (or Fundamental Law), president, and parliament. The presidency and Assembly of the Union are distinct from each of the islands' governments. The presidency of the Union rotates between the islands. Despite widespread misgivings about the durability of the system of presidential rotation, Ngazidja currently holds the rotating presidency, and Azali is President of the Union; Ndzwani is in theory to provide the next president.
The Comorian legal system rests on Islamic law, an inherited French (Napoleonic Code) legal code, and customary law (mila na ntsi). Village elders, kadis or civilian courts settle most disputes. The judiciary is independent of the legislative and the executive. The Supreme Court acts as a Constitutional Council in resolving constitutional questions and supervising presidential elections. As High Court of Justice, the Supreme Court also arbitrates in cases where the government is accused of malpractice. The Supreme Court consists of two members selected by the president, two elected by the Federal Assembly, and one by the council of each island.
Around 80 percent of the central government's annual budget is spent on the country's complex administrative system which provides for a semi-autonomous government and president for each of the three islands and a rotating presidency for the overarching Union government. A referendum took place on 16 May 2009 to decide whether to cut down the government's unwieldy political bureaucracy. 52.7% of those eligible voted, and 93.8% of votes were cast in approval of the referendum. Following the implementation of the changes, each island's president became a governor and the ministers became councillors.
In November 1975, the Comoros became the 143rd member of the United Nations. The new nation was defined as comprising the entire archipelago, although the citizens of Mayotte chose to become French citizens and keep their island as a French territory.
The Comoros has repeatedly pressed its claim to Mayotte before the United Nations General Assembly, which adopted a series of resolutions under the caption "Question of the Comorian Island of Mayotte", opining that Mayotte belongs to the Comoros under the principle that the territorial integrity of colonial territories should be preserved upon independence. As a practical matter, however, these resolutions have little effect and there is no foreseeable likelihood that Mayotte will become de facto part of the Comoros without its people's consent. More recently, the Assembly has maintained this item on its agenda but deferred it from year to year without taking action. Other bodies, including the Organization of African Unity, the Movement of Non-Aligned Countries and the Organisation of Islamic Cooperation, have similarly questioned French sovereignty over Mayotte. To close the debate and to avoid being integrated by force in the Union of the Comoros, the population of Mayotte overwhelmingly chose to become an overseas department and a region of France in a 2009 referendum. The new status was effective on 31 March 2011 and Mayotte has been recognised as an outermost region by the European Union on 1 January 2014. This decision legally integrates Mayotte in the French Republic.
The Comoros is a member of the United Nations, the African Union, the Arab League, the World Bank, the International Monetary Fund, the Indian Ocean Commission and the African Development Bank. On 10 April 2008, the Comoros became the 179th nation to accept the Kyoto Protocol to the United Nations Framework Convention on Climate Change. The Comoros signed the UN treaty on the Prohibition of Nuclear Weapons. Azali Assoumani, President of the Comoros and Chair of the African Union, attended the 2023 Russia–Africa Summit in Saint Petersburg.
In May 2013 the Union of the Comoros became known for filing a referral to the Office of the Prosecutor of the International Criminal Court (ICC) regarding the events of "the 31 May 2010 Israeli raid on the Humanitarian Aid Flotilla bound for [the] Gaza Strip". In November 2014 the ICC Prosecutor eventually decided that the events did constitute war crimes but did not meet the gravity threshold required to bring the case before the ICC.
The emigration rate of skilled workers was about 21.2% in 2000.
The military resources of the Comoros consist of a small standing army and a 500-member police force, as well as a 500-member defence force. A defence treaty with France provides naval resources for protection of territorial waters, training of Comorian military personnel, and air surveillance. France maintains the presence of a few senior officers in the Comoros at government request, as well as a small maritime base and a Foreign Legion Detachment (DLEM) on Mayotte.
Once the new government was installed in May–June 2011, an expert mission from UNREC (Lomé) came to the Comoros and produced guidelines for the elaboration of a national security policy, which were discussed by different actors, notably the national defence authorities and civil society. By the end of the programme in late March 2012, a normative framework agreed upon by all entities involved in security sector reform (SSR) was to have been established; this framework would then have to be adopted by Parliament and implemented by the authorities.
Both male and female same-sex sexual acts are illegal in Comoros. Such acts are punished with up to five years imprisonment.
The level of poverty in the Comoros is high, but "judging by the international poverty threshold of $1.9 per person per day, only two out of every ten Comorians could be classified as poor, a rate that places the Comoros ahead of other low-income countries and 30 percentage points ahead of other countries in Sub-Saharan Africa." Poverty declined by about 10% between 2014 and 2018, and living conditions generally improved. Economic inequality remains widespread, with a major gap between rural and urban areas. Remittances through the sizable Comorian diaspora form a substantial part of the country's GDP and have contributed to decreases in poverty and increases in living standards.
According to ILO's ILOSTAT statistical database, between 1991 and 2019 the unemployment rate as a percent of the total labor force ranged between 4.3% and 4.38%. An October 2005 paper by the Comoros Ministry of Planning and Regional Development, however, reported that "registered unemployment rate is 14.3 percent, distributed very unevenly among and within the islands, but with marked incidence in urban areas."
In 2019, more than 56% of the labor force was employed in agriculture, with 29% employed in industry and 14% employed in services. The islands' agricultural sector is based on the export of spices, including vanilla, cinnamon, and cloves, and thus susceptible to price fluctuations in the volatile world commodity market for these goods. The Comoros is the world's largest producer of ylang-ylang, a plant whose extracted essential oil is used in the perfume industry; some 80% of the world's supply comes from the Comoros.
Population densities as high as 1,000 per square kilometre in the densest agricultural zones, in what is still a mostly rural, agricultural economy, may lead to an environmental crisis in the near future, especially considering the high rate of population growth. In 2004 the Comoros' real GDP growth was a low 1.9% and real GDP per capita continued to decline. These declines are explained by factors including declining investment, drops in consumption, rising inflation, and an increase in trade imbalance due in part to lowered cash crop prices, especially vanilla.
Fiscal policy is constrained by erratic fiscal revenues, a bloated civil service wage bill, and an external debt that is far above the HIPC threshold. Membership in the franc zone, the main anchor of stability, has nevertheless helped contain pressures on domestic prices.
The Comoros has an inadequate transportation system, a young and rapidly increasing population, and few natural resources. The low educational level of the labour force contributes to a subsistence level of economic activity, high unemployment, and a heavy dependence on foreign grants and technical assistance. Agriculture contributes 40% to GDP and provides most of the exports.
The government is struggling to upgrade education and technical training, to privatise commercial and industrial enterprises, to improve health services, to diversify exports, to promote tourism, and to reduce the high population growth rate.
The Comoros is a member of the Organization for the Harmonization of Business Law in Africa (OHADA).
With about 850,000 residents, the Comoros is one of the least-populous countries in the world, but its population density is high, with an average of 275 inhabitants per square kilometre (710/sq mi). In 2001, 34% of the population was considered urban, but the urban population has since grown; in recent years rural population growth has been negative, while overall population growth is still relatively high. In 1958 the population was 183,133.
Almost half the population of the Comoros is under the age of 15. Major urban centres include Moroni, Mitsamihuli, Foumbouni, Mutsamudu, Domoni, and Fomboni. There are between 200,000 and 350,000 Comorians in France.
The islands of the Comoros are 97.1% ethnically Comorian, which is a mixture of Bantu, Malagasy, and Arab people. Minorities include Makua and Indian (mostly Ismaili). There are recent immigrants of Chinese origin in Grande Comore (especially Moroni). Although most French left after independence in 1975, a small Creole community, descended from settlers from France, Madagascar and Réunion, lives in the Comoros.
The most common languages in the Comoros are the Comorian languages, collectively known as Shikomori. They are related to Swahili, and each of the four different variants (Shingazidja, Shimwali, Shindzwani and Shimaore) is spoken on one of the four islands. Both Arabic and Latin scripts are used, Arabic being the more widely used, and an official orthography has recently been developed for the Latin script.
Arabic and French are also official languages, along with Comorian. Arabic is widely known as a second language, being the language of Quranic teaching. French is the administrative language and the language of most non-Quranic formal education.
Sunni Islam is the dominant religion, followed by as much as 99% of the population. Comoros is the only Muslim-majority country in Southern Africa and one of the three southernmost Muslim-majority territories, along with Mayotte and the Australian territory of the Cocos Islands. A minority of the population of the Comoros is Christian; both Catholic and Protestant denominations are represented, and most Malagasy residents are also Christian. Immigrants from metropolitan France are mostly Catholic.
There are 15 physicians per 100,000 people. The fertility rate was 4.7 per adult woman in 2004. Life expectancy at birth is 67 for females and 62 for males.
Almost all children attend Quranic schools, usually before, although increasingly in tandem with, regular schooling. Children are taught the Qur'an, memorise it, and learn the Arabic script. Most parents prefer their children to attend Quranic schools before moving on to the French-based schooling system. Although the state sector is plagued by a lack of resources, and the teachers by unpaid salaries, there are numerous private and community schools of relatively good standard. The national curriculum, apart from a few years during the revolutionary period immediately post-independence, has been very much based on the French system, both because resources are French and because most Comorians hope to go on to further education in France. There have recently been moves to Comorianise the syllabus and integrate the two systems, the formal schools and the Quranic schools, into one, thus moving away from the secular educational system inherited from France.
Pre-colonization education systems in Comoros focused on necessary skills such as agriculture, caring for livestock and completing household tasks. Religious education also taught children the virtues of Islam. The education system underwent a transformation during colonization in the early 1900s which brought secular education based on the French system. This was mainly for children of the elite. After Comoros gained independence in 1975, the education system changed again. Funding for teachers' salaries was lost, and many went on strike. Thus, the public education system was not functioning between 1997 and 2001. Since gaining independence, the education system has also undergone a democratization and options exist for those other than the elite. Enrollment has also grown.
In 2000, 44.2% of children aged 5 to 14 years were attending school. There is a general lack of facilities, equipment, qualified teachers, textbooks and other resources. Salaries for teachers are often so far in arrears that many refuse to work.
Prior to 2000, students seeking a university education had to attend school outside of the country. However, in the early 2000s a university was created in the country. This served to help economic growth and to fight the "flight" of many educated people who were not returning to the islands to work.
Comorian has no native script, but both the Arabic and Latin alphabets are used. In 2004, about 57 percent of the population was literate in the Latin script while more than 90 percent were literate in the Arabic script.
Traditionally, women on Ndzwani wear red and white patterned garments called shiromani, while on Ngazidja and Mwali colourful shawls called leso are worn. Many women apply a paste of ground sandalwood and coral called msindzano to their faces. Traditional male clothing is a long white shirt known as a nkandu, and a bonnet called a kofia.
There are two types of marriages in Comoros, the little marriage (known as Mna daho on Ngazidja) and the customary marriage (known as ada on Ngazidja, harusi on the other islands). The little marriage is a simple legal marriage. It is small, intimate, and inexpensive, and the bride's dowry is nominal. A man may undertake a number of Mna daho marriages in his lifetime, often at the same time, a woman fewer; but both men and women will usually only undertake one ada, or grand marriage, and this must generally be within the village. The hallmarks of the grand marriage are dazzling gold jewelry, two weeks of celebration and an enormous bridal dowry. Although the expenses are shared between both families as well as with a wider social circle, an ada wedding on Ngazidja can cost up to €50,000. Many couples take a lifetime to save for their ada, and it is not uncommon for a marriage to be attended by a couple's adult children.
The ada marriage marks a man's transition in the Ngazidja age system from youth to elder. His status in the social hierarchy greatly increases, and he will henceforth be entitled to speak in public and participate in the political process, both in his village and more widely across the island. He will be entitled to display his status by wearing a mharuma, a type of shawl, across his shoulders, and he can enter the mosque by the door reserved for elders, and sit at the front. A woman's status also changes, although less formally, as she becomes a "mother" and moves into her own house. The system is less formalised on the other islands, but the marriage is nevertheless a significant and costly event across the archipelago. The ada is often criticized because of its great expense, but at the same time it is a source of social cohesion and the main reason why migrants in France and elsewhere continue to send money home. Increasingly, marriages are also being taxed for the purposes of village development.
Comorian society has a bilateral descent system. Lineage membership and inheritance of immovable goods (land, housing) is matrilineal, passed in the maternal line, similar to many Bantu peoples who are also matrilineal, while other goods and patronymics are passed in the male line. However, there are differences between the islands, the matrilineal element being stronger on Ngazidja.
Twarab music, imported from Zanzibar in the early 20th century, remains the most influential genre on the islands and is popular at ada marriages.
There are two daily national newspapers published in the Comoros, the government-owned Al-Watwan, and the privately owned La Gazette des Comores, both published in Moroni. There are a number of smaller newsletters published on an irregular basis as well as a variety of news websites. The government-owned ORTC (Office de Radio et Télévision des Comores) provides national radio and television service. There is a TV station run by the Anjouan regional government, and regional governments on the islands of Grande Comore and Anjouan each operate a radio station. There are also a few independent and small community radio stations that operate on the islands of Grande Comore and Mohéli, and these two islands have access to Mayotte Radio and French TV.
12°18′S 43°42′E / 12.3°S 43.7°E / -12.3; 43.7 | [
{
"paragraph_id": 0,
"text": "The Comoros, officially the Union of the Comoros, is an archipelagic country made up of three islands in Southeastern Africa, located at the northern end of the Mozambique Channel in the Indian Ocean. Its capital and largest city is Moroni. The religion of the majority of the population, and the official state religion, is Sunni Islam. Comoros proclaimed its independence from France on 6 July 1975. A member of the Arab League, it is the only country in the Arab world which is entirely in the Southern Hemisphere. It is a member state of the African Union, the Organisation internationale de la Francophonie, the Organisation of Islamic Co-operation, and the Indian Ocean Commission. The country has three official languages: Shikomori, French and Arabic.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The sovereign state consists of three major islands and numerous smaller islands, all of the volcanic Comoro Islands with the exception of Mayotte. Mayotte voted against independence from France in a referendum in 1974, and continues to be administered by France as an overseas department. France has vetoed United Nations Security Council resolutions that would affirm Comorian sovereignty over the island. Mayotte became an overseas department and a region of France in 2011 following a referendum which was passed overwhelmingly.",
"title": ""
},
{
"paragraph_id": 2,
"text": "At 1,659 km (641 sq mi), the Comoros is the third-smallest African country by area. In 2019, its population was estimated to be 850,886.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The Comoros were likely first settled by Austronesian/Malagasy peoples, Bantu speakers from East Africa, and seafaring Arab traders. It became part of the French colonial empire during the 19th century, before its independence in 1975. It has experienced more than 20 coups or attempted coups, with various heads of state assassinated. Along with this constant political instability, it has one of the worst levels of income inequality of any nation, and ranks in the lowest quartile on the Human Development Index. As of 2008, about half the population lived below the international poverty line of US$1.25 a day.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The name \"Comoros\" derives from the Arabic word قمر qamar (\"moon\").",
"title": "Etymology"
},
{
"paragraph_id": 5,
"text": "According to mythology, a jinni (spirit) dropped a jewel, which formed a great circular inferno. This became the Karthala volcano, which created the island of Ngazidja (Grande Comore). King Solomon is also said to have visited the island accompanied by his queen Bilqis.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The first attested human inhabitants of the Comoro Islands are now thought to have been Austronesian settlers travelling by boat from islands in Southeast Asia. These people arrived in the area no later than the eighth century AD, the date of the earliest known archaeological site, found on Mayotte, although settlement beginning as late as the first century has been postulated.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Subsequent settlers came from the east coast of Africa, the Arabian Peninsula and the Persian Gulf, the Malay Archipelago, and Madagascar. Bantu-speaking settlers were present on the islands from the beginnings of settlement [dates?], probably brought to the islands as slaves.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Development of the Comoros is divided into phases. The earliest reliably recorded phase is the Dembeni phase (eighth to tenth centuries), during which there were several small settlements on each island. From the eleventh to the fifteenth centuries, trade with the island of Madagascar and merchants from the Swahili coast and the Middle East flourished, more villages were founded and existing villages grew. Many Comorians can trace their genealogies to ancestors from the Arabian peninsula, particularly Hadhramaut, who arrived during this period.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "According to legend, in 632, upon hearing of Islam, islanders are said to have dispatched an emissary, Mtswa-Mwindza, to Mecca—but by the time he arrived there, the Islamic prophet Muhammad had died. Nonetheless, after a stay in Mecca, he returned to Ngazidja, where he built a mosque in his home town of Ntsaweni, and led the gradual conversion of the islanders to Islam.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In 933, the Comoros was referred to by Omani sailors as the Perfume Islands.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Among the earliest accounts of East Africa, the works of Al-Masudi describe early Islamic trade routes, and how the coast and islands were frequently visited by Muslims including Persian and Arab merchants and sailors in search of coral, ambergris, ivory, tortoiseshell, gold and slaves. They also brought Islam to the people of the Zanj including the Comoros. As the importance of the Comoros grew along the East African coast, both small and large mosques were constructed. The Comoros are part of the Swahili cultural and economic complex and the islands became a major hub of trade and an important location in a network of trading towns that included Kilwa, in present-day Tanzania, Sofala (an outlet for Zimbabwean gold), in Mozambique, and Mombasa in Kenya.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The Portuguese arrived in the Indian Ocean at the end of the 15th century and the first Portuguese visit to the islands seems to have been that of Vasco da Gama's second fleet in 1503. For much of the 16th century the islands provided provisions to the Portuguese fort at Mozambique and although there was no formal attempt by the Portuguese crown to take possession, a number of Portuguese traders settled and married local women.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "By the end of the 16th century local rulers on the African mainland were beginning to push back and, with the support of the Omani Sultan Saif bin Sultan they began to defeat the Dutch and the Portuguese. One of his successors, Said bin Sultan, increased Omani Arab influence in the region, moving his administration to nearby Zanzibar, which came under Omani rule. Nevertheless, the Comoros remained independent, and although the three smaller islands were usually politically unified, the largest island, Ngazidja, was divided into a number of autonomous kingdoms (ntsi).",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The islands were well placed to meet the needs of Europeans, initially supplying the Portuguese in Mozambique, then ships, particularly the English, on the route to India, and, later, slaves to the plantation islands in the Mascarenes.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In the last decade of the 18th century, Malagasy warriors, mostly Betsimisaraka and Sakalava, started raiding the Comoros for slaves and the islands were devastated as crops were destroyed and the people were slaughtered, taken into captivity or fled to the African mainland: it is said that by the time the raids finally ended in the second decade of the 19th century only one man remained on Mwali. The islands were repopulated by slaves from the mainland, who were traded to the French in Mayotte and the Mascarenes. On the Comoros, it was estimated in 1865 that as much as 40% of the population consisted of slaves.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "France first established colonial rule in the Comoros by taking possession of Mayotte in 1841 when the Sakalava usurper sultan Andriantsoly [fr] (also known as Tsy Levalo) signed the Treaty of April 1841, which ceded the island to the French authorities. After its annexation, France attempted to convert Mayotte into a sugar plantation colony.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Meanwhile, Ndzwani (or Johanna as it was known to the British) continued to serve as a way station for English merchants sailing to India and the Far East, as well as American whalers, although the British gradually abandoned it following their possession of Mauritius in 1814, and by the time the Suez Canal opened in 1869 there was no longer any significant supply trade at Ndzwani. Local commodities exported by the Comoros were, in addition to slaves, coconuts, timber, cattle and tortoiseshell. British and American settlers, as well as the island's sultan, established a plantation-based economy that used about one-third of the land for export crops. In addition to sugar on Mayotte, ylang-ylang and other perfume plants, vanilla, cloves, coffee, cocoa beans, and sisal were introduced.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In 1886, Mwali was placed under French protection by its Sultan Mardjani Abdou Cheikh. That same year, Sultan Said Ali of Bambao, one of the sultanates on Ngazidja, placed the island under French protection in exchange for French support of his claim to the entire island, which he retained until his abdication in 1910. In 1908 the four islands were unified under a single administration (Colonie de Mayotte et dépendances) and placed under the authority of the French colonial Governor-General of Madagascar. In 1909, Sultan Said Muhamed of Ndzwani abdicated in favour of French rule and in 1912 the protectorates were abolished and the islands administered as a single colony. Two years later the colony was abolished and the islands became a province of the colony of Madagascar.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Agreement was reached with France in 1973 for the Comoros to become independent in 1978, despite the deputies of Mayotte voting for increased integration with France. A referendum was held on all four of the islands. Three voted for independence by large margins, while Mayotte voted against. On 6 July 1975, however, the Comorian parliament passed a unilateral resolution declaring independence. Ahmed Abdallah proclaimed the independence of the Comorian State (État comorien; دولة القمر) and became its first president. France did not recognise the new state until 31 December, and retained control of Mayotte.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "The next 30 years were a period of political turmoil. On 3 August 1975, less than one month after independence, president Ahmed Abdallah was removed from office in an armed coup and replaced with United National Front of the Comoros (FNUK) member Said Mohamed Jaffar. Months later, in January 1976, Jaffar was ousted in favour of his Minister of Defence Ali Soilihi.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "The population of Mayotte voted against independence from France in three referendums during this period. The first, held on all the islands on 22 December 1974, won 63.8% support for maintaining ties with France on Mayotte; the second, held in February 1976, confirmed that vote with an overwhelming 99.4%, while the third, in April 1976, confirmed that the people of Mayotte wished to remain a French territory. The three remaining islands, ruled by President Soilihi, instituted a number of socialist and isolationist policies that soon strained relations with France. On 13 May 1978, Bob Denard, once again commissioned by the French intelligence service (SDECE), returned to overthrow President Soilihi and reinstate Abdallah with the support of the French, Rhodesian and South African governments. Ali Soilihi was captured and executed a few weeks later.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "In contrast to Soilihi, Abdallah's presidency was marked by authoritarian rule and increased adherence to traditional Islam and the country was renamed the Federal Islamic Republic of the Comoros (République Fédérale Islamique des Comores; جمهورية القمر الإتحادية الإسلامية). Bob Denard served as Abdallah's first advisor; nicknamed the \"Viceroy of the Comoros,\" he was sometimes considered the real strongman of the regime. Very close to South Africa, which financed his \"presidential guard,\" he allowed Paris to circumvent the international embargo on the apartheid regime via Moroni. He also set up from the archipelago a permanent mercenary corps, called upon to intervene at the request of Paris or Pretoria in conflicts in Africa. Abdallah continued as president until 1989 when, fearing a probable coup, he signed a decree ordering the Presidential Guard, led by Bob Denard, to disarm the armed forces. Shortly after the signing of the decree, Abdallah was allegedly shot dead in his office by a disgruntled military officer, though later sources claim an antitank missile was launched into his bedroom and killed him. Although Denard was also injured, it is suspected that Abdallah's killer was a soldier under his command.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "A few days later, Bob Denard was evacuated to South Africa by French paratroopers. Said Mohamed Djohar, Soilihi's older half-brother, then became president, and served until September 1995, when Bob Denard returned and attempted another coup. This time France intervened with paratroopers and forced Denard to surrender. The French removed Djohar to Reunion, and the Paris-backed Mohamed Taki Abdoulkarim became president by election. He led the country from 1996, during a time of labour crises, government suppression, and secessionist conflicts, until his death in November 1998. He was succeeded by Interim President Tadjidine Ben Said Massounde.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "The islands of Ndzwani and Mwali declared their independence from the Comoros in 1997, in an attempt to restore French rule. But France rejected their request, leading to bloody confrontations between federal troops and rebels. In April 1999, Colonel Azali Assoumani, Army Chief of Staff, seized power in a bloodless coup, overthrowing the Interim President Massounde, citing weak leadership in the face of the crisis. This was the Comoros' 18th coup, or attempted coup d'état since independence in 1975.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Azali failed to consolidate power and reestablish control over the islands, which was the subject of international criticism. The African Union, under the auspices of President Thabo Mbeki of South Africa, imposed sanctions on Ndzwani to help broker negotiations and effect reconciliation. Under the terms of the Fomboni Accords, signed in December 2001 by the leaders of all three islands, the official name of the country was changed to the Union of the Comoros; the new state was to be highly decentralised and the central union government would devolve most powers to the new island governments, each led by a president. The Union president, although elected by national elections, would be chosen in rotation from each of the islands every five years.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Azali stepped down in 2002 to run in the democratic election of the President of the Comoros, which he won. Under ongoing international pressure, as a military ruler who had originally come to power by force, and was not always democratic while in office, Azali led the Comoros through constitutional changes that enabled new elections. A Loi des compétences law was passed in early 2005 that defines the responsibilities of each governmental body, and is in the process of implementation. The elections in 2006 were won by Ahmed Abdallah Mohamed Sambi, a Sunni Muslim cleric nicknamed the \"Ayatollah\" for his time spent studying Islam in Iran. Azali honoured the election results, thus allowing the first peaceful and democratic exchange of power for the archipelago.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "Colonel Mohammed Bacar, a French-trained former gendarme elected President of Ndzwani in 2001, refused to step down at the end of his five-year mandate. He staged a vote in June 2007 to confirm his leadership that was rejected as illegal by the Comoros federal government and the African Union. On 25 March 2008 hundreds of soldiers from the African Union and the Comoros seized rebel-held Ndzwani, generally welcomed by the population: there have been reports of hundreds, if not thousands, of people tortured during Bacar's tenure. Some rebels were killed and injured, but there are no official figures. At least 11 civilians were wounded. Some officials were imprisoned. Bacar fled in a speedboat to Mayotte to seek asylum. Anti-French protests followed in the Comoros (see 2008 invasion of Anjouan). Bacar was eventually granted asylum in Benin.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Since independence from France, the Comoros experienced more than 20 coups or attempted coups.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "Following elections in late 2010, former Vice-president Ikililou Dhoinine was inaugurated as president on 26 May 2011. A member of the ruling party, Dhoinine was supported in the election by the incumbent President Ahmed Abdallah Mohamed Sambi. Dhoinine, a pharmacist by training, is the first President of the Comoros from the island of Mwali. Following the 2016 elections, Azali Assoumani, from Ngazidja, became president for a third term. In 2018 Azali held a referendum on constitutional reform that would permit a president to serve two terms. The amendments passed, although the vote was widely contested and boycotted by the opposition, and in April 2019, and to widespread opposition, Azali was re-elected president to serve the first of potentially two five-year terms.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "In January 2020, the legislative elections in Comoros were dominated by President Azali Assoumani's party, the Convention for the Renewal of the Comoros, CRC. It took an overwhelming majority in the parliament, meaning his hold on power strengthened. CRC took 17 out of 24 seats of the parliament.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "In 2021, Comoros signed and ratified the Treaty on the Prohibition of Nuclear Weapons, making it a nuclear-weapon-free state. and in 2023, Comoros was invited as a non-member guest to the G7 summit in Hiroshima.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "On 18 February 2023 the Comoros assumed the presidency of the African Union.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "The Comoros is formed by Ngazidja (Grande Comore), Mwali (Mohéli) and Ndzwani (Anjouan), three major islands in the Comoros Archipelago, as well as many minor islets. The islands are officially known by their Comorian language names, though international sources still use their French names (given in parentheses above). The capital and largest city, Moroni, is located on Ngazidja. The archipelago is situated in the Indian Ocean, in the Mozambique Channel, between the African coast (nearest to Mozambique and Tanzania) and Madagascar, with no land borders.",
"title": "Geography"
},
{
"paragraph_id": 34,
"text": "At 1,659 km (641 sq mi), it is one of the smallest countries in the world. The Comoros also has claim to 320 km (120 sq mi) of territorial seas. The interiors of the islands vary from steep mountains to low hills.",
"title": "Geography"
},
{
"paragraph_id": 35,
"text": "The areas and populations (at the 2017 Census) of the main islands are as follows:",
"title": "Geography"
},
{
"paragraph_id": 36,
"text": "Ngazidja is the largest of the Comoros Archipelago, with an area of 1,024 km. It is also the most recent island, and therefore has rocky soil. The island's two volcanoes, Karthala (active) and La Grille (dormant), and the lack of good harbours are distinctive characteristics of its terrain. Mwali, with its capital at Fomboni, is the smallest of the four major islands. Ndzwani, whose capital is Mutsamudu, has a distinctive triangular shape caused by three mountain chains – Shisiwani, Nioumakele and Jimilime – emanating from a central peak, Mount Ntingui [fr] (1,575 m or 5,167 ft).",
"title": "Geography"
},
{
"paragraph_id": 37,
"text": "The islands of the Comoros Archipelago were formed by volcanic activity. Mount Karthala, an active shield volcano located on Ngazidja, is the country's highest point, at 2,361 metres (7,746 feet). It contains the Comoros' largest patch of disappearing rainforest. Karthala is currently one of the most active volcanoes in the world, with a minor eruption in May 2006, and prior eruptions as recently as April 2005 and 1991. In the 2005 eruption, which lasted from 17 to 19 April, 40,000 citizens were evacuated, and the crater lake in the volcano's three-by-four-kilometre (2-by-2+1⁄2-mile) caldera was destroyed.",
"title": "Geography"
},
{
"paragraph_id": 38,
"text": "The Comoros also lays claim to the Îles Éparses or Îles éparses de l'océan indien (Scattered Islands in the Indian Ocean) – Glorioso Islands, comprising Grande Glorieuse, Île du Lys, Wreck Rock, South Rock, Verte Rocks [fr] (three islets) and three unnamed islets – one of France's overseas districts. The Glorioso Islands were administered by the colonial Comoros before 1975, and are therefore sometimes considered part of the Comoros Archipelago. Banc du Geyser, a former island in the Comoros Archipelago, now submerged, is geographically located in the Îles Éparses, but was annexed by Madagascar in 1976 as an unclaimed territory. The Comoros and France each still view the Banc du Geyser as part of the Glorioso Islands and, thus, part of its particular exclusive economic zone.",
"title": "Geography"
},
{
"paragraph_id": 39,
"text": "The climate is generally tropical and mild, and the two major seasons are distinguishable by their raininess. The temperature reaches an average of 29–30 °C (84–86 °F) in March, the hottest month in the rainy season (called kashkazi/kaskazi [meaning north monsoon], which runs from November to April), and an average low of 19 °C (66 °F) in the cool, dry season (kusi (meaning south monsoon), which proceeds from May to October). The islands are rarely subject to cyclones.",
"title": "Geography"
},
{
"paragraph_id": 40,
"text": "The Comoros constitute an ecoregion in their own right, Comoros forests. It had a 2018 Forest Landscape Integrity Index mean score of 7.69/10, ranking it 33rd globally out of 172 countries.",
"title": "Geography"
},
{
"paragraph_id": 41,
"text": "In December 1952 a specimen of the West Indian Ocean coelacanth fish was re-discovered off the Comoros coast. The 66 million-year-old species was thought to have been long extinct until its first recorded appearance in 1938 off the South African coast. Between 1938 and 1975, 84 specimens were caught and recorded.",
"title": "Geography"
},
{
"paragraph_id": 42,
"text": "There are six national parks in the Comoros – Karthala, Coelacanth, and Mitsamiouli Ndroudi on Grande Comore, Mount Ntringui and Shisiwani on Anjouan, and Mohéli National Park on Mohéli. Karthala and Mount Ntrigui national parks cover the highest peaks on the respective islands, and Coelacanth, Mitsamiouli Ndroudi, and Shisiwani are marine national parks that protect the island's coastal waters and fringing reefs. Mohéli National Park includes both terrestrial and marine areas.",
"title": "Geography"
},
{
"paragraph_id": 43,
"text": "Politics of the Comoros takes place in a framework of a federal presidential republic, whereby the President of the Comoros is both head of state and head of government, and of a multi-party system. The Constitution of the Union of the Comoros was ratified by referendum on 23 December 2001, and the islands' constitutions and executives were elected in the following months. It had previously been considered a military dictatorship, and the transfer of power from Azali Assoumani to Ahmed Abdallah Mohamed Sambi in May 2006 was a watershed moment as it was the first peaceful transfer in Comorian history.",
"title": "Government"
},
{
"paragraph_id": 44,
"text": "Executive power is exercised by the government. Federal legislative power is vested in both the government and parliament. The preamble of the constitution guarantees an Islamic inspiration in governance, a commitment to human rights, and several specific enumerated rights, democracy, \"a common destiny\" for all Comorians. Each of the islands (according to Title II of the Constitution) has a great amount of autonomy in the Union, including having their own constitutions (or Fundamental Law), president, and Parliament. The presidency and Assembly of the Union are distinct from each of the islands' governments. The presidency of the Union rotates between the islands. Despite widespread misgivings about the durability of the system of presidential rotation, Ngazidja holds the current presidency rotation, and Azali is President of the Union; Ndzwani is in theory to provide the next president.",
"title": "Government"
},
{
"paragraph_id": 45,
"text": "The Comorian legal system rests on Islamic law, an inherited French (Napoleonic Code) legal code, and customary law (mila na ntsi). Village elders, kadis or civilian courts settle most disputes. The judiciary is independent of the legislative and the executive. The Supreme Court acts as a Constitutional Council in resolving constitutional questions and supervising presidential elections. As High Court of Justice, the Supreme Court also arbitrates in cases where the government is accused of malpractice. The Supreme Court consists of two members selected by the president, two elected by the Federal Assembly, and one by the council of each island.",
"title": "Government"
},
{
"paragraph_id": 46,
"text": "Around 80 percent of the central government's annual budget is spent on the country's complex administrative system which provides for a semi-autonomous government and president for each of the three islands and a rotating presidency for the overarching Union government. A referendum took place on 16 May 2009 to decide whether to cut down the government's unwieldy political bureaucracy. 52.7% of those eligible voted, and 93.8% of votes were cast in approval of the referendum. Following the implementation of the changes, each island's president became a governor and the ministers became councillors.",
"title": "Government"
},
{
"paragraph_id": 47,
"text": "In November 1975, the Comoros became the 143rd member of the United Nations. The new nation was defined as comprising the entire archipelago, although the citizens of Mayotte chose to become French citizens and keep their island as a French territory.",
"title": "Government"
},
{
"paragraph_id": 48,
"text": "The Comoros has repeatedly pressed its claim to Mayotte before the United Nations General Assembly, which adopted a series of resolutions under the caption \"Question of the Comorian Island of Mayotte\", opining that Mayotte belongs to the Comoros under the principle that the territorial integrity of colonial territories should be preserved upon independence. As a practical matter, however, these resolutions have little effect and there is no foreseeable likelihood that Mayotte will become de facto part of the Comoros without its people's consent. More recently, the Assembly has maintained this item on its agenda but deferred it from year to year without taking action. Other bodies, including the Organization of African Unity, the Movement of Non-Aligned Countries and the Organisation of Islamic Cooperation, have similarly questioned French sovereignty over Mayotte. To close the debate and to avoid being integrated by force in the Union of the Comoros, the population of Mayotte overwhelmingly chose to become an overseas department and a region of France in a 2009 referendum. The new status was effective on 31 March 2011 and Mayotte has been recognised as an outermost region by the European Union on 1 January 2014. This decision legally integrates Mayotte in the French Republic.",
"title": "Government"
},
{
"paragraph_id": 49,
"text": "The Comoros is a member of the United Nations, the African Union, the Arab League, the World Bank, the International Monetary Fund, the Indian Ocean Commission and the African Development Bank. On 10 April 2008, the Comoros became the 179th nation to accept the Kyoto Protocol to the United Nations Framework Convention on Climate Change. The Comoros signed the UN treaty on the Prohibition of Nuclear Weapons. Azali Assoumani, President of the Comoros and Chair of the African Union, attended the 2023 Russia–Africa Summit in Saint Petersburg.",
"title": "Government"
},
{
"paragraph_id": 50,
"text": "In May 2013 the Union of the Comoros became known for filing a referral to the Office of the Prosecutor of the International Criminal Court (ICC) regarding the events of \"the 31 May 2010 Israeli raid on the Humanitarian Aid Flotilla bound for [the] Gaza Strip\". In November 2014 the ICC Prosecutor eventually decided that the events did constitute war crimes but did not meet the gravity standards of bringing the case before ICC.",
"title": "Government"
},
{
"paragraph_id": 51,
"text": "The emigration rate of skilled workers was about 21.2% in 2000.",
"title": "Government"
},
{
"paragraph_id": 52,
"text": "The military resources of the Comoros consist of a small standing army and a 500-member police force, as well as a 500-member defence force. A defence treaty with France provides naval resources for protection of territorial waters, training of Comorian military personnel, and air surveillance. France maintains the presence of a few senior officers in the Comoros at government request, as well as a small maritime base and a Foreign Legion Detachment (DLEM) on Mayotte.",
"title": "Government"
},
{
"paragraph_id": 53,
"text": "Once the new government was installed in May–June 2011, an expert mission from UNREC (Lomé) came to the Comoros and produced guidelines for the elaboration of a national security policy, which were discussed by different actors, notably the national defence authorities and civil society. By the end of the programme in end March 2012, a normative framework agreed upon by all entities involved in SSR will have been established. This will then have to be adopted by Parliament and implemented by the authorities.",
"title": "Government"
},
{
"paragraph_id": 54,
"text": "Both male and female same-sex sexual acts are illegal in Comoros. Such acts are punished with up to five years imprisonment.",
"title": "Government"
},
{
"paragraph_id": 55,
"text": "The level of poverty in the Comoros is high, but \"judging by the international poverty threshold of $1.9 per person per day, only two out of every ten Comorians could be classified as poor, a rate that places the Comoros ahead of other low-income countries and 30 percentage points ahead of other countries in Sub-Saharan Africa.\" Poverty declined by about 10% between 2014 and 2018, and living conditions generally improved. Economic inequality remains widespread, with a major gap between rural and urban areas. Remittances through the sizable Comorian diaspora form a substantial part of the country's GDP and have contributed to decreases in poverty and increases in living standards.",
"title": "Economy"
},
{
"paragraph_id": 56,
"text": "According to ILO's ILOSTAT statistical database, between 1991 and 2019 the unemployment rate as a percent of the total labor force ranged from 4.38% to 4.3%. An October 2005 paper by the Comoros Ministry of Planning and Regional Development, however, reported that \"registered unemployment rate is 14.3 percent, distributed very unevenly among and within the islands, but with marked incidence in urban areas.\"",
"title": "Economy"
},
{
"paragraph_id": 57,
"text": "In 2019, more than 56% of the labor force was employed in agriculture, with 29% employed in industry and 14% employed in services. The islands' agricultural sector is based on the export of spices, including vanilla, cinnamon, and cloves, and thus susceptible to price fluctuations in the volatile world commodity market for these goods. The Comoros is the world's largest producer of ylang-ylang, a plant whose extracted essential oil is used in the perfume industry; some 80% of the world's supply comes from the Comoros.",
"title": "Economy"
},
{
"paragraph_id": 58,
"text": "High population densities, as much as 1000 per square kilometre in the densest agricultural zones, for what is still a mostly rural, agricultural economy may lead to an environmental crisis in the near future, especially considering the high rate of population growth. In 2004 the Comoros' real GDP growth was a low 1.9% and real GDP per capita continued to decline. These declines are explained by factors including declining investment, drops in consumption, rising inflation, and an increase in trade imbalance due in part to lowered cash crop prices, especially vanilla.",
"title": "Economy"
},
{
"paragraph_id": 59,
"text": "Fiscal policy is constrained by erratic fiscal revenues, a bloated civil service wage bill, and an external debt that is far above the HIPC threshold. Membership in the franc zone, the main anchor of stability, has nevertheless helped contain pressures on domestic prices.",
"title": "Economy"
},
{
"paragraph_id": 60,
"text": "The Comoros has an inadequate transportation system, a young and rapidly increasing population, and few natural resources. The low educational level of the labour force contributes to a subsistence level of economic activity, high unemployment, and a heavy dependence on foreign grants and technical assistance. Agriculture contributes 40% to GDP and provides most of the exports.",
"title": "Economy"
},
{
"paragraph_id": 61,
"text": "The government is struggling to upgrade education and technical training, to privatise commercial and industrial enterprises, to improve health services, to diversify exports, to promote tourism, and to reduce the high population growth rate.",
"title": "Economy"
},
{
"paragraph_id": 62,
"text": "The Comoros is a member of the Organization for the Harmonization of Business Law in Africa (OHADA).",
"title": "Economy"
},
{
"paragraph_id": 63,
"text": "With about 850,000 residents, the Comoros is one of the least-populous countries in the world, but its population density is high, with an average of 275 inhabitants per square kilometre (710/sq mi). In 2001, 34% of the population was considered urban, but the urban population has since grown; in recent years rural population growth has been negative, while overall population growth is still relatively high. In 1958 the population was 183,133.",
"title": "Demographics"
},
{
"paragraph_id": 64,
"text": "Almost half the population of the Comoros is under the age of 15. Major urban centres include Moroni, Mitsamihuli, Foumbouni, Mutsamudu, Domoni, and Fomboni. There are between 200,000 and 350,000 Comorians in France.",
"title": "Demographics"
},
{
"paragraph_id": 65,
"text": "The islands of the Comoros are 97.1% ethnically Comorian, which is a mixture of Bantu, Malagasy, and Arab people. Minorities include Makua and Indian (mostly Ismaili). There are recent immigrants of Chinese origin in Grande Comore (especially Moroni). Although most French left after independence in 1975, a small Creole community, descended from settlers from France, Madagascar and Réunion, lives in the Comoros.",
"title": "Demographics"
},
{
"paragraph_id": 66,
"text": "The most common languages in the Comoros are the Comorian languages, collectively known as Shikomori. They are related to Swahili, and the four different variants (Shingazidja, Shimwali, Shindzwani and Shimaore) are spoken on each of the four islands. Arabic and Latin scripts are both used, Arabic being the more widely used, and an official orthography has recently been developed for the Latin script.",
"title": "Demographics"
},
{
"paragraph_id": 67,
"text": "Arabic and French are also official languages, along with Comorian. Arabic is widely known as a second language, being the language of Quranic teaching. French is the administrative language and the language of most non-Quranic formal education.",
"title": "Demographics"
},
{
"paragraph_id": 68,
"text": "Sunni Islam is the dominant religion, followed by as much as 99% of the population. Comoros is the only Muslim-majority country in Southern Africa and one of the three southernmost Muslim-majority territories, along with Mayotte and the Australian territory of Cocos Islands. A minority of the population of the Comoros are Christian, both Catholic and Protestant denominations are represented, and most Malagasy residents are also Christian. Immigrants from metropolitan France are mostly Catholic.",
"title": "Demographics"
},
{
"paragraph_id": 69,
"text": "There are 15 physicians per 100,000 people. The fertility rate was 4.7 per adult woman in 2004. Life expectancy at birth is 67 for females and 62 for males.",
"title": "Demographics"
},
{
"paragraph_id": 70,
"text": "Almost all children attend Quranic schools, usually before, although increasingly in tandem with regular schooling. Children are taught about the Qur'an, and memorise it, and learn the Arabic script. Most parents prefer their children to attend Koran schools before moving on to the French-based schooling system. Although the state sector is plagued by a lack of resources, and the teachers by unpaid salaries, there are numerous private and community schools of relatively good standard. The national curriculum, apart from a few years during the revolutionary period immediately post-independence, has been very much based on the French system, both because resources are French and most Comorians hope to go on to further education in France. There have recently been moves to Comorianise the syllabus and integrate the two systems, the formal and the Quran schools, into one, thus moving away from the secular educational system inherited from France.",
"title": "Demographics"
},
{
"paragraph_id": 71,
"text": "Pre-colonization education systems in Comoros focused on necessary skills such as agriculture, caring for livestock and completing household tasks. Religious education also taught children the virtues of Islam. The education system underwent a transformation during colonization in the early 1900s which brought secular education based on the French system. This was mainly for children of the elite. After Comoros gained independence in 1975, the education system changed again. Funding for teachers' salaries was lost, and many went on strike. Thus, the public education system was not functioning between 1997 and 2001. Since gaining independence, the education system has also undergone a democratization and options exist for those other than the elite. Enrollment has also grown.",
"title": "Demographics"
},
{
"paragraph_id": 72,
"text": "In 2000, 44.2% of children aged 5 to 14 years were attending school. There is a general lack of facilities, equipment, qualified teachers, textbooks and other resources. Salaries for teachers are often so far in arrears that many refuse to work.",
"title": "Demographics"
},
{
"paragraph_id": 73,
"text": "Prior to 2000, students seeking a university education had to attend school outside of the country. However, in the early 2000s a university was created in the country. This served to help economic growth and to fight the \"flight\" of many educated people who were not returning to the islands to work.",
"title": "Demographics"
},
{
"paragraph_id": 74,
"text": "Comorian has no native script, but both the Arabic and Latin alphabets are used. In 2004, about 57 percent of the population was literate in the Latin script while more than 90 percent were literate in the Arabic script.",
"title": "Demographics"
},
{
"paragraph_id": 75,
"text": "Traditionally, women on Ndzwani wear red and white patterned garments called shiromani, while on Ngazidja and Mwali colourful shawls called leso are worn. Many women apply a paste of ground sandalwood and coral called msindzano to their faces. Traditional male clothing is a long white shirt known as a nkandu, and a bonnet called a kofia.",
"title": "Culture"
},
{
"paragraph_id": 76,
"text": "There are two types of marriages in Comoros, the little marriage (known as Mna daho on Ngazidja) and the customary marriage (known as ada on Ngazidja, harusi on the other islands). The little marriage is a simple legal marriage. It is small, intimate, and inexpensive, and the bride's dowry is nominal. A man may undertake a number of Mna daho marriages in his lifetime, often at the same time, a woman fewer; but both men and women will usually only undertake one ada, or grand marriage, and this must generally be within the village. The hallmarks of the grand marriage are dazzling gold jewelry, two weeks of celebration and an enormous bridal dowry. Although the expenses are shared between both families as well as with a wider social circle, an ada wedding on Ngazidja can cost up to €50,000. Many couples take a lifetime to save for their ada, and it is not uncommon for a marriage to be attended by a couple's adult children.",
"title": "Culture"
},
{
"paragraph_id": 77,
"text": "The ada marriage marks a man's transition in the Ngazidja age system from youth to elder. His status in the social hierarchy greatly increases, and he will henceforth be entitled to speak in public and participate in the political process, both in his village and more widely across the island. He will be entitled to display his status by wearing a mharuma, a type of shawl, across his shoulders, and he can enter the mosque by the door reserved for elders, and sit at the front. A woman's status also changes, although less formally, as she becomes a \"mother\" and moves into her own house. The system is less formalised on the other islands, but the marriage is nevertheless a significant and costly event across the archipelago. The ada is often criticized because of its great expense, but at the same time it is a source of social cohesion and the main reason why migrants in France and elsewhere continue to send money home. Increasingly, marriages are also being taxed for the purposes of village development.",
"title": "Culture"
},
{
"paragraph_id": 78,
"text": "Comorian society has a bilateral descent system. Lineage membership and inheritance of immovable goods (land, housing) is matrilineal, passed in the maternal line, similar to many Bantu peoples who are also matrilineal, while other goods and patronymics are passed in the male line. However, there are differences between the islands, the matrilineal element being stronger on Ngazidja.",
"title": "Culture"
},
{
"paragraph_id": 79,
"text": "Twarab music, imported from Zanzibar in the early 20th century, remains the most influential genre on the islands and is popular at ada marriages.",
"title": "Culture"
},
{
"paragraph_id": 80,
"text": "There are two daily national newspapers published in the Comoros, the government-owned Al-Watwan, and the privately owned La Gazette des Comores, both published in Moroni. There are a number of smaller newsletters published on an irregular basis as well as a variety of news websites. The government-owned ORTC (Office de Radio et Télévision des Comores) provides national radio and television service. There is a TV station run by the Anjouan regional government, and regional governments on the islands of Grande Comore and Anjouan each operate a radio station. There are also a few independent and small community radio stations that operate on the islands of Grande Comore and Mohéli, and these two islands have access to Mayotte Radio and French TV.",
"title": "Culture"
},
{
"paragraph_id": 81,
"text": "12°18′S 43°42′E / 12.3°S 43.7°E / -12.3; 43.7",
"title": "External links"
}
] | The Comoros, officially the Union of the Comoros, is an archipelagic country made up of three islands in Southeastern Africa, located at the northern end of the Mozambique Channel in the Indian Ocean. Its capital and largest city is Moroni. The religion of the majority of the population, and the official state religion, is Sunni Islam. Comoros proclaimed its independence from France on 6 July 1975. A member of the Arab League, it is the only country in the Arab world which is entirely in the Southern Hemisphere. It is a member state of the African Union, the Organisation internationale de la Francophonie, the Organisation of Islamic Co-operation, and the Indian Ocean Commission. The country has three official languages: Shikomori, French and Arabic. The sovereign state consists of three major islands and numerous smaller islands, all of the volcanic Comoro Islands with the exception of Mayotte. Mayotte voted against independence from France in a referendum in 1974, and continues to be administered by France as an overseas department. France has vetoed United Nations Security Council resolutions that would affirm Comorian sovereignty over the island. Mayotte became an overseas department and a region of France in 2011 following a referendum which was passed overwhelmingly. At 1,659 km2 (641 sq mi), the Comoros is the third-smallest African country by area. In 2019, its population was estimated to be 850,886. The Comoros were likely first settled by Austronesian/Malagasy peoples, Bantu speakers from East Africa, and seafaring Arab traders. It became part of the French colonial empire during the 19th century, before its independence in 1975. It has experienced more than 20 coups or attempted coups, with various heads of state assassinated. Along with this constant political instability, it has one of the worst levels of income inequality of any nation, and ranks in the lowest quartile on the Human Development Index. As of 2008, about half the population lived below the international poverty line of US$1.25 a day. | 2001-04-10T16:26:51Z | 2023-12-30T19:36:43Z | [
"Template:Short description",
"Template:About",
"Template:Further",
"Template:Reflist",
"Template:Comoros topics",
"Template:Lang",
"Template:More citations needed section",
"Template:Update inline",
"Template:Largest cities",
"Template:CIA World Factbook",
"Template:NoteTag",
"Template:As of",
"Template:Wikiatlas",
"Template:Cite book",
"Template:GovPubs",
"Template:Navboxes",
"Template:Pp-move",
"Template:Ill",
"Template:Convert",
"Template:NoteFoot",
"Template:Cite news",
"Template:UN Population",
"Template:Portal",
"Template:Curlie",
"Template:Clear",
"Template:WWF ecoregion",
"Template:Coord",
"Template:Infobox country",
"Template:Cvt",
"Template:Notelist",
"Template:Webarchive",
"Template:Cite encyclopedia",
"Template:Refbegin",
"Template:Sister project links",
"Template:Authority control",
"Template:EngvarB",
"Template:Use dmy dates",
"Template:Main",
"Template:Cite web",
"Template:Cite journal",
"Template:Distinguish",
"Template:See also",
"Template:Refend"
] | https://en.wikipedia.org/wiki/Comoros |
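The coordinate string that closes the Comoros entry above gives the same point in three equivalent forms: degrees and minutes (12°18′S 43°42′E), decimal degrees per hemisphere (12.3°S 43.7°E), and signed decimals (-12.3; 43.7). The conversion is simply degrees plus minutes divided by 60, negated for south or west. The short Python sketch below is only an illustration of that arithmetic; the helper name is hypothetical and not taken from any source or library.

def dms_to_decimal(degrees, minutes, hemisphere):
    # Convert degrees and minutes plus a hemisphere letter to signed decimal degrees.
    # South and West are negative by convention.
    value = degrees + minutes / 60.0
    return -value if hemisphere.upper() in ("S", "W") else value

# 12°18′S, 43°42′E  ->  -12.3, 43.7 (the decimal form shown in the entry)
print(dms_to_decimal(12, 18, "S"), dms_to_decimal(43, 42, "E"))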
5,404 | Critical philosophy | The critical philosophy (German: kritische Philosophie) movement, attributed to Immanuel Kant (1724–1804), sees the primary task of philosophy as criticism rather than justification of knowledge. Criticism, for Kant, meant judging as to the possibilities of knowledge before advancing to knowledge itself (from the Greek kritike (techne), or "art of judgment"). The basic task of philosophers, according to this view, is not to establish and demonstrate theories about reality, but rather to subject all theories—including those about philosophy itself—to critical review, and measure their validity by how well they withstand criticism.
"Critical philosophy" is also used as another name for Kant's philosophy itself. Kant said that philosophy's proper inquiry is not about what is out there in reality, but rather about the character and foundations of experience itself. We must first judge how human reason works, and within what limits, so that we can afterwards correctly apply it to sense experience and determine whether it can be applied at all to metaphysical objects.
The principal three sources on which the critical philosophy is based are the three critiques, namely Critique of Pure Reason, Critique of Practical Reason and Critique of Judgement, published between 1781 and 1790 and mostly concerned, respectively, with metaphysics, ethics and aesthetics. | [
{
"paragraph_id": 0,
"text": "The critical philosophy (German: kritische Philosophie) movement, attributed to Immanuel Kant (1724–1804), sees the primary task of philosophy as criticism rather than justification of knowledge. Criticism, for Kant, meant judging as to the possibilities of knowledge before advancing to knowledge itself (from the Greek kritike (techne), or \"art of judgment\"). The basic task of philosophers, according to this view, is not to establish and demonstrate theories about reality, but rather to subject all theories—including those about philosophy itself—to critical review, and measure their validity by how well they withstand criticism.",
"title": ""
},
{
"paragraph_id": 1,
"text": "\"Critical philosophy\" is also used as another name for Kant's philosophy itself. Kant said that philosophy's proper inquiry is not about what is out there in reality, but rather about the character and foundations of experience itself. We must first judge how human reason works, and within what limits, so that we can afterwards correctly apply it to sense experience and determine whether it can be applied at all to metaphysical objects.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The principal three sources on which the critical philosophy is based are the three critiques, namely Critique of Pure Reason, Critique of Practical Reason and Critique of Judgement, published between 1781 and 1790 and mostly concerned, respectively, with metaphysics, ethics and aesthetics.",
"title": ""
},
{
"paragraph_id": 3,
"text": "",
"title": "References"
}
] | The critical philosophy movement, attributed to Immanuel Kant (1724–1804), sees the primary task of philosophy as criticism rather than justification of knowledge. Criticism, for Kant, meant judging as to the possibilities of knowledge before advancing to knowledge itself. The basic task of philosophers, according to this view, is not to establish and demonstrate theories about reality, but rather to subject all theories—including those about philosophy itself—to critical review, and measure their validity by how well they withstand criticism. "Critical philosophy" is also used as another name for Kant's philosophy itself. Kant said that philosophy's proper inquiry is not about what is out there in reality, but rather about the character and foundations of experience itself. We must first judge how human reason works, and within what limits, so that we can afterwards correctly apply it to sense experience and determine whether it can be applied at all to metaphysical objects. The principal three sources on which the critical philosophy is based are the three critiques, namely Critique of Pure Reason, Critique of Practical Reason and Critique of Judgement, published between 1781 and 1790 and mostly concerned, respectively, with metaphysics, ethics and aesthetics. | 2023-06-15T06:11:21Z | [
"Template:Short description",
"Template:Immanuel Kant",
"Template:Lang-de",
"Template:Authority control",
"Template:Philo-stub"
] | https://en.wikipedia.org/wiki/Critical_philosophy |
5,405 | China | China (Chinese: 中国; pinyin: Zhōngguó), officially the People's Republic of China (PRC), is a country in East Asia. With a population exceeding 1.4 billion, it is the world's second-most-populous country. China spans the equivalent of five time zones and borders fourteen countries by land. With an area of nearly 9.6 million square kilometers (3,700,000 sq mi), it is the third-largest country by total land area. The country is divided into 22 provinces, five autonomous regions, four municipalities, and two semi-autonomous special administrative regions. Beijing is the national capital, while Shanghai is the most populous city and largest financial center.
The region has been inhabited since the Paleolithic era. The earliest Chinese dynastic states, such as the Shang and the Zhou, emerged in the basin of the Yellow River before the late second millennium BCE. The eighth to third centuries BCE saw a breakdown in Zhou authority and significant conflict, as well as the emergence of Classical Chinese literature and philosophy. In 221 BCE, China was unified under an emperor for the first time, ushering in more than two millennia in which China was governed by one or more imperial dynasties, including the Han, Tang, Yuan, Ming, and Qing. Some of China's most notable achievements—such as the invention of gunpowder and paper, the establishment of the Silk Road, and the building of the Great Wall—occurred during this period. The imperial Chinese culture—including languages, traditions, architecture, philosophy and more—has heavily influenced East Asia.
In 1912, the monarchy was overthrown and the Republic of China was established. The Republic saw persistent conflict for most of the mid-20th century, including a civil war between the Kuomintang government and the Chinese Communist Party (CCP), which began in 1927, as well as the Second Sino-Japanese War, which began in 1937 and continued until 1945, drawing China into World War II. The latter conflict led to a temporary halt in the civil war and saw numerous Japanese atrocities, such as the Nanjing Massacre, which continue to influence China–Japan relations. In 1949, the CCP established control over China as the Kuomintang fled to Taiwan. Early communist rule saw two major projects: the Great Leap Forward, which resulted in a sharp economic decline and massive famine; and the Cultural Revolution, a movement to purge all non-communist elements of Chinese society that led to mass violence and persecution. Beginning in 1978, the Chinese government launched economic reforms that moved the country away from a planned economy, but political reforms were cut short by the 1989 Tiananmen Square protests and massacre. Economic reform continued to strengthen the nation's economy in the following decades while raising China's standard of living significantly.
China is a unitary one-party socialist republic led by the CCP. It is one of the five permanent members of the UN Security Council and a founding member of several multilateral and regional organizations such as the Asian Infrastructure Investment Bank, the Silk Road Fund, the New Development Bank, and the RCEP. It is a member of the BRICS, the G20, APEC, the SCO, and the East Asia Summit. China ranks poorly in measures of democracy, transparency, and human rights, including for press freedom, religious freedom, and ethnic equality. Making up around one-fifth of the world economy, China is the world's largest economy by GDP at purchasing power parity, the second-largest economy by nominal GDP, and the second-wealthiest country. The country is one of the fastest-growing major economies and is the world's largest manufacturer and exporter, as well as the second-largest importer, although its economic growth has slowed greatly in the 2020s. China is a nuclear-weapon state with the world's largest standing army by military personnel and the second-largest defense budget.
The word "China" has been used in English since the 16th century; however, it was not used by the Chinese themselves during this period. Its origin has been traced through Portuguese, Malay, and Persian back to the Sanskrit word Cīna, used in ancient India. "China" appears in Richard Eden's 1555 translation of the 1516 journal of the Portuguese explorer Duarte Barbosa. Barbosa's usage was derived from Persian Chīn (چین), which in turn derived from Sanskrit Cīna (चीन). Cīna was first used in early Hindu scripture, including the Mahabharata (5th century BCE) and the Laws of Manu (2nd century BCE). In 1655, Martino Martini suggested that the word China is derived ultimately from the name of the Qin dynasty (221–206 BCE). Although usage in Indian sources precedes this dynasty, this derivation is still given in various sources. The origin of the Sanskrit word is a matter of debate. Alternative suggestions include the names for Yelang and the Jing or Chu state.
The official name of the modern state is the "People's Republic of China" (simplified Chinese: 中华人民共和国; traditional Chinese: 中華人民共和國; pinyin: Zhōnghuá Rénmín Gònghéguó). The shorter form is "China", from Zhōngguó (中国; 中國), a compound of zhōng ("central") and guó ("state"), a term which developed under the Western Zhou dynasty in reference to its royal demesne. It was used in official documents as a synonym for the state under the Qing. The name Zhongguo is also translated as "Middle Kingdom" in English. The PRC is sometimes referred to as the Mainland when it is being distinguished from the ROC.
Archaeological evidence suggests that early hominids inhabited China 2.25 million years ago. The hominid fossils of Peking Man, a Homo erectus who used fire, have been dated to between 680,000 and 780,000 years ago. The fossilized teeth of Homo sapiens (dated to 125,000–80,000 years ago) have been discovered in Fuyan Cave. Chinese proto-writing existed in Jiahu around 6600 BCE, at Damaidi around 6000 BCE, Dadiwan from 5800 to 5400 BCE, and Banpo dating from the 5th millennium BCE. Some scholars have suggested that the Jiahu symbols (7th millennium BCE) constituted the earliest Chinese writing system.
According to Chinese tradition, the first dynasty was the Xia, which emerged around 2100 BCE. The Xia dynasty marked the beginning of China's political system based on hereditary monarchies, or dynasties. The Xia dynasty was considered mythical by historians until scientific excavations found early Bronze Age sites at Erlitou in 1959. It remains unclear whether these sites are the remains of the Xia dynasty or of another culture from the same period. The succeeding Shang dynasty is the earliest to be confirmed by contemporary records. The Shang ruled the plain of the Yellow River in eastern China from the 17th to the 11th century BCE. Their oracle bone script (from c. 1500 BCE) represents the oldest form of Chinese writing yet found and is a direct ancestor of modern Chinese characters.
The Shang was conquered by the Zhou, who ruled between the 11th and 5th centuries BCE, though centralized authority was slowly eroded by feudal warlords. Some principalities eventually emerged from the weakened Zhou and continually waged war with each other during the 300-year Spring and Autumn period. By the time of the Warring States period of the 5th–3rd centuries BCE, there were seven major powerful states left.
The Warring States period ended in 221 BCE after the state of Qin conquered the other six kingdoms, reunited China and established the dominant order of autocracy. King Zheng of Qin proclaimed himself the Emperor of the Qin dynasty, becoming the first emperor of a unified China. He enacted Qin's legalist reforms, notably the forced standardization of Chinese characters, measurements, road widths, and currency. His dynasty also conquered the Yue tribes in Guangxi, Guangdong, and Northern Vietnam. The Qin dynasty lasted only fifteen years, falling soon after the First Emperor's death, as his harsh authoritarian policies led to widespread rebellion.
Following a widespread civil war during which the imperial library was burned, the Han dynasty emerged to rule China between 206 BCE and CE 220, creating a cultural identity among its populace still remembered in the ethnonym of the modern Han Chinese. The Han expanded the empire's territory considerably, with military campaigns reaching Central Asia, Mongolia, Korea, and Yunnan, and the recovery of Guangdong and northern Vietnam from Nanyue. Han involvement in Central Asia and Sogdia helped establish the land route of the Silk Road, replacing the earlier path over the Himalayas to India. Han China gradually became the largest economy of the ancient world. Despite the Han's initial decentralization and the official abandonment of the Qin philosophy of Legalism in favor of Confucianism, Qin's legalist institutions and policies continued to be employed by the Han government and its successors.
After the end of the Han dynasty, a period of strife known as the Three Kingdoms followed, at the end of which Wei was swiftly overthrown by the Jin dynasty. The Jin fell to civil war upon the ascension of a developmentally disabled emperor; the Five Barbarians then rebelled and ruled northern China as the Sixteen States. The Xianbei unified them as the Northern Wei, whose Emperor Xiaowen reversed his predecessors' apartheid policies and enforced a drastic sinification on his subjects. In the south, the general Liu Yu secured the abdication of the Jin in favor of the Liu Song. The various successors of these states became known as the Northern and Southern dynasties, with the two areas finally reunited by the Sui in 581. The Sui restored the Han to power throughout China, reformed its agriculture, economy and imperial examination system, constructed the Grand Canal, and patronized Buddhism. However, they fell quickly when their conscription for public works and a failed war in northern Korea provoked widespread unrest.
Under the succeeding Tang and Song dynasties, Chinese economy, technology, and culture entered a golden age. The Tang dynasty retained control of the Western Regions and the Silk Road, which brought traders to as far as Mesopotamia and the Horn of Africa, and made the capital Chang'an a cosmopolitan urban center. However, it was devastated and weakened by the An Lushan rebellion in the 8th century. In 907, the Tang disintegrated completely when the local military governors became ungovernable. The Song dynasty ended the separatist situation in 960, leading to a balance of power between the Song and the Liao dynasty. The Song was the first government in world history to issue paper money and the first Chinese polity to establish a permanent navy which was supported by the developed shipbuilding industry along with the sea trade.
Between the 10th and 11th century CE, the population of China doubled to around 100 million people, mostly because of the expansion of rice cultivation in central and southern China, and the production of abundant food surpluses. The Song dynasty also saw a revival of Confucianism, in response to the growth of Buddhism during the Tang, and a flourishing of philosophy and the arts, as landscape art and porcelain were brought to new levels of complexity. However, the military weakness of the Song army was observed by the Jin dynasty. In 1127, Emperor Huizong of Song and the capital Bianjing were captured during the Jin–Song Wars. The remnants of the Song retreated to southern China.
The Mongol conquest of China began in 1205 with the gradual conquest of Western Xia by Genghis Khan, who also invaded Jin territories. In 1271, the Mongol leader Kublai Khan established the Yuan dynasty, which conquered the last remnant of the Song dynasty in 1279. Before the Mongol invasion, the population of Song China was 120 million citizens; this was reduced to 60 million by the time of the census in 1300. A peasant named Zhu Yuanzhang overthrew the Yuan in 1368 and founded the Ming dynasty as the Hongwu Emperor. Under the Ming dynasty, China enjoyed another golden age, developing one of the strongest navies in the world and a rich and prosperous economy amid a flourishing of art and culture. It was during this period that admiral Zheng He led the Ming treasure voyages throughout the Indian Ocean, reaching as far as East Africa.
In the early Ming dynasty, China's capital was moved from Nanjing to Beijing. With the budding of capitalism, philosophers such as Wang Yangming critiqued and expanded Neo-Confucianism with concepts of individualism and the equality of the four occupations. The scholar-official stratum became a supporting force of industry and commerce in the tax boycott movements, which, together with famines and the costs of defending against the Japanese invasions of Korea (1592–1598) and Later Jin incursions, led to an exhausted treasury. In 1644, Beijing was captured by a coalition of peasant rebel forces led by Li Zicheng. The Chongzhen Emperor committed suicide when the city fell. The Manchu Qing dynasty, then allied with Ming dynasty general Wu Sangui, overthrew Li's short-lived Shun dynasty and subsequently seized control of Beijing, which became the new capital of the Qing dynasty.
The Qing dynasty, which lasted from 1644 until 1912, was the last imperial dynasty of China. The Ming–Qing transition (1618–1683) cost 25 million lives, but the Qing appeared to have restored China's imperial power and inaugurated another flowering of the arts. After the Southern Ming ended, the further conquest of the Dzungar Khanate added Mongolia, Tibet and Xinjiang to the empire. Meanwhile, China's population growth resumed and soon began to accelerate. It is commonly agreed that pre-modern China's population experienced two growth spurts, one during the Northern Song period (960–1127), and the other during the Qing period (around 1700–1830). By the High Qing era, China was possibly the most commercialized country in the world, and imperial China experienced a second commercial revolution by the end of the 18th century. On the other hand, the centralized autocracy was strengthened, in part to suppress anti-Qing sentiment, through policies that valued agriculture and restrained commerce, such as the Haijin during the early Qing period, and through ideological control as represented by the literary inquisition, causing some social and technological stagnation.
In the mid-19th century, the Opium Wars with Britain and France forced China to pay compensation, open treaty ports, allow extraterritoriality for foreign nationals, and cede Hong Kong to the British under the 1842 Treaty of Nanking, the first of what have been termed as the "unequal treaties". The First Sino-Japanese War (1894–1895) resulted in Qing China's loss of influence in the Korean Peninsula, as well as the cession of Taiwan to Japan. The Qing dynasty also began experiencing internal unrest in which tens of millions of people died, especially in the White Lotus Rebellion, the failed Taiping Rebellion that ravaged southern China in the 1850s and 1860s and the Dungan Revolt (1862–1877) in the northwest. The initial success of the Self-Strengthening Movement of the 1860s was frustrated by a series of military defeats in the 1880s and 1890s.
In the 19th century, the great Chinese diaspora began. Losses from emigration were compounded by conflicts and catastrophes such as the Northern Chinese Famine of 1876–1879, in which between 9 and 13 million people died. The Guangxu Emperor drafted a reform plan in 1898 to establish a modern constitutional monarchy, but these plans were thwarted by the Empress Dowager Cixi. The ill-fated anti-foreign Boxer Rebellion of 1899–1901 further weakened the dynasty. Although Cixi sponsored a program of reforms known as the late Qing reforms, the Xinhai Revolution of 1911–1912 ended the Qing dynasty and established the Republic of China. Puyi, the last Emperor, abdicated in 1912.
On 1 January 1912, the Republic of China was established, and Sun Yat-sen of the Kuomintang (KMT) was proclaimed provisional president. In March 1912, the presidency was given to Yuan Shikai, a former Qing general who in 1915 proclaimed himself Emperor of China. In the face of popular condemnation and opposition from his own Beiyang Army, he was forced to abdicate and re-establish the republic in 1916.
After Yuan Shikai's death in 1916, China was politically fragmented. Its Beijing-based government was internationally recognized but virtually powerless; regional warlords controlled most of its territory. In the late 1920s, the Kuomintang under Chiang Kai-shek was able to reunify the country under its own control with a series of deft military and political maneuverings known collectively as the Northern Expedition. The Kuomintang moved the nation's capital to Nanjing and implemented "political tutelage", an intermediate stage of political development outlined in Sun Yat-sen's Three Principles of the People program for transforming China into a modern democratic state. The Kuomintang briefly allied with the Chinese Communist Party (CCP) during the Northern Expedition, though the alliance broke down in 1927 after Chiang violently suppressed the CCP and other leftists in Shanghai, marking the beginning of the Chinese Civil War. The CCP declared areas of the country as the Chinese Soviet Republic (Jiangxi Soviet) in November 1931 in Ruijin, Jiangxi. The Jiangxi Soviet was wiped out by the KMT armies in 1934, leading the CCP to initiate the Long March and relocate to Yan'an in Shaanxi. Yan'an would remain the communists' base until major combat in the Chinese Civil War ended in 1949.
In 1931, Japan invaded and occupied Manchuria. Japan invaded other parts of China in 1937, precipitating the Second Sino-Japanese War (1937–1945), a theater of World War II. The war forced an uneasy alliance between the Kuomintang and the CCP. Japanese forces committed numerous war atrocities against the civilian population; as many as 20 million Chinese civilians died. An estimated 40,000 to 300,000 Chinese were massacred in Nanjing alone during the Japanese occupation. China, along with the UK, the United States, and the Soviet Union, were recognized as the Allied "Big Four" in the Declaration by United Nations. Along with the other three great powers, China was one of the four major Allies of World War II, and was later considered one of the primary victors in the war. After the surrender of Japan in 1945, Taiwan, including the Penghu, was handed over to Chinese control; however, the validity of this handover is controversial.
China emerged victorious but war-ravaged and financially drained. The continued distrust between the Kuomintang and the Communists led to the resumption of civil war. Constitutional rule was established in 1947, but because of the ongoing unrest, many provisions of the ROC constitution were never implemented in mainland China. Afterwards, the CCP took control of most of mainland China, and the ROC government retreated offshore to Taiwan.
On 1 October 1949, CCP Chairman Mao Zedong formally proclaimed the People's Republic of China in Tiananmen Square, Beijing. In 1950, the PRC captured Hainan from the ROC and annexed Tibet. However, remaining Kuomintang forces continued to wage an insurgency in western China throughout the 1950s. The CCP consolidated its popularity among the peasants through the Land Reform Movement, which included the execution of between 1 and 2 million landlords. Though the PRC initially allied closely with the Soviet Union, the relations between the two communist nations gradually deteriorated, leading China to develop an independent industrial system and its own nuclear weapons.
The Chinese population increased from 550 million in 1950 to 900 million in 1974. However, the Great Leap Forward, an idealistic massive industrialization project, resulted in an estimated 15 to 55 million deaths between 1959 and 1961, mostly from starvation. In 1964, China's first atomic bomb exploded successfully. In 1966, Mao and his allies launched the Cultural Revolution, sparking a decade of political recrimination and social upheaval that lasted until Mao's death in 1976. In October 1971, the PRC replaced the ROC in the United Nations, and took its seat as a permanent member of the Security Council.
After Mao's death, the Gang of Four was quickly arrested by Hua Guofeng and held responsible for the excesses of the Cultural Revolution. Deng Xiaoping took power in 1978 and, together with the "Eight Elders", CCP members who held huge influence during this time, instituted large-scale political and economic reforms. The CCP loosened governmental control over citizens' personal lives, and the communes were gradually disbanded in favor of contracting work to households. The Cultural Revolution was also rebuked, with millions of its victims being rehabilitated. Agricultural collectivization was dismantled and farmlands privatized, while foreign trade became a major new focus, leading to the creation of special economic zones (SEZs). Inefficient state-owned enterprises (SOEs) were restructured and unprofitable ones were closed outright, resulting in job losses. This marked China's transition from a planned economy to a mixed economy with an increasingly open-market environment. China adopted its current constitution on 4 December 1982.
In 1989, the country saw large pro-democracy protests that eventually led to the Tiananmen Square massacre, bringing condemnations and sanctions from various foreign countries, though the effect on external relations was short-lived. Jiang Zemin was selected to replace the reformist Zhao Ziyang as the CCP general secretary; Zhao was put under house arrest for his sympathies toward the protests. Jiang later additionally took the presidency and Central Military Commission chairmanship posts, effectively becoming China's top leader. Jiang continued economic reforms, further closing many SOEs and massively trimming down the "iron rice bowl" (occupations with guaranteed job security). During Jiang's rule, China's economy grew sevenfold. British Hong Kong and Portuguese Macau returned to China in 1997 and 1999, respectively, as special administrative regions under the principle of one country, two systems. The country joined the World Trade Organization in 2001.
Between 2002 and 2003, Hu Jintao succeeded Jiang as paramount leader. Under Hu, China maintained its high rate of economic growth, overtaking the United Kingdom, France, Germany and Japan to become the world's second-largest economy. However, the growth also severely impacted the country's resources and environment, and caused major social displacement. Hu and his premier, Wen Jiabao, also took a relatively more conservative approach towards economic reform, expanding support for SOEs.
Xi Jinping succeeded Hu as paramount leader between 2012 and 2013, while Li Keqiang succeeded Wen Jiabao as premier. Shortly after his ascension to power, Xi launched a vast anti-corruption crackdown that prosecuted more than 2 million officials by 2022. Xi has also pursued changes to China's economy, supporting SOEs and making the eradication of extreme poverty through "targeted poverty alleviation" a key goal. In 2013, Xi launched the Belt and Road Initiative, a global infrastructure investment project. Since 2017, the Chinese government has been engaged in a harsh crackdown in Xinjiang, with an estimated one million people, mostly Uyghurs but also members of other ethnic and religious minorities, detained in internment camps. In 2020, the Standing Committee of the National People's Congress (NPCSC) passed a national security law that gives the Hong Kong government wide-ranging tools to crack down on dissent. From December 2019 to December 2022, the COVID-19 pandemic led the government to enforce strict public health measures intended to completely eradicate the virus, a goal that was eventually abandoned after protests against the policy in 2022. The 2020s saw Chinese economic growth slow significantly due to factors such as a crisis in the country's real estate sector.
China's landscape is vast and diverse, ranging from the Gobi and Taklamakan Deserts in the arid north to the subtropical forests in the wetter south. The Himalaya, Karakoram, Pamir and Tian Shan mountain ranges separate China from much of South and Central Asia. The Yangtze and Yellow Rivers, the third- and sixth-longest in the world, respectively, run from the Tibetan Plateau to the densely populated eastern seaboard. China's coastline along the Pacific Ocean is 14,500 km (9,000 mi) long and is bounded by the Bohai, Yellow, East China and South China seas. China connects through the Kazakh border to the Eurasian Steppe.
The territory of China lies between latitudes 18° and 54° N, and longitudes 73° and 135° E. The geographical center of China is marked by the Center of the Country Monument at 35°50′40.9″N 103°27′7.5″E. China's landscapes vary significantly across its vast territory. In the east, along the shores of the Yellow Sea and the East China Sea, there are extensive and densely populated alluvial plains, while on the edges of the Inner Mongolian plateau in the north, broad grasslands predominate. Southern China is dominated by hills and low mountain ranges, while the central-east hosts the deltas of China's two major rivers, the Yellow River and the Yangtze River. Other major rivers include the Xi, Mekong, Brahmaputra and Amur. To the west sit major mountain ranges, most notably the Himalayas. High plateaus feature among the more arid landscapes of the north, such as the Taklamakan and the Gobi Desert. The world's highest point, Mount Everest (8,848 m), lies on the Sino-Nepalese border. The country's lowest point, and the world's third-lowest, is the dried lake bed of Ayding Lake (−154 m) in the Turpan Depression.
China's climate is mainly dominated by dry seasons and wet monsoons, which lead to pronounced temperature differences between winter and summer. In the winter, northern winds coming from high-latitude areas are cold and dry; in summer, southern winds from coastal areas at lower latitudes are warm and moist.
A major environmental issue in China is the continued expansion of its deserts, particularly the Gobi Desert. Although barrier tree lines planted since the 1970s have reduced the frequency of sandstorms, prolonged drought and poor agricultural practices have resulted in dust storms plaguing northern China each spring, which then spread to other parts of East Asia, including Japan and Korea. China's environmental watchdog, SEPA, stated in 2007 that China is losing 4,000 km² (1,500 sq mi) per year to desertification. Water quality, erosion, and pollution control have become important issues in China's relations with other countries. Melting glaciers in the Himalayas could potentially lead to water shortages for hundreds of millions of people. According to academics, in order to limit climate change in China to 1.5 °C (2.7 °F), electricity generation from coal in China without carbon capture must be phased out by 2045. With current policies, the GHG emissions of China will probably peak in 2025, and by 2030 they will return to 2022 levels. However, such a pathway still leads to a three-degree temperature rise.
Official government statistics about Chinese agricultural productivity are considered unreliable, due to exaggeration of production at subsidiary government levels. Much of China has a climate very suitable for agriculture and the country has been the world's largest producer of rice, wheat, tomatoes, eggplant, grapes, watermelon, spinach, and many other crops. In 2021, 12 percent of global permanent meadows and pastures belonged to China, as well as 8% of global cropland.
China is one of 17 megadiverse countries, lying in two of the world's major biogeographic realms: the Palearctic and the Indomalayan. By one measure, China has over 34,687 species of animals and vascular plants, making it the third-most biodiverse country in the world, after Brazil and Colombia. The country is a party to the Convention on Biological Diversity; its National Biodiversity Strategy and Action Plan was received by the convention in 2010.
China is home to at least 551 species of mammals (the third-highest in the world), 1,221 species of birds (eighth), 424 species of reptiles (seventh) and 333 species of amphibians (seventh). Wildlife in China shares habitat with, and bears acute pressure from, the world's largest population of humans. At least 840 animal species are threatened, vulnerable or in danger of local extinction, due mainly to human activity such as habitat destruction, pollution and poaching for food, fur and traditional Chinese medicine. Endangered wildlife is protected by law, and as of 2005, the country has over 2,349 nature reserves, covering a total area of 149.95 million hectares, 15 percent of China's total land area. Most wild animals have been eliminated from the core agricultural regions of east and central China, but they have fared better in the mountainous south and west. The Baiji was confirmed extinct on 12 December 2006.
China has over 32,000 species of vascular plants, and is home to a variety of forest types. Cold coniferous forests predominate in the north of the country, supporting animal species such as moose and Asian black bear, along with over 120 bird species. The understory of moist conifer forests may contain thickets of bamboo. In higher montane stands of juniper and yew, the bamboo is replaced by rhododendrons. Subtropical forests, which predominate in central and southern China, support a high density of plant species, including numerous rare endemics. Tropical and seasonal rainforests, though confined to Yunnan and Hainan, contain a quarter of all the animal and plant species found in China. China has over 10,000 recorded species of fungi.
Since the early 2000s, China has suffered from environmental deterioration and pollution due to its rapid pace of industrialization. Regulations such as the 1979 Environmental Protection Law are fairly stringent, though they are poorly enforced and frequently disregarded in favor of rapid economic development. China has the second-highest death toll from air pollution, after India, with approximately 1 million deaths. Although China ranks as the highest CO2-emitting country, it emits only 8 tons of CO2 per capita, significantly lower than developed countries such as the United States (16.1), Australia (16.8) and South Korea (13.6). Greenhouse gas emissions by China are the world's largest.
In recent years, China has clamped down on pollution. In March 2014, CCP General Secretary Xi Jinping "declared war" on pollution during the opening of the National People's Congress. In 2020, Xi announced that China aims to peak emissions before 2030 and go carbon-neutral by 2060 in accordance with the Paris Agreement, which, according to Climate Action Tracker, would lower the expected rise in global temperature by 0.2–0.3 degrees – "the biggest single reduction ever estimated by the Climate Action Tracker". In September 2021, Xi Jinping announced that China would not build "coal-fired power projects abroad".
The country has significant water pollution problems; only 84.8% of China's national surface water was graded suitable for human consumption by the Ministry of Ecology and Environment in 2021. In 2020, a sweeping law was passed by the Chinese government to protect the ecology of the Yangtze River. The new laws include strengthening ecological protection rules for hydropower projects, banning chemical plants within 1 kilometer of the river, relocating polluting industries, severely restricting sand mining as well as a complete fishing ban on all the natural waterways of the river, including all its major tributaries and lakes.
China is the world's leading investor in renewable energy and its commercialization, with $546 billion invested in 2022; it is a major manufacturer of renewable energy technologies and invests heavily in local-scale renewable energy projects. In 2022, 61.2% of China's electricity came from coal (largest producer in the world), 14.9% from hydroelectric power (largest), 9.3% from wind (largest), 4.7% from solar energy (largest), 4.7% from nuclear energy (second-largest), 3.1% from natural gas (fifth-largest), and 1.9% from bioenergy (largest); in total, 30.8% of China's electricity came from renewable energy sources. Despite its emphasis on renewables, China remains deeply connected to global oil markets and, together with India, was among the largest importers of Russian crude oil in 2022.
China is the second-largest country in the world by land area after Russia, and the third or fourth largest country in the world by total area. China's total area is generally stated as being approximately 9,600,000 km² (3,700,000 sq mi). Specific area figures range from 9,572,900 km² (3,696,100 sq mi) according to the Encyclopædia Britannica, to 9,596,961 km² (3,705,407 sq mi) according to the UN Demographic Yearbook, and The World Factbook.
China has the longest combined land border in the world, measuring 22,117 km (13,743 mi), and its coastline covers approximately 14,500 km (9,000 mi) from the mouth of the Yalu River (Amnok River) to the Gulf of Tonkin. China borders 14 nations and covers the bulk of East Asia, bordering Vietnam, Laos, and Myanmar in Southeast Asia; India, Bhutan, Nepal, Pakistan and Afghanistan in South Asia; Tajikistan, Kyrgyzstan and Kazakhstan in Central Asia; and Russia, Mongolia, and North Korea in Inner Asia and Northeast Asia. It is narrowly separated from Bangladesh and Thailand to the southwest and south, and has several maritime neighbors such as Japan, the Philippines, Malaysia, and Indonesia.
The People's Republic of China is a one-party state governed by the Marxist–Leninist Chinese Communist Party (CCP). This makes China one of the few countries governed by a communist party. The Chinese constitution states that the PRC "is a socialist state governed by a people's democratic dictatorship that is led by the working class and based on an alliance of workers and peasants," that the state institutions "shall practice the principle of democratic centralism," and that "the defining feature of socialism with Chinese characteristics is the leadership of the Communist Party of China."
The PRC officially terms itself as a democracy, using terms such as "socialist consultative democracy", and "whole-process people's democracy". However, the country is commonly described as an authoritarian one-party state and a dictatorship, with among the heaviest restrictions worldwide in many areas, most notably against freedom of the press, freedom of assembly, reproductive rights, free formation of social organizations, freedom of religion and free access to the Internet. China has consistently been ranked amongst the lowest as an "authoritarian regime" by the Economist Intelligence Unit's Democracy Index, ranking at 156th out of 167 countries in 2022.
According to the CCP constitution, its highest body is the National Congress, held every five years. The National Congress elects the Central Committee, which then elects the party's Politburo, Politburo Standing Committee and the general secretary (party leader), the top leadership of the country. The general secretary holds ultimate power and authority over state and government and serves as the informal paramount leader. The current general secretary is Xi Jinping, who took office on 15 November 2012. At the local level, the secretary of the CCP committee of a subdivision outranks the head of the corresponding local government; the CCP committee secretary of a provincial division outranks the governor, while the CCP committee secretary of a city outranks the mayor. The CCP is officially guided by Marxism adapted to Chinese circumstances.
The government in China is under the sole control of the CCP. The CCP controls appointments in government bodies, with most senior government officials being CCP members.
The National People's Congress (NPC), the nearly 3,000-member legislature, is constitutionally the "highest state organ of power", though it has been also described as a "rubber stamp" body. The NPC meets annually, while the NPC Standing Committee, around 150 members elected from NPC delegates, meets every couple of months. Elections are indirect and not pluralistic, with nominations at all levels being controlled by the CCP. The NPC is dominated by the CCP, with another eight minor parties having nominal representation under the condition of upholding CCP leadership.
The president is the ceremonial state representative, elected by the NPC. The incumbent president is Xi Jinping, who is also the general secretary of the CCP and the chairman of the Central Military Commission, making him China's paramount leader. The premier is the head of government, with Li Qiang being the incumbent. The premier is officially nominated by the president and then elected by the NPC, and has generally been either the second- or third-ranking member of the Politburo Standing Committee (PSC). The premier presides over the State Council, China's cabinet, composed of four vice premiers, state councilors, and the heads of ministries and commissions. The Chinese People's Political Consultative Conference (CPPCC) is a political advisory body that is central to China's "united front" system, which aims to gather non-CCP voices to support the CCP. Similar to the people's congresses, CPPCCs exist at the various levels of administrative division, with the National Committee of the CPPCC chaired by Wang Huning, the fourth-ranking member of the PSC.
The governance of China is characterized by a high degree of political centralization but significant economic decentralization. Policy instruments or processes are often tested locally before being applied more widely, resulting in a policy process that involves experimentation and feedback. Generally, high-level central government leadership refrains from drafting specific policies, instead using informal networks and site visits to affirm or suggest changes to the direction of local policy experiments or pilot programs. The typical approach is that central government leadership begins drafting formal policies, laws, or regulations after policy has been developed at local levels.
The PRC is constitutionally a unitary state divided into 23 provinces, five autonomous regions (each with a designated minority group), and four direct-administered municipalities—collectively referred to as "mainland China"—as well as the special administrative regions (SARs) of Hong Kong and Macau. The PRC considers Taiwan to be its 23rd province, although it is governed by the Republic of China (ROC). Geographically, all 31 provincial divisions of mainland China can be grouped into six regions: North China, Northeast China, East China, South Central China, Southwestern China, and Northwestern China.
The PRC has diplomatic relations with 179 United Nations member states and maintains embassies in 174. Since 2019, China has had the largest diplomatic network in the world. In 1971, the PRC replaced the Republic of China (ROC) as the sole representative of China in the United Nations and as one of the five permanent members of the United Nations Security Council. It is a member of intergovernmental organizations including the G20, the SCO, the East Asia Summit, and APEC. China is a former member and leader of the Non-Aligned Movement, and still considers itself an advocate for developing countries. Along with Brazil, Russia, India and South Africa, China is a member of the BRICS group of emerging major economies and hosted the group's third official summit in April 2011.
The PRC officially maintains the one-China principle, which holds the view that there is only one sovereign state in the name of China, represented by the PRC, and that Taiwan is part of that China. The unique status of Taiwan has led countries that recognize the PRC to maintain "one-China policies" that differ from one another; some countries explicitly recognize the PRC's claim over Taiwan, while others, including the US and Japan, only acknowledge the claim. Chinese officials have protested on numerous occasions when foreign countries have made diplomatic overtures to Taiwan, especially in the matter of armament sales. Most countries have switched recognition from the ROC to the PRC since the latter replaced the former in the United Nations in 1971.
Much of current Chinese foreign policy is reportedly based on Premier Zhou Enlai's Five Principles of Peaceful Coexistence, and is also driven by the concept of "harmony without uniformity", which encourages diplomatic relations between states despite ideological differences. This policy may have led China to support or maintain close ties with states that are regarded as dangerous and repressive by Western nations, such as Sudan, North Korea and Iran. China's close relationship with Myanmar has involved both support for its ruling governments as well as for its ethnic rebel groups, including the Arakan Army. China has a close political, economic and military relationship with Russia, and the two states often vote in unison in the United Nations Security Council. China's relationship with the United States is long and complex, and includes deep trade ties but significant political differences.
Since the early 2000s, China has followed a policy of engaging with African nations for trade and bilateral co-operation. It maintains extensive and highly diversified trade links with the European Union, and has become its largest trading partner for goods. China has strong trade ties with ASEAN countries and major South American economies, and is the largest trading partner of Brazil, Chile, Peru, Uruguay, Argentina, and several others.
In 2013, China initiated the Belt and Road Initiative (BRI), a large global infrastructure building initiative with funding on the order of $50–100 billion per year. BRI could be one of the largest development plans in modern history. It has expanded significantly since its launch and, as of April 2020, included 138 countries and 30 international organizations. In addition to intensifying foreign policy relations, the focus is particularly on building efficient transport routes, especially the maritime Silk Road with its connections to East Africa and Europe. However, many loans made under the program are unsustainable, and China has faced a number of calls for debt relief from debtor nations.
Ever since its establishment, the PRC has claimed the territories governed by the Republic of China (ROC), a separate political entity today commonly known as Taiwan, as a part of its territory. It regards the island of Taiwan as its Taiwan Province, Kinmen and Matsu as a part of Fujian Province and islands the ROC controls in the South China Sea as a part of Hainan Province and Guangdong Province. These claims are controversial because of the complicated Cross-Strait relations.
China has resolved its land borders with 12 out of 14 neighboring countries, having pursued substantial compromises in most of them. China currently has disputed land borders with India and Bhutan. China is additionally involved in maritime disputes with multiple countries over the ownership of islands in the East and South China Seas, such as the Senkaku Islands and the entirety of the South China Sea Islands, along with EEZ disputes in the East China Sea.
The situation of human rights in China has attracted significant criticism from foreign governments, foreign press agencies, and non-governmental organizations, alleging widespread civil rights violations such as detention without trial, forced confessions, torture, restrictions of fundamental rights, and excessive use of the death penalty. Since its inception, Freedom House has ranked China as "not free" in its Freedom in the World survey, while Amnesty International has documented significant human rights abuses. The Chinese constitution states that the "fundamental rights" of citizens include freedom of speech, freedom of the press, the right to a fair trial, freedom of religion, universal suffrage, and property rights. However, in practice, these provisions do not afford significant protection against criminal prosecution by the state. China has limited protections regarding LGBT rights.
Although some criticisms of government policies and the ruling CCP are tolerated, censorship of political speech and information are amongst the harshest in the world and routinely used to prevent collective action. China also has the most comprehensive and sophisticated Internet censorship regime in the world, with numerous websites being blocked. The government suppresses popular protests and demonstrations that it considers a potential threat to "social stability". China additionally uses a massive espionage network of cameras, facial recognition software, sensors, and surveillance of personal technology as a means of social control of persons living in the country.
China is regularly accused of large-scale repression and human rights abuses in Tibet and Xinjiang, where significant numbers of ethnic minorities reside, including violent police crackdowns and religious suppression. In Xinjiang, repression has significantly escalated since 2016, after which at least one million Uyghurs and members of other ethnic and religious minorities have been detained in internment camps aimed at changing the political thinking of detainees, their identities, and their religious beliefs. According to Western reports, political indoctrination, torture, physical and psychological abuse, forced sterilization, sexual abuse, and forced labor are common in these facilities. According to a 2020 report, China's treatment of Uyghurs meets the UN definition of genocide, while a separate UN Human Rights Office report said the abuses could potentially meet the definition of crimes against humanity.
Global studies from Pew Research Center in 2014 and 2017 ranked the Chinese government's restrictions on religion as among the highest in the world, despite low to moderate rankings for religious-related social hostilities in the country. The Global Slavery Index estimated that in 2016 more than 3.8 million people (0.25% of the population) were living in "conditions of modern slavery", including victims of human trafficking, forced labor, forced marriage, child labor, and state-imposed forced labor. The state-imposed re-education through labor (laojiao) system was formally abolished in 2013, but it is not clear to what extent its practices have stopped. The much larger reform through labor (laogai) system includes labor prison factories, detention centers, and re-education camps; the Laogai Research Foundation has estimated in June 2008 that there were nearly 1,422 of these facilities, though it cautioned that this number was likely an underestimate.
Political concerns in China include the growing gap between rich and poor and government corruption. Nonetheless, international surveys show the Chinese public have a high level of satisfaction with their government. These views are generally attributed to the material comforts and security available to large segments of the Chinese populace as well as the government's attentiveness and responsiveness. According to the World Values Survey (2022), 91% of Chinese respondents have significant confidence in their government. A Harvard University survey published in July 2020 found that citizen satisfaction with the government had increased since 2003, also rating China's government as more effective and capable than ever in the survey's history.
The People's Liberation Army (PLA) is considered one of the world's most powerful militaries and has rapidly modernized in the recent decades. It consists of the Ground Force (PLAGF), the Navy (PLAN), the Air Force (PLAAF), the Rocket Force (PLARF) and the Strategic Support Force (PLASSF). Its nearly 2.2 million active duty personnel is the largest in the world. The PLA holds the world's third-largest stockpile of nuclear weapons, and the world's second-largest navy by tonnage. China's official military budget for 2022 totalled US$230 billion (1.45 trillion Yuan), the second-largest in the world, though SIPRI estimates that its real expenditure that year was US$292 billion. According to SIPRI, its military spending from 2012 to 2021 averaged US$215 billion per year or 1.7 per cent of GDP, behind only the United States at US$734 billion per year or 3.6 per cent of GDP. The PLA is commanded by the Central Military Commission (CMC) of the party and the state; though officially two separate organizations, the two CMCs have identical membership except during leadership transition periods and effectively function as one organization. The chairman of the CMC is the commander-in-chief of the PLA.
China has the world's second-largest economy in terms of nominal GDP, and the world's largest in terms of purchasing power parity (PPP). As of 2022, China accounts for around 18% of the global economy by nominal GDP. China is one of the world's fastest-growing major economies, with its economic growth having been almost consistently above 6 percent since the introduction of economic reforms in 1978. According to the World Bank, China's GDP grew from $150 billion in 1978 to $17.96 trillion by 2022. It ranks 64th in nominal GDP per capita, making it an upper-middle-income country. Of the world's 500 largest companies, 142 are headquartered in China.
China was one of the world's foremost economic powers throughout the arc of East Asian and global history. The country had one of the largest economies in the world for most of the past two millennia, during which it has seen cycles of prosperity and decline. Since economic reforms began in 1978, China has developed into a highly diversified economy and one of the most consequential players in international trade. Major sectors of competitive strength include manufacturing, retail, mining, steel, textiles, automobiles, energy generation, green energy, banking, electronics, telecommunications, real estate, e-commerce, and tourism. China has three out of the ten largest stock exchanges in the world—Shanghai, Hong Kong and Shenzhen—that together have a market capitalization of over $15.9 trillion, as of October 2020. China has four (Shanghai, Hong Kong, Beijing, and Shenzhen) out of the world's top ten most competitive financial centers, which is more than any other country in the 2020 Global Financial Centres Index.
Modern-day China is often described as an example of state capitalism or party-state capitalism. The state dominates in strategic "pillar" sectors such as energy production and heavy industries, but private enterprise has expanded enormously, with around 30 million private businesses recorded in 2008. According to official statistics, privately owned companies constitute more than 60% of China's GDP.
China has been the world's largest manufacturing nation since 2010, after overtaking the US, which had been the largest for the previous hundred years. China has also been the world's second-largest high-tech manufacturer since 2012, according to the US National Science Foundation. China is the second-largest retail market after the United States. China leads the world in e-commerce, accounting for over 37% of the global market share in 2021. China is the world's leader in electric vehicle consumption and production, manufacturing and buying half of all the plug-in electric cars (BEV and PHEV) in the world as of 2022. China is also the leading producer of batteries for electric vehicles as well as several key raw materials for batteries. Although China has long relied heavily on non-renewable energy sources such as coal, its adoption of renewable energy has increased significantly in recent years, with renewables' share rising from 26.3 percent in 2016 to 31.9 percent in 2022.
China accounted for 17.9% of the world's total wealth in 2021, second highest in the world after the US. China brought more people out of extreme poverty than any other country in history—between 1978 and 2018, China reduced extreme poverty by 800 million. From 1990 to 2018, the proportion of the Chinese population living with an income of less than $1.90 per day (2011 PPP) decreased from 66.3% to 0.3%, the share living with an income of less than $3.20 per day from 90.0% to 2.9%, and the share living with an income of less than $5.50 per day decreased from 98.3% to 17.0%.
From 1978 to 2018, the average standard of living multiplied by a factor of twenty-six. Wages in China have grown significantly in the last 40 years—real (inflation-adjusted) wages grew seven-fold from 1978 to 2007. Per capita incomes have also risen significantly – when the PRC was founded in 1949, per capita income in China was one-fifth of the world average; per capita incomes now roughly equal the world average. China's development is highly uneven. Its major cities and coastal areas are far more prosperous compared to rural and interior regions. It has a high level of economic inequality, which increased quickly after the economic reforms began, though it has decreased significantly in the 2010s. In 2020, China's Gini coefficient was 0.371, according to the World Bank.
As of April 2023, China was second in the world, after the US, in total number of billionaires and total number of millionaires, with 495 Chinese billionaires and 6.2 million millionaires. In 2019, China overtook the US as the home to the highest number of people who have a net personal wealth of at least $110,000, according to the global wealth report by Credit Suisse. China had 85 female billionaires as of January 2021, two-thirds of the global total. China has had the world's largest middle-class population since 2015; the middle-class grew to 400 million by 2018.
China has been a member of the WTO since 2001 and is the world's largest trading power. By 2016, China was the largest trading partner of 124 countries. China became the world's largest trading nation in 2013 by the sum of imports and exports, as well as the world's largest commodity importer, comprising roughly 45% of the maritime dry-bulk market.
China's foreign exchange reserves reached US$3.128 trillion as of December 2022, making its reserves by far the world's largest. In 2022, China was among the world's largest recipients of inward foreign direct investment (FDI), attracting $180 billion, though much of this was speculated to come from Hong Kong. In 2021, China received US$53 billion in foreign exchange remittances, making it the second-largest recipient of remittances in the world. China also invests abroad, with a total outward FDI of $146.5 billion in 2022, and a number of major takeovers of foreign firms by Chinese companies.
Economists have argued that the renminbi is undervalued, due to currency intervention from the Chinese government, giving China an unfair trade advantage. China has also been widely criticized for manufacturing large quantities of counterfeit goods. The US government has also alleged that China does not respect intellectual property (IP) rights and steals IP through espionage operations. In 2020, Harvard University's Economic Complexity Index ranked complexity of China's exports 17th in the world, up from 24th in 2010.
The Chinese government has promoted the internationalization of the renminbi in order to reduce its dependence on the U.S. dollar, in response to perceived weaknesses of the international monetary system. The renminbi is a component of the IMF's special drawing rights and the world's fifth-most traded currency as of 2022. However, partly due to capital controls that make the renminbi fall short of being a fully convertible currency, it remains far behind the Euro, the U.S. Dollar and the Japanese Yen in international trade volumes.
China was a world leader in science and technology until the Ming dynasty. Ancient and medieval Chinese discoveries and inventions, such as papermaking, printing, the compass, and gunpowder (the Four Great Inventions), became widespread across East Asia, the Middle East and later Europe. Chinese mathematicians were the first to use negative numbers. By the 17th century, the Western World surpassed China in scientific and technological advancement. The causes of this early modern Great Divergence continue to be debated by scholars.
After repeated military defeats by the European colonial powers and Imperial Japan in the 19th century, Chinese reformers began promoting modern science and technology as part of the Self-Strengthening Movement. After the Communists came to power in 1949, efforts were made to organize science and technology based on the model of the Soviet Union, in which scientific research was part of central planning. After Mao's death in 1976, science and technology were promoted as one of the Four Modernizations, and the Soviet-inspired academic system was gradually reformed.
Since the end of the Cultural Revolution, China has made significant investments in scientific research and is quickly catching up with the US in R&D spending. China officially spent around 2.4% of its GDP on R&D in 2020, totaling around $377.8 billion. According to the World Intellectual Property Indicators, China received more patent applications than the US did in 2018 and 2019 and ranked first globally in patents, utility models, trademarks, industrial designs, and creative goods exports in 2021. It was ranked 12th in the Global Innovation Index in 2023, a considerable improvement from its rank of 35th in 2013. Chinese supercomputers have been ranked the fastest in the world on a few occasions; however, these supercomputers rely on critical components, namely processors, designed in foreign countries. China has also struggled with developing several technologies domestically, such as the most advanced semiconductors and reliable jet engines.
China is developing its education system with an emphasis on science, technology, engineering, and mathematics (STEM). It became the world's largest publisher of scientific papers in 2016.
The Chinese space program started in 1958 with some technology transfers from the Soviet Union. However, it did not launch the nation's first satellite, Dong Fang Hong I, until 1970, making China the fifth country to launch a satellite independently.
In 2003, China became the third country in the world to independently send humans into space, with Yang Liwei's spaceflight aboard Shenzhou 5. As of 2023, eighteen Chinese nationals have journeyed into space, including two women. In 2011, China launched its first space station testbed, Tiangong-1. In 2013, the Chinese robotic rover Yutu successfully touched down on the lunar surface as part of the Chang'e 3 mission.
In 2019, China became the first country to land a probe, Chang'e 4, on the far side of the Moon. In 2020, Chang'e 5 successfully returned Moon samples to Earth, making China the third country to do so independently. In 2021, China became the third country to land a spacecraft on Mars and the second to deploy a rover, Zhurong, on the planet. China completed its own modular space station, the Tiangong, in low Earth orbit on 3 November 2022. On 29 November 2022, China performed its first in-orbit crew handover aboard the Tiangong.
In May 2023, China announced a plan to land humans on the Moon by 2030. To that end, China is currently developing a lunar-capable super-heavy launcher, the Long March 10, a new crewed spacecraft, and a crewed lunar lander.
After a decades-long infrastructure boom, China has completed numerous world-leading infrastructure projects: it has the largest high-speed rail network, the most supertall skyscrapers, the largest power plant (the Three Gorges Dam), and a global satellite navigation system with the largest number of satellites.
China is the largest telecom market in the world and has the largest number of active cellphones of any country, with over 1.7 billion subscribers as of February 2023. It also has the largest number of internet and broadband users, with over 1.05 billion internet users as of 2021, equivalent to around 73.7% of its population. By 2018, China had more than 1 billion 4G users, accounting for 40% of the world's total. China is making rapid advances in 5G; by late 2018, it had started large-scale commercial 5G trials. As of March 2022, China had over 500 million 5G users and 1.45 million base stations installed.
China Mobile, China Unicom and China Telecom are the three largest providers of mobile and internet services in China. As of 2018, China Telecom alone served more than 145 million broadband subscribers and 300 million mobile users; China Unicom had about 300 million subscribers; and China Mobile, the largest of the three, had 925 million users. Combined, the three operators had over 3.4 million 4G base stations in China. Several Chinese telecommunications companies, most notably Huawei and ZTE, have been accused of spying for the Chinese military.
China has developed its own satellite navigation system, BeiDou, which began offering commercial navigation services across Asia in 2012 and global services by the end of 2018. BeiDou followed GPS and GLONASS as the third completed global navigation satellite system.
Since the late 1990s, China's national road network has been significantly expanded through the creation of a network of national highways and expressways. By 2018, China's highways had reached a total length of 161,000 km (100,000 mi), making it the longest highway system in the world. China has the world's largest market for automobiles, having surpassed the United States in both auto sales and production, and it has been the world's largest exporter of cars as of 2023. A side effect of the rapid growth of China's road network has been a significant rise in traffic accidents. In urban areas, bicycles remain a common mode of transport despite the increasing prevalence of automobiles; as of 2012, there were approximately 470 million bicycles in China.
China's railways, which are operated by the state-owned China State Railway Group Company, are among the busiest in the world, handling a quarter of the world's rail traffic volume on only 6 percent of the world's tracks in 2006. As of 2021, the country had 150,000 km (93,206 mi) of railways, the second-longest network in the world. The railways strain to meet enormous demand, particularly during the Chinese New Year holiday, when the world's largest annual human migration takes place. Construction of China's high-speed rail (HSR) system began in the early 2000s. By the end of 2022, high-speed rail in China had reached 42,000 kilometers (26,098 miles) of dedicated lines alone, making it the longest HSR network in the world. Services on the Beijing–Shanghai, Beijing–Tianjin, and Chengdu–Chongqing lines reach up to 350 km/h (217 mph), making them the fastest conventional high-speed rail services in the world. With an annual ridership of over 2.3 billion passengers in 2019, it is the world's busiest. The network includes the Beijing–Guangzhou high-speed railway, the single longest HSR line in the world, and the Beijing–Shanghai high-speed railway, which has three of the longest railroad bridges in the world. The Shanghai maglev train, which reaches 431 km/h (268 mph), is the fastest commercial train service in the world. Since 2000, the growth of rapid transit systems in Chinese cities has accelerated; as of January 2021, 44 Chinese cities had urban mass transit systems in operation. As of 2020, the five longest metro systems in the world were all in China, namely the networks of Shanghai, Beijing, Guangzhou, Chengdu and Shenzhen.
The civil aviation industry in China is largely state-dominated, with the Chinese government retaining a majority stake in most Chinese airlines. The top three airlines in China, which collectively made up 71% of the market in 2018, are all state-owned. Air travel has expanded rapidly in recent decades, with the number of passengers increasing from 16.6 million in 1990 to 551.2 million in 2017. China had approximately 241 airports in 2021.
China has over 2,000 river and sea ports, about 130 of which are open to foreign shipping. Of the world's 50 busiest container ports, 15 are located in China; the busiest of these, the Port of Shanghai, is also the busiest port in the world. The country's inland waterways, totaling 27,700 km (17,212 mi), are the world's sixth-longest.
Water supply and sanitation infrastructure in China faces challenges such as rapid urbanization, as well as water scarcity, contamination, and pollution. According to the Joint Monitoring Program for Water Supply and Sanitation, in 2015 about 36% of the rural population in China still did not have access to improved sanitation. The ongoing South–North Water Transfer Project is intended to alleviate water shortages in the north.
The 2020 Chinese census recorded the population as approximately 1,411,778,724. About 17.95% were 14 years old or younger, 63.35% were between 15 and 59 years old, and 18.7% were 60 or older. Between 2010 and 2020, the average annual population growth rate was 0.53%.
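As a quick illustrative check of these figures, the Python sketch below verifies that the reported age brackets sum to 100% and back-calculates the 2010 population implied by the 0.53% average annual growth rate, assuming simple compound growth over the decade between the two censuses (an assumption, not a census methodology).

```python
# Illustrative arithmetic check of the 2020 census figures quoted above.
# Assumption: the 0.53% annual growth rate is compounded over ten years.

population_2020 = 1_411_778_724
age_shares = {"0-14": 17.95, "15-59": 63.35, "60+": 18.70}  # percent

# The three age brackets should account for the whole population.
assert abs(sum(age_shares.values()) - 100.0) < 0.01

# Back-calculate the 2010 population implied by compound annual growth.
annual_growth = 0.0053
implied_2010 = population_2020 / (1 + annual_growth) ** 10
print(f"Implied 2010 population: {implied_2010:,.0f}")  # roughly 1.34 billion
```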
Given concerns about population growth, China implemented a two-child limit during the 1970s and, in 1979, began to advocate an even stricter limit of one child per family. Beginning in the mid-1980s, however, given the unpopularity of the strict limits, China began to allow some major exemptions, particularly in rural areas, resulting in what was effectively a "1.5"-child policy from the mid-1980s to 2015; ethnic minorities were also exempt from one-child limits. The next major loosening of the policy was enacted in December 2013, allowing families to have two children if one parent is an only child. In 2016, the one-child policy was replaced by a two-child policy. A three-child policy was announced on 31 May 2021 in response to population aging, and in July 2021 all family size limits, as well as penalties for exceeding them, were removed. According to the 2020 census, China's total fertility rate is 1.3. In 2023, the total fertility rate was estimated to be around 1.09, among the lowest in the world. In 2023, the National Bureau of Statistics estimated that the population fell by 850,000 from 2021 to 2022, the first decline since 1961.
According to one group of scholars, one-child limits had little effect on population growth or total population size, though these findings have been challenged. The policy, along with a traditional preference for boys, may have contributed to an imbalance in the sex ratio at birth. The 2020 census found that males accounted for 51.2% of the total population. However, China's sex ratio is more balanced than it was in 1953, when males accounted for 51.8% of the population.
China legally recognizes 56 distinct ethnic groups, which together comprise the Zhonghua minzu. The largest of these groups is the Han Chinese, who constitute more than 91% of the total population. The Han Chinese, the world's largest single ethnic group, outnumber other ethnic groups in every provincial-level division except Tibet and Xinjiang. Ethnic minorities account for less than 10% of the population of China, according to the 2020 census. Compared with the 2010 census, the Han population increased by 60,378,693 persons, or 4.93%, while the population of the 55 national minorities combined increased by 11,675,179 persons, or 10.26%. The 2020 census recorded a total of 845,697 foreign nationals living in mainland China.
There are as many as 292 living languages in China. The languages most commonly spoken belong to the Sinitic branch of the Sino-Tibetan language family, which contains Mandarin (spoken by 80% of the population), and other varieties of Chinese language: Yue (including Cantonese and Taishanese), Wu (including Shanghainese and Suzhounese), Min (including Fuzhounese, Hokkien and Teochew), Xiang, Gan and Hakka. Languages of the Tibeto-Burman branch, including Tibetan, Qiang, Naxi and Yi, are spoken across the Tibetan and Yunnan–Guizhou Plateau. Other ethnic minority languages in southwestern China include Zhuang, Thai, Dong and Sui of the Tai-Kadai family, Miao and Yao of the Hmong–Mien family, and Wa of the Austroasiatic family. Across northeastern and northwestern China, local ethnic groups speak Altaic languages including Manchu, Mongolian and several Turkic languages: Uyghur, Kazakh, Kyrgyz, Salar and Western Yugur. Korean is spoken natively along the border with North Korea. Sarikoli, the language of Tajiks in western Xinjiang, is an Indo-European language. Taiwanese indigenous peoples, including a small population on the mainland, speak Austronesian languages.
Standard Mandarin, a variety of Mandarin based on the Beijing dialect, is the official national language and is used as a lingua franca between people of different linguistic backgrounds. Mongolian, Uyghur, Tibetan, Zhuang and various other languages are also regionally recognized.
China has urbanized significantly in recent decades. The share of the country's population living in urban areas increased from 20% in 1980 to over 64% in 2021. China has over 160 cities with a population of over one million, including, as of 2021, the 17 megacities (cities with a population of over 10 million) of Chongqing, Shanghai, Beijing, Chengdu, Guangzhou, Shenzhen, Tianjin, Xi'an, Suzhou, Zhengzhou, Wuhan, Hangzhou, Linyi, Shijiazhuang, Dongguan, Qingdao and Changsha. The permanent populations of Chongqing, Shanghai, Beijing and Chengdu each exceed 20 million. Shanghai is China's most populous urban area, while Chongqing is its largest city proper and the only city in China with a permanent population of over 30 million. Urban population figures are only estimates of the populations within administrative city limits; a different ranking exists for total municipal populations, and the large "floating populations" of migrant workers make conducting censuses in urban areas difficult.
Compulsory education in China comprises primary and junior secondary school, which together last for nine years, from ages 6 to 15. The Gaokao, China's national university entrance exam, is a prerequisite for entrance into most higher education institutions. Vocational education is available to students at the secondary and tertiary levels, and more than 10 million Chinese students graduate from vocational colleges every year. In 2022, about 91.6 percent of students continued their education at a three-year senior secondary school, while 59.6 percent of secondary school graduates were enrolled in higher education.
China has the largest education system in the world, with about 282 million students and 17.32 million full-time teachers in over 530,000 schools. Annual education investment went from less than US$50 billion in 2003 to more than US$817 billion in 2020. However, there remains an inequality in education spending: in 2010, the annual education expenditure per secondary school student was ¥20,023 in Beijing but only ¥3,204 in Guizhou, one of the poorest provinces. China's literacy rate has grown dramatically, from only 20% in 1949 and 65.5% in 1979 to 97% of the population over age 15 in 2020.
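To make the scale of that regional disparity concrete, the per-student figures quoted above imply a gap of roughly six times. The short Python snippet below is purely illustrative arithmetic on those two figures.

```python
# Purely illustrative: per-student secondary school spending in 2010 (yuan),
# using the Beijing and Guizhou figures quoted above.
beijing_per_student = 20_023
guizhou_per_student = 3_204

ratio = beijing_per_student / guizhou_per_student
print(f"Beijing spent about {ratio:.1f} times as much per student as Guizhou")
# -> Beijing spent about 6.2 times as much per student as Guizhou
```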
As of 2021, China has over 3,000 universities, with over 44.3 million students enrolled in mainland China, and 240 million Chinese citizens have received higher education, giving China the largest higher education system in the world. As of 2023, China had the world's highest number of top universities. China trails only the United States and the United Kingdom in representation on lists of the top 200 universities, according to the 2023 Aggregate Ranking of Top Universities, a composite of three of the world's most-followed university rankings (ARWU, QS and THE). China is home to two of the highest-ranking universities in Asia and among emerging economies, Tsinghua University and Peking University, according to the Times Higher Education World University Rankings and the QS World University Rankings. These universities are members of the C9 League, an alliance of elite Chinese universities offering comprehensive and leading education.
The National Health Commission, together with its counterparts in the local commissions, oversees the health needs of the population. An emphasis on public health and preventive medicine has characterized Chinese health policy since the early 1950s. The Communist Party started the Patriotic Health Campaign, which was aimed at improving sanitation and hygiene, as well as treating and preventing several diseases. Diseases such as cholera, typhoid and scarlet fever, which were previously rife in China, were nearly eradicated by the campaign.
After Deng Xiaoping began instituting economic reforms in 1978, the health of the Chinese public improved rapidly because of better nutrition, although many of the free public health services provided in the countryside disappeared. Healthcare in China became mostly privatized and experienced a significant rise in quality. In 2009, the government began a three-year large-scale healthcare provision initiative worth US$124 billion; by 2011, the campaign had resulted in 95% of China's population having basic health insurance coverage. China has also established itself as a key producer and exporter of pharmaceuticals, producing around 40 percent of the world's active pharmaceutical ingredients in 2017.
As of 2021, the life expectancy at birth is 78 years, and the infant mortality rate is 5 per thousand. Both have improved significantly since the 1950s. Rates of stunting, a condition caused by malnutrition, have declined from 33.1% in 1990 to 9.9% in 2010. Despite significant improvements in health and the construction of advanced medical facilities, China has several emerging public health problems, such as respiratory illnesses caused by widespread air pollution, hundreds of millions of cigarette smokers, and an increase in obesity among urban youths. In 2010, air pollution caused 1.2 million premature deaths in China. China's large population and densely populated cities have led to serious disease outbreaks, such as SARS in 2003, although this has since been largely contained. The COVID-19 pandemic was first identified in Wuhan in December 2019.
The government of the People's Republic of China and the Chinese Communist Party both officially espouse state atheism, and have conducted antireligious campaigns to this end. Religious affairs and issues in the country are overseen by the CCP's United Front Work Department. Freedom of religion is guaranteed by China's constitution, although religious organizations that lack official approval can be subject to state persecution.
Chinese civilization has been influenced by various religious movements. The "three teachings", including Confucianism, Taoism, and Buddhism (Chinese Buddhism), have historically played a significant role in shaping Chinese culture, enriching a theological and spiritual framework that harks back to the early Shang and Zhou dynasties. Chinese popular or folk religion, which is framed by the three teachings and other traditions, consists of allegiance to the shen (神), a character that signifies the "energies of generation"; these can be deities of the environment, ancestral principles of human groups, concepts of civility, or culture heroes, many of whom feature in Chinese mythology and history. Among the most popular cults are those of Mazu (goddess of the seas), Huangdi (one of the two divine patriarchs of the Chinese race), Guandi (god of war and business), Caishen (god of prosperity and richness), Pangu and many others. China is home to many of the world's tallest religious statues, including the tallest of all, the Spring Temple Buddha in Henan.
Clear data on religious affiliation is difficult to gather due to varying definitions of "religion" and the unorganized, diffuse nature of Chinese religious traditions. Scholars note that in China there is no clear boundary between the three teachings and local folk religious practice. A 2015 poll conducted by Gallup International found that 61% of Chinese people self-identified as "convinced atheist", though Chinese religions or some of their strands are definable as non-theistic and humanistic, since they do not hold that divine creativity is wholly transcendent but rather regard it as inherent in the world, and in particular in human beings. According to a 2014 study, approximately 74% are either non-religious or practice Chinese folk belief, 16% are Buddhists, 2% are Christians, 1% are Muslims, and 8% adhere to other religions, including Taoism and folk salvationist movements. There are also various ethnic minority groups in China who maintain their indigenous religions. Significant faiths specifically connected to certain ethnic groups include Tibetan Buddhism and the Islamic religion of the Hui, Uyghur, Kazakh and Kyrgyz peoples in Northwest China. China had a total of 39,000 mosques in 2014, with 63% located in Xinjiang, 12% in Gansu, 11% in Ningxia, 3% in Qinghai and the rest in other parts of the country.
A 2021 poll from Ipsos had 35% of Chinese people saying there was tension between different religious groups, which was the second lowest percentage of the 28 countries surveyed.
Since ancient times, Chinese culture has been heavily influenced by Confucianism. Chinese culture, in turn, has heavily influenced East Asia and Southeast Asia. For much of the country's dynastic era, opportunities for social advancement could be provided by high performance in the prestigious imperial examinations, which have their origins in the Han dynasty. The literary emphasis of the exams affected the general perception of cultural refinement in China, such as the belief that calligraphy, poetry and painting were higher forms of art than dancing or drama. Chinese culture has long emphasized a sense of deep history and a largely inward-looking national perspective. Examinations and a culture of merit remain greatly valued in China today.
Today, the Chinese government has accepted numerous elements of traditional Chinese culture as being integral to Chinese society. With the rise of Chinese nationalism and the end of the Cultural Revolution, various forms of traditional Chinese art, literature, music, film, fashion and architecture have seen a vigorous revival, and folk and variety art in particular have sparked interest nationally and even worldwide. Access to foreign media remains heavily restricted.
China received 65.7 million international visitors in 2019, and in 2018 was the fourth-most-visited country in the world. It also experiences an enormous volume of domestic tourism; Chinese tourists made an estimated 6 billion domestic trips in 2019. China hosts the world's second-largest number of World Heritage Sites (56) after Italy, and is one of the most popular tourist destinations (first in the Asia-Pacific).
Chinese literature is based on the literature of the Zhou dynasty. Concepts covered within the Chinese classic texts present a wide range of thoughts and subjects, including the calendar, military affairs, astrology, herbology, geography and many others. Some of the most important early texts include the I Ching and the Shujing within the Four Books and Five Classics, which served as the authoritative Confucian books for the state-sponsored curriculum in the dynastic era. Inheriting from the Classic of Poetry, classical Chinese poetry reached its height during the Tang dynasty, when Li Bai and Du Fu opened new paths for poetry through romanticism and realism respectively. Chinese historiography began with the Shiji; the overall scope of the historiographical tradition in China is termed the Twenty-Four Histories, which set a vast stage for Chinese fiction along with Chinese mythology and folklore. Driven by a burgeoning urban class in the Ming dynasty, Chinese classical fiction boomed in historical, urban, and gods-and-demons genres, as represented by the Four Great Classical Novels: Water Margin, Romance of the Three Kingdoms, Journey to the West and Dream of the Red Chamber. Along with the wuxia fiction of Jin Yong and Liang Yusheng, classical fiction remains an enduring source of popular culture in the Chinese sphere of influence.
In the wake of the New Culture Movement after the end of the Qing dynasty, Chinese literature embarked on a new era with written vernacular Chinese for ordinary citizens. Hu Shih and Lu Xun were pioneers in modern literature. Various literary genres, such as misty poetry, scar literature, young adult fiction and the xungen literature, which is influenced by magic realism, emerged following the Cultural Revolution. Mo Yan, a xungen literature author, was awarded the Nobel Prize in Literature in 2012.
Chinese cuisine is highly diverse, drawing on several millennia of culinary history and geographical variety, in which the most influential are known as the "Eight Major Cuisines", including Sichuan, Cantonese, Jiangsu, Shandong, Fujian, Hunan, Anhui, and Zhejiang cuisines. Chinese cuisine is known for its breadth of cooking methods and ingredients. China's staple food is rice in the south and wheat-based breads and noodles in the north. Bean products such as tofu and soy milk remain a popular source of protein. Pork is now the most popular meat in China, accounting for about three-fourths of the country's total meat consumption. There is also the vegetarian Buddhist cuisine and the pork-free Chinese Islamic cuisine. Southern cuisine, due to the area's proximity to the ocean and milder climate, has a wide variety of seafood and vegetables. Offshoots of Chinese food, such as Hong Kong cuisine and American Chinese cuisine, have emerged in the Chinese diaspora.
Chinese architecture has developed over millennia and has long been a major source of influence on the development of East Asian architecture, including in Japan, Korea, and Mongolia, with lesser influence on the architecture of Southeast and South Asia, including Malaysia, Singapore, Indonesia, Sri Lanka, Thailand, Laos, Cambodia, Vietnam and the Philippines.
Chinese architecture is characterized by bilateral symmetry, use of enclosed open spaces, feng shui (e.g. directional hierarchies), a horizontal emphasis, and an allusion to various cosmological, mythological or in general symbolic elements. Chinese architecture traditionally classifies structures according to type, ranging from pagodas to palaces.
Chinese architecture varies widely based on status or affiliation, such as whether the structures were constructed for emperors, commoners, or for religious purposes. Other variations in Chinese architecture are shown in vernacular styles associated with different geographic regions and different ethnic heritages, such as the stilt houses in the south, the Yaodong buildings in the northwest, the yurt buildings of nomadic people, and the Siheyuan buildings in the north.
Chinese music covers a highly diverse range, from traditional music to modern music, and dates back to pre-imperial times. Traditional Chinese musical instruments were traditionally grouped into eight categories known as bayin (八音). Traditional Chinese opera is a form of musical theatre that originated thousands of years ago and has regional styles such as Beijing opera and Cantonese opera. Chinese pop (C-pop) includes mandopop and cantopop. Chinese hip hop and Hong Kong hip hop have also become popular.
Cinema was first introduced to China in 1896, and the first Chinese film, Dingjun Mountain, was released in 1905. China has had the largest number of movie screens in the world since 2016, and it became the largest cinema market in 2020. The top three highest-grossing films in China as of 2023 were The Battle at Lake Changjin (2021), Wolf Warrior 2 (2017), and Hi, Mom (2021).
Hanfu is the historical clothing of the Han people in China, while the qipao or cheongsam is a popular Chinese female dress. The hanfu movement, which has gained popularity in contemporary times, seeks to revitalize hanfu clothing.
China has one of the oldest sporting cultures in the world. There is evidence that archery (shèjiàn) was practiced during the Western Zhou dynasty. Swordplay (jiànshù) and cuju, a sport loosely related to association football, also date back to China's early dynasties.
Physical fitness is widely emphasized in Chinese culture, with morning exercises such as qigong and tai chi widely practiced, and commercial gyms and private fitness clubs gaining popularity. Basketball is the most popular spectator sport in China. The Chinese Basketball Association and the American National Basketball Association have a huge following among the Chinese populace, with native-born, NBA-bound players such as Yao Ming and Yi Jianlian held in high esteem as national household names. China's professional football league, the Chinese Super League, is the largest football market in East Asia. Other popular sports include martial arts, table tennis, badminton, swimming and snooker. China is home to a huge number of cyclists, with an estimated 470 million bicycles as of 2012. Many more traditional sports, such as dragon boat racing, Mongolian-style wrestling and horse racing, are also popular.
China has participated in the Olympic Games since 1932, although it has competed as the PRC only since 1952. China hosted the 2008 Summer Olympics in Beijing, where its athletes received 48 gold medals, the highest number of any participating nation that year. China also won the most medals at the 2012 Summer Paralympics, with 231 overall, including 95 gold. Shenzhen hosted the 2011 Summer Universiade, and China hosted the 2013 East Asian Games in Tianjin and the 2014 Summer Youth Olympics in Nanjing, making it the first country to host both the regular and Youth Olympics. Beijing and the nearby city of Zhangjiakou collaboratively hosted the 2022 Winter Olympics, making Beijing the first city to hold both the Summer and Winter Olympics.
This article incorporates text from a free content work. Licensed under CC BY-SA IGO 3.0 (license statement/permission). Text taken from World Food and Agriculture – Statistical Yearbook 2023, FAO.
"title": "Geography"
},
{
"paragraph_id": 33,
"text": "Official government statistics about Chinese agricultural productivity are considered unreliable, due to exaggeration of production at subsidiary government levels. Much of China has a climate very suitable for agriculture and the country has been the world's largest producer of rice, wheat, tomatoes, eggplant, grapes, watermelon, spinach, and many other crops. In 2021, 12 percent of global permanent meadows and pastures belonged to China, as well as 8% of global cropland.",
"title": "Geography"
},
{
"paragraph_id": 34,
"text": "China is one of 17 megadiverse countries, lying in two of the world's major biogeographic realms: the Palearctic and the Indomalayan. By one measure, China has over 34,687 species of animals and vascular plants, making it the third-most biodiverse country in the world, after Brazil and Colombia. The country is a party to the Convention on Biological Diversity; its National Biodiversity Strategy and Action Plan was received by the convention in 2010.",
"title": "Geography"
},
{
"paragraph_id": 35,
"text": "China is home to at least 551 species of mammals (the third-highest in the world), 1,221 species of birds (eighth), 424 species of reptiles (seventh) and 333 species of amphibians (seventh). Wildlife in China shares habitat with, and bears acute pressure from, the world's largest population of humans. At least 840 animal species are threatened, vulnerable or in danger of local extinction, due mainly to human activity such as habitat destruction, pollution and poaching for food, fur and traditional Chinese medicine. Endangered wildlife is protected by law, and as of 2005, the country has over 2,349 nature reserves, covering a total area of 149.95 million hectares, 15 percent of China's total land area. Most wild animals have been eliminated from the core agricultural regions of east and central China, but they have fared better in the mountainous south and west. The Baiji was confirmed extinct on 12 December 2006.",
"title": "Geography"
},
{
"paragraph_id": 36,
"text": "China has over 32,000 species of vascular plants, and is home to a variety of forest types. Cold coniferous forests predominate in the north of the country, supporting animal species such as moose and Asian black bear, along with over 120 bird species. The understory of moist conifer forests may contain thickets of bamboo. In higher montane stands of juniper and yew, the bamboo is replaced by rhododendrons. Subtropical forests, which are predominate in central and southern China, support a high density of plant species including numerous rare endemics. Tropical and seasonal rainforests, though confined to Yunnan and Hainan, contain a quarter of all the animal and plant species found in China. China has over 10,000 recorded species of fungi.",
"title": "Geography"
},
{
"paragraph_id": 37,
"text": "In the early 2000s, China has suffered from environmental deterioration and pollution due to its rapid pace of industrialization. Regulations such as the 1979 Environmental Protection Law are fairly stringent, though they are poorly enforced, frequently disregarded in favor of rapid economic development. China has the second highest death toll because of air pollution, after India, with approximately 1 million deaths. Although China ranks as the highest CO2 emitting country, it only emits 8 tons of CO2 per capita, significantly lower than developed countries such as the United States (16.1), Australia (16.8) and South Korea (13.6). Greenhouse gas emissions by China are the world's largest.",
"title": "Geography"
},
{
"paragraph_id": 38,
"text": "In recent years, China has clamped down on pollution. In March 2014, CCP General Secretary Xi Jinping \"declared war\" on pollution during the opening of the National People's Congress. In 2020, Xi announced that China aims to peak emissions before 2030 and go carbon-neutral by 2060 in accordance with the Paris Agreement, which, according to Climate Action Tracker, would lower the expected rise in global temperature by 0.2–0.3 degrees – \"the biggest single reduction ever estimated by the Climate Action Tracker\". In September 2021 Xi Jinping announced that China will not build \"coal-fired power projects abroad\".",
"title": "Geography"
},
{
"paragraph_id": 39,
"text": "The country has significant water pollution problems; only 84.8% of China's national surface water was graded suitable for human consumption by the Ministry of Ecology and Environment in 2021. In 2020, a sweeping law was passed by the Chinese government to protect the ecology of the Yangtze River. The new laws include strengthening ecological protection rules for hydropower projects, banning chemical plants within 1 kilometer of the river, relocating polluting industries, severely restricting sand mining as well as a complete fishing ban on all the natural waterways of the river, including all its major tributaries and lakes.",
"title": "Geography"
},
{
"paragraph_id": 40,
"text": "China is the world's leading investor in renewable energy and its commercialization, with $546 billion invested in 2022; it is a major manufacturer of renewable energy technologies and invests heavily in local-scale renewable energy projects. In 2022, 61.2% of China's electricity came from coal (largest producer in the world), 14.9% from hydroelectric power (largest), 9.3% from wind (largest), 4.7% from solar energy (largest), 4.7% from nuclear energy (second-largest), 3.1% from natural gas (fifth-largest), and 1.9% from bioenergy (largest); in total, 30.8% of China's energy came from renewable energy sources. Despite its emphasis on renewables, China remains deeply connected to global oil markets and next to India, has been the largest importer of Russian crude oil in 2022.",
"title": "Geography"
},
{
"paragraph_id": 41,
"text": "China is the second-largest country in the world by land area after Russia, and the third or fourth largest country in the world by total area. China's total area is generally stated as being approximately 9,600,000 km (3,700,000 sq mi). Specific area figures range from 9,572,900 km (3,696,100 sq mi) according to the Encyclopædia Britannica, to 9,596,961 km (3,705,407 sq mi) according to the UN Demographic Yearbook, and The World Factbook.",
"title": "Geography"
},
{
"paragraph_id": 42,
"text": "China has the longest combined land border in the world, measuring 22,117 km (13,743 mi) and its coastline covers approximately 14,500 km (9,000 mi) from the mouth of the Yalu River (Amnok River) to the Gulf of Tonkin. China borders 14 nations and covers the bulk of East Asia, bordering Vietnam, Laos, and Myanmar in Southeast Asia; India, Bhutan, Nepal, Pakistan and Afghanistan in South Asia; Tajikistan, Kyrgyzstan and Kazakhstan in Central Asia; and Russia, Mongolia, and North Korea in Inner Asia and Northeast Asia. It is narrowly separated from Bangladesh and Thailand to the southwest and south, and has several maritime neighbors such as Japan, Philippines, Malaysia, and Indonesia.",
"title": "Geography"
},
{
"paragraph_id": 43,
"text": "The People's Republic of China is a one-party state governed by the Marxist–Leninist Chinese Communist Party (CCP). This makes China one of the few countries governed by a communist party. The Chinese constitution states that the PRC \"is a socialist state governed by a people's democratic dictatorship that is led by the working class and based on an alliance of workers and peasants,\" that the state institutions \"shall practice the principle of democratic centralism,\" and that \"the defining feature of socialism with Chinese characteristics is the leadership of the Communist Party of China.\"",
"title": "Politics"
},
{
"paragraph_id": 44,
"text": "The PRC officially terms itself as a democracy, using terms such as \"socialist consultative democracy\", and \"whole-process people's democracy\". However, the country is commonly described as an authoritarian one-party state and a dictatorship, with among the heaviest restrictions worldwide in many areas, most notably against freedom of the press, freedom of assembly, reproductive rights, free formation of social organizations, freedom of religion and free access to the Internet. China has consistently been ranked amongst the lowest as an \"authoritarian regime\" by the Economist Intelligence Unit's Democracy Index, ranking at 156th out of 167 countries in 2022.",
"title": "Politics"
},
{
"paragraph_id": 45,
"text": "According to the CCP constitution, its highest body is the National Congress held every five years. The National Congress elects the Central Committee, who then elects the party's Politburo, Politburo Standing Committee and the general secretary (party leader), the top leadership of the country. The general secretary holds ultimate power and authority over state and government and serves as the informal paramount leader. The current general secretary is Xi Jinping, who took office on 15 November 2012. At the local level, the secretary of the CCP committee of a subdivision outranks the local government level; CCP committee secretary of a provincial division outranks the governor while the CCP committee secretary of a city outranks the mayor. The CCP is officially guided by Marxism adapted to Chinese circumstances.",
"title": "Politics"
},
{
"paragraph_id": 46,
"text": "The government in China is under the sole control of the CCP. The CCP controls appointments in government bodies, with most senior government officials being CCP members.",
"title": "Politics"
},
{
"paragraph_id": 47,
"text": "The National People's Congress (NPC), the nearly 3,000-member legislature, is constitutionally the \"highest state organ of power\", though it has been also described as a \"rubber stamp\" body. The NPC meets annually, while the NPC Standing Committee, around 150 members elected from NPC delegates, meets every couple of months. Elections are indirect and not pluralistic, with nominations at all levels being controlled by the CCP. The NPC is dominated by the CCP, with another eight minor parties having nominal representation under the condition of upholding CCP leadership.",
"title": "Politics"
},
{
"paragraph_id": 48,
"text": "The president is the ceremonial state representative, elected by the NPC. The incumbent president is Xi Jinping, who is also the general secretary of the CCP and the chairman of the Central Military Commission, making him China's paramount leader. The premier is the head of government, with Li Qiang being the incumbent. The premier is officially nominated by the president and then elected by the NPC, and has generally been either the second or third-ranking member of the Politburo Standing Committee (PSC). The premier presides over the State Council, China's cabinet, composed of four vice premiers, state councilors, and the heads of ministries and commissions. The Chinese People's Political Consultative Conference (CPPCC) is a political advisory body that is critical in China's \"united front\" system, which aims to gather non-CCP voices to support the CCP. Similar to the people's congresses, CPPCC's exist at various division, with the National Committee of the CPPCC being chaired by Wang Huning, fourth-ranking member of the PSC.",
"title": "Politics"
},
{
"paragraph_id": 49,
"text": "The governance of China is characterized by a high degree of political centralization but significant economic decentralization. Policy instruments or processes are often tested locally before being applied more widely, resulting in a policy process that involves experimentation and feedback. Generally, high-level central government leadership refrains from drafting specific policies, instead using the informal networks and site visits to affirm or suggest changes to the direction of local policy experiments or pilot programs. The typical approach is that central government leadership begins drafting formal policies, law, or regulations after policy has been developed at local levels.",
"title": "Politics"
},
{
"paragraph_id": 50,
"text": "The PRC is constitutionally a unitary state divided into 23 provinces, five autonomous regions (each with a designated minority group), and four direct-administered municipalities—collectively referred to as \"mainland China\"—as well as the special administrative regions (SARs) of Hong Kong and Macau. The PRC considers Taiwan to be its 23rd province, although it is governed by the Republic of China (ROC). Geographically, all 31 provincial divisions of mainland China can be grouped into six regions: North China, Northeast China, East China, South Central China, Southwestern China, and Northwestern China.",
"title": "Politics"
},
{
"paragraph_id": 51,
"text": "The PRC has diplomatic relations with 179 United Nation members states and maintains embassies in 174. Since 2019, China has the largest diplomatic network in the world. In 1971, the PRC replaced the Republic of China (ROC) as the sole representative of China in the United Nations and as one of the five permanent members of the United Nations Security Council. It is a member of intergovernmental organizations including the G20, the SCO, the East Asia Summit, and the APEC. China was also a former member and leader of the Non-Aligned Movement, and still considers itself an advocate for developing countries. Along with Brazil, Russia, India and South Africa, China is a member of the BRICS group of emerging major economies and hosted the group's third official summit in April 2011.",
"title": "Politics"
},
{
"paragraph_id": 52,
"text": "The PRC officially maintains the one-China principle, which holds the view that there is only one sovereign state in the name of China, represented by the PRC, and that Taiwan is part of that China. The unique status of Taiwan has led to countries recognizing the PRC to maintain unique \"one-China policies\" that differ from each other; some countries explicitly recognize the PRC's claim over Taiwan, while others, including the US and Japan, only acknowledge the claim. Chinese officials have protested on numerous occasions when foreign countries have made diplomatic overtures to Taiwan, especially in the matter of armament sales. Most countries have switched recognition from the ROC to the PRC since the latter replaced the former in the United Nations in 1971.",
"title": "Politics"
},
{
"paragraph_id": 53,
"text": "Much of current Chinese foreign policy is reportedly based on Premier Zhou Enlai's Five Principles of Peaceful Coexistence, and is also driven by the concept of \"harmony without uniformity\", which encourages diplomatic relations between states despite ideological differences. This policy may have led China to support or maintain close ties with states that are regarded as dangerous and repressive by Western nations, such as Sudan, North Korea and Iran. China's close relationship with Myanmar has involved both support for its ruling governments as well as for its ethnic rebel groups, including the Arakan Army. China has a close political, economic and military relationship with Russia, and the two states often vote in unison in the United Nations Security Council. China's relationship with the United States is long and complex, and includes deep trade ties but significant political differences.",
"title": "Politics"
},
{
"paragraph_id": 54,
"text": "Since the early 200s, China has followed a policy of engaging with African nations for trade and bilateral co-operation. It maintains extensive and highly diversified trade links with the European Union, and became its largest trading partner for goods. China has strong trade ties with ASEAN countries and major South American economies, and is the largest trading partner of Brazil, Chile, Peru, Uruguay, Argentina, and several others.",
"title": "Politics"
},
{
"paragraph_id": 55,
"text": "In 2013, China initiated the Belt and Road Initiative (BRI), a large global infrastructure building initiative with funding on the order of $50–100 billion per year. BRI could be one of the largest development plans in modern history. It has expanded significantly over the last six years and, as of April 2020, includes 138 countries and 30 international organizations. In addition to intensifying foreign policy relations, the focus is particularly on building efficient transport routes, especially the maritime Silk Road with its connections to East Africa and Europe. However many loans made under the program are unsustainable and China has faced a number of calls for debt relief from debtor nations.",
"title": "Politics"
},
{
"paragraph_id": 56,
"text": "Ever since its establishment, the PRC has claimed the territories governed by the Republic of China (ROC), a separate political entity today commonly known as Taiwan, as a part of its territory. It regards the island of Taiwan as its Taiwan Province, Kinmen and Matsu as a part of Fujian Province and islands the ROC controls in the South China Sea as a part of Hainan Province and Guangdong Province. These claims are controversial because of the complicated Cross-Strait relations.",
"title": "Politics"
},
{
"paragraph_id": 57,
"text": "China has resolved its land borders with 12 out of 14 neighboring countries, having pursued substantial compromises in most of them. China currently has a disputed land border with India and Bhutan. China is additionally involved in maritime disputes with multiple countries over the ownership of islands in the East and South China Seas, such as the Senkaku Islands and the entirety of South China Sea Islands, along with the EEZ disputes over East China Sea.",
"title": "Politics"
},
{
"paragraph_id": 58,
"text": "The situation of human rights in China has attracted significant criticism from foreign governments, foreign press agencies, and non-governmental organizations, alleging widespread civil rights violations such as detention without trial, forced confessions, torture, restrictions of fundamental rights, and excessive use of the death penalty. Since its inception, Freedom House has ranked China as \"not free\" in its Freedom in the World survey, while Amnesty International has documented significant human rights abuses. The Chinese constitution states that the \"fundamental rights\" of citizens include freedom of speech, freedom of the press, the right to a fair trial, freedom of religion, universal suffrage, and property rights. However, in practice, these provisions do not afford significant protection against criminal prosecution by the state. China has limited protections regarding LGBT rights.",
"title": "Politics"
},
{
"paragraph_id": 59,
"text": "Although some criticisms of government policies and the ruling CCP are tolerated, censorship of political speech and information are amongst the harshest in the world and routinely used to prevent collective action. China also has the most comprehensive and sophisticated Internet censorship regime in the world, with numerous websites being blocked. The government suppresses popular protests and demonstrations that it considers a potential threat to \"social stability\". China additionally uses a massive espionage network of cameras, facial recognition software, sensors, and surveillance of personal technology as a means of social control of persons living in the country.",
"title": "Politics"
},
{
"paragraph_id": 60,
"text": "China is regularly accused of large-scale repression and human rights abuses in Tibet and Xinjiang, where significant numbers of ethnic minorities reside, including violent police crackdowns and religious suppression. In Xinjiang, repression has significantly escalated since 2016, after which at least one million Uyghurs and other ethnic and religion minorities have been detained in internment camps aimed at changing the political thinking of detainees, their identities, and their religious beliefs. According to western reports, political indoctrination, torture, physical and psychological abuse, forced sterilization, sexual abuse, and forced labor are common in these facilities. According to a 2020 report, China's treatment of Uyghurs meets the UN definition of genocide, while a separate UN Human Rights Office report said they could potentially meet the definitions for crimes against humanity.",
"title": "Politics"
},
{
"paragraph_id": 61,
"text": "Global studies from Pew Research Center in 2014 and 2017 ranked the Chinese government's restrictions on religion as among the highest in the world, despite low to moderate rankings for religious-related social hostilities in the country. The Global Slavery Index estimated that in 2016 more than 3.8 million people (0.25% of the population) were living in \"conditions of modern slavery\", including victims of human trafficking, forced labor, forced marriage, child labor, and state-imposed forced labor. The state-imposed re-education through labor (laojiao) system was formally abolished in 2013, but it is not clear to what extent its practices have stopped. The much larger reform through labor (laogai) system includes labor prison factories, detention centers, and re-education camps; the Laogai Research Foundation has estimated in June 2008 that there were nearly 1,422 of these facilities, though it cautioned that this number was likely an underestimate.",
"title": "Politics"
},
{
"paragraph_id": 62,
"text": "Political concerns in China include the growing gap between rich and poor and government corruption. Nonetheless, international surveys show the Chinese public have a high level of satisfaction with their government. These views are generally attributed to the material comforts and security available to large segments of the Chinese populace as well as the government's attentiveness and responsiveness. According to the World Values Survey (2022), 91% of Chinese respondents have significant confidence in their government. A Harvard University survey published in July 2020 found that citizen satisfaction with the government had increased since 2003, also rating China's government as more effective and capable than ever in the survey's history.",
"title": "Politics"
},
{
"paragraph_id": 63,
"text": "The People's Liberation Army (PLA) is considered one of the world's most powerful militaries and has rapidly modernized in the recent decades. It consists of the Ground Force (PLAGF), the Navy (PLAN), the Air Force (PLAAF), the Rocket Force (PLARF) and the Strategic Support Force (PLASSF). Its nearly 2.2 million active duty personnel is the largest in the world. The PLA holds the world's third-largest stockpile of nuclear weapons, and the world's second-largest navy by tonnage. China's official military budget for 2022 totalled US$230 billion (1.45 trillion Yuan), the second-largest in the world, though SIPRI estimates that its real expenditure that year was US$292 billion. According to SIPRI, its military spending from 2012 to 2021 averaged US$215 billion per year or 1.7 per cent of GDP, behind only the United States at US$734 billion per year or 3.6 per cent of GDP. The PLA is commanded by the Central Military Commission (CMC) of the party and the state; though officially two separate organizations, the two CMCs have identical membership except during leadership transition periods and effectively function as one organization. The chairman of the CMC is the commander-in-chief of the PLA.",
"title": "Military"
},
{
"paragraph_id": 64,
"text": "China has the world's second-largest economy in terms of nominal GDP, and the world's largest in terms of purchasing power parity (PPP). As of 2022, China accounts for around 18% of global economy by nominal GDP. China is one of the world's fastest-growing major economies, with its economic growth having been almost consistently above 6 percent since the introduction of economic reforms in 1978. According to the World Bank, China's GDP grew from $150 billion in 1978 to $17.96 trillion by 2022. It ranks at 64th at GDP (nominal) per capita, making it an upper-middle income country. Of the world's 500 largest companies, 142 are headquartered in China.",
"title": "Economy"
},
{
"paragraph_id": 65,
"text": "China was one of the world's foremost economic powers throughout the arc of East Asian and global history. The country had one of the largest economies in the world for most of the past two millennia, during which it has seen cycles of prosperity and decline. Since economic reforms began in 1978, China has developed into a highly diversified economy and one of the most consequential players in international trade. Major sectors of competitive strength include manufacturing, retail, mining, steel, textiles, automobiles, energy generation, green energy, banking, electronics, telecommunications, real estate, e-commerce, and tourism. China has three out of the ten largest stock exchanges in the world—Shanghai, Hong Kong and Shenzhen—that together have a market capitalization of over $15.9 trillion, as of October 2020. China has four (Shanghai, Hong Kong, Beijing, and Shenzhen) out of the world's top ten most competitive financial centers, which is more than any other country in the 2020 Global Financial Centres Index.",
"title": "Economy"
},
{
"paragraph_id": 66,
"text": "Modern-day China is often described as an example of state capitalism or party-state capitalism. The state dominates in strategic \"pillar\" sectors such as energy production and heavy industries, but private enterprise has expanded enormously, with around 30 million private businesses recorded in 2008. According to official statistics, privately owned companies constitute more than 60% of China's GDP.",
"title": "Economy"
},
{
"paragraph_id": 67,
"text": "China has been the world's largest manufacturing nation since 2010, after overtaking the US, which had been the largest for the previous hundred years. China has also been the second largest in high-tech manufacturing country since 2012, according to US National Science Foundation. China is the second largest retail market after the United States. China leads the world in e-commerce, accounting for over 37% of the global market share in 2021. China is the world's leader in electric vehicle consumption and production, manufacturing and buying half of all the plug-in electric cars (BEV and PHEV) in the world as of 2022. China is also the leading producer of batteries for electric vehicles as well as several key raw materials for batteries. Long heavily relying on non-renewable energy sources such as coal, China's adaptation of renewable energy has increased significantly in recent years, with their share increasing from 26.3 percent in 2016 to 31.9 percent in 2022.",
"title": "Economy"
},
{
"paragraph_id": 68,
"text": "China accounted for 17.9% of the world's total wealth in 2021, second highest in the world after the US. China brought more people out of extreme poverty than any other country in history—between 1978 and 2018, China reduced extreme poverty by 800 million. From 1990 to 2018, the proportion of the Chinese population living with an income of less than $1.90 per day (2011 PPP) decreased from 66.3% to 0.3%, the share living with an income of less than $3.20 per day from 90.0% to 2.9%, and the share living with an income of less than $5.50 per day decreased from 98.3% to 17.0%.",
"title": "Economy"
},
{
"paragraph_id": 69,
"text": "From 1978 to 2018, the average standard of living multiplied by a factor of twenty-six. Wages in China have grown significantly in the last 40 years—real (inflation-adjusted) wages grew seven-fold from 1978 to 2007. Per capita incomes have also risen significantly – when the PRC was founded in 1949, per capita income in China was one-fifth of the world average; per capita incomes now equal the world average itself. China's development is highly uneven. Its major cities and coastal areas are far more prosperous compared to rural and interior regions. It has a high level of economic inequality, which has increased quickly after the economic reforms, though has decreased significantly in the 2010s. In 2020, China's Gini coefficient was 0.371, according to the World Bank.",
"title": "Economy"
},
{
"paragraph_id": 70,
"text": "As of April 2023, China was second in the world, after the US, in total number of billionaires and total number of millionaires, with 495 Chinese billionaires and 6.2 million millionaires. In 2019, China overtook the US as the home to the highest number of people who have a net personal wealth of at least $110,000, according to the global wealth report by Credit Suisse. China had 85 female billionaires as of January 2021, two-thirds of the global total. China has had the world's largest middle-class population since 2015; the middle-class grew to 400 million by 2018.",
"title": "Economy"
},
{
"paragraph_id": 71,
"text": "China has been a member of the WTO since 2001 and is the world's largest trading power. By 2016, China was the largest trading partner of 124 countries. China became the world's largest trading nation in 2013 by the sum of imports and exports, as well as the world's largest commodity importer, comprising roughly 45% of maritime's dry-bulk market.",
"title": "Economy"
},
{
"paragraph_id": 72,
"text": "China's foreign exchange reserves reached US$3.128 trillion as of December 2022, making its reserves by far the world's largest. In 2022, China was amongst the world's largest recipient of inward foreign direct investment (FDI), attracting $180 billion, though most of these were speculated to be from Hong Kong. In 2021, China's foreign exchange remittances were $US53 billion making it the second largest recipient of remittances in the world. China also invests abroad, with a total outward FDI of $146.5 billion in 2022, and a number of major takeovers of foreign firms by Chinese companies.",
"title": "Economy"
},
{
"paragraph_id": 73,
"text": "Economists have argued that the renminbi is undervalued, due to currency intervention from the Chinese government, giving China an unfair trade advantage. China has also been widely criticized for manufacturing large quantities of counterfeit goods. The US government has also alleged that China does not respect intellectual property (IP) rights and steals IP through espionage operations. In 2020, Harvard University's Economic Complexity Index ranked complexity of China's exports 17th in the world, up from 24th in 2010.",
"title": "Economy"
},
{
"paragraph_id": 74,
"text": "The Chinese government has promoted the internationalization of the renminbi in order to wean off of its dependence on the U.S. dollar as a result of perceived weaknesses of the international monetary system. The renminbi is a component of the IMF's special drawing rights and the world's fifth-most traded currency as of 2022. However, partly due to capital controls that make the renminbi fall short of being a fully convertible currency, it remains far behind the Euro, the U.S. Dollar and the Japanese Yen in international trade volumes.",
"title": "Economy"
},
{
"paragraph_id": 75,
"text": "China was a world leader in science and technology until the Ming dynasty. Ancient and medieval Chinese discoveries and inventions, such as papermaking, printing, the compass, and gunpowder (the Four Great Inventions), became widespread across East Asia, the Middle East and later Europe. Chinese mathematicians were the first to use negative numbers. By the 17th century, the Western World surpassed China in scientific and technological advancement. The causes of this early modern Great Divergence continue to be debated by scholars.",
"title": "Science and technology"
},
{
"paragraph_id": 76,
"text": "After repeated military defeats by the European colonial powers and Imperial Japan in the 19th century, Chinese reformers began promoting modern science and technology as part of the Self-Strengthening Movement. After the Communists came to power in 1949, efforts were made to organize science and technology based on the model of the Soviet Union, in which scientific research was part of central planning. After Mao's death in 1976, science and technology were promoted as one of the Four Modernizations, and the Soviet-inspired academic system was gradually reformed.",
"title": "Science and technology"
},
{
"paragraph_id": 77,
"text": "Since the end of the Cultural Revolution, China has made significant investments in scientific research and is quickly catching up with the US in R&D spending. China officially spent around 2.4% of its GDP on R&D in 2020, totaling to around $377.8 billion. According to the World Intellectual Property Indicators, China received more applications than the US did in 2018 and 2019 and ranked first globally in patents, utility models, trademarks, industrial designs, and creative goods exports in 2021. It was ranked 12th in the Global Innovation Index in 2023, a considerable improvement from its rank of 35th in 2013. Chinese supercomputers have been ranked the fastest in the world on a few occasions; however, these supercomputers rely on critical components —namely processors—designed in foreign countries. China has also struggled with developing several technologies domestically, such as the most advanced semiconductors and reliable jet engines.",
"title": "Science and technology"
},
{
"paragraph_id": 78,
"text": "China is developing its education system with an emphasis on science, technology, engineering, and mathematics (STEM). It became the world's largest publisher of scientific papers in 2016.",
"title": "Science and technology"
},
{
"paragraph_id": 79,
"text": "The Chinese space program started in 1958 with some technology transfers from the Soviet Union. However, it did not launch the nation's first satellite until 1970 with the Dong Fang Hong I, which made China the fifth country to do so independently.",
"title": "Science and technology"
},
{
"paragraph_id": 80,
"text": "In 2003, China became the third country in the world to independently send humans into space with Yang Liwei's spaceflight aboard Shenzhou 5. As of 2023, eighteen Chinese nationals have journeyed into space, including two women. In 2011, China launched its first space station testbed, Tiangong-1. In 2013, a Chinese robotic rover Yutu successfully touched down on the lunar surface as part of the Chang'e 3 mission.",
"title": "Science and technology"
},
{
"paragraph_id": 81,
"text": "In 2019, China became the first country to land a probe—Chang'e 4—on the far side of the Moon. In 2020, Chang'e 5 successfully returned Moon samples to the Earth, making China the third country to do so independently. In 2021, China became the third country to land a spacecraft on Mars and the second one to deploy a rover (Zhurong) on Mars. China completed its own modular space station, the Tiangong, in low Earth orbit on 3 November 2022. On 29 November 2022, China performed its first in-orbit crew handover aboard the Tiangong.",
"title": "Science and technology"
},
{
"paragraph_id": 82,
"text": "In May 2023, China announced a plan to land humans on the Moon by 2030. To that end, China currently is developing a lunar-capable super-heavy launcher, the Long March 10, a new crewed spacecraft, and a crewed lunar lander.",
"title": "Science and technology"
},
{
"paragraph_id": 83,
"text": "After a decades-long infrastructural boom, China has produced numerous world-leading infrastructural projects: it has the largest high-speed rail network, the most supertall skyscrapers, the largest power plant (the Three Gorges Dam), and a global satellite navigation system with the largest number of satellites.",
"title": "Infrastructure"
},
{
"paragraph_id": 84,
"text": "China is the largest telecom market in the world and currently has the largest number of active cellphones of any country, with over 1.7 billion subscribers, as of February 2023. It has the largest number of internet and broadband users, with over 1.05 billion Internet users since 2021—equivalent to around 73.7% of its population. By 2018, China had more than 1 billion 4G users, accounting for 40% of world's total. China is making rapid advances in 5G—by late 2018, China had started large-scale and commercial 5G trials. As of March 2022, China had over 500 million 5G users and 1.45 million base stations installed.",
"title": "Infrastructure"
},
{
"paragraph_id": 85,
"text": "China Mobile, China Unicom and China Telecom, are the three large providers of mobile and internet in China. China Telecom alone served more than 145 million broadband subscribers and 300 million mobile users; China Unicom had about 300 million subscribers; and China Mobile, the largest of them all, had 925 million users, as of 2018. Combined, the three operators had over 3.4 million 4G base-stations in China. Several Chinese telecommunications companies, most notably Huawei and ZTE, have been accused of spying for the Chinese military.",
"title": "Infrastructure"
},
{
"paragraph_id": 86,
"text": "China has developed its own satellite navigation system, dubbed BeiDou, which began offering commercial navigation services across Asia in 2012 as well as global services by the end of 2018. Beidou followed GPS and GLONASS as the third completed global navigation satellite.",
"title": "Infrastructure"
},
{
"paragraph_id": 87,
"text": "Since the late 1990s, China's national road network has been significantly expanded through the creation of a network of national highways and expressways. In 2018, China's highways had reached a total length of 161,000 km (100,000 mi), making it the longest highway system in the world. China has the world's largest market for automobiles, having surpassed the United States in both auto sales and production. The country is the world's largest exporter of cars as of 2023. A side-effect of the rapid growth of China's road network has been a significant rise in traffic accidents. In urban areas, bicycles remain a common mode of transport, despite the increasing prevalence of automobiles – as of 2012, there are approximately 470 million bicycles in China.",
"title": "Infrastructure"
},
{
"paragraph_id": 88,
"text": "China's railways, which are operated by the state-owned China State Railway Group Company, are among the busiest in the world, handling a quarter of the world's rail traffic volume on only 6 percent of the world's tracks in 2006. As of 2021, the country had 150,000 km (93,206 mi) of railways, the second longest network in the world. The railways strain to meet enormous demand particularly during the Chinese New Year holiday, when the world's largest annual human migration takes place. China's high-speed rail (HSR) system started construction in the early 2000s. By the end of 2022, high speed rail in China had reached 42,000 kilometers (26,098 miles) of dedicated lines alone, making it the longest HSR network in the world. Services on the Beijing–Shanghai, Beijing–Tianjin, and Chengdu–Chongqing lines reach up to 350 km/h (217 mph), making them the fastest conventional high speed railway services in the world. With an annual ridership of over 2.3 billion passengers in 2019, it is the world's busiest. The network includes the Beijing–Guangzhou high-speed railway, the single longest HSR line in the world, and the Beijing–Shanghai high-speed railway, which has three of longest railroad bridges in the world. The Shanghai maglev train, which reaches 431 km/h (268 mph), is the fastest commercial train service in the world. Since 2000, the growth of rapid transit systems in Chinese cities has accelerated. As of January 2021, 44 Chinese cities have urban mass transit systems in operation. As of 2020, China boasts the five longest metro systems in the world with the networks in Shanghai, Beijing, Guangzhou, Chengdu and Shenzhen being the largest.",
"title": "Infrastructure"
},
{
"paragraph_id": 89,
"text": "The civil aviation industry in China is mostly state-dominated, with the Chinese government retaining a majority stake in the majority of Chinese airlines. The top three airlines in China, which collectively made up 71% of the market in 2018, are all state-owned. Air travel has expanded rapidly in the last decades, with the number of passengers increasing from 16.6 million in 1990 to 551.2 million in 2017. China had approximately 241 airports in 2021.",
"title": "Infrastructure"
},
{
"paragraph_id": 90,
"text": "China has over 2,000 river and seaports, about 130 of which are open to foreign shipping. Of the fifty busiest container ports, 15 are located in China, of which the busiest is the Port of Shanghai, also the busiest port in the world. The country's inland waterways are the world's sixth-longest, and total 27,700 km (17,212 mi).",
"title": "Infrastructure"
},
{
"paragraph_id": 91,
"text": "Water supply and sanitation infrastructure in China is facing challenges such as rapid urbanization, as well as water scarcity, contamination, and pollution. According to the Joint Monitoring Program for Water Supply and Sanitation in 2015, about 36% of the rural population in China still did not have access to improved sanitation. The ongoing South–North Water Transfer Project intends to abate water shortage in the north.",
"title": "Infrastructure"
},
{
"paragraph_id": 92,
"text": "The 2020 Chinese census recorded the population as approximately 1,411,778,724. About 17.95% were 14 years old or younger, 63.35% were between 15 and 59 years old, and 18.7% were over 60 years old. Between 2010 and 2020, the average population growth rate was 0.53%.",
"title": "Demographics"
},
{
"paragraph_id": 93,
"text": "Given concerns about population growth, China implemented a two-child limit during the 1970s, and, in 1979, began to advocate for an even stricter limit of one child per family. Beginning in the mid-1980s, however, given the unpopularity of the strict limits, China began to allow some major exemptions, particularly in rural areas, resulting in what was actually a \"1.5\"-child policy from the mid-1980s to 2015; ethnic minorities were also exempt from one-child limits. The next major loosening of the policy was enacted in December 2013, allowing families to have two children if one parent is an only child. In 2016, the one-child policy was replaced in favor of a two-child policy. A three-child policy was announced on 31 May 2021, due to population aging, and in July 2021, all family size limits as well as penalties for exceeding them were removed. According to the 2020 census, China's total fertility rate is 1.3. In 2023, the total fertility was estimated to be around 1.09, ranking among the lowest in the world. In 2023, National Bureau of Statistics estimated that the population fell 850,000 from 2021 to 2022, the first decline since 1961.",
"title": "Demographics"
},
{
"paragraph_id": 94,
"text": "According to one group of scholars, one-child limits had little effect on population growth or total population size. However, these scholars have been challenged. The policy, along with traditional preference for boys, may have contributed to an imbalance in the sex ratio at birth. The 2020 census found that males accounted for 51.2% of the total population. However, China's sex ratio is more balanced than it was in 1953, when males accounted for 51.8% of the population.",
"title": "Demographics"
},
{
"paragraph_id": 95,
"text": "China legally recognizes 56 distinct ethnic groups, who comprise the Zhonghua minzu. The largest of these nationalities are the Han Chinese, who constitute more than 91% of the total population. The Han Chinese – the world's largest single ethnic group – outnumber other ethnic groups in every provincial-level division except Tibet and Xinjiang. Ethnic minorities account for less than 10% of the population of China, according to the 2020 census. Compared with the 2010 population census, the Han population increased by 60,378,693 persons, or 4.93%, while the population of the 55 national minorities combined increased by 11,675,179 persons, or 10.26%. The 2020 census recorded a total of 845,697 foreign nationals living in mainland China.",
"title": "Demographics"
},
{
"paragraph_id": 96,
"text": "There are as many as 292 living languages in China. The languages most commonly spoken belong to the Sinitic branch of the Sino-Tibetan language family, which contains Mandarin (spoken by 80% of the population), and other varieties of Chinese language: Yue (including Cantonese and Taishanese), Wu (including Shanghainese and Suzhounese), Min (including Fuzhounese, Hokkien and Teochew), Xiang, Gan and Hakka. Languages of the Tibeto-Burman branch, including Tibetan, Qiang, Naxi and Yi, are spoken across the Tibetan and Yunnan–Guizhou Plateau. Other ethnic minority languages in southwestern China include Zhuang, Thai, Dong and Sui of the Tai-Kadai family, Miao and Yao of the Hmong–Mien family, and Wa of the Austroasiatic family. Across northeastern and northwestern China, local ethnic groups speak Altaic languages including Manchu, Mongolian and several Turkic languages: Uyghur, Kazakh, Kyrgyz, Salar and Western Yugur. Korean is spoken natively along the border with North Korea. Sarikoli, the language of Tajiks in western Xinjiang, is an Indo-European language. Taiwanese indigenous peoples, including a small population on the mainland, speak Austronesian languages.",
"title": "Demographics"
},
{
"paragraph_id": 97,
"text": "Standard Mandarin, a variety of Mandarin based on the Beijing dialect, is the official national language and is used as a lingua franca between people of different linguistic backgrounds. Mongolian, Uyghur, Tibetan, Zhuang and various other languages are also regionally recognized.",
"title": "Demographics"
},
{
"paragraph_id": 98,
"text": "China has urbanized significantly in recent decades. The percent of the country's population living in urban areas increased from 20% in 1980 to over 64% in 2021. China has over 160 cities with a population of over one million, including the 17 megacities as of 2021 (cities with a population of over 10 million) of Chongqing, Shanghai, Beijing, Chengdu, Guangzhou, Shenzhen, Tianjin, Xi'an, Suzhou, Zhengzhou, Wuhan, Hangzhou, Linyi, Shijiazhuang, Dongguan, Qingdao and Changsha. The total permanent population of Chongqing, Shanghai, Beijing and Chengdu is above 20 million. Shanghai is China's most populous urban area while Chongqing is its largest city proper, the only city in China with a permanent population of over 30 million. The figures in the table below are from the 2017 census, and are only estimates of the urban populations within administrative city limits; a different ranking exists for total municipal populations. The large \"floating populations\" of migrant workers make conducting censuses in urban areas difficult; the figures below include only long-term residents.",
"title": "Demographics"
},
{
"paragraph_id": 99,
"text": "Compulsory education in China comprises primary and junior secondary school, which together last for nine years from the age of 6 and 15. The Gaokao, China's national university entrance exam, is a prerequisite for entrance into most higher education institutions. Vocational education is available to students at the secondary and tertiary level. More than 10 million Chinese students graduated from vocational colleges every year. In 2022, about 91.6 percent of students continued their education at a three-year senior secondary school, while 59.6 secondary school graduates were enrolled in higher education.",
"title": "Demographics"
},
{
"paragraph_id": 100,
"text": "China has the largest education system in the world, with about 282 million students and 17.32 million full-time teachers in over 530,000 schools. Annual education investment went from less than US$50 billion in 2003 to more than US$817 billion in 2020. However, there remains an inequality in education spending. In 2010, the annual education expenditure per secondary school student in Beijing totalled ¥20,023, while in Guizhou, one of the poorest provinces, only totalled ¥3,204. China's literacy rate has grown dramatically, from only 20% in 1949 and 65.5% in 1979, to 97% of the population over age 15 in 2020.",
"title": "Demographics"
},
{
"paragraph_id": 101,
"text": "As of 2021, China has over 3,000 universities, with over 44.3 million students enrolled in mainland China and 240 million Chinese citizens have received high education, making China the largest higher education system in the world. As of 2023, China had the world's highest number of top universities. Currently, China trails only the United States and the United Kingdom in terms of representation on lists of the top 200 universities according to the 2023 Aggregate Ranking of Top Universities, a composite ranking system of three world-most followed university rankings (ARWU+QS+ THE). China is home to two of the highest-ranking universities (Tsinghua University and Peking University) in Asia and emerging economies, according to the Times Higher Education World University Rankings and the QS World University Rankings. These universities are members of the C9 League, an alliance of elite Chinese universities offering comprehensive and leading education.",
"title": "Demographics"
},
{
"paragraph_id": 102,
"text": "The National Health Commission, together with its counterparts in the local commissions, oversees the health needs of the population. An emphasis on public health and preventive medicine has characterized Chinese health policy since the early 1950s. The Communist Party started the Patriotic Health Campaign, which was aimed at improving sanitation and hygiene, as well as treating and preventing several diseases. Diseases such as cholera, typhoid and scarlet fever, which were previously rife in China, were nearly eradicated by the campaign.",
"title": "Demographics"
},
{
"paragraph_id": 103,
"text": "After Deng Xiaoping began instituting economic reforms in 1978, the health of the Chinese public improved rapidly because of better nutrition, although many of the free public health services provided in the countryside disappeared. Healthcare in China became mostly privatized, and experienced a significant rise in quality. In 2009, the government began a three-year large-scale healthcare provision initiative worth US$124 billion. By 2011, the campaign resulted in 95% of China's population having basic health insurance coverage. By 2022, China had established itself as a key producer and exporter of pharmaceuticals, producing around 40 percent of active pharmaceutical ingredients in 2017.",
"title": "Demographics"
},
{
"paragraph_id": 104,
"text": "As of 2021, the life expectancy at birth is 78 years, and the infant mortality rate is 5 per thousand. Both have improved significantly since the 1950s. Rates of stunting, a condition caused by malnutrition, have declined from 33.1% in 1990 to 9.9% in 2010. Despite significant improvements in health and the construction of advanced medical facilities, China has several emerging public health problems, such as respiratory illnesses caused by widespread air pollution, hundreds of millions of cigarette smokers, and an increase in obesity among urban youths. In 2010, air pollution caused 1.2 million premature deaths in China. China's large population and densely populated cities have led to serious disease outbreaks, such as SARS in 2003, although this has since been largely contained. The COVID-19 pandemic was first identified in Wuhan in December 2019.",
"title": "Demographics"
},
{
"paragraph_id": 105,
"text": "The government of the People's Republic of China and the Chinese Communist Party both officially espouse state atheism, and have conducted antireligious campaigns to this end. Religious affairs and issues in the country are overseen by the CCP's United Front Work Department. Freedom of religion is guaranteed by China's constitution, although religious organizations that lack official approval can be subject to state persecution.",
"title": "Demographics"
},
{
"paragraph_id": 106,
"text": "Chinese civilization has been influenced by various religious movements. The \"three teachings\", including Confucianism, Taoism, and Buddhism (Chinese Buddhism), historically have a significant role in shaping Chinese culture, enriching a theological and spiritual framework which harks back to the early Shang and Zhou dynasty. Chinese popular or folk religion, which is framed by the three teachings and other traditions, consists in allegiance to the shen (神), a character that signifies the \"energies of generation\", who can be deities of the environment or ancestral principles of human groups, concepts of civility, culture heroes, many of whom feature in Chinese mythology and history. Among the most popular cults are those of Mazu (goddess of the seas), Huangdi (one of the two divine patriarchs of the Chinese race), Guandi (god of war and business), Caishen (god of prosperity and richness), Pangu and many others. China is home to many of the world's tallest religious statues, including the tallest of all, the Spring Temple Buddha in Henan.",
"title": "Demographics"
},
{
"paragraph_id": 107,
"text": "Clear data on religious affiliation is difficult to gather due to varying definitions of \"religion\" and the unorganized, diffusive nature of Chinese religious traditions. Scholars note that in China there is no clear boundary between three teachings religions and local folk religious practice. A 2015 poll conducted by Gallup International found that 61% of Chinese people self-identified as \"convinced atheist\", though Chinese religions or some of their strands are definable as non-theistic and humanistic religions, since they do not believe that divine creativity is completely transcendent, but it is inherent in the world and in particular in the human being. According to a 2014 study, approximately 74% are either non-religious or practice Chinese folk belief, 16% are Buddhists, 2% are Christians, 1% are Muslims, and 8% adhere to other religions including Taoists and folk salvationism. There are also various ethnic minority groups in China who maintain their indigenous religions. Significant faiths specifically connected to certain ethnic groups include Tibetan Buddhism and the Islamic religion of the Hui, Uyghur, Kazakh and Kyrgyz peoples in Northwest China. China had a total of 39,000 mosques in 2014, with 63% located in Xinjiang, 12% in Gansu, 11% in Ningxia, 3% in Qinghai and the rest located in other parts of the country.",
"title": "Demographics"
},
{
"paragraph_id": 108,
"text": "A 2021 poll from Ipsos had 35% of Chinese people saying there was tension between different religious groups, which was the second lowest percentage of the 28 countries surveyed.",
"title": "Demographics"
},
{
"paragraph_id": 109,
"text": "Since ancient times, Chinese culture has been heavily influenced by Confucianism. Chinese culture, in turn, has heavily influenced East Asia and Southeast Asia. For much of the country's dynastic era, opportunities for social advancement could be provided by high performance in the prestigious imperial examinations, which have their origins in the Han dynasty. The literary emphasis of the exams affected the general perception of cultural refinement in China, such as the belief that calligraphy, poetry and painting were higher forms of art than dancing or drama. Chinese culture has long emphasized a sense of deep history and a largely inward-looking national perspective. Examinations and a culture of merit remain greatly valued in China today.",
"title": "Culture and society"
},
{
"paragraph_id": 110,
"text": "Today, the Chinese government has accepted numerous elements of traditional Chinese culture as being integral to Chinese society. With the rise of Chinese nationalism and the end of the Cultural Revolution, various forms of traditional Chinese art, literature, music, film, fashion and architecture have seen a vigorous revival, and folk and variety art in particular have sparked interest nationally and even worldwide. Access to foreign media remains heavily restricted.",
"title": "Culture and society"
},
{
"paragraph_id": 111,
"text": "China received 65.7 million international visitors in 2019, and in 2018 was the fourth-most-visited country in the world. It also experiences an enormous volume of domestic tourism; Chinese tourists made an estimated 6 billion travels within the country in 2019. China hosts the world's second-largest number of World Heritage Sites (56) after Italy, and is one of the most popular tourist destinations (first in the Asia-Pacific).",
"title": "Culture and society"
},
{
"paragraph_id": 112,
"text": "Chinese literature is based on the literature of the Zhou dynasty. Concepts covered within the Chinese classic texts present a wide range of thoughts and subjects including calendar, military, astrology, herbology, geography and many others. Some of the most important early texts include the I Ching and the Shujing within the Four Books and Five Classics which served as the Confucian authoritative books for the state-sponsored curriculum in dynastic era. Inherited from the Classic of Poetry, classical Chinese poetry developed to its floruit during the Tang dynasty. Li Bai and Du Fu opened the forking ways for the poetic circles through romanticism and realism respectively. Chinese historiography began with the Shiji, the overall scope of the historiographical tradition in China is termed the Twenty-Four Histories, which set a vast stage for Chinese fictions along with Chinese mythology and folklore. Pushed by a burgeoning citizen class in the Ming dynasty, Chinese classical fiction rose to a boom of the historical, town and gods and demons fictions as represented by the Four Great Classical Novels which include Water Margin, Romance of the Three Kingdoms, Journey to the West and Dream of the Red Chamber. Along with the wuxia fictions of Jin Yong and Liang Yusheng, it remains an enduring source of popular culture in the Chinese sphere of influence.",
"title": "Culture and society"
},
{
"paragraph_id": 113,
"text": "In the wake of the New Culture Movement after the end of the Qing dynasty, Chinese literature embarked on a new era with written vernacular Chinese for ordinary citizens. Hu Shih and Lu Xun were pioneers in modern literature. Various literary genres, such as misty poetry, scar literature, young adult fiction and the xungen literature, which is influenced by magic realism, emerged following the Cultural Revolution. Mo Yan, a xungen literature author, was awarded the Nobel Prize in Literature in 2012.",
"title": "Culture and society"
},
{
"paragraph_id": 114,
"text": "Chinese cuisine is highly diverse, drawing on several millennia of culinary history and geographical variety, in which the most influential are known as the \"Eight Major Cuisines\", including Sichuan, Cantonese, Jiangsu, Shandong, Fujian, Hunan, Anhui, and Zhejiang cuisines. Chinese cuisine is known for its breadth of cooking methods and ingredients. China's staple food is rice in the south and wheat-based breads and noodles in the north. Bean products such as tofu and soy milk remain a popular source of protein. Pork is now the most popular meat in China, accounting for about three-fourths of the country's total meat consumption. There is also the vegetarian Buddhist cuisine and the pork-free Chinese Islamic cuisine. Southern cuisine, due to the area's proximity to the ocean and milder climate, has a wide variety of seafood and vegetables. Offshoots of Chinese food, such as Hong Kong cuisine and American Chinese cuisine, have emerged in the Chinese diaspora.",
"title": "Culture and society"
},
{
"paragraph_id": 115,
"text": "Chinese architecture has developed over millennia in China and has remained a vestigial source of perennial influence on the development of East Asian architecture, including in Japan, Korea, and Mongolia. and minor influences on the architecture of Southeast and South Asia including the countries of Malaysia, Singapore, Indonesia, Sri Lanka, Thailand, Laos, Cambodia, Vietnam and the Philippines.",
"title": "Culture and society"
},
{
"paragraph_id": 116,
"text": "Chinese architecture is characterized by bilateral symmetry, use of enclosed open spaces, feng shui (e.g. directional hierarchies), a horizontal emphasis, and an allusion to various cosmological, mythological or in general symbolic elements. Chinese architecture traditionally classifies structures according to type, ranging from pagodas to palaces.",
"title": "Culture and society"
},
{
"paragraph_id": 117,
"text": "Chinese architecture varies widely based on status or affiliation, such as whether the structures were constructed for emperors, commoners, or for religious purposes. Other variations in Chinese architecture are shown in vernacular styles associated with different geographic regions and different ethnic heritages, such as the stilt houses in the south, the Yaodong buildings in the northwest, the yurt buildings of nomadic people, and the Siheyuan buildings in the north.",
"title": "Culture and society"
},
{
"paragraph_id": 118,
"text": "Chinese music covers a highly diverse range of music from traditional music to modern music. Chinese music dates back before the pre-imperial times. Traditional Chinese musical instruments were traditionally grouped into eight categories known as bayin (八音). Traditional Chinese opera is a form of musical theatre in China originating thousands of years and has regional style forms such as Beijing and Cantonese opera. Chinese pop (C-Pop) includes mandopop and cantopop. Chinese hip hop and Hong Kong hip hop have become popular.",
"title": "Culture and society"
},
{
"paragraph_id": 119,
"text": "Cinema was first introduced to China in 1896 and the first Chinese film, Dingjun Mountain, was released in 1905. China has the largest number of movie screens in the world since 2016; China became the largest cinema market in 2020. The top three highest-grossing films in China as of 2023 were The Battle at Lake Changjin (2021), Wolf Warrior 2 (2017), and Hi, Mom (2021).",
"title": "Culture and society"
},
{
"paragraph_id": 120,
"text": "Hanfu is the historical clothing of the Han people in China. The qipao or cheongsam is a popular Chinese female dress. The hanfu movement has been popular in contemporary times and seeks to revitalize Hanfu clothing.",
"title": "Culture and society"
},
{
"paragraph_id": 121,
"text": "China has one of the oldest sporting cultures. There is evidence that archery (shèjiàn) was practiced during the Western Zhou dynasty. Swordplay (jiànshù) and cuju, a sport loosely related to association football date back to China's early dynasties as well.",
"title": "Culture and society"
},
{
"paragraph_id": 122,
"text": "Physical fitness is widely emphasized in Chinese culture, with morning exercises such as qigong and tai chi widely practiced, and commercial gyms and private fitness clubs are gaining popularity. Basketball is the most popular spectator sport in China. The Chinese Basketball Association and the American National Basketball Association also have a huge national following amongst the Chinese populace, with native-born and NBA-bound Chinese players and well-known national household names such as Yao Ming and Yi Jianlian being held in high esteem. China's professional football league, known as Chinese Super League, is the largest football market in East Asia. Other popular sports include martial arts, table tennis, badminton, swimming and snooker. China is home to a huge number of cyclists, with an estimated 470 million bicycles as of 2012. Many more traditional sports, such as dragon boat racing, Mongolian-style wrestling and horse racing are also popular.",
"title": "Culture and society"
},
{
"paragraph_id": 123,
"text": "China has participated in the Olympic Games since 1932, although it has only participated as the PRC since 1952. China hosted the 2008 Summer Olympics in Beijing, where its athletes received 48 gold medals – the highest number of any participating nation that year. China also won the most medals at the 2012 Summer Paralympics, with 231 overall, including 95 gold. In 2011, Shenzhen hosted the 2011 Summer Universiade. China hosted the 2013 East Asian Games in Tianjin and the 2014 Summer Youth Olympics in Nanjing, the first country to host both regular and Youth Olympics. Beijing and its nearby city Zhangjiakou collaboratively hosted the 2022 Winter Olympics, making Beijing the first dual Olympic city by holding both the Summer Olympics and the Winter Olympics.",
"title": "Culture and society"
},
{
"paragraph_id": 124,
"text": "This article incorporates text from a free content work. Licensed under CC BY-SA IGO 3.0 (license statement/permission). Text taken from World Food and Agriculture – Statistical Yearbook 2023, FAO, FAO.",
"title": "Sources"
},
{
"paragraph_id": 125,
"text": "35°N 103°E / 35°N 103°E / 35; 103",
"title": "External links"
}
] | China, officially the People's Republic of China (PRC), is a country in East Asia. With a population exceeding 1.4 billion, it is the world's second-most-populous country. China spans the equivalent of five time zones and borders fourteen countries by land. With an area of nearly 9.6 million square kilometers (3,700,000 sq mi), it is the third-largest country by total land area. The country is divided into 22 provinces, five autonomous regions, four municipalities, and two semi-autonomous special administrative regions. Beijing is the national capital, while Shanghai is the most populous city and largest financial center. The region has been inhabited since the Paleolithic era. The earliest Chinese dynastic states, such as the Shang and the Zhou, emerged in the basin of the Yellow River before the late second millennium BCE. The eighth to third centuries BCE saw a breakdown in Zhou authority and significant conflict, as well as the emergence of Classical Chinese literature and philosophy. In 221 BCE, China was unified under an emperor for the first time, ushering in more than two millennia in which China was governed by one or more imperial dynasties, including the Han, Tang, Yuan, Ming, and Qing. Some of China's most notable achievements—such as the invention of gunpowder and paper, the establishment of the Silk Road, and the building of the Great Wall—occurred during this period. The imperial Chinese culture—including languages, traditions, architecture, philosophy and more—has heavily influenced East Asia. In 1912, the monarchy was overthrown and the Republic of China was established. The Republic saw consistent conflict for most of the mid-20th century, including a civil war between the Kuomintang government and the Chinese Communist Party (CCP), which began in 1927, as well as the Second Sino-Japanese War that began in 1937 and continued until 1945, therefore becoming involved in World War II. The latter led to a temporary stop in the civil war and numerous Japanese atrocities such as the Nanjing Massacre, which continue to influence China–Japan relations. In 1949, the CCP established control over China as the Kuomintang fled to Taiwan. Early communist rule saw two major projects: the Great Leap Forward, which resulted in a sharp economic decline and massive famine; and the Cultural Revolution, a movement to purge all non-communist elements of Chinese society that led to mass violence and persecution. Beginning in 1978, the Chinese government launched economic reforms that moved the country away from planned economics, but political reforms were cut short by the 1989 Tiananmen Square protests and massacre. Economic reform continued to strengthen the nation's economy in the following decades while raising China's standard of living significantly. China is a unitary one-party socialist republic led by the CCP. It is one of the five permanent members of the UN Security Council and a founding member of several multilateral and regional organizations such as the Asian Infrastructure Investment Bank, the Silk Road Fund, the New Development Bank, and the RCEP. It is a member of the BRICS, the G20, APEC, the SCO, and the East Asia Summit. China ranks poorly in measures of democracy, transparency, and human rights, including for press freedom, religious freedom, and ethnic equality. Making up around one-fifth of the world economy, China is the world's largest economy by GDP at purchasing power parity, the second-largest economy by nominal GDP, and the second-wealthiest country. 
The country is one of the fastest-growing major economies and is the world's largest manufacturer and exporter, as well as the second-largest importer, although its economic growth has slowed greatly in the 2020s. China is a nuclear-weapon state with the world's largest standing army by military personnel and the second-largest defense budget. | 2001-10-23T01:23:25Z | 2023-12-29T01:06:26Z | [
"Template:Britannica",
"Template:Redirect",
"Template:For timeline",
"Template:Wide image",
"Template:Cite conference",
"Template:Main",
"Template:Coord",
"Template:Colorbull",
"Template:In lang",
"Template:Curlie",
"Template:Lang",
"Template:Reflist",
"Template:Cite map",
"Template:Sister project links",
"Template:Infobox country",
"Template:C.",
"Template:Library resources box",
"Template:China topics",
"Template:Multiple image",
"Template:Free-content attribution",
"Template:Short description",
"Template:Pp",
"Template:Rp",
"Template:See also",
"Template:Cite journal",
"Template:Use American English",
"Template:Use dmy dates",
"Template:Convert",
"Template:Cite web",
"Template:Wikiatlas",
"Template:Portal",
"Template:Notelist",
"Template:Cite encyclopedia",
"Template:Cite magazine",
"Template:Linktext",
"Template:Nowrap",
"Template:ISBN",
"Template:Cite tweet",
"Template:Dead link",
"Template:OSM relation",
"Template:Further",
"Template:As of",
"Template:PRC provinces small imagemap/province list",
"Template:Cite news",
"Template:Zh",
"Template:Cvt",
"Template:Navboxes",
"Template:Transliteration",
"Template:PRC provinces big imagemap alt",
"Template:For",
"Template:Nbsp",
"Template:Authority control",
"Template:TOC limit",
"Template:Anchor",
"Template:Webarchive",
"Template:Cbignore",
"Template:Pp-move",
"Template:Efn",
"Template:Most populous cities in the People's Republic of China",
"Template:Cite book",
"Template:Cite CIA World Factbook",
"Template:Cite report"
] | https://en.wikipedia.org/wiki/China |
5,407 | California | California is a state in the Western United States. With over 38.9 million residents across a total area of approximately 163,696 square miles (423,970 km²), it is the most populous U.S. state, the third-largest U.S. state by area, and the most populated subnational entity in North America. California borders Oregon to the north, Nevada and Arizona to the east, and the Mexican state of Baja California to the south; it has a coastline along the Pacific Ocean to the west.
The Greater Los Angeles and San Francisco Bay areas in California are the nation's second and fifth-most populous urban regions respectively. Greater Los Angeles has over 18.7 million residents and the San Francisco Bay Area has over 9.6 million residents. Los Angeles is the state's most populous city and the nation's second-most populous city. San Francisco is the second-most densely populated major city in the country. Los Angeles County is the country's most populous county, and San Bernardino County is the nation's largest county by area. Sacramento is the state's capital.
California's economy is the largest of any state within the United States, with a $3.6 trillion gross state product (GSP) as of 2022. It is the largest sub-national economy in the world. If California were a sovereign nation, it would rank as the world's fifth-largest economy as of 2022, behind India and ahead of the United Kingdom, as well as the 37th most populous. The Greater Los Angeles area and the San Francisco area are the nation's second- and fourth-largest urban economies ($1.0 trillion and $0.6 trillion respectively as of 2020). The San Francisco Bay Area Combined Statistical Area had the nation's highest gross domestic product per capita ($106,757) among large primary statistical areas in 2018, and is home to five of the world's ten largest companies by market capitalization and four of the world's ten richest people. Slightly over 84 percent of the state's residents 25 or older hold a high school degree, the lowest high school education rate of all 50 states.
Prior to European colonization, California was one of the most culturally and linguistically diverse areas in pre-Columbian North America, and the indigenous peoples of California constituted the highest Native American population density north of what is now Mexico. European exploration in the 16th and 17th centuries led to the colonization of California by the Spanish Empire. In 1804, it was included in Alta California province within the Viceroyalty of New Spain. The area became a part of Mexico in 1821, following its successful war for independence, but was ceded to the United States in 1848 after the Mexican–American War. The California Gold Rush started in 1848 and led to dramatic social and demographic changes, including the depopulation of indigenous peoples in the California genocide. The western portion of Alta California was then organized and admitted as the 31st state on September 9, 1850, as a free state, following the Compromise of 1850.
Notable contributions to popular culture, ranging from entertainment, sports, music, and fashion, have their origins in California. The state also has made substantial contributions in the fields of communication, information, innovation, education, environmentalism, entertainment, economics, politics, technology, and religion. California is the home of Hollywood, the oldest and one of the largest film industries in the world, profoundly influencing global entertainment. It is considered the origin of the American film industry, hippie counterculture, beach and car culture, the personal computer, the internet, fast food, diners, burger joints, skateboarding, and the fortune cookie, among other inventions. The San Francisco Bay Area and the Greater Los Angeles Area are widely seen as the centers of the global technology and U.S. film industries, respectively. California's economy is very diverse. California's agricultural industry has the highest output of any U.S. state, and is led by its dairy, almonds, and grapes. With the busiest ports in the country (Los Angeles and Long Beach), California plays a pivotal role in the global supply chain, hauling in about 40% of all goods imported to the United States.
The state's extremely diverse geography ranges from the Pacific Coast and metropolitan areas in the west to the Sierra Nevada mountains in the east, and from the redwood and Douglas fir forests in the northwest to the Mojave Desert in the southeast. Two-thirds of the nation's earthquake risk lies in California. The Central Valley, a fertile agricultural area, dominates the state's center. California is well known for its warm Mediterranean climate along the coast and monsoon seasonal weather inland. The large size of the state results in climates that vary from moist temperate rainforest in the north to arid desert in the interior, as well as snowy alpine in the mountains. Droughts and wildfires are an ongoing issue for the state.
The Spaniards gave the name Las Californias to the peninsula of Baja California and to Alta California, the latter region becoming the present-day state of California.
The name derived from the mythical island of California in the fictional story of Queen Calafia, as recorded in a 1510 work The Adventures of Esplandián by Castilian author Garci Rodríguez de Montalvo. This work was the fifth in a popular Spanish chivalric romance series that began with Amadís de Gaula. Queen Calafia's kingdom was said to be a remote land rich in gold and pearls, inhabited by beautiful Black women who wore gold armor and lived like Amazons, as well as griffins and other strange beasts. In the fictional paradise, the ruler Queen Calafia fought alongside Muslims and her name may have been chosen to echo the Muslim title caliph, used for Muslim leaders.
Know ye that at the right hand of the Indies there is an island called California, very close to that part of the Terrestrial Paradise, which was inhabited by black women without a single man among them, and they lived in the manner of Amazons. They were robust of body with strong passionate hearts and great virtue. The island itself is one of the wildest in the world on account of the bold and craggy rocks.
Official abbreviations of the state's name include CA, Cal., Calif., and US-CA.
California was one of the most culturally and linguistically diverse areas in pre-Columbian North America. Historians generally agree that there were at least 300,000 people living in California prior to European colonization. The indigenous peoples of California included more than 70 distinct ethnic groups, inhabiting environments ranging from mountains and deserts to islands and redwood forests.
Living in these diverse geographic areas, the indigenous peoples developed complex forms of ecosystem management, including forest gardening to ensure the regular availability of food and medicinal plants. This was a form of sustainable agriculture. To mitigate destructive large wildfires from ravaging the natural environment, indigenous peoples developed a practice of controlled burning. This practice was recognized for its benefits by the California government in 2022.
These groups were also diverse in their political organization, with bands, tribes, villages, and, on the resource-rich coasts, large chiefdoms, such as the Chumash, Pomo and Salinan. Trade, intermarriage, craft specialists, and military alliances fostered social and economic relationships between many groups. Although nations would sometimes war, most armed conflicts were between groups of men for vengeance. Acquiring territory was not usually the purpose of these small-scale battles.
Men and women generally had different roles in society. Women were often responsible for weaving, harvesting, processing, and preparing food, while men for hunting and other forms of physical labor. Most societies also had roles for people whom the Spanish referred to as joyas, who they saw as "men who dressed as women". Joyas were responsible for death, burial, and mourning rituals, and they performed women's social roles. Indigenous societies had terms such as two-spirit to refer to them. The Chumash referred to them as 'aqi. The early Spanish settlers detested and sought to eliminate them.
The first Europeans to explore the coast of California were the members of a Spanish maritime expedition led by Portuguese captain Juan Rodríguez Cabrillo in 1542. Cabrillo was commissioned by Antonio de Mendoza, the Viceroy of New Spain, to lead an expedition up the Pacific coast in search of trade opportunities; they entered San Diego Bay on September 28, 1542, and reached at least as far north as San Miguel Island. Privateer and explorer Francis Drake explored and claimed an undefined portion of the California coast in 1579, landing north of the future city of San Francisco. The first Asians to set foot on what would become the United States arrived in 1587, when Filipino sailors landed in Spanish ships at Morro Bay. Coincidentally, descendants of the Muslim Caliph Hasan ibn Ali from formerly Islamic Manila, who had converted to Christianity upon the Spanish conquest, transited through California (whose name may echo the title caliph) on their way to Guerrero, Mexico. Sebastián Vizcaíno explored and mapped the coast of California in 1602 for New Spain, putting ashore in Monterey. Despite the on-the-ground explorations of California in the 16th century, the idea of California as an island persisted. Such depictions appeared on many European maps well into the 18th century.
The Portolá expedition of 1769–70 was a pivotal event in the Spanish colonization of California, resulting in the establishment of numerous missions, presidios, and pueblos. The military and civil contingent of the expedition was led by Gaspar de Portolá, who traveled over land from Sonora into California, while the religious component was headed by Junípero Serra, who came by sea from Baja California. In 1769, Portolá and Serra established Mission San Diego de Alcalá and the Presidio of San Diego, the first religious and military settlements founded by the Spanish in California. By the end of the expedition in 1770, they would establish the Presidio of Monterey and Mission San Carlos Borromeo de Carmelo on Monterey Bay.
After the Portolà expedition, Spanish missionaries led by Father-President Serra set out to establish 21 Spanish missions of California along El Camino Real ("The Royal Road") and along the California coast, 16 sites of which having been chosen during the Portolá expedition. Numerous major cities in California grew out of missions, including San Francisco (Mission San Francisco de Asís), San Diego (Mission San Diego de Alcalá), Ventura (Mission San Buenaventura), or Santa Barbara (Mission Santa Barbara), among others.
Juan Bautista de Anza led a similarly important expedition throughout California in 1775–76, which would extend deeper into the interior and north of California. The Anza expedition selected numerous sites for missions, presidios, and pueblos, which subsequently would be established by settlers. Gabriel Moraga, a member of the expedition, would also christen many of California's prominent rivers with their names in 1775–1776, such as the Sacramento River and the San Joaquin River. After the expedition, Gabriel's son, José Joaquín Moraga, would found the pueblo of San Jose in 1777, making it the first civilian-established city in California.
During this same period, sailors from the Russian Empire explored along the northern coast of California. In 1812, the Russian-American Company established a trading post and small fortification at Fort Ross on the North Coast. Fort Ross was primarily used to supply Russia's Alaskan colonies with food supplies. The settlement did not meet much success, failing to attract settlers or establish long term trade viability, and was abandoned by 1841.
During the War of Mexican Independence, Alta California was largely unaffected and uninvolved in the revolution, though many Californios supported independence from Spain, which many believed had neglected California and limited its development. Spain's trade monopoly on California had limited local trade prospects. Following Mexican independence, California ports were freely able to trade with foreign merchants. Governor Pablo Vicente de Solá presided over the transition from Spanish colonial rule to independent Mexican rule.
In 1821, the Mexican War of Independence gave the Mexican Empire (which included California) independence from Spain. For the next 25 years, Alta California remained a remote, sparsely populated, northwestern administrative district of the newly independent country of Mexico, which shortly after independence became a republic. The missions, which controlled most of the best land in the state, were secularized by 1834 and became the property of the Mexican government. The governor granted many square leagues of land to others with political influence. These huge ranchos or cattle ranches emerged as the dominant institutions of Mexican California. The ranchos developed under ownership by Californios (Hispanics native of California) who traded cowhides and tallow with Boston merchants. Beef did not become a commodity until the 1849 California Gold Rush.
From the 1820s, trappers and settlers from the United States and Canada began to arrive in Northern California. These new arrivals used the Siskiyou Trail, California Trail, Oregon Trail and Old Spanish Trail to cross the rugged mountains and harsh deserts in and surrounding California. The early government of the newly independent Mexico was highly unstable, and in a reflection of this, from 1831 onwards, California also experienced a series of armed disputes, both internal and with the central Mexican government. During this tumultuous political period Juan Bautista Alvarado was able to secure the governorship during 1836–1842. The military action which first brought Alvarado to power had momentarily declared California to be an independent state, and had been aided by Anglo-American residents of California, including Isaac Graham. In 1840, one hundred of those residents who did not have passports were arrested, leading to the Graham Affair, which was resolved in part with the intercession of Royal Navy officials.
One of the largest ranchers in California was John Marsh. After failing to obtain justice against squatters on his land from the Mexican courts, he determined that California should become part of the United States. Marsh conducted a letter-writing campaign espousing the California climate, the soil, and other reasons to settle there, as well as the best route to follow, which became known as "Marsh's route". His letters were read, reread, passed around, and printed in newspapers throughout the country, and started the first wagon trains rolling to California. He invited immigrants to stay on his ranch until they could get settled, and assisted in their obtaining passports.
After ushering in the period of organized emigration to California, Marsh became involved in a military battle between the much-hated Mexican general, Manuel Micheltorena and the California governor he had replaced, Juan Bautista Alvarado. The armies of each met at the Battle of Providencia near Los Angeles. Marsh had been forced against his will to join Micheltorena's army. Ignoring his superiors, during the battle, he signaled the other side for a parley. There were many settlers from the United States fighting on both sides. He convinced each side that they had no reason to be fighting each other. As a result of Marsh's actions, they abandoned the fight, Micheltorena was defeated, and California-born Pio Pico was returned to the governorship. This paved the way to California's ultimate acquisition by the United States.
In 1846, a group of American settlers in and around Sonoma rebelled against Mexican rule during the Bear Flag Revolt. Afterward, rebels raised the Bear Flag (featuring a bear, a star, a red stripe and the words "California Republic") at Sonoma. The Republic's only president was William B. Ide, who played a pivotal role during the Bear Flag Revolt. This revolt by American settlers served as a prelude to the later American military invasion of California and was closely coordinated with nearby American military commanders.
The California Republic was short-lived; the same year marked the outbreak of the Mexican–American War (1846–1848).
Commodore John D. Sloat of the United States Navy sailed into Monterey Bay in 1846 and began the U.S. military invasion of California, with Northern California capitulating in less than a month to the United States forces. In Southern California, Californios continued to resist American forces. Notable military engagements of the conquest include the Battle of San Pasqual and the Battle of Dominguez Rancho in Southern California, as well as the Battle of Olómpali and the Battle of Santa Clara in Northern California. After a series of defensive battles in the south, the Treaty of Cahuenga was signed by the Californios on January 13, 1847, securing a ceasefire and establishing de facto American control in California.
Following the Treaty of Guadalupe Hidalgo (February 2, 1848) that ended the war, the westernmost portion of the annexed Mexican territory of Alta California soon became the American state of California, and the remainder of the old territory was then subdivided into the new American Territories of Arizona, Nevada, Colorado and Utah. The even more lightly populated and arid lower region of old Baja California remained as a part of Mexico. In 1846, the total settler population of the western part of the old Alta California had been estimated to be no more than 8,000, plus about 100,000 Native Americans, down from about 300,000 before Hispanic settlement in 1769.
In 1848, only one week before the official American annexation of the area, gold was discovered in California, an event which was to forever alter both the state's demographics and its finances. Soon afterward, a massive influx of immigration into the area resulted, as prospectors and miners arrived by the thousands. The population burgeoned with United States citizens, Europeans, Middle Easterners, Chinese and other immigrants during the great California Gold Rush. By the time of California's application for statehood in 1850, the settler population of California had multiplied to 100,000. By 1854, more than 300,000 settlers had come. Between 1847 and 1870, the population of San Francisco increased from 500 to 150,000.
The seat of government for California under Spanish and later Mexican rule had been located in Monterey from 1777 until 1845. Pio Pico, the last Mexican governor of Alta California, had briefly moved the capital to Los Angeles in 1845. The United States consulate had also been located in Monterey, under consul Thomas O. Larkin.
In 1849, a state Constitutional Convention was first held in Monterey. Among the first tasks of the convention was a decision on a location for the new state capital. The first full legislative sessions were held in San Jose (1850–1851). Subsequent locations included Vallejo (1852–1853), and nearby Benicia (1853–1854); these locations eventually proved to be inadequate as well. The capital has been located in Sacramento since 1854 with only a short break in 1862 when legislative sessions were held in San Francisco due to flooding in Sacramento. Once the state's Constitutional Convention had finalized its state constitution, it applied to the U.S. Congress for admission to statehood. On September 9, 1850, as part of the Compromise of 1850, California became a free state and September 9 a state holiday.
During the American Civil War (1861–1865), California sent gold shipments eastward to Washington in support of the Union. However, due to the existence of a large contingent of pro-South sympathizers within the state, the state was not able to muster any full military regiments to send eastwards to officially serve in the Union war effort. Still, several smaller military units within the Union army, such as the "California 100 Company", were unofficially associated with the state of California due to a majority of their members being from California.
At the time of California's admission into the Union, travel between California and the rest of the continental United States had been a time-consuming and dangerous feat. Nineteen years later, and seven years after it was greenlighted by President Lincoln, the first transcontinental railroad was completed in 1869. California was then reachable from the eastern States in a week's time.
Much of the state was extremely well suited to fruit cultivation and agriculture in general. Vast expanses of wheat, other cereal crops, vegetable crops, cotton, and nut and fruit trees were grown (including oranges in Southern California), and the foundation was laid for the state's prodigious agricultural production in the Central Valley and elsewhere.
In the nineteenth century, a large number of migrants from China traveled to the state as part of the Gold Rush or to seek work. Even though the Chinese proved indispensable in building the transcontinental railroad from California to Utah, perceived job competition with the Chinese led to anti-Chinese riots in the state, and eventually the US ended migration from China partially as a response to pressure from California with the 1882 Chinese Exclusion Act.
Under earlier Spanish and Mexican rule, California's original native population had precipitously declined, above all, from Eurasian diseases to which the indigenous people of California had not yet developed a natural immunity. Under its new American administration, California's first governor Peter Hardeman Burnett instituted policies that have been described as a state-sanctioned policy of elimination toward California's indigenous people. Burnett announced in 1851 in his Second Annual Message to the Legislature: "That a war of extermination will continue to be waged between the races until the Indian race becomes extinct must be expected. While we cannot anticipate the result with but painful regret, the inevitable destiny of the race is beyond the power and wisdom of man to avert."
As in other American states, indigenous peoples were forcibly removed from their lands by American settlers, like miners, ranchers, and farmers. Although California had entered the American union as a free state, the "loitering or orphaned Indians", were de facto enslaved by their new Anglo-American masters under the 1850 Act for the Government and Protection of Indians. One of these de facto slave auctions was approved by the Los Angeles City Council and occurred for nearly twenty years. There were many massacres in which hundreds of indigenous people were killed by settlers for their land.
Between 1850 and 1860, the California state government paid around 1.5 million dollars (some 250,000 of which was reimbursed by the federal government) to hire militias with the stated purpose of protecting settlers; however, these militias perpetrated numerous massacres of indigenous people. Indigenous people were also forcibly moved to reservations and rancherias, which were often small and isolated and without enough natural resources or funding from the government to adequately sustain the populations living on them. As a result, settler colonialism was a calamity for indigenous people. Several scholars and Native American activists, including Benjamin Madley and Ed Castillo, as well as Gavin Newsom, the 40th governor of California, have described the actions of the California government as a genocide. Benjamin Madley estimates that from 1846 to 1873, between 9,492 and 16,092 indigenous people were killed, including between 1,680 and 3,741 killed by the U.S. Army.
In the twentieth century, thousands of Japanese people migrated to the US and California specifically to attempt to purchase and own land in the state. However, the state in 1913 passed the Alien Land Act, excluding Asian immigrants from owning land. During World War II, Japanese Americans in California were interned in concentration camps such as at Tule Lake and Manzanar. In 2020, California officially apologized for this internment.
Migration to California accelerated during the early 20th century with the completion of major transcontinental highways like the Lincoln Highway and Route 66. In the period from 1900 to 1965, the population grew from fewer than one million to the greatest in the Union. In 1940, the Census Bureau reported California's population as 6.0% Hispanic, 2.4% Asian, and 89.5% non-Hispanic white.
To meet the population's needs, major engineering feats like the California and Los Angeles Aqueducts; the Oroville and Shasta Dams; and the Bay and Golden Gate Bridges were built across the state. The state government also adopted the California Master Plan for Higher Education in 1960 to develop a highly efficient system of public education.
Meanwhile, attracted to the mild Mediterranean climate, cheap land, and the state's wide variety of geography, filmmakers established the studio system in Hollywood in the 1920s. California manufactured 8.7 percent of total United States military armaments produced during World War II, ranking third (behind New York and Michigan) among the 48 states. California, however, easily ranked first in production of military ships during the war (transport and cargo merchant ships such as Liberty ships and Victory ships, as well as warships) at drydock facilities in San Diego, Los Angeles, and the San Francisco Bay Area, which served the naval-heavy Asia–Pacific theater of World War II. Due to the hiring opportunities California offered during the conflict, the state's population grew greatly through immigration drawn by work in its war factories, military bases, and training facilities. After World War II, California's economy greatly expanded due to strong aerospace and defense industries, whose size decreased following the end of the Cold War. Stanford University and its Dean of Engineering Frederick Terman began encouraging faculty and graduates to stay in California instead of leaving the state, and to develop a high-tech region in the area now known as Silicon Valley. As a result of these efforts, California is regarded as a world center of the entertainment and music industries, of technology, engineering, and the aerospace industry, and as the United States center of agricultural production. Just before the dot-com bust, California had the fifth-largest economy in the world among nations.
In the mid and late twentieth century, a number of race-related incidents occurred in the state. Tensions between police and African Americans, combined with unemployment and poverty in inner cities, led to violent riots, such as the 1965 Watts riots and 1992 Rodney King riots. California was also the hub of the Black Panther Party, a group known for arming African Americans to defend against racial injustice and for organizing free breakfast programs for schoolchildren. Additionally, Mexican, Filipino, and other migrant farm workers rallied in the state around Cesar Chavez for better pay in the 1960s and 1970s.
During the 20th century, two great disasters happened in California. The 1906 San Francisco earthquake remains the deadliest earthquake in U.S. history, and the 1928 St. Francis Dam flood ranks among the deadliest disasters in the state's history.
Although air pollution problems have been reduced, health problems associated with pollution have continued. The brown haze known as "smog" has been substantially abated after the passage of federal and state restrictions on automobile exhaust.
An energy crisis in 2001 led to rolling blackouts, soaring power rates, and the importation of electricity from neighboring states. Southern California Edison and Pacific Gas and Electric Company came under heavy criticism.
Housing prices in urban areas continued to increase; a modest home which in the 1960s cost $25,000 would cost half a million dollars or more in urban areas by 2005. More people commuted longer hours to afford a home in more rural areas while earning larger salaries in the urban areas. Speculators bought houses they never intended to live in, expecting to make a huge profit in a matter of months, then rolling it over by buying more properties. Mortgage companies were compliant, as everyone assumed the prices would keep rising. The bubble burst in 2007–8 as housing prices began to crash and the boom years ended. Hundreds of billions in property values vanished and foreclosures soared as many financial institutions and investors were badly hurt.
In the twenty-first century, droughts and frequent wildfires attributed to climate change have occurred in the state. From 2011 to 2017, the state endured a persistent drought, the worst in its recorded history. The 2018 wildfire season was the state's deadliest and most destructive, most notably because of the Camp Fire.
The first confirmed COVID-19 case in California was reported on January 26, 2020; it was among the first confirmed cases in the United States. All of the early confirmed cases were persons who had recently travelled to China, as testing was restricted to this group. On January 29, 2020, as disease containment protocols were still being developed, the U.S. Department of State evacuated 195 persons from Wuhan, China aboard a chartered flight to March Air Reserve Base in Riverside County. On February 5, 2020, the U.S. evacuated 345 more citizens from Hubei Province to two military bases in California, Travis Air Force Base in Solano County and Marine Corps Air Station Miramar, San Diego, where they were quarantined for 14 days. A state of emergency was declared in the state on March 4, 2020, and as of February 24, 2021, remained in effect. A mandatory statewide stay-at-home order was issued on March 19, 2020, and was lifted on January 25, 2021, allowing citizens to return to normal life. On April 6, 2021, the state announced plans to fully reopen the economy by June 15, 2021.
In 2019, the 40th governor of California, Gavin Newsom formally apologized to the indigenous peoples of California for the California genocide: "Genocide. No other way to describe it, and that's the way it needs to be described in the history books." Newsom further acknowledged that "the actions of the state 150 years ago have ongoing ramifications even today." Cultural and language revitalization efforts among indigenous Californians have progressed among several tribes as of 2022. Some land returns to indigenous stewardship have occurred throughout California. In 2022, the largest dam removal and river restoration project in US history was announced for the Klamath River as a win for California tribes.
Covering an area of 163,696 sq mi (423,970 km²), California is the third-largest state in the United States in area, after Alaska and Texas. California is one of the most geographically diverse states in the union and is often geographically bisected into two regions, Southern California, comprising the ten southernmost counties, and Northern California, comprising the 48 northernmost counties. It is bordered by Oregon to the north, Nevada to the east and northeast, Arizona to the southeast, the Pacific Ocean to the west and shares an international border with the Mexican state of Baja California to the south (with which it makes up part of The Californias region of North America, alongside Baja California Sur).
In the middle of the state lies the California Central Valley, bounded by the Sierra Nevada in the east, the coastal mountain ranges in the west, the Cascade Range to the north and by the Tehachapi Mountains in the south. The Central Valley is California's productive agricultural heartland.
Divided in two by the Sacramento-San Joaquin River Delta, the northern portion, the Sacramento Valley serves as the watershed of the Sacramento River, while the southern portion, the San Joaquin Valley is the watershed for the San Joaquin River. Both valleys derive their names from the rivers that flow through them. With dredging, the Sacramento and the San Joaquin Rivers have remained deep enough for several inland cities to be seaports.
The Sacramento-San Joaquin River Delta is a critical water supply hub for the state. Water is diverted from the delta and through an extensive network of pumps and canals that traverse nearly the length of the state, to the Central Valley and the State Water Projects and other needs. Water from the Delta provides drinking water for nearly 23 million people, almost two-thirds of the state's population as well as water for farmers on the west side of the San Joaquin Valley.
Suisun Bay lies at the confluence of the Sacramento and San Joaquin Rivers. The water is drained by the Carquinez Strait, which flows into San Pablo Bay, a northern extension of San Francisco Bay, which then connects to the Pacific Ocean via the Golden Gate strait.
The Channel Islands are located off the Southern coast, while the Farallon Islands lie west of San Francisco.
The Sierra Nevada (Spanish for "snowy range") includes the highest peak in the contiguous 48 states, Mount Whitney, at 14,505 feet (4,421 m). The range embraces Yosemite Valley, famous for its glacially carved domes, and Sequoia National Park, home to the giant sequoia trees, the largest living organisms on Earth, and the deep freshwater lake, Lake Tahoe, the largest lake in the state by volume.
To the east of the Sierra Nevada are Owens Valley and Mono Lake, an essential migratory bird habitat. In the western part of the state is Clear Lake, the largest freshwater lake by area entirely in California. Although Lake Tahoe is larger, it is divided by the California/Nevada border. The Sierra Nevada falls to Arctic temperatures in winter and has several dozen small glaciers, including Palisade Glacier, the southernmost glacier in the United States.
The Tulare Lake was the largest freshwater lake west of the Mississippi River. A remnant of Pleistocene-era Lake Corcoran, Tulare Lake dried up by the early 20th century after its tributary rivers were diverted for agricultural irrigation and municipal water uses.
About 45 percent of the state's total surface area is covered by forests, and California's diversity of pine species is unmatched by any other state. California contains more forestland than any other state except Alaska. Many of the trees in the California White Mountains are the oldest in the world; an individual bristlecone pine is over 5,000 years old.
In the south is a large inland salt lake, the Salton Sea. The south-central desert is called the Mojave; to the northeast of the Mojave lies Death Valley, which contains the lowest and hottest place in North America, the Badwater Basin at −279 feet (−85 m). The horizontal distance from the bottom of Death Valley to the top of Mount Whitney is less than 90 miles (140 km). Indeed, almost all of southeastern California is arid, hot desert, with routine extreme high temperatures during the summer. The southeastern border of California with Arizona is entirely formed by the Colorado River, from which the southern part of the state gets about half of its water.
A majority of California's cities are located in either the San Francisco Bay Area or the Sacramento metropolitan area in Northern California; or the Los Angeles area, the Inland Empire, or the San Diego metropolitan area in Southern California. The Los Angeles Area, the Bay Area, and the San Diego metropolitan area are among several major metropolitan areas along the California coast.
As part of the Ring of Fire, California is subject to tsunamis, floods, droughts, Santa Ana winds, wildfires, and landslides on steep terrain; California also has several volcanoes. It has many earthquakes due to several faults running through the state, the largest being the San Andreas Fault. About 37,000 earthquakes are recorded each year; most are too small to be felt, but two-thirds of the human risk from earthquakes lies in California.
Most of the state has a Mediterranean climate. The cool California Current offshore often creates summer fog near the coast. Farther inland, there are colder winters and hotter summers. The maritime moderation results in the shoreline summertime temperatures of Los Angeles and San Francisco being the coolest of all major metropolitan areas of the United States and uniquely cool compared to areas on the same latitude in the interior and on the east coast of the North American continent. Even the San Diego shoreline bordering Mexico is cooler in summer than most areas in the contiguous United States. Just a few miles inland, summer temperature extremes are significantly higher, with downtown Los Angeles being several degrees warmer than at the coast. The same microclimate phenomenon is seen in the climate of the Bay Area, where areas sheltered from the ocean experience significantly hotter summers and colder winters in contrast with nearby areas closer to the ocean.
Northern parts of the state have more rain than the south. California's mountain ranges also influence the climate: some of the rainiest parts of the state are west-facing mountain slopes. Coastal northwestern California has a temperate climate, and the Central Valley has a Mediterranean climate but with greater temperature extremes than the coast. The high mountains, including the Sierra Nevada, have an alpine climate with snow in winter and mild to moderate heat in summer.
California's mountains produce rain shadows on the eastern side, creating extensive deserts. The higher elevation deserts of eastern California have hot summers and cold winters, while the low deserts east of the Southern California mountains have hot summers and nearly frostless mild winters. Death Valley, a desert with large expanses below sea level, is considered the hottest location in the world; the highest temperature in the world, 134 °F (56.7 °C), was recorded there on July 10, 1913. The lowest temperature in California was −45 °F (−43 °C) on January 20, 1937, in Boca.
The table below lists average temperatures for January and August in a selection of places throughout the state; some highly populated and some not. This includes the relatively cool summers of the Humboldt Bay region around Eureka, the extreme heat of Death Valley, and the mountain climate of Mammoth in the Sierra Nevada.
The wide range of climates leads to a high demand for water. Over time, droughts have been increasing due to climate change and overextraction, becoming less seasonal and more year-round, further straining California's electricity supply and water security and having an impact on California business, industry, and agriculture.
In 2022, a new state program was created in collaboration with indigenous peoples of California to revive the practice of controlled burns as a way of clearing excessive forest debris and making landscapes more resilient to wildfires. Native American use of fire in ecosystem management was outlawed in 1911 but has since been recognized for its benefits.
California is one of the ecologically richest and most diverse parts of the world, and includes some of the most endangered ecological communities. California is part of the Nearctic realm and spans a number of terrestrial ecoregions.
California's large number of endemic species includes relict species, which have died out elsewhere, such as the Catalina ironwood (Lyonothamnus floribundus). Many other endemics originated through differentiation or adaptive radiation, whereby multiple species develop from a common ancestor to take advantage of diverse ecological conditions such as the California lilac (Ceanothus). Many California endemics have become endangered, as urbanization, logging, overgrazing, and the introduction of exotic species have encroached on their habitat.
California boasts several superlatives in its collection of flora: the largest trees, the tallest trees, and the oldest trees. California's native grasses are perennial plants, and there are close to a hundred succulent species native to the state. After European contact, these grasses were generally replaced by invasive species of European annual grasses, and in modern times California's hills turn a characteristic golden-brown in summer.
Because California has the greatest diversity of climate and terrain, the state has six life zones: the lower Sonoran (desert); the upper Sonoran (foothill regions and some coastal lands); the transition zone (coastal areas and moist northeastern counties); and the Canadian, Hudsonian, and Arctic zones, comprising the state's highest elevations.
Plant life in the dry climate of the lower Sonoran zone contains a diversity of native cactus, mesquite, and paloverde. The Joshua tree is found in the Mojave Desert. Flowering plants include the dwarf desert poppy and a variety of asters. Fremont cottonwood and valley oak thrive in the Central Valley. The upper Sonoran zone includes the chaparral belt, characterized by forests of small shrubs, stunted trees, and herbaceous plants. Nemophila, mint, Phacelia, Viola, and the California poppy (Eschscholzia californica, the state flower) also flourish in this zone, along with the lupine, more species of which occur here than anywhere else in the world.
The transition zone includes most of California's forests, with the redwood (Sequoia sempervirens) and the "big tree" or giant sequoia (Sequoiadendron giganteum), among the oldest living things on earth (some are said to have lived at least 4,000 years). Tanbark oak, California laurel, sugar pine, madrona, broad-leaved maple, and Douglas-fir also grow here. Forest floors are covered with sword fern, alumroot, barrenwort, and trillium, and there are thickets of huckleberry, azalea, elder, and wild currant. Characteristic wild flowers include varieties of mariposa, tulip, and tiger and leopard lilies.
The high elevations of the Canadian zone allow the Jeffrey pine, red fir, and lodgepole pine to thrive. Brushy areas are abundant with dwarf manzanita and ceanothus; the unique Sierra puffball is also found here. Right below the timberline, in the Hudsonian zone, the whitebark, foxtail, and silver pines grow. At about 10,500 feet (3,200 m), begins the Arctic zone, a treeless region whose flora include a number of wildflowers, including Sierra primrose, yellow columbine, alpine buttercup, and alpine shooting star.
Palm trees are a well-known feature of California, particularly in Southern California and Los Angeles; many species have been imported, though the Washingtonia filifera (commonly known as the California fan palm) is native to the state, mainly growing in the Colorado Desert oases. Other common plants that have been introduced to the state include the eucalyptus, acacia, pepper tree, geranium, and Scotch broom. The species that are federally classified as endangered are the Contra Costa wallflower, Antioch Dunes evening primrose, Solano grass, San Clemente Island larkspur, salt marsh bird's beak, McDonald's rock-cress, and Santa Barbara Island liveforever. As of December 1997, 85 plant species were listed as threatened or endangered.
In the deserts of the lower Sonoran zone, the mammals include the jackrabbit, kangaroo rat, squirrel, and opossum. Common birds include the owl, roadrunner, cactus wren, and various species of hawk. The area's reptilian life include the sidewinder viper, desert tortoise, and horned toad. The upper Sonoran zone boasts mammals such as the antelope, brown-footed woodrat, and ring-tailed cat. Birds unique to this zone are the California thrasher, bushtit, and California condor.
In the transition zone, there are Colombian black-tailed deer, black bears, gray foxes, cougars, bobcats, and Roosevelt elk. Reptiles such as the garter snakes and rattlesnakes inhabit the zone. In addition, amphibians such as the water puppy and redwood salamander are common too. Birds such as the kingfisher, chickadee, towhee, and hummingbird thrive here as well.
The Canadian zone mammals include the mountain weasel, snowshoe hare, and several species of chipmunks. Conspicuous birds include the blue-fronted jay, mountain chickadee, hermit thrush, American dipper, and Townsend's solitaire. As one ascends into the Hudsonian zone, birds become scarcer. While the gray-crowned rosy finch is the only bird native to the high Arctic region, other bird species such as Anna's hummingbird and Clark's nutcracker are also found there. Principal mammals found in this region include the Sierra coney, white-tailed jackrabbit, and the bighorn sheep. As of April 2003, the bighorn sheep was listed as endangered by the U.S. Fish and Wildlife Service. The fauna found throughout several zones are the mule deer, coyote, mountain lion, northern flicker, and several species of hawk and sparrow.
Aquatic life in California thrives, from the state's mountain lakes and streams to the rocky Pacific coastline. Numerous trout species are found, among them rainbow, golden, and cutthroat. Migratory species of salmon are common as well. Deep-sea life forms include sea bass, yellowfin tuna, barracuda, and several types of whale. Native to the cliffs of northern California are seals, sea lions, and many types of shorebirds, including migratory species.
As of April 2003, 118 California animals were on the federal endangered list; 181 plants were listed as endangered or threatened. Endangered animals include the San Joaquin kit fox, Point Arena mountain beaver, Pacific pocket mouse, salt marsh harvest mouse, Morro Bay kangaroo rat (and five other species of kangaroo rat), Amargosa vole, California least tern, California condor, loggerhead shrike, San Clemente sage sparrow, San Francisco garter snake, five species of salamander, three species of chub, and two species of pupfish. Eleven butterfly species are also listed as endangered, and two as threatened. Among threatened animals are the coastal California gnatcatcher, Paiute cutthroat trout, southern sea otter, and northern spotted owl. California has a total of 290,821 acres (1,176.91 km2) of National Wildlife Refuges. As of September 2010, 123 California animals were listed as either endangered or threatened on the federal list. Also, as of the same year, 178 species of California plants were listed either as endangered or threatened on this federal list.
The most prominent river system within California is formed by the Sacramento River and San Joaquin River, which are fed mostly by snowmelt from the west slope of the Sierra Nevada, and respectively drain the north and south halves of the Central Valley. The two rivers join in the Sacramento–San Joaquin River Delta, flowing into the Pacific Ocean through San Francisco Bay. Many major tributaries feed into the Sacramento–San Joaquin system, including the Pit River, Feather River and Tuolumne River.
The Klamath and Trinity Rivers drain a large area in far northwestern California. The Eel River and Salinas River each drain portions of the California coast, north and south of San Francisco Bay, respectively. The Mojave River is the primary watercourse in the Mojave Desert, and the Santa Ana River drains much of the Transverse Ranges as it bisects Southern California. The Colorado River forms the state's southeast border with Arizona.
Most of California's major rivers are dammed as part of two massive water projects: the Central Valley Project, providing water for agriculture in the Central Valley, and the California State Water Project, which diverts water from Northern California to Southern California. The state's coasts, rivers, and other bodies of water are regulated by the California Coastal Commission.
California is traditionally separated into Northern California and Southern California, divided by a straight border which runs across the state, separating the northern 48 counties from the southern 10 counties. Despite the persistence of the northern-southern divide, California is more precisely divided into many regions, several of which stretch across the northern-southern divide.
The state has 482 incorporated cities and towns, of which 460 are cities and 22 are towns. Under California law, the terms "city" and "town" are explicitly interchangeable; the name of an incorporated municipality in the state can either be "City of (Name)" or "Town of (Name)".
Sacramento became California's first incorporated city on February 27, 1850. San Jose, San Diego, and Benicia tied for California's second incorporated city, each receiving incorporation on March 27, 1850. Jurupa Valley became the state's most recent and 482nd incorporated municipality, on July 1, 2011.
The majority of these cities and towns are within one of five metropolitan areas: the Los Angeles Metropolitan Area, the San Francisco Bay Area, the Riverside-San Bernardino Area, the San Diego metropolitan area, or the Sacramento metropolitan area.
Nearly one out of every eight Americans lives in California. The United States Census Bureau reported that the population of California was 39,538,223 on April 1, 2020, a 6.13% increase since the 2010 census. The estimated state population in 2022 was 39.22 million. For over a century (1900–2020), California experienced steady population growth, adding an average of more than 300,000 people per year from 1940 onward. California's rate of growth began to slow by the 1990s, although it continued to experience population growth in the first two decades of the 21st century. The state experienced population declines in 2020 and 2021, attributable to declining birth rates, COVID-19 pandemic deaths, and less internal migration from other states to California. According to the U.S. Census Bureau, between 2021 and 2022, 818,000 California residents moved out of state with emigrants listing high cost of living, high taxes, and a difficult business environment as the motivation.
The Greater Los Angeles Area is the second-largest metropolitan area in the United States (U.S.), while Los Angeles is the second-largest city in the U.S. Conversely, San Francisco is the most densely populated city in California and one of the most densely populated cities in the U.S. Also, Los Angeles County has held the title of most populous U.S. county for decades, and it alone is more populous than 42 U.S. states. Including Los Angeles, four of the top 20 most populous cities in the U.S. are in California: Los Angeles (2nd), San Diego (8th), San Jose (10th), and San Francisco (17th). The center of population of California is located four miles west-southwest of the city of Shafter, Kern County.
As of 2019, California ranked second among states by life expectancy, with a life expectancy of 80.9 years.
Starting in the year 2010, for the first time since the California Gold Rush, California-born residents made up the majority of the state's population. Along with the rest of the United States, California's immigration pattern has also shifted over the course of the late 2000s to early 2010s. Immigration from Latin American countries has dropped significantly with most immigrants now coming from Asia. In total for 2011, there were 277,304 immigrants. Fifty-seven percent came from Asian countries versus 22% from Latin American countries. Net immigration from Mexico, previously the most common country of origin for new immigrants, has dropped to zero or below, since more Mexican nationals are departing for their home country than immigrating.
The state's population of undocumented immigrants has been shrinking in recent years, due to increased enforcement and decreased job opportunities for lower-skilled workers. The number of migrants arrested attempting to cross the Mexican border in the Southwest decreased from a high of 1.1 million in 2005 to 367,000 in 2011. Despite these recent trends, illegal aliens constituted an estimated 7.3 percent of the state's population, the third highest percentage of any state in the country, totaling nearly 2.6 million. In particular, illegal immigrants tended to be concentrated in Los Angeles, Monterey, San Benito, Imperial, and Napa Counties—the latter four of which have significant agricultural industries that depend on manual labor. More than half of illegal immigrants originate from Mexico. The state of California and some California cities, including Los Angeles, Oakland and San Francisco, have adopted sanctuary policies.
According to HUD's 2022 Annual Homeless Assessment Report, there were an estimated 171,521 homeless people in California.
According to the United States Census Bureau, in 2018 the population self-identified as (alone or in combination): 72.1% White (including Hispanic Whites), 36.8% non-Hispanic whites, 15.3% Asian, 6.5% Black or African American, 1.6% Native American and Alaska Native, 0.5% Native Hawaiian or Pacific Islander, and 3.9% two or more races.
By ethnicity, in 2018 the population was 60.7% non-Hispanic (of any race) and 39.3% Hispanic or Latino (of any race). Hispanics are the largest single ethnic group in California. Non-Hispanic whites constituted 36.8% of the state's population. Californios are the Hispanic residents native to California, who make up the Spanish-speaking community that has existed in California since 1542, of varying Mexican American/Chicano, Criollo Spaniard, and Mestizo origin.
As of 2011, 75.1% of California's population younger than age 1 were minorities, meaning they had at least one parent who was not non-Hispanic white (white Hispanics are counted as minorities).
In terms of total numbers, California has the largest population of White Americans in the United States, an estimated 22,200,000 residents. The state has the 5th largest population of African Americans in the United States, an estimated 2,250,000 residents. California's Asian American population is estimated at 4.4 million, constituting a third of the nation's total. California's Native American population of 285,000 is the most of any state.
According to estimates from 2011, California has the largest minority population in the United States by numbers, making up 60% of the state population. Over the past 25 years, the population of non-Hispanic whites has declined, while Hispanic and Asian populations have grown. Between 1970 and 2011, non-Hispanic whites declined from 80% of the state's population to 40%, while Hispanics grew from 32% in 2000 to 38% in 2011. It is currently projected that Hispanics will rise to 49% of the population by 2060, primarily due to domestic births rather than immigration. With the decline of immigration from Latin America, Asian Americans now constitute the fastest growing racial/ethnic group in California; this growth is primarily driven by immigration from China, India, and the Philippines.
Most of California's immigrant population was born in Mexico (3.9 million), the Philippines (825,200), China (768,400), India (556,500), and Vietnam (502,600).
California has the largest multiracial population in the United States. California has the highest rate of interracial marriage.
English serves as California's de jure and de facto official language. According to the 2021 American Community Survey conducted by the United States Census Bureau, 56.08% (20,763,638) of California residents age 5 and older spoke only English at home, while 43.92% spoke another language at home. 60.35% of people who speak a language other than English at home are able to speak English "well" or "very well", with this figure varying significantly across the different linguistic groups. Like most U.S. states (32 out of 50), California enshrines English as its official language in law, and has done so since the passage of Proposition 63 by California voters in 1986. Various government agencies do, and are often required to, furnish documents in the various languages needed to reach their intended audiences.
Spanish is the most commonly spoken language in California, behind English, spoken by 28.18% (10,434,308) of the population (in 2021). The Spanish language has been spoken in California since 1542 and is deeply intertwined with California's cultural landscape and history. Spanish was the official administrative language of California through the Spanish and Mexican eras, until 1848. Following the U.S. conquest of California and the Treaty of Guadalupe Hidalgo, the U.S. government guaranteed the rights of Spanish-speaking Californians. The first Constitution of California was written in both languages at the Monterey Constitutional Convention of 1849; it protected the rights of Spanish speakers to use their language in government proceedings and mandated that all government documents be published in both English and Spanish.
Despite the initial recognition of Spanish by early American governments in California, the revised 1879 constitution stripped the rights of Spanish speakers and the official status of Spanish. The growth of the English-only movement by the mid-20th century led to the passage of 1986 California Proposition 63, which enshrined English as the only official language in California, and of 1998 California Proposition 227, which ended most Spanish-language instruction in schools. 2016 California Proposition 58 reversed the prohibition on bilingual education, though there are still many barriers to the proliferation of Spanish bilingual education, including a shortage of teachers and lack of funding. The government of California has since made efforts to promote Spanish language access and bilingual education, as have private educational institutions in California. Many businesses in California promote the usage of Spanish by their employees, to better serve both California's Hispanic population and the larger Spanish-speaking world.
California has historically been one of the most linguistically diverse areas in the world, with more than 70 indigenous languages derived from 64 root languages in six language families. A survey conducted between 2007 and 2009 identified 23 different indigenous languages among California farmworkers. All of California's indigenous languages are endangered, although there are now efforts toward language revitalization. California has the highest concentration nationwide of Chinese, Vietnamese and Punjabi speakers.
As a result of the state's increasing diversity and migration from other areas across the country and around the globe, linguists began noticing a noteworthy set of emerging characteristics of spoken American English in California since the late 20th century. This variety, known as California English, has a vowel shift and several other phonological processes that are different from varieties of American English used in other regions of the United States.
The Public Religion Research Institute's 2021 American Values Survey also reports Californians' religious self-identification.
The largest religious denominations by number of adherents as a percentage of California's population in 2014 were the Catholic Church with 28 percent, Evangelical Protestants with 20 percent, and Mainline Protestants with 10 percent. Together, all kinds of Protestants accounted for 32 percent. Those unaffiliated with any religion represented 27 percent of the population. The breakdown of other religions is 1% Muslim, 2% Hindu and 2% Buddhist. This is a change from 2008, when the population identified their religion with the Catholic Church with 31 percent; Evangelical Protestants with 18 percent; and Mainline Protestants with 14 percent. In 2008, those unaffiliated with any religion represented 21 percent of the population. The breakdown of other religions in 2008 was 0.5% Muslim, 1% Hindu and 2% Buddhist. The American Jewish Year Book placed the total Jewish population of California at about 1,194,190 in 2006. According to the Association of Religion Data Archives (ARDA) the largest denominations by adherents in 2010 were the Catholic Church with 10,233,334; The Church of Jesus Christ of Latter-day Saints with 763,818; and the Southern Baptist Convention with 489,953.
The first priests to come to California were Catholic missionaries from Spain. Catholics founded 21 missions along the California coast, as well as the cities of Los Angeles and San Francisco. California continues to have a large Catholic population due to the large numbers of Mexicans and Central Americans living within its borders. California has twelve dioceses, including two archdioceses, the Archdiocese of Los Angeles and the Archdiocese of San Francisco, the former being the largest archdiocese in the United States.
A Pew Research Center survey revealed that California is somewhat less religious than the rest of the states: 62 percent of Californians say they are "absolutely certain" of their belief in God, while in the nation 71 percent say so. The survey also revealed 48 percent of Californians say religion is "very important", compared to 56 percent nationally.
The culture of California is a Western culture that most clearly has its modern roots in the culture of the United States, but it also draws, historically, on Hispanic Californio and Mexican influences. As a border and coastal state, California culture has been greatly influenced by several large immigrant populations, especially those from Latin America and Asia.
California has long been a subject of interest in the public mind and has often been promoted by its boosters as a kind of paradise. In the early 20th century, fueled by the efforts of state and local boosters, many Americans saw the Golden State as an ideal resort destination, sunny and dry all year round with easy access to the ocean and mountains. In the 1960s, popular music groups such as the Beach Boys promoted the image of Californians as laid-back, tanned beach-goers.
The California Gold Rush of the 1850s is still seen as a symbol of California's economic style, which tends to generate technology, social, entertainment, and economic fads and booms and related busts.
Hollywood and the rest of the Los Angeles area is a major global center for entertainment, with the U.S. film industry's "Big Five" major film studios (Columbia, Disney, Paramount, Universal, and Warner Bros.) as well as many minor film studios being based in or around the area. Many animation studios are also headquartered in the state.
The four major American commercial broadcast television networks (ABC, CBS, NBC, and Fox), as well as other networks, all have production facilities and offices in the state. All four of the major commercial broadcast networks, plus the two major Spanish-language networks (Telemundo and Univision), each have at least three owned-and-operated TV stations in California, including at least one in Los Angeles and at least one in San Francisco.
One of the oldest radio stations in the United States still in existence, KCBS (AM) in the San Francisco Bay Area, was founded in 1909. Universal Music Group, one of the "Big Four" record labels, is based in Santa Monica, while Warner Records is based in Los Angeles. Many independent record labels, such as Mind of a Genius Records, are also headquartered in the state. California is also the birthplace of several international music genres, including the Bakersfield sound, Bay Area thrash metal, alternative rock, g-funk, nu metal, glam metal, thrash metal, psychedelic rock, stoner rock, punk rock, hardcore punk, metalcore, pop punk, surf music, third wave ska, west coast hip hop, west coast jazz, jazz rap, and many other genres. Other genres such as pop rock, indie rock, hard rock, hip hop, pop, rock, rockabilly, country, heavy metal, grunge, new wave and disco were popularized in the state. In addition, many British bands, such as Led Zeppelin, Deep Purple, Black Sabbath, and the Rolling Stones settled in the state after becoming internationally famous.
As the home of Silicon Valley, the Bay Area is the headquarters of several prominent internet media, social media, and other technology companies. Three of the "Big Five" technology companies (Apple, Meta, and Google) are based in the area, as are other services such as Netflix, Pandora Radio, Twitter, Yahoo!, and YouTube. Other prominent companies that are headquartered here include HP Inc. and Intel. Microsoft and Amazon also have offices in the area.
California, particularly Southern California, is considered the birthplace of modern car culture.
Several fast food, fast casual, and casual dining chains were also founded in California, including some that have since expanded internationally, such as California Pizza Kitchen, Denny's, IHOP, McDonald's, Panda Express, and Taco Bell.
California has nineteen major professional sports league franchises, far more than any other state. The San Francisco Bay Area has six major league teams spread across its three major cities: San Francisco, San Jose, and Oakland, while the Greater Los Angeles Area is home to ten major league franchises. San Diego and Sacramento each have one major league team. The NFL Super Bowl has been hosted in California 12 times at five different stadiums: Los Angeles Memorial Coliseum, the Rose Bowl, Stanford Stadium, Levi's Stadium, and San Diego's Qualcomm Stadium. A thirteenth, Super Bowl LVI, was held at SoFi Stadium in Inglewood on February 13, 2022.
California has long had many respected collegiate sports programs. California is home to the oldest college bowl game, the annual Rose Bowl, among others.
The NFL has three teams in the state: the Los Angeles Rams, Los Angeles Chargers, and San Francisco 49ers.
MLB has five teams in the state: the San Francisco Giants, Oakland Athletics, Los Angeles Dodgers, Los Angeles Angels, and San Diego Padres.
The NBA has four teams in the state: the Golden State Warriors, Los Angeles Clippers, Los Angeles Lakers, and Sacramento Kings. Additionally, the WNBA also has one team in the state: the Los Angeles Sparks.
The NHL has three teams in the state: the Anaheim Ducks, Los Angeles Kings, and San Jose Sharks.
MLS has three teams in the state: the Los Angeles Galaxy, San Jose Earthquakes, and Los Angeles Football Club.
MLR has one team in the state: the San Diego Legion.
California is the only U.S. state to have hosted both the Summer and Winter Olympics. The 1932 and 1984 summer games were held in Los Angeles. Squaw Valley Ski Resort (now Palisades Tahoe) in the Lake Tahoe region hosted the 1960 Winter Olympics. Los Angeles will host the 2028 Summer Olympics, marking the fourth time that California will have hosted the Olympic Games. Multiple games during the 1994 FIFA World Cup took place in California, with the Rose Bowl hosting eight matches (including the final), while Stanford Stadium hosted six matches.
In addition to the Olympic games, California also hosts the California State Games.
Many sports, such as surfing, snowboarding, and skateboarding, were invented in California, while others like volleyball, beach soccer, and skiing were popularized in the state.
Other sports that are big in the state include golf, rodeo, tennis, mountain climbing, marathon running, horse racing, bowling, mixed martial arts, boxing, and motorsports, especially NASCAR and Formula One.
California has the most school students in the country, with over 6.2 million in the 2005–06 school year, giving California more students in school than 36 states have in total population and one of the highest projected enrollments in the country. Public secondary education consists of high schools that teach elective courses in trades, languages, and liberal arts with tracks for gifted, college-bound and industrial arts students. California's public educational system is supported by a unique constitutional amendment that requires a minimum annual funding level for grades K–12 and community colleges that grows with the economy and student enrollment figures.
In 2016, California's K–12 public school per-pupil spending was ranked 22nd in the nation ($11,500 per student vs. $11,800 for the U.S. average).
For 2012, California's K–12 public schools ranked 48th in the number of employees per student, at 0.102 (the U.S. average was 0.137), while paying the 7th most per employee, $49,000 (the U.S. average was $39,000).
A 2007 study concluded that California's public school system was "broken" in that it suffered from overregulation.
California public postsecondary education is organized into three separate systems: the University of California, the California State University, and the California Community Colleges.
California is also home to notable private universities such as Stanford University, the California Institute of Technology (Caltech), the University of Southern California, the Claremont Colleges, Santa Clara University, Loyola Marymount University, the University of San Diego, the University of San Francisco, Chapman University, Pepperdine University, Occidental College, and University of the Pacific, among numerous other private colleges and universities, including many religious and special-purpose institutions. California has a particularly high density of arts colleges, including the California College of the Arts, California Institute of the Arts, San Francisco Art Institute, Art Center College of Design, and Academy of Art University, among others.
California's economy ranks among the largest in the world. As of 2022, the gross state product (GSP) was $3.6 trillion ($92,190 per capita), the largest in the United States. California is responsible for one seventh of the nation's gross domestic product (GDP). As of 2018, California's nominal GDP is larger than that of all but four countries (the United States, China, Japan, and Germany). In terms of purchasing power parity (PPP), it is larger than that of all but eight countries (the United States, China, India, Japan, Germany, Russia, Brazil, and Indonesia). California's economy is larger than those of Africa and Australia and is almost as large as that of South America. The state recorded total, non-farm employment of 16,677,800 as of September 2021 among 966,224 employer establishments.
As the largest and second-largest U.S. ports respectively, the Port of Los Angeles and the Port of Long Beach in Southern California collectively play a pivotal role in the global supply chain, together hauling in about 40% of all imports to the United States by TEU volume. The Port of Oakland and Port of Hueneme are the 10th and 26th largest seaports in the U.S., respectively, by number of TEUs handled.
The five largest sectors of employment in California are trade, transportation, and utilities; government; professional and business services; education and health services; and leisure and hospitality. In output, the five largest sectors are financial services, followed by trade, transportation, and utilities; education and health services; government; and manufacturing. California has an unemployment rate of 3.9% as of September 2022.
California's economy is dependent on trade, and internationally related commerce accounts for about one-quarter of the state's economy. In 2008, California exported $144 billion worth of goods, up from $134 billion in 2007 and $127 billion in 2006. Computers and electronic products are California's top export, accounting for 42 percent of all the state's exports in 2008.
Agriculture is an important sector in California's economy. According to the USDA in 2011, the three largest California agricultural products by value were milk and cream, shelled almonds, and grapes. Farming-related sales more than quadrupled over three decades, from $7.3 billion in 1974 to nearly $31 billion in 2004. This increase occurred despite a 15 percent decline in acreage devoted to farming during the period, and despite a water supply suffering from chronic instability. Factors contributing to the growth in sales-per-acre include more intensive use of active farmlands and technological improvements in crop production. In 2008, California's 81,500 farms and ranches generated $36.2 billion in products revenue. In 2011, that number grew to $43.5 billion. The agriculture sector accounts for two percent of the state's GDP and employs around three percent of its total workforce.
Per capita GDP in 2007 was $38,956, ranking eleventh in the nation. Per capita income varies widely by geographic region and profession. The Central Valley is the most impoverished, with migrant farm workers making less than minimum wage. According to a 2005 report by the Congressional Research Service, the San Joaquin Valley was characterized as one of the most economically depressed regions in the United States, on par with the region of Appalachia.
Using the supplemental poverty measure, California has a poverty rate of 23.5%, the highest of any state in the country. However, using the official measure the poverty rate was only 13.3% as of 2017. Many coastal cities include some of the wealthiest per-capita areas in the United States. The high-technology sectors in Northern California, specifically Silicon Valley, in Santa Clara and San Mateo counties, have emerged from the economic downturn caused by the dot-com bust.
In 2019, there were 1,042,027 millionaire households in the state, more than any other state in the nation. In 2010, California residents were ranked first among the states with the best average credit score of 754.
State spending increased from $56 billion in 1998 to $127 billion in 2011. California has the third highest per capita spending on welfare among the states, as well as the highest spending on welfare at $6.67 billion. In January 2011, California's total debt was at least $265 billion. On June 27, 2013, Governor Jerry Brown signed a balanced budget (no deficit) for the state, its first in decades; however, the state's debt remains at $132 billion.
With the passage of Proposition 30 in 2012 and Proposition 55 in 2016, California now levies a 13.3% maximum marginal income tax rate with ten tax brackets, ranging from 1% at the bottom tax bracket of $0 annual individual income to 13.3% for annual individual income over $1,000,000 (though the top brackets are only temporary until Proposition 55 expires at the end of 2030). While Proposition 30 also enacted a minimum state sales tax of 7.5%, this sales tax increase was not extended by Proposition 55 and reverted to a previous minimum state sales tax rate of 7.25% in 2017. Local governments can and do levy additional sales taxes in addition to this minimum rate.
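To make the marginal-rate arithmetic concrete, the sketch below shows how a bracketed income tax is computed: each slice of income is taxed at the rate of the bracket it falls into, so the 13.3% rate applies only to dollars above the $1,000,000 threshold. Only the 1% bottom rate and 13.3% top rate come from the figures above; the intermediate thresholds and rates are hypothetical placeholders, not California's actual schedule.

```python
# Minimal sketch of marginal-bracket income tax arithmetic.
# Only the bottom (1%) and top (13.3% above $1,000,000) rates come from the text;
# the intermediate thresholds and rates are hypothetical placeholders.
ILLUSTRATIVE_BRACKETS = [
    (0,         0.010),   # 1% bottom rate
    (50_000,    0.040),   # hypothetical
    (300_000,   0.093),   # hypothetical
    (1_000_000, 0.133),   # 13.3% top rate
]

def marginal_tax(income: float, brackets=ILLUSTRATIVE_BRACKETS) -> float:
    """Tax each slice of income at the rate of the bracket it falls into."""
    tax = 0.0
    for i, (lower, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
    return tax

# Only the last $200,000 of a $1.2 million income is taxed at the 13.3% rate.
print(f"${marginal_tax(1_200_000):,.0f}")
```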
All real property is taxable annually; the ad valorem tax is based on the property's fair market value at the time of purchase or the value of new construction. Property tax increases are capped at 2% annually or the rate of inflation (whichever is lower), per Proposition 13.
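A short, hypothetical illustration of the Proposition 13 cap described above: the assessed value is set at market value when the property changes hands, then grows each year by the lower of 2% or the inflation rate. The purchase price and inflation figures in the sketch are invented for illustration.

```python
# Illustrative sketch of the Proposition 13 assessment cap:
# annual growth in assessed value is limited to min(2%, inflation rate).
def next_assessed_value(current_assessed: float, inflation_rate: float) -> float:
    cap = min(0.02, inflation_rate)
    return current_assessed * (1 + cap)

assessed = 500_000.0                        # hypothetical purchase price
for inflation in (0.031, 0.012, 0.025):     # hypothetical annual inflation figures
    assessed = next_assessed_value(assessed, inflation)

print(round(assessed))                      # grows 2%, then 1.2%, then 2% -> 526442
```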
Because it is the most populous state in the United States, California is one of the country's largest users of energy. The state has extensive hydro-electric energy generation facilities; however, moving water is the single largest energy use in the state. Also, due to high energy rates, conservation mandates, mild weather in the largest population centers, and a strong environmental movement, its per capita energy use is one of the smallest of any state in the United States. Due to the high electricity demand, California imports more electricity than any other state, primarily hydroelectric power from states in the Pacific Northwest (via Path 15 and Path 66) and coal- and natural gas-fired production from the desert Southwest via Path 46.
The state's crude oil and natural gas deposits are located in the Central Valley and along the coast, including the large Midway-Sunset Oil Field. Natural gas-fired power plants typically account for more than one-half of state electricity generation.
As a result of the state's strong environmental movement, California has some of the most aggressive renewable energy goals in the United States. Senate Bill 1020 (the Clean Energy, Jobs and Affordability Act of 2022) commits the state to running its operations on clean, renewable energy resources by 2035, and SB 1203 also requires the state to achieve net-zero operations for all agencies. Currently, several solar power plants such as the Solar Energy Generating Systems facility are located in the Mojave Desert. California's wind farms include Altamont Pass, San Gorgonio Pass, and Tehachapi Pass. The Tehachapi area is also where the Tehachapi Energy Storage Project is located. Several dams across the state provide hydro-electric power. According to some studies, it would be possible to convert the total supply to 100% renewable energy, including heating, cooling, and mobility, by 2050.
California has one major nuclear power plant (Diablo Canyon) in operation. The San Onofre nuclear plant was shut down in 2013. More than 1,700 tons of radioactive waste are stored at San Onofre, which sits on a stretch of coast with a record of past tsunamis. Voters have banned the approval of new nuclear power plants since the late 1970s because of concerns over radioactive waste disposal. In addition, several cities such as Oakland, Berkeley, and Davis have declared themselves nuclear-free zones.
California's vast terrain is connected by an extensive system of controlled-access highways ('freeways'), limited-access roads ('expressways'), and highways. California is known for its car culture, giving California's cities a reputation for severe traffic congestion. Construction and maintenance of state roads and statewide transportation planning are primarily the responsibility of the California Department of Transportation, nicknamed "Caltrans". The rapidly growing population of the state is straining all of its transportation networks, and California has some of the worst roads in the United States. The Reason Foundation's 19th Annual Report on the Performance of State Highway Systems ranked California's highways the third-worst of any state, behind Alaska (second-worst) and Rhode Island (worst).
The state has been a pioneer in road construction. One of the state's more visible landmarks, the Golden Gate Bridge, was the longest suspension bridge main span in the world at 4,200 feet (1,300 m) between 1937 (when it opened) and 1964. With its orange paint and panoramic views of the bay, this highway bridge is a popular tourist attraction and also accommodates pedestrians and bicyclists. The San Francisco–Oakland Bay Bridge (often abbreviated the "Bay Bridge"), completed in 1936, transports about 280,000 vehicles per day on two decks. Its two sections meet at Yerba Buena Island through the world's largest-diameter transportation bore tunnel, at 76 feet (23 m) wide by 58 feet (18 m) high. The Arroyo Seco Parkway, connecting Los Angeles and Pasadena, opened in 1940 as the first freeway in the Western United States. It was later extended south to the Four Level Interchange in downtown Los Angeles, regarded as the first stack interchange ever built.
The California Highway Patrol is the largest statewide police agency in the United States by employment, with more than 10,000 employees. They are responsible for providing any police-sanctioned service to anyone on California's state-maintained highways and on state property.
By the end of 2021, 30,610,058 people in California held a California Department of Motor Vehicles-issued driver's license or state identification card, and there were 36,229,205 registered vehicles, including 25,643,076 automobiles, 853,368 motorcycles, 8,981,787 trucks and trailers, and 121,716 miscellaneous vehicles (including historical vehicles and farm equipment).
Los Angeles International Airport (LAX), the 4th busiest airport in the world in 2018, and San Francisco International Airport (SFO), the 25th busiest airport in the world in 2018, are major hubs for trans-Pacific and transcontinental traffic. There are about a dozen important commercial airports and many more general aviation airports throughout the state.
Inter-city rail travel is provided by Amtrak California; the three routes, the Capitol Corridor, Pacific Surfliner, and San Joaquin, are funded by Caltrans. These services are the busiest intercity rail lines in the United States outside the Northeast Corridor, and ridership is continuing to set records. The routes are becoming an increasingly popular alternative to flying, especially between Los Angeles and San Francisco. Integrated subway and light rail networks are found in Los Angeles (Los Angeles Metro Rail) and San Francisco (Muni Metro). Light rail systems are also found in San Jose (VTA light rail), San Diego (San Diego Trolley), Sacramento (Sacramento RT Light Rail), and Northern San Diego County (Sprinter). Furthermore, commuter rail networks serve the San Francisco Bay Area (Altamont Corridor Express, Bay Area Rapid Transit, Caltrain, Sonoma–Marin Area Rail Transit), Greater Los Angeles (Metrolink), and San Diego County (Coaster).
The California High-Speed Rail Authority was authorized in 1996 by the state legislature to plan a California high-speed rail system to put before the voters. The plan it devised, 2008 California Proposition 1A, connecting all the major population centers in the state, was approved by the voters at the November 2008 general election. The first phase of construction began in 2015, and the first segment, 171 miles (275 km) long, is planned to be put into operation by the end of 2030. Planning and work on the rest of the system are continuing, with funding for its completion remaining an ongoing issue. California's 2023 integrated passenger rail master plan includes a high-speed rail system.
Nearly all counties operate bus lines, and many cities operate their own city bus lines as well. Intercity bus travel is provided by Greyhound, Megabus, and Amtrak Thruway.
California's interconnected water system is the world's largest, managing over 40,000,000 acre-feet (49 km3) of water per year, centered on six main systems of aqueducts and infrastructure projects. Water use and conservation in California is a politically divisive issue, as the state experiences periodic droughts and has to balance the demands of its large agricultural and urban sectors, especially in the arid southern portion of the state. The state's widespread redistribution of water also invites the frequent scorn of environmentalists.
The California Water Wars, a conflict between Los Angeles and the Owens Valley over water rights, is one of the most well-known examples of the struggle to secure adequate water supplies. Former California Governor Arnold Schwarzenegger said: "We've been in crisis for quite some time because we're now 38 million people and not anymore 18 million people like we were in the late 60s. So it developed into a battle between environmentalists and farmers and between the south and the north and between rural and urban. And everyone has been fighting for the last four decades about water."
The capital city of California is Sacramento. The state is organized into three branches of government—the executive branch consisting of the governor and the other independently elected constitutional officers; the legislative branch consisting of the Assembly and Senate; and the judicial branch consisting of the Supreme Court of California and lower courts. The state also allows ballot propositions: direct participation of the electorate by initiative, referendum, recall, and ratification. Before the passage of Proposition 14 in 2010, California allowed each political party to choose whether to have a closed primary or a primary where only party members and independents vote. After June 8, 2010, when Proposition 14 was approved, excepting only the United States president and county central committee offices, all candidates in the primary elections are listed on the ballot with their preferred party affiliation, but they are not the official nominee of that party. At the primary election, the two candidates with the top votes will advance to the general election regardless of party affiliation. If at a special primary election, one candidate receives more than 50% of all the votes cast, they are elected to fill the vacancy and no special general election will be held.
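As a rough illustration of the primary rules described above, the sketch below advances the two highest vote-getters to the general election regardless of party and, in a special primary, elects outright any candidate who wins more than 50% of the vote. The candidate names and vote totals are invented.

```python
# Sketch of California's top-two primary logic as described above.
# Candidate names and vote totals are invented for illustration.
def primary_outcome(votes: dict[str, int], special: bool = False) -> dict:
    total = sum(votes.values())
    ranked = sorted(votes, key=votes.get, reverse=True)
    if special and votes[ranked[0]] > total / 2:
        return {"elected": ranked[0]}          # fills the vacancy; no general election
    return {"advance_to_general": ranked[:2]}  # top two advance, regardless of party

print(primary_outcome({"Candidate A": 48_000, "Candidate B": 30_000, "Candidate C": 22_000}))
# {'advance_to_general': ['Candidate A', 'Candidate B']}
print(primary_outcome({"Candidate D": 61_000, "Candidate E": 39_000}, special=True))
# {'elected': 'Candidate D'}
```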
The California executive branch consists of the governor and seven other elected constitutional officers: lieutenant governor, attorney general, secretary of state, state controller, state treasurer, insurance commissioner, and state superintendent of public instruction. They serve four-year terms and may be re-elected only once.
The many California state agencies that are under the governor's cabinet are grouped together to form cabinet-level entities that are referred to by government officials as "superagencies". Those departments that are directly under the other independently elected officers work separately from these superagencies.
The California State Legislature consists of a 40-member Senate and 80-member Assembly. Senators serve four-year terms and Assembly members two. Members of the Assembly are subject to term limits of six terms, and members of the Senate are subject to term limits of three terms.
California's legal system is explicitly based upon English common law but carries many features from Spanish civil law, such as community property. California's prison population grew from 25,000 in 1980 to over 170,000 in 2007. Capital punishment is a legal form of punishment and the state has the largest "Death Row" population in the country (though Oklahoma and Texas are far more active in carrying out executions). California has performed 13 executions since 1976, with the last being in 2006.
California's judiciary system is the largest in the United States with a total of 1,600 judges (the federal system has only about 840). At the apex is the seven-member Supreme Court of California, while the California Courts of Appeal serve as the primary appellate courts and the California Superior Courts serve as the primary trial courts. Justices of the Supreme Court and Courts of Appeal are appointed by the governor, but are subject to retention by the electorate every 12 years.
The administration of the state's court system is controlled by the Judicial Council, composed of the chief justice of the California Supreme Court, 14 judicial officers, four representatives from the State Bar of California, and one member from each house of the state legislature.
In fiscal year 2020–2021, the state judiciary's 2,000 judicial officers and 18,000 judicial branch employees processed approximately 4.4 million cases.
California has an extensive system of local government that manages public functions throughout the state. Like most states, California is divided into counties, of which there are 58 (including San Francisco) covering the entire state. Most urbanized areas are incorporated as cities. School districts, which are independent of cities and counties, handle public education. Many other functions, such as fire protection and water supply, especially in unincorporated areas, are handled by special districts.
California is divided into 58 counties. Per Article 11, Section 1, of the Constitution of California, they are the legal subdivisions of the state. The county government provides countywide services such as law enforcement, jails, elections and voter registration, vital records, property assessment and records, tax collection, public health, health care, social services, libraries, flood control, fire protection, animal control, agricultural regulations, building inspections, ambulance services, and education departments in charge of maintaining statewide standards. In addition, the county serves as the local government for all unincorporated areas. Each county is governed by an elected board of supervisors.
Incorporated cities and towns in California are either charter or general-law municipalities. General-law municipalities owe their existence to state law and are consequently governed by it; charter municipalities are governed by their own city or town charters. Municipalities incorporated in the 19th century tend to be charter municipalities. All ten of the state's most populous cities are charter cities. Most small cities have a council–manager form of government, where the elected city council appoints a city manager to supervise the operations of the city. Some larger cities have a directly elected mayor who oversees the city government. In many council-manager cities, the city council selects one of its members as a mayor, sometimes rotating through the council membership—but this type of mayoral position is primarily ceremonial. The Government of San Francisco is the only consolidated city-county in California, where both the city and county governments have been merged into one unified jurisdiction.
About 1,102 school districts, independent of cities and counties, handle California's public education. California school districts may be organized as elementary districts, high school districts, unified school districts combining elementary and high school grades, or community college districts.
There are about 3,400 special districts in California. A special district, defined by California Government Code § 16271(d) as "any agency of the state for the local performance of governmental or proprietary functions within limited boundaries", provides a limited range of services within a defined geographic area. The geographic area of a special district can spread across multiple cities or counties, or could consist of only a portion of one. Most of California's special districts are single-purpose districts, and provide one service.
The state of California sends 52 members to the House of Representatives, the nation's largest congressional state delegation. Consequently, California also has the largest number of electoral votes in national presidential elections, with 54. Kevin McCarthy, the representative of California's 20th district, is a former speaker of the House of Representatives.
California is represented in the United States Senate by Alex Padilla, a California native and former secretary of state, and Laphonza Butler, a labor union official appointed by Governor Gavin Newsom to complete the term of Dianne Feinstein, who died on September 29, 2023. Former U.S. senator Kamala Harris, a native of the state, former district attorney of San Francisco, and former attorney general of California, resigned on January 18, 2021, to assume her role as Vice President of the United States. Newsom appointed then-Secretary of State Padilla to finish the remainder of Harris's term; Padilla was sworn in on January 20, 2021, the same day as the inauguration of Joe Biden and Harris, and won a full term in 2022. In the 1992 U.S. Senate election, California became the first state to elect a Senate delegation composed entirely of women, owing to the victories of Feinstein and Barbara Boxer.
In California, as of 2009, the U.S. Department of Defense had a total of 117,806 active duty servicemembers of which 88,370 were Sailors or Marines, 18,339 were Airmen, and 11,097 were Soldiers, with 61,365 Department of Defense civilian employees. Additionally, there were a total of 57,792 Reservists and Guardsman in California.
In 2010, Los Angeles County was the largest origin of military recruits in the United States by county, with 1,437 individuals enlisting in the military. However, as of 2002, Californians were relatively under-represented in the military as a proportion to its population.
In 2000, California had 2,569,340 veterans of United States military service: 504,010 served in World War II, 301,034 in the Korean War, 754,682 during the Vietnam War, and 278,003 during 1990–2000 (including the Persian Gulf War). As of 2010, there were 1,942,775 veterans living in California, of which 1,457,875 served during a period of armed conflict, and just over four thousand served before World War II (the largest population of this group of any state).
California's military forces consist of the Army and Air National Guard, the naval and state military reserve (militia), and the California Cadet Corps.
On August 5, 1950, a nuclear-capable United States Air Force Boeing B-29 Superfortress bomber carrying a nuclear bomb crashed shortly after takeoff from Fairfield-Suisun Air Force Base. Brigadier General Robert F. Travis, command pilot of the bomber, was among the dead.
California has an idiosyncratic political culture compared to the rest of the country, and is sometimes regarded as a trendsetter. In socio-cultural mores and national politics, Californians are perceived as more liberal than other Americans, especially those who live in the inland states. In the 2016 United States presidential election, California had the third highest percentage of Democratic votes behind the District of Columbia and Hawaii. In the 2020 United States presidential election, it had the 6th highest behind the District of Columbia, Vermont, Massachusetts, Maryland, and Hawaii. According to the Cook Political Report, California contains five of the 15 most Democratic congressional districts in the United States.
Among the political idiosyncrasies, California was the second state to recall its state governor (the first state being North Dakota in 1921), the second state to legalize abortion, and the only state to ban marriage for gay couples twice by vote (including Proposition 8 in 2008). Voters also passed Proposition 71 in 2004 to fund stem cell research, making California the second state to legalize stem cell research after New Jersey, and Proposition 14 in 2010 to completely change the state's primary election process. California has also experienced disputes over water rights, and a tax revolt culminating with the passage of Proposition 13 in 1978, which limited state property taxes. California voters have rejected affirmative action on multiple occasions, most recently in November 2020.
The state's trend towards the Democratic Party and away from the Republican Party can be seen in state elections. From 1899 to 1939, California had Republican governors. Since 1990, California has generally elected Democratic candidates to federal, state, and local offices, including current Governor Gavin Newsom. The state has nevertheless elected Republican governors during that period, though many of them, such as Arnold Schwarzenegger, tend to be considered moderate Republicans and more centrist than the national party.
Several political movements have advocated for California independence. The California National Party and the California Freedom Coalition both advocate for California independence along the lines of progressivism and civic nationalism. The Yes California movement attempted to organize an independence referendum via ballot initiative for 2019, which was then postponed.
The Democrats also now hold a supermajority in both houses of the state legislature. There are 62 Democrats and 18 Republicans in the Assembly, and 32 Democrats and 8 Republicans in the Senate.
The trend towards the Democratic Party is most obvious in presidential elections. From 1952 through 1988, California was a Republican leaning state, with the party carrying the state's electoral votes in nine of ten elections, with 1964 as the exception. Southern California Republicans Richard Nixon and Ronald Reagan were both elected twice as the 37th and 40th U.S. Presidents, respectively. However, Democrats have won all of California's electoral votes for the last eight elections, starting in 1992.
In the United States House, the Democrats held a 34–19 edge in the CA delegation of the 110th United States Congress in 2007. As the result of gerrymandering, the districts in California were usually dominated by one or the other party, and few districts were considered competitive. In 2008 and 2010, Californians passed Propositions 11 and 20, which empowered a 14-member independent citizens commission to redraw districts for both state legislators and Congress. After the 2012 elections, when the new system took effect, Democrats gained four seats and held a 38–15 majority in the delegation. Following the 2018 midterm House elections, Democrats won 46 out of 53 congressional house seats in California, leaving Republicans with seven.
In general, Democratic strength is centered in the populous coastal regions of the Los Angeles metropolitan area and the San Francisco Bay Area. Republican strength is still greatest in eastern parts of the state. Orange County had remained largely Republican until the 2016 and 2018 elections, in which a majority of the county's votes were cast for Democratic candidates. One study ranked Berkeley, Oakland, Inglewood and San Francisco in the top 20 most liberal American cities; and Bakersfield, Orange, Escondido, Garden Grove, and Simi Valley in the top 20 most conservative cities.
In October 2022, out of the 26,876,800 people eligible to vote, 21,940,274 people were registered to vote. Of the people registered, the three largest registered groups were Democrats (10,283,258), Republicans (5,232,094), and No Party Preference (4,943,696). Los Angeles County had the largest number of registered Democrats (2,996,565) and Republicans (958,851) of any county in the state.
California retains the death penalty, though it has not been used since 2006. There is currently a gubernatorial hold on executions. Authorized methods of execution include the gas chamber.
California has region twinning arrangements with a number of partner states and provinces around the world.
37°N 120°W (State of California)
"title": "History"
},
{
"paragraph_id": 13,
"text": "Men and women generally had different roles in society. Women were often responsible for weaving, harvesting, processing, and preparing food, while men for hunting and other forms of physical labor. Most societies also had roles for people whom the Spanish referred to as joyas, who they saw as \"men who dressed as women\". Joyas were responsible for death, burial, and mourning rituals, and they performed women's social roles. Indigenous societies had terms such as two-spirit to refer to them. The Chumash referred to them as 'aqi. The early Spanish settlers detested and sought to eliminate them.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The first Europeans to explore the coast of California were the members of a Spanish maritime expedition led by Portuguese captain Juan Rodríguez Cabrillo in 1542. Cabrillo was commissioned by Antonio de Mendoza, the Viceroy of New Spain, to lead an expedition up the Pacific coast in search of trade opportunities; they entered San Diego Bay on September 28, 1542, and reached at least as far north as San Miguel Island. Privateer and explorer Francis Drake explored and claimed an undefined portion of the California coast in 1579, landing north of the future city of San Francisco. The first Asians to set foot on what would be the United States occurred in 1587, when Filipino sailors arrived in Spanish ships at Morro Bay. Coincidentally the descendants of the Muslim Caliph Hasan ibn Ali in formerly Islamic Manila and had converted to Christianity, upon Spanish conquest, transited through California (Named after a Caliph) on their way to Guerrero, Mexico. Sebastián Vizcaíno explored and mapped the coast of California in 1602 for New Spain, putting ashore in Monterey. Despite the on-the-ground explorations of California in the 16th century, Rodríguez's idea of California as an island persisted. Such depictions appeared on many European maps well into the 18th century.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The Portolá expedition of 1769–70 was a pivotal event in the Spanish colonization of California, resulting in the establishment of numerous missions, presidios, and pueblos. The military and civil contingent of the expedition was led by Gaspar de Portolá, who traveled over land from Sonora into California, while the religious component was headed by Junípero Serra, who came by sea from Baja California. In 1769, Portolá and Serra established Mission San Diego de Alcalá and the Presidio of San Diego, the first religious and military settlements founded by the Spanish in California. By the end of the expedition in 1770, they would establish the Presidio of Monterey and Mission San Carlos Borromeo de Carmelo on Monterey Bay.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "After the Portolà expedition, Spanish missionaries led by Father-President Serra set out to establish 21 Spanish missions of California along El Camino Real (\"The Royal Road\") and along the California coast, 16 sites of which having been chosen during the Portolá expedition. Numerous major cities in California grew out of missions, including San Francisco (Mission San Francisco de Asís), San Diego (Mission San Diego de Alcalá), Ventura (Mission San Buenaventura), or Santa Barbara (Mission Santa Barbara), among others.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Juan Bautista de Anza led a similarly important expedition throughout California in 1775–76, which would extend deeper into the interior and north of California. The Anza expedition selected numerous sites for missions, presidios, and pueblos, which subsequently would be established by settlers. Gabriel Moraga, a member of the expedition, would also christen many of California's prominent rivers with their names in 1775–1776, such as the Sacramento River and the San Joaquin River. After the expedition, Gabriel's son, José Joaquín Moraga, would found the pueblo of San Jose in 1777, making it the first civilian-established city in California.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "During this same period, sailors from the Russian Empire explored along the northern coast of California. In 1812, the Russian-American Company established a trading post and small fortification at Fort Ross on the North Coast. Fort Ross was primarily used to supply Russia's Alaskan colonies with food supplies. The settlement did not meet much success, failing to attract settlers or establish long term trade viability, and was abandoned by 1841.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "During the War of Mexican Independence, Alta California was largely unaffected and uninvolved in the revolution, though many Californios supported independence from Spain, which many believed had neglected California and limited its development. Spain's trade monopoly on California had limited local trade prospects. Following Mexican independence, California ports were freely able to trade with foreign merchants. Governor Pablo Vicente de Solá presided over the transition from Spanish colonial rule to independent Mexican rule.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In 1821, the Mexican War of Independence gave the Mexican Empire (which included California) independence from Spain. For the next 25 years, Alta California remained a remote, sparsely populated, northwestern administrative district of the newly independent country of Mexico, which shortly after independence became a republic. The missions, which controlled most of the best land in the state, were secularized by 1834 and became the property of the Mexican government. The governor granted many square leagues of land to others with political influence. These huge ranchos or cattle ranches emerged as the dominant institutions of Mexican California. The ranchos developed under ownership by Californios (Hispanics native of California) who traded cowhides and tallow with Boston merchants. Beef did not become a commodity until the 1849 California Gold Rush.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "From the 1820s, trappers and settlers from the United States and Canada began to arrive in Northern California. These new arrivals used the Siskiyou Trail, California Trail, Oregon Trail and Old Spanish Trail to cross the rugged mountains and harsh deserts in and surrounding California. The early government of the newly independent Mexico was highly unstable, and in a reflection of this, from 1831 onwards, California also experienced a series of armed disputes, both internal and with the central Mexican government. During this tumultuous political period Juan Bautista Alvarado was able to secure the governorship during 1836–1842. The military action which first brought Alvarado to power had momentarily declared California to be an independent state, and had been aided by Anglo-American residents of California, including Isaac Graham. In 1840, one hundred of those residents who did not have passports were arrested, leading to the Graham Affair, which was resolved in part with the intercession of Royal Navy officials.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "One of the largest ranchers in California was John Marsh. After failing to obtain justice against squatters on his land from the Mexican courts, he determined that California should become part of the United States. Marsh conducted a letter-writing campaign espousing the California climate, the soil, and other reasons to settle there, as well as the best route to follow, which became known as \"Marsh's route\". His letters were read, reread, passed around, and printed in newspapers throughout the country, and started the first wagon trains rolling to California. He invited immigrants to stay on his ranch until they could get settled, and assisted in their obtaining passports.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "After ushering in the period of organized emigration to California, Marsh became involved in a military battle between the much-hated Mexican general, Manuel Micheltorena and the California governor he had replaced, Juan Bautista Alvarado. The armies of each met at the Battle of Providencia near Los Angeles. Marsh had been forced against his will to join Micheltorena's army. Ignoring his superiors, during the battle, he signaled the other side for a parley. There were many settlers from the United States fighting on both sides. He convinced each side that they had no reason to be fighting each other. As a result of Marsh's actions, they abandoned the fight, Micheltorena was defeated, and California-born Pio Pico was returned to the governorship. This paved the way to California's ultimate acquisition by the United States.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "In 1846, a group of American settlers in and around Sonoma rebelled against Mexican rule during the Bear Flag Revolt. Afterward, rebels raised the Bear Flag (featuring a bear, a star, a red stripe and the words \"California Republic\") at Sonoma. The Republic's only president was William B. Ide, who played a pivotal role during the Bear Flag Revolt. This revolt by American settlers served as a prelude to the later American military invasion of California and was closely coordinated with nearby American military commanders.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "The California Republic was short-lived; the same year marked the outbreak of the Mexican–American War (1846–1848).",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Commodore John D. Sloat of the United States Navy sailed into Monterey Bay in 1846 and began the U.S. military invasion of California, with Northern California capitulating in less than a month to the United States forces. In Southern California, Californios continued to resist American forces. Notable military engagements of the conquest include the Battle of San Pasqual and the Battle of Dominguez Rancho in Southern California, as well as the Battle of Olómpali and the Battle of Santa Clara in Northern California. After a series of defensive battles in the south, the Treaty of Cahuenga was signed by the Californios on January 13, 1847, securing a censure and establishing de facto American control in California.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "Following the Treaty of Guadalupe Hidalgo (February 2, 1848) that ended the war, the westernmost portion of the annexed Mexican territory of Alta California soon became the American state of California, and the remainder of the old territory was then subdivided into the new American Territories of Arizona, Nevada, Colorado and Utah. The even more lightly populated and arid lower region of old Baja California remained as a part of Mexico. In 1846, the total settler population of the western part of the old Alta California had been estimated to be no more than 8,000, plus about 100,000 Native Americans, down from about 300,000 before Hispanic settlement in 1769.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "In 1848, only one week before the official American annexation of the area, gold was discovered in California, this being an event which was to forever alter both the state's demographics and its finances. Soon afterward, a massive influx of immigration into the area resulted, as prospectors and miners arrived by the thousands. The population burgeoned with United States citizens, Europeans, Middle Easterns, Chinese and other immigrants during the great California Gold Rush. By the time of California's application for statehood in 1850, the settler population of California had multiplied to 100,000. By 1854, more than 300,000 settlers had come. Between 1847 and 1870, the population of San Francisco increased from 500 to 150,000.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "The seat of government for California under Spanish and later Mexican rule had been located in Monterey from 1777 until 1845. Pio Pico, the last Mexican governor of Alta California, had briefly moved the capital to Los Angeles in 1845. The United States consulate had also been located in Monterey, under consul Thomas O. Larkin.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "In 1849, a state Constitutional Convention was first held in Monterey. Among the first tasks of the convention was a decision on a location for the new state capital. The first full legislative sessions were held in San Jose (1850–1851). Subsequent locations included Vallejo (1852–1853), and nearby Benicia (1853–1854); these locations eventually proved to be inadequate as well. The capital has been located in Sacramento since 1854 with only a short break in 1862 when legislative sessions were held in San Francisco due to flooding in Sacramento. Once the state's Constitutional Convention had finalized its state constitution, it applied to the U.S. Congress for admission to statehood. On September 9, 1850, as part of the Compromise of 1850, California became a free state and September 9 a state holiday.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "During the American Civil War (1861–1865), California sent gold shipments eastward to Washington in support of the Union. However, due to the existence of a large contingent of pro-South sympathizers within the state, the state was not able to muster any full military regiments to send eastwards to officially serve in the Union war effort. Still, several smaller military units within the Union army, such as the \"California 100 Company\", were unofficially associated with the state of California due to a majority of their members being from California.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "At the time of California's admission into the Union, travel between California and the rest of the continental United States had been a time-consuming and dangerous feat. Nineteen years later, and seven years after it was greenlighted by President Lincoln, the first transcontinental railroad was completed in 1869. California was then reachable from the eastern States in a week's time.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "Much of the state was extremely well suited to fruit cultivation and agriculture in general. Vast expanses of wheat, other cereal crops, vegetable crops, cotton, and nut and fruit trees were grown (including oranges in Southern California), and the foundation was laid for the state's prodigious agricultural production in the Central Valley and elsewhere.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "In the nineteenth century, a large number of migrants from China traveled to the state as part of the Gold Rush or to seek work. Even though the Chinese proved indispensable in building the transcontinental railroad from California to Utah, perceived job competition with the Chinese led to anti-Chinese riots in the state, and eventually the US ended migration from China partially as a response to pressure from California with the 1882 Chinese Exclusion Act.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "Under earlier Spanish and Mexican rule, California's original native population had precipitously declined, above all, from Eurasian diseases to which the indigenous people of California had not yet developed a natural immunity. Under its new American administration, California's first governor Peter Hardeman Burnett instituted policies that have been described as a state-sanctioned policy of elimination toward California's indigenous people. Burnett announced in 1851 in his Second Annual Message to the Legislature: \"That a war of extermination will continue to be waged between the races until the Indian race becomes extinct must be expected. While we cannot anticipate the result with but painful regret, the inevitable destiny of the race is beyond the power and wisdom of man to avert.\"",
"title": "History"
},
{
"paragraph_id": 36,
"text": "As in other American states, indigenous peoples were forcibly removed from their lands by American settlers, like miners, ranchers, and farmers. Although California had entered the American union as a free state, the \"loitering or orphaned Indians\", were de facto enslaved by their new Anglo-American masters under the 1850 Act for the Government and Protection of Indians. One of these de facto slave auctions was approved by the Los Angeles City Council and occurred for nearly twenty years. There were many massacres in which hundreds of indigenous people were killed by settlers for their land.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "Between 1850 and 1860, the California state government paid around 1.5 million dollars (some 250,000 of which was reimbursed by the federal government) to hire militias with the stated purpose of protecting settlers, however these militias perpetrated numerous massacres of indigenous people. Indigenous people were also forcibly moved to reservations and rancherias, which were often small and isolated and without enough natural resources or funding from the government to adequately sustain the populations living on them. As a result, settler colonialism was a calamity for indigenous people. Several scholars and Native American activists, including Benjamin Madley and Ed Castillo, have described the actions of the California government as a genocide, as well as the 40th governor of California Gavin Newsom. Benjamin Madley estimates that from 1846 to 1873, between 9,492 and 16,092 indigenous people were killed, including between 1,680 and 3,741 killed by the U.S. Army.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "In the twentieth century, thousands of Japanese people migrated to the US and California specifically to attempt to purchase and own land in the state. However, the state in 1913 passed the Alien Land Act, excluding Asian immigrants from owning land. During World War II, Japanese Americans in California were interned in concentration camps such as at Tule Lake and Manzanar. In 2020, California officially apologized for this internment.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "Migration to California accelerated during the early 20th century with the completion of major transcontinental highways like the Lincoln Highway and Route 66. In the period from 1900 to 1965, the population grew from fewer than one million to the greatest in the Union. In 1940, the Census Bureau reported California's population as 6.0% Hispanic, 2.4% Asian, and 89.5% non-Hispanic white.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "To meet the population's needs, major engineering feats like the California and Los Angeles Aqueducts; the Oroville and Shasta Dams; and the Bay and Golden Gate Bridges were built across the state. The state government also adopted the California Master Plan for Higher Education in 1960 to develop a highly efficient system of public education.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "Meanwhile, attracted to the mild Mediterranean climate, cheap land, and the state's wide variety of geography, filmmakers established the studio system in Hollywood in the 1920s. California manufactured 8.7 percent of total United States military armaments produced during World War II, ranking third (behind New York and Michigan) among the 48 states. California however easily ranked first in production of military ships during the war (transport, cargo, [merchant ships] such as Liberty ships, Victory ships, and warships) at drydock facilities in San Diego, Los Angeles, and the San Francisco Bay Area, which were used on the naval heavy Asia–Pacific War Theater of World War II. Due to the hiring opportunities California offered during the conflict, the population of the state greatly multiplied from the immigration it received due to the work offered in its war factories, military bases, and training facilities. After World War II, California's economy greatly expanded due to strong aerospace and defense industries, whose size decreased following the end of the Cold War. Stanford University and its Dean of Engineering Frederick Terman began encouraging faculty and graduates to stay in California instead of leaving the state, and develop a high-tech region in the area now known as Silicon Valley. As a result of these efforts, California is regarded as a world center of the entertainment and music industries, of technology, engineering, and the aerospace industry, and as the United States center of agricultural production. Just before the Dot Com Bust, California had the fifth-largest economy in the world among nations.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "In the mid and late twentieth century, a number of race-related incidents occurred in the state. Tensions between police and African Americans, combined with unemployment and poverty in inner cities, led to violent riots, such as the 1965 Watts riots and 1992 Rodney King riots. California was also the hub of the Black Panther Party, a group known for arming African Americans to defend against racial injustice and for organizing free breakfast programs for schoolchildren. Additionally, Mexican, Filipino, and other migrant farm workers rallied in the state around Cesar Chavez for better pay in the 1960s and 1970s.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "During the 20th century, two great disasters happened in California. The 1906 San Francisco earthquake and 1928 St. Francis Dam flood remain the deadliest in U.S. history.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "Although air pollution problems have been reduced, health problems associated with pollution have continued. The brown haze known as \"smog\" has been substantially abated after the passage of federal and state restrictions on automobile exhaust.",
"title": "History"
},
{
"paragraph_id": 45,
"text": "An energy crisis in 2001 led to rolling blackouts, soaring power rates, and the importation of electricity from neighboring states. Southern California Edison and Pacific Gas and Electric Company came under heavy criticism.",
"title": "History"
},
{
"paragraph_id": 46,
"text": "Housing prices in urban areas continued to increase; a modest home which in the 1960s cost $25,000 would cost half a million dollars or more in urban areas by 2005. More people commuted longer hours to afford a home in more rural areas while earning larger salaries in the urban areas. Speculators bought houses they never intended to live in, expecting to make a huge profit in a matter of months, then rolling it over by buying more properties. Mortgage companies were compliant, as everyone assumed the prices would keep rising. The bubble burst in 2007–8 as housing prices began to crash and the boom years ended. Hundreds of billions in property values vanished and foreclosures soared as many financial institutions and investors were badly hurt.",
"title": "History"
},
{
"paragraph_id": 47,
"text": "In the twenty-first century, droughts and frequent wildfires attributed to climate change have occurred in the state. From 2011 to 2017, a persistent drought was the worst in its recorded history. The 2018 wildfire season was the state's deadliest and most destructive, most notably Camp Fire.",
"title": "History"
},
{
"paragraph_id": 48,
"text": "One of the first confirmed COVID-19 cases in the United States that occurred in California was first of which was confirmed on January 26, 2020. Meaning, all of the early confirmed cases were persons who had recently travelled to China in Asia, as testing was restricted to this group. On this January 29, 2020, as disease containment protocols were still being developed, the U.S. Department of State evacuated 195 persons from Wuhan, China aboard a chartered flight to March Air Reserve Base in Riverside County, and in this process, it may have granted and conferred to escalated within the land and the US at cosmic. On February 5, 2020, the U.S. evacuated 345 more citizens from Hubei Province to two military bases in California, Travis Air Force Base in Solano County and Marine Corps Air Station Miramar, San Diego, where they were quarantined for 14 days. A state of emergency was largely declared in this state of the nation on March 4, 2020, and as of February 24, 2021, remains in effect. A mandatory statewide stay-at-home order was issued on March 19, 2020, due to increase, which was ended on January 25, 2021, allowing citizens to return to normal life. On April 6, 2021, the state announced plans to fully reopen the economy by June 15, 2021.",
"title": "History"
},
{
"paragraph_id": 49,
"text": "In 2019, the 40th governor of California, Gavin Newsom formally apologized to the indigenous peoples of California for the California genocide: \"Genocide. No other way to describe it, and that's the way it needs to be described in the history books.\" Newsom further acknowledged that \"the actions of the state 150 years ago have ongoing ramifications even today.\" Cultural and language revitalization efforts among indigenous Californians have progressed among several tribes as of 2022. Some land returns to indigenous stewardship have occurred throughout California. In 2022, the largest dam removal and river restoration project in US history was announced for the Klamath River as a win for California tribes.",
"title": "History"
},
{
"paragraph_id": 50,
"text": "Covering an area of 163,696 sq mi (423,970 km), California is the third-largest state in the United States in area, after Alaska and Texas. California is one of the most geographically diverse states in the union and is often geographically bisected into two regions, Southern California, comprising the ten southernmost counties, and Northern California, comprising the 48 northernmost counties. It is bordered by Oregon to the north, Nevada to the east and northeast, Arizona to the southeast, the Pacific Ocean to the west and shares an international border with the Mexican state of Baja California to the south (with which it makes up part of The Californias region of North America, alongside Baja California Sur).",
"title": "Geography"
},
{
"paragraph_id": 51,
"text": "In the middle of the state lies the California Central Valley, bounded by the Sierra Nevada in the east, the coastal mountain ranges in the west, the Cascade Range to the north and by the Tehachapi Mountains in the south. The Central Valley is California's productive agricultural heartland.",
"title": "Geography"
},
{
"paragraph_id": 52,
"text": "Divided in two by the Sacramento-San Joaquin River Delta, the northern portion, the Sacramento Valley serves as the watershed of the Sacramento River, while the southern portion, the San Joaquin Valley is the watershed for the San Joaquin River. Both valleys derive their names from the rivers that flow through them. With dredging, the Sacramento and the San Joaquin Rivers have remained deep enough for several inland cities to be seaports.",
"title": "Geography"
},
{
"paragraph_id": 53,
"text": "The Sacramento-San Joaquin River Delta is a critical water supply hub for the state. Water is diverted from the delta and through an extensive network of pumps and canals that traverse nearly the length of the state, to the Central Valley and the State Water Projects and other needs. Water from the Delta provides drinking water for nearly 23 million people, almost two-thirds of the state's population as well as water for farmers on the west side of the San Joaquin Valley.",
"title": "Geography"
},
{
"paragraph_id": 54,
"text": "Suisun Bay lies at the confluence of the Sacramento and San Joaquin Rivers. The water is drained by the Carquinez Strait, which flows into San Pablo Bay, a northern extension of San Francisco Bay, which then connects to the Pacific Ocean via the Golden Gate strait.",
"title": "Geography"
},
{
"paragraph_id": 55,
"text": "The Channel Islands are located off the Southern coast, while the Farallon Islands lie west of San Francisco.",
"title": "Geography"
},
{
"paragraph_id": 56,
"text": "The Sierra Nevada (Spanish for \"snowy range\") includes the highest peak in the contiguous 48 states, Mount Whitney, at 14,505 feet (4,421 m). The range embraces Yosemite Valley, famous for its glacially carved domes, and Sequoia National Park, home to the giant sequoia trees, the largest living organisms on Earth, and the deep freshwater lake, Lake Tahoe, the largest lake in the state by volume.",
"title": "Geography"
},
{
"paragraph_id": 57,
"text": "To the east of the Sierra Nevada are Owens Valley and Mono Lake, an essential migratory bird habitat. In the western part of the state is Clear Lake, the largest freshwater lake by area entirely in California. Although Lake Tahoe is larger, it is divided by the California/Nevada border. The Sierra Nevada falls to Arctic temperatures in winter and has several dozen small glaciers, including Palisade Glacier, the southernmost glacier in the United States.",
"title": "Geography"
},
{
"paragraph_id": 58,
"text": "The Tulare Lake was the largest freshwater lake west of the Mississippi River. A remnant of Pleistocene-era Lake Corcoran, Tulare Lake dried up by the early 20th century after its tributary rivers were diverted for agricultural irrigation and municipal water uses.",
"title": "Geography"
},
{
"paragraph_id": 59,
"text": "About 45 percent of the state's total surface area is covered by forests, and California's diversity of pine species is unmatched by any other state. California contains more forestland than any other state except Alaska. Many of the trees in the California White Mountains are the oldest in the world; an individual bristlecone pine is over 5,000 years old.",
"title": "Geography"
},
{
"paragraph_id": 60,
"text": "In the south is a large inland salt lake, the Salton Sea. The south-central desert is called the Mojave; to the northeast of the Mojave lies Death Valley, which contains the lowest and hottest place in North America, the Badwater Basin at −279 feet (−85 m). The horizontal distance from the bottom of Death Valley to the top of Mount Whitney is less than 90 miles (140 km). Indeed, almost all of southeastern California is arid, hot desert, with routine extreme high temperatures during the summer. The southeastern border of California with Arizona is entirely formed by the Colorado River, from which the southern part of the state gets about half of its water.",
"title": "Geography"
},
{
"paragraph_id": 61,
"text": "A majority of California's cities are located in either the San Francisco Bay Area or the Sacramento metropolitan area in Northern California; or the Los Angeles area, the Inland Empire, or the San Diego metropolitan area in Southern California. The Los Angeles Area, the Bay Area, and the San Diego metropolitan area are among several major metropolitan areas along the California coast.",
"title": "Geography"
},
{
"paragraph_id": 62,
"text": "As part of the Ring of Fire, California is subject to tsunamis, floods, droughts, Santa Ana winds, wildfires, and landslides on steep terrain; California also has several volcanoes. It has many earthquakes due to several faults running through the state, the largest being the San Andreas Fault. About 37,000 earthquakes are recorded each year; most are too small to be felt, but two-thirds of the human risk from earthquakes lies in California.",
"title": "Geography"
},
{
"paragraph_id": 63,
"text": "Most of the state has a Mediterranean climate. The cool California Current offshore often creates summer fog near the coast. Farther inland, there are colder winters and hotter summers. The maritime moderation results in the shoreline summertime temperatures of Los Angeles and San Francisco being the coolest of all major metropolitan areas of the United States and uniquely cool compared to areas on the same latitude in the interior and on the east coast of the North American continent. Even the San Diego shoreline bordering Mexico is cooler in summer than most areas in the contiguous United States. Just a few miles inland, summer temperature extremes are significantly higher, with downtown Los Angeles being several degrees warmer than at the coast. The same microclimate phenomenon is seen in the climate of the Bay Area, where areas sheltered from the ocean experience significantly hotter summers and colder winters in contrast with nearby areas closer to the ocean.",
"title": "Geography"
},
{
"paragraph_id": 64,
"text": "Northern parts of the state have more rain than the south. California's mountain ranges also influence the climate: some of the rainiest parts of the state are west-facing mountain slopes. Coastal northwestern California has a temperate climate, and the Central Valley has a Mediterranean climate but with greater temperature extremes than the coast. The high mountains, including the Sierra Nevada, have an alpine climate with snow in winter and mild to moderate heat in summer.",
"title": "Geography"
},
{
"paragraph_id": 65,
"text": "California's mountains produce rain shadows on the eastern side, creating extensive deserts. The higher elevation deserts of eastern California have hot summers and cold winters, while the low deserts east of the Southern California mountains have hot summers and nearly frostless mild winters. Death Valley, a desert with large expanses below sea level, is considered the hottest location in the world; the highest temperature in the world, 134 °F (56.7 °C), was recorded there on July 10, 1913. The lowest temperature in California was −45 °F (−43 °C) on January 20, 1937, in Boca.",
"title": "Geography"
},
{
"paragraph_id": 66,
"text": "The table below lists average temperatures for January and August in a selection of places throughout the state; some highly populated and some not. This includes the relatively cool summers of the Humboldt Bay region around Eureka, the extreme heat of Death Valley, and the mountain climate of Mammoth in the Sierra Nevada.",
"title": "Geography"
},
{
"paragraph_id": 67,
"text": "The wide range of climates leads to a high demand for water. Over time, droughts have been increasing due to climate change and overextraction, becoming less seasonal and more year-round, further straining California's electricity supply and water security and having an impact on California business, industry, and agriculture.",
"title": "Geography"
},
{
"paragraph_id": 68,
"text": "In 2022, a new state program was created in collaboration with indigenous peoples of California to revive the practice of controlled burns as a way of clearing excessive forest debris and making landscapes more resilient to wildfires. Native American use of fire in ecosystem management was outlawed in 1911, yet has now been recognized.",
"title": "Geography"
},
{
"paragraph_id": 69,
"text": "California is one of the ecologically richest and most diverse parts of the world, and includes some of the most endangered ecological communities. California is part of the Nearctic realm and spans a number of terrestrial ecoregions.",
"title": "Geography"
},
{
"paragraph_id": 70,
"text": "California's large number of endemic species includes relict species, which have died out elsewhere, such as the Catalina ironwood (Lyonothamnus floribundus). Many other endemics originated through differentiation or adaptive radiation, whereby multiple species develop from a common ancestor to take advantage of diverse ecological conditions such as the California lilac (Ceanothus). Many California endemics have become endangered, as urbanization, logging, overgrazing, and the introduction of exotic species have encroached on their habitat.",
"title": "Geography"
},
{
"paragraph_id": 71,
"text": "California boasts several superlatives in its collection of flora: the largest trees, the tallest trees, and the oldest trees. California's native grasses are perennial plants, and there are close to hundred succulent species native to the state. After European contact, these were generally replaced by invasive species of European annual grasses; and, in modern times, California's hills turn a characteristic golden-brown in summer.",
"title": "Geography"
},
{
"paragraph_id": 72,
"text": "Because California has the greatest diversity of climate and terrain, the state has six life zones which are the lower Sonoran Desert; upper Sonoran (foothill regions and some coastal lands), transition (coastal areas and moist northeastern counties); and the Canadian, Hudsonian, and Arctic Zones, comprising the state's highest elevations.",
"title": "Geography"
},
{
"paragraph_id": 73,
"text": "Plant life in the dry climate of the lower Sonoran zone contains a diversity of native cactus, mesquite, and paloverde. The Joshua tree is found in the Mojave Desert. Flowering plants include the dwarf desert poppy and a variety of asters. Fremont cottonwood and valley oak thrive in the Central Valley. The upper Sonoran zone includes the chaparral belt, characterized by forests of small shrubs, stunted trees, and herbaceous plants. Nemophila, mint, Phacelia, Viola, and the California poppy (Eschscholzia californica, the state flower) also flourish in this zone, along with the lupine, more species of which occur here than anywhere else in the world.",
"title": "Geography"
},
{
"paragraph_id": 74,
"text": "The transition zone includes most of California's forests with the redwood (Sequoia sempervirens) and the \"big tree\" or giant sequoia (Sequoiadendron giganteum), among the oldest living things on earth (some are said to have lived at least 4,000 years). Tanbark oak, California laurel, sugar pine, madrona, broad-leaved maple, and Douglas-fir also grow here. Forest floors are covered with swordfern, alumnroot, barrenwort, and trillium, and there are thickets of huckleberry, azalea, elder, and wild currant. Characteristic wild flowers include varieties of mariposa, tulip, and tiger and leopard lilies.",
"title": "Geography"
},
{
"paragraph_id": 75,
"text": "The high elevations of the Canadian zone allow the Jeffrey pine, red fir, and lodgepole pine to thrive. Brushy areas are abundant with dwarf manzanita and ceanothus; the unique Sierra puffball is also found here. Right below the timberline, in the Hudsonian zone, the whitebark, foxtail, and silver pines grow. At about 10,500 feet (3,200 m), begins the Arctic zone, a treeless region whose flora include a number of wildflowers, including Sierra primrose, yellow columbine, alpine buttercup, and alpine shooting star.",
"title": "Geography"
},
{
"paragraph_id": 76,
"text": "Palm trees are a well-known feature of California, particularly in Southern California and Los Angeles; many species have been imported, though the Washington filifera (commonly known as the California fan palm) is native to the state, mainly growing in the Colorado Desert oases. Other common plants that have been introduced to the state include the eucalyptus, acacia, pepper tree, geranium, and Scotch broom. The species that are federally classified as endangered are the Contra Costa wallflower, Antioch Dunes evening primrose, Solano grass, San Clemente Island larkspur, salt marsh bird's beak, McDonald's rock-cress, and Santa Barbara Island liveforever. As of December 1997, 85 plant species were listed as threatened or endangered.",
"title": "Geography"
},
{
"paragraph_id": 77,
"text": "In the deserts of the lower Sonoran zone, the mammals include the jackrabbit, kangaroo rat, squirrel, and opossum. Common birds include the owl, roadrunner, cactus wren, and various species of hawk. The area's reptilian life include the sidewinder viper, desert tortoise, and horned toad. The upper Sonoran zone boasts mammals such as the antelope, brown-footed woodrat, and ring-tailed cat. Birds unique to this zone are the California thrasher, bushtit, and California condor.",
"title": "Geography"
},
{
"paragraph_id": 78,
"text": "In the transition zone, there are Colombian black-tailed deer, black bears, gray foxes, cougars, bobcats, and Roosevelt elk. Reptiles such as the garter snakes and rattlesnakes inhabit the zone. In addition, amphibians such as the water puppy and redwood salamander are common too. Birds such as the kingfisher, chickadee, towhee, and hummingbird thrive here as well.",
"title": "Geography"
},
{
"paragraph_id": 79,
"text": "The Canadian zone mammals include the mountain weasel, snowshoe hare, and several species of chipmunks. Conspicuous birds include the blue-fronted jay, mountain chickadee, hermit thrush, American dipper, and Townsend's solitaire. As one ascends into the Hudsonian zone, birds become scarcer. While the gray-crowned rosy finch is the only bird native to the high Arctic region, other bird species such as Anna's hummingbird and Clark's nutcracker. Principal mammals found in this region include the Sierra coney, white-tailed jackrabbit, and the bighorn sheep. As of April 2003, the bighorn sheep was listed as endangered by the U.S. Fish and Wildlife Service. The fauna found throughout several zones are the mule deer, coyote, mountain lion, northern flicker, and several species of hawk and sparrow.",
"title": "Geography"
},
{
"paragraph_id": 80,
"text": "Aquatic life in California thrives, from the state's mountain lakes and streams to the rocky Pacific coastline. Numerous trout species are found, among them rainbow, golden, and cutthroat. Migratory species of salmon are common as well. Deep-sea life forms include sea bass, yellowfin tuna, barracuda, and several types of whale. Native to the cliffs of northern California are seals, sea lions, and many types of shorebirds, including migratory species.",
"title": "Geography"
},
{
"paragraph_id": 81,
"text": "As of April 2003, 118 California animals were on the federal endangered list; 181 plants were listed as endangered or threatened. Endangered animals include the San Joaquin kitfox, Point Arena mountain beaver, Pacific pocket mouse, salt marsh harvest mouse, Morro Bay kangaroo rat (and five other species of kangaroo rat), Amargosa vole, California least tern, California condor, loggerhead shrike, San Clemente sage sparrow, San Francisco garter snake, five species of salamander, three species of chub, and two species of pupfish. Eleven butterflies are also endangered and two that are threatened are on the federal list. Among threatened animals are the coastal California gnatcatcher, Paiute cutthroat trout, southern sea otter, and northern spotted owl. California has a total of 290,821 acres (1,176.91 km) of National Wildlife Refuges. As of September 2010, 123 California animals were listed as either endangered or threatened on the federal list. Also, as of the same year, 178 species of California plants were listed either as endangered or threatened on this federal list.",
"title": "Geography"
},
{
"paragraph_id": 82,
"text": "The most prominent river system within California is formed by the Sacramento River and San Joaquin River, which are fed mostly by snowmelt from the west slope of the Sierra Nevada, and respectively drain the north and south halves of the Central Valley. The two rivers join in the Sacramento–San Joaquin River Delta, flowing into the Pacific Ocean through San Francisco Bay. Many major tributaries feed into the Sacramento–San Joaquin system, including the Pit River, Feather River and Tuolumne River.",
"title": "Geography"
},
{
"paragraph_id": 83,
"text": "The Klamath and Trinity Rivers drain a large area in far northwestern California. The Eel River and Salinas River each drain portions of the California coast, north and south of San Francisco Bay, respectively. The Mojave River is the primary watercourse in the Mojave Desert, and the Santa Ana River drains much of the Transverse Ranges as it bisects Southern California. The Colorado River forms the state's southeast border with Arizona.",
"title": "Geography"
},
{
"paragraph_id": 84,
"text": "Most of California's major rivers are dammed as part of two massive water projects: the Central Valley Project, providing water for agriculture in the Central Valley, and the California State Water Project diverting water from Northern to Southern California. The state's coasts, rivers, and other bodies of water are regulated by the California Coastal Commission.",
"title": "Geography"
},
{
"paragraph_id": 85,
"text": "California is traditionally separated into Northern California and Southern California, divided by a straight border which runs across the state, separating the northern 48 counties from the southern 10 counties. Despite the persistence of the northern-southern divide, California is more precisely divided into many regions, multiple of which stretch across the northern-southern divide.",
"title": "Geography"
},
{
"paragraph_id": 86,
"text": "The state has 482 incorporated cities and towns, of which 460 are cities and 22 are towns. Under California law, the terms \"city\" and \"town\" are explicitly interchangeable; the name of an incorporated municipality in the state can either be \"City of (Name)\" or \"Town of (Name)\".",
"title": "Geography"
},
{
"paragraph_id": 87,
"text": "Sacramento became California's first incorporated city on February 27, 1850. San Jose, San Diego, and Benicia tied for California's second incorporated city, each receiving incorporation on March 27, 1850. Jurupa Valley became the state's most recent and 482nd incorporated municipality, on July 1, 2011.",
"title": "Geography"
},
{
"paragraph_id": 88,
"text": "The majority of these cities and towns are within one of five metropolitan areas: the Los Angeles Metropolitan Area, the San Francisco Bay Area, the Riverside-San Bernardino Area, the San Diego metropolitan area, or the Sacramento metropolitan area.",
"title": "Geography"
},
{
"paragraph_id": 89,
"text": "Nearly one out of every eight Americans lives in California. The United States Census Bureau reported that the population of California was 39,538,223 on April 1, 2020, a 6.13% increase since the 2010 census. The estimated state population in 2022 was 39.22 million. For over a century (1900–2020), California experienced steady population growth, adding an average of more than 300,000 people per year from 1940 onward. California's rate of growth began to slow by the 1990s, although it continued to experience population growth in the first two decades of the 21st century. The state experienced population declines in 2020 and 2021, attributable to declining birth rates, COVID-19 pandemic deaths, and less internal migration from other states to California. According to the U.S. Census Bureau, between 2021 and 2022, 818,000 California residents moved out of state with emigrants listing high cost of living, high taxes, and a difficult business environment as the motivation.",
"title": "Demographics"
},
{
"paragraph_id": 90,
"text": "The Greater Los Angeles Area is the second-largest metropolitan area in the United States (U.S.), while Los Angeles is the second-largest city in the U.S. Conversely, San Francisco is the most densely-populated city in California and one of the most densely populated cities in the U.S.. Also, Los Angeles County has held the title of most populous U.S. county for decades, and it alone is more populous than 42 U.S. states. Including Los Angeles, four of the top 20 most populous cities in the U.S. are in California: Los Angeles (2nd), San Diego (8th), San Jose (10th), and San Francisco (17th). The center of population of California is located four miles west-southwest of the city of Shafter, Kern County.",
"title": "Demographics"
},
{
"paragraph_id": 91,
"text": "As of 2019, California ranked second among states by life expectancy, with a life expectancy of 80.9 years.",
"title": "Demographics"
},
{
"paragraph_id": 92,
"text": "Starting in the year 2010, for the first time since the California Gold Rush, California-born residents made up the majority of the state's population. Along with the rest of the United States, California's immigration pattern has also shifted over the course of the late 2000s to early 2010s. Immigration from Latin American countries has dropped significantly with most immigrants now coming from Asia. In total for 2011, there were 277,304 immigrants. Fifty-seven percent came from Asian countries versus 22% from Latin American countries. Net immigration from Mexico, previously the most common country of origin for new immigrants, has dropped to zero / less than zero since more Mexican nationals are departing for their home country than immigrating.",
"title": "Demographics"
},
{
"paragraph_id": 93,
"text": "The state's population of undocumented immigrants has been shrinking in recent years, due to increased enforcement and decreased job opportunities for lower-skilled workers. The number of migrants arrested attempting to cross the Mexican border in the Southwest decreased from a high of 1.1 million in 2005 to 367,000 in 2011. Despite these recent trends, illegal aliens constituted an estimated 7.3 percent of the state's population, the third highest percentage of any state in the country, totaling nearly 2.6 million. In particular, illegal immigrants tended to be concentrated in Los Angeles, Monterey, San Benito, Imperial, and Napa Counties—the latter four of which have significant agricultural industries that depend on manual labor. More than half of illegal immigrants originate from Mexico. The state of California and some California cities, including Los Angeles, Oakland and San Francisco, have adopted sanctuary policies.",
"title": "Demographics"
},
{
"paragraph_id": 94,
"text": "According to HUD's 2022 Annual Homeless Assessment Report, there were an estimated 171,521 homeless people in California.",
"title": "Demographics"
},
{
"paragraph_id": 95,
"text": "According to the United States Census Bureau in 2018 the population self-identified as (alone or in combination): 72.1% White (including Hispanic Whites), 36.8% non-Hispanic whites, 15.3% Asian, 6.5% Black or African American, 1.6% Native American and Alaska Native, 0.5% Native Hawaiian or Pacific Islander, and 3.9% two or more races.",
"title": "Demographics"
},
{
"paragraph_id": 96,
"text": "By ethnicity, in 2018 the population was 60.7% non-Hispanic (of any race) and 39.3% Hispanic or Latino (of any race). Hispanics are the largest single ethnic group in California. Non-Hispanic whites constituted 36.8% of the state's population. Californios are the Hispanic residents native to California, who make up the Spanish-speaking community that has existed in California since 1542, of varying Mexican American/Chicano, Criollo Spaniard, and Mestizo origin.",
"title": "Demographics"
},
{
"paragraph_id": 97,
"text": "As of 2011, 75.1% of California's population younger than age 1 were minorities, meaning they had at least one parent who was not non-Hispanic white (white Hispanics are counted as minorities).",
"title": "Demographics"
},
{
"paragraph_id": 98,
"text": "In terms of total numbers, California has the largest population of White Americans in the United States, an estimated 22,200,000 residents. The state has the 5th largest population of African Americans in the United States, an estimated 2,250,000 residents. California's Asian American population is estimated at 4.4 million, constituting a third of the nation's total. California's Native American population of 285,000 is the most of any state.",
"title": "Demographics"
},
{
"paragraph_id": 99,
"text": "According to estimates from 2011, California has the largest minority population in the United States by numbers, making up 60% of the state population. Over the past 25 years, the population of non-Hispanic whites has declined, while Hispanic and Asian populations have grown. Between 1970 and 2011, non-Hispanic whites declined from 80% of the state's population to 40%, while Hispanics grew from 32% in 2000 to 38% in 2011. It is currently projected that Hispanics will rise to 49% of the population by 2060, primarily due to domestic births rather than immigration. With the decline of immigration from Latin America, Asian Americans now constitute the fastest growing racial/ethnic group in California; this growth is primarily driven by immigration from China, India and the Philippines, respectively.",
"title": "Demographics"
},
{
"paragraph_id": 100,
"text": "Most of California's immigrant population are born in Mexico (3.9 million), the Philippines (825,200), China (768,400), India (556,500) and Vietnam (502,600).",
"title": "Demographics"
},
{
"paragraph_id": 101,
"text": "California has the largest multiracial population in the United States. California has the highest rate of interracial marriage.",
"title": "Demographics"
},
{
"paragraph_id": 102,
"text": "English serves as California's de jure and de facto official language. According to the 2021 American Community Survey conducted by the United States Census Bureau, 56.08% (20,763,638) of California residents age 5 and older spoke only English at home, while 43.92% spoke another language at home. 60.35% of people who speak a language other than English at home are able to speak English \"well\" or \"very well\", with this figure varying significantly across the different linguistic groups. Like most U.S. states (32 out of 50), California law enshrines English as its official language, and has done so since the passage of Proposition 63 by California voters in 1986. Various government agencies do, and are often required to, furnish documents in the various languages needed to reach their intended audiences.",
"title": "Demographics"
},
{
"paragraph_id": 103,
"text": "Spanish is the most commonly spoken language in California, behind English, spoken by 28.18% (10,434,308) of the population (in 2021). The Spanish language has been spoken in California since 1542 and is deeply intertwined with California's cultural landscape and history. Spanish was the official administrative language of California through the Spanish and Mexican eras, until 1848. Following the U.S. Conquest of California and the Treaty of Guadalupe-Hidalgo, the U.S. Government guaranteed the rights of Spanish speaking Californians. The first Constitution of California was written in both languages at the Monterey Constitutional Convention of 1849 and protected the rights of Spanish speakers to use their language in government proceedings and mandating that all government documents be published in both English and Spanish.",
"title": "Demographics"
},
{
"paragraph_id": 104,
"text": "Despite the initial recognition of Spanish by early American governments in California, the revised 1879 constitution stripped the rights of Spanish speakers and the official status of Spanish. The growth of the English-only movement by the mid-20th century led to the passage of 1986 California Proposition 63, which enshrined English as the only official language in California and ended Spanish language instruction in schools. 2016 California Proposition 58 reversed the prohibition on bilingual education, though there are still many barriers to the proliferation of Spanish bilingual education, including a shortage of teachers and lack of funding. The government of California has since made efforts to promote Spanish language access and bilingual education, as have private educational institutions in California. Many businesses in California promote the usage of Spanish by their employees, to better serve both California's Hispanic population and the larger Spanish-speaking world.",
"title": "Demographics"
},
{
"paragraph_id": 105,
"text": "California has historically been one of the most linguistically diverse areas in the world, with more than 70 indigenous languages derived from 64 root languages in six language families. A survey conducted between 2007 and 2009 identified 23 different indigenous languages among California farmworkers. All of California's indigenous languages are endangered, although there are now efforts toward language revitalization. California has the highest concentration nationwide of Chinese, Vietnamese and Punjabi speakers.",
"title": "Demographics"
},
{
"paragraph_id": 106,
"text": "As a result of the state's increasing diversity and migration from other areas across the country and around the globe, linguists began noticing a noteworthy set of emerging characteristics of spoken American English in California since the late 20th century. This variety, known as California English, has a vowel shift and several other phonological processes that are different from varieties of American English used in other regions of the United States.",
"title": "Demographics"
},
{
"paragraph_id": 107,
"text": "Religious self-identification, per Public Religion Research Institute's 2021 American Values Survey",
"title": "Demographics"
},
{
"paragraph_id": 108,
"text": "The largest religious denominations by number of adherents as a percentage of California's population in 2014 were the Catholic Church with 28 percent, Evangelical Protestants with 20 percent, and Mainline Protestants with 10 percent. Together, all kinds of Protestants accounted for 32 percent. Those unaffiliated with any religion represented 27 percent of the population. The breakdown of other religions is 1% Muslim, 2% Hindu and 2% Buddhist. This is a change from 2008, when the population identified their religion with the Catholic Church with 31 percent; Evangelical Protestants with 18 percent; and Mainline Protestants with 14 percent. In 2008, those unaffiliated with any religion represented 21 percent of the population. The breakdown of other religions in 2008 was 0.5% Muslim, 1% Hindu and 2% Buddhist. The American Jewish Year Book placed the total Jewish population of California at about 1,194,190 in 2006. According to the Association of Religion Data Archives (ARDA) the largest denominations by adherents in 2010 were the Catholic Church with 10,233,334; The Church of Jesus Christ of Latter-day Saints with 763,818; and the Southern Baptist Convention with 489,953.",
"title": "Demographics"
},
{
"paragraph_id": 109,
"text": "The first priests to come to California were Catholic missionaries from Spain. Catholics founded 21 missions along the California coast, as well as the cities of Los Angeles and San Francisco. California continues to have a large Catholic population due to the large numbers of Mexicans and Central Americans living within its borders. California has twelve dioceses and two archdioceses, the Archdiocese of Los Angeles and the Archdiocese of San Francisco, the former being the largest archdiocese in the United States.",
"title": "Demographics"
},
{
"paragraph_id": 110,
"text": "A Pew Research Center survey revealed that California is somewhat less religious than the rest of the states: 62 percent of Californians say they are \"absolutely certain\" of their belief in God, while in the nation 71 percent say so. The survey also revealed 48 percent of Californians say religion is \"very important\", compared to 56 percent nationally.",
"title": "Demographics"
},
{
"paragraph_id": 111,
"text": "The culture of California is a Western culture and most clearly has its modern roots in the culture of the United States, but also, historically, many Hispanic Californio and Mexican influences. As a border and coastal state, California culture has been greatly influenced by several large immigrant populations, especially those from Latin America and Asia.",
"title": "Culture"
},
{
"paragraph_id": 112,
"text": "California has long been a subject of interest in the public mind and has often been promoted by its boosters as a kind of paradise. In the early 20th century, fueled by the efforts of state and local boosters, many Americans saw the Golden State as an ideal resort destination, sunny and dry all year round with easy access to the ocean and mountains. In the 1960s, popular music groups such as the Beach Boys promoted the image of Californians as laid-back, tanned beach-goers.",
"title": "Culture"
},
{
"paragraph_id": 113,
"text": "The California Gold Rush of the 1850s is still seen as a symbol of California's economic style, which tends to generate technology, social, entertainment, and economic fads and booms and related busts.",
"title": "Culture"
},
{
"paragraph_id": 114,
"text": "Hollywood and the rest of the Los Angeles area is a major global center for entertainment, with the U.S. film industry's \"Big Five\" major film studios (Columbia, Disney, Paramount, Universal, and Warner Bros.) as well as many minor film studios being based in or around the area. Many animation studios are also headquartered in the state.",
"title": "Culture"
},
{
"paragraph_id": 115,
"text": "The four major American television commercial broadcast networks (ABC, CBS, NBC, and Fox) as well as other networks all have production facilities and offices in the state. All the four major commercial broadcast networks, plus the two major Spanish-language networks (Telemundo and Univision) each have at least three owned-and-operated TV stations in California, including at least one in Los Angeles and at least one in San Francisco.",
"title": "Culture"
},
{
"paragraph_id": 116,
"text": "One of the oldest radio stations in the United States still in existence, KCBS (AM) in the San Francisco Bay Area, was founded in 1909. Universal Music Group, one of the \"Big Four\" record labels, is based in Santa Monica, while Warner Records is based in Los Angeles. Many independent record labels, such as Mind of a Genius Records, are also headquartered in the state. California is also the birthplace of several international music genres, including the Bakersfield sound, Bay Area thrash metal, alternative rock, g-funk, nu metal, glam metal, thrash metal, psychedelic rock, stoner rock, punk rock, hardcore punk, metalcore, pop punk, surf music, third wave ska, west coast hip hop, west coast jazz, jazz rap, and many other genres. Other genres such as pop rock, indie rock, hard rock, hip hop, pop, rock, rockabilly, country, heavy metal, grunge, new wave and disco were popularized in the state. In addition, many British bands, such as Led Zeppelin, Deep Purple, Black Sabbath, and the Rolling Stones settled in the state after becoming internationally famous.",
"title": "Culture"
},
{
"paragraph_id": 117,
"text": "As the home of Silicon Valley, the Bay Area is the headquarters of several prominent internet media, social media, and other technology companies. Three of the \"Big Five\" technology companies (Apple, Meta, and Google) are based in the area as well as other services such as Netflix, Pandora Radio, Twitter, Yahoo!, and YouTube. Other prominent companies that are headquartered here include HP inc. and Intel. Microsoft and Amazon also have offices in the area.",
"title": "Culture"
},
{
"paragraph_id": 118,
"text": "California, particularly Southern California, is considered the birthplace of modern car culture.",
"title": "Culture"
},
{
"paragraph_id": 119,
"text": "Several fast food, fast casual, and casual dining chains were also founded California, including some that have since expanded internationally like California Pizza Kitchen, Denny's, IHOP, McDonald's, Panda Express, and Taco Bell.",
"title": "Culture"
},
{
"paragraph_id": 120,
"text": "California has nineteen major professional sports league franchises, far more than any other state. The San Francisco Bay Area has six major league teams spread in its three major cities: San Francisco, San Jose, and Oakland, while the Greater Los Angeles Area is home to ten major league franchises. San Diego and Sacramento each have one major league team. The NFL Super Bowl has been hosted in California 12 times at five different stadiums: Los Angeles Memorial Coliseum, the Rose Bowl, Stanford Stadium, Levi's Stadium, and San Diego's Qualcomm Stadium. A thirteenth, Super Bowl LVI, was held at Sofi Stadium in Inglewood on February 13, 2022.",
"title": "Culture"
},
{
"paragraph_id": 121,
"text": "California has long had many respected collegiate sports programs. California is home to the oldest college bowl game, the annual Rose Bowl, among others.",
"title": "Culture"
},
{
"paragraph_id": 122,
"text": "The NFL has three teams in the state: the Los Angeles Rams, Los Angeles Chargers, and San Francisco 49ers.",
"title": "Culture"
},
{
"paragraph_id": 123,
"text": "MLB has five teams in the state: the San Francisco Giants, Oakland Athletics, Los Angeles Dodgers, Los Angeles Angels, and San Diego Padres.",
"title": "Culture"
},
{
"paragraph_id": 124,
"text": "The NBA has four teams in the state: the Golden State Warriors, Los Angeles Clippers, Los Angeles Lakers, and Sacramento Kings. Additionally, the WNBA also has one team in the state: the Los Angeles Sparks.",
"title": "Culture"
},
{
"paragraph_id": 125,
"text": "The NHL has three teams in the state: the Anaheim Ducks, Los Angeles Kings, and San Jose Sharks.",
"title": "Culture"
},
{
"paragraph_id": 126,
"text": "MLS has three teams in the state: the Los Angeles Galaxy, San Jose Earthquakes, and Los Angeles Football Club.",
"title": "Culture"
},
{
"paragraph_id": 127,
"text": "MLR has one team in the state: the San Diego Legion.",
"title": "Culture"
},
{
"paragraph_id": 128,
"text": "California is the only U.S. state to have hosted both the Summer and Winter Olympics. The 1932 and 1984 summer games were held in Los Angeles. Squaw Valley Ski Resort (now Palisades Tahoe) in the Lake Tahoe region hosted the 1960 Winter Olympics. Los Angeles will host the 2028 Summer Olympics, marking the fourth time that California will have hosted the Olympic Games. Multiple games during the 1994 FIFA World Cup took place in California, with the Rose Bowl hosting eight matches (including the final), while Stanford Stadium hosted six matches.",
"title": "Culture"
},
{
"paragraph_id": 129,
"text": "In addition to the Olympic games, California also hosts the California State Games.",
"title": "Culture"
},
{
"paragraph_id": 130,
"text": "Many sports, such as surfing, snowboarding, and skateboarding, were invented in California, while others like volleyball, beach soccer, and skiing were popularized in the state.",
"title": "Culture"
},
{
"paragraph_id": 131,
"text": "Other sports that are big in the state include golf, rodeo, tennis, mountain climbing, marathon running, horse racing, bowling, mixed martial arts, boxing, and motorsports, especially NASCAR and Formula One.",
"title": "Culture"
},
{
"paragraph_id": 132,
"text": "California has the most school students in the country, with over 6.2 million in the 2005–06 school year, giving California more students in school than 36 states have in total population and one of the highest projected enrollments in the country. Public secondary education consists of high schools that teach elective courses in trades, languages, and liberal arts with tracks for gifted, college-bound and industrial arts students. California's public educational system is supported by a unique constitutional amendment that requires a minimum annual funding level for grades K–12 and community colleges that grows with the economy and student enrollment figures.",
"title": "Education"
},
{
"paragraph_id": 133,
"text": "In 2016, California's K–12 public school per-pupil spending was ranked 22nd in the nation ($11,500 per student vs. $11,800 for the U.S. average).",
"title": "Education"
},
{
"paragraph_id": 134,
"text": "For 2012, California's K–12 public schools ranked 48th in the number of employees per student, at 0.102 (the U.S. average was 0.137), while paying the 7th most per employee, $49,000 (the U.S. average was $39,000).",
"title": "Education"
},
{
"paragraph_id": 135,
"text": "A 2007 study concluded that California's public school system was \"broken\" in that it suffered from overregulation.",
"title": "Education"
},
{
"paragraph_id": 136,
"text": "California public postsecondary education is organized into three separate systems:",
"title": "Education"
},
{
"paragraph_id": 137,
"text": "California is also home to notable private universities such as Stanford University, the California Institute of Technology (Caltech), the University of Southern California, the Claremont Colleges, Santa Clara University, Loyola Marymount University, the University of San Diego, the University of San Francisco, Chapman University, Pepperdine University, Occidental College, and University of the Pacific, among numerous other private colleges and universities, including many religious and special-purpose institutions. California has a particularly high density of arts colleges, including the California College of the Arts, California Institute of the Arts, San Francisco Art Institute, Art Center College of Design, and Academy of Art University, among others.",
"title": "Education"
},
{
"paragraph_id": 138,
"text": "California's economy ranks among the largest in the world. As of 2022, the gross state product (GSP) was $3.6 trillion ($92,190 per capita), the largest in the United States. California is responsible for one seventh of the nation's gross domestic product (GDP). As of 2018, California's nominal GDP is larger than all but four countries (the United States, China, Japan, and Germany). In terms of purchasing power parity (PPP), it is larger than all but eight countries (the United States, China, India, Japan, Germany, Russia, Brazil, and Indonesia). California's economy is larger than Africa and Australia and is almost as large as South America. The state recorded total, non-farm employment of 16,677,800 as of September 2021 among 966,224 employer establishments.",
"title": "Economy"
},
{
"paragraph_id": 139,
"text": "As the largest and second-largest U.S. ports respectively, the Port of Los Angeles and the Port of Long Beach in Southern California collectively play a pivotal role in the global supply chain, together hauling in about 40% of all imports to the United States by TEU volume. The Port of Oakland and Port of Hueneme are the 10th and 26th largest seaports in the U.S., respectively, by number of TEUs handled.",
"title": "Economy"
},
{
"paragraph_id": 140,
"text": "The five largest sectors of employment in California are trade, transportation, and utilities; government; professional and business services; education and health services; and leisure and hospitality. In output, the five largest sectors are financial services, followed by trade, transportation, and utilities; education and health services; government; and manufacturing. California has an unemployment rate of 3.9% as of September 2022.",
"title": "Economy"
},
{
"paragraph_id": 141,
"text": "California's economy is dependent on trade and international related commerce accounts for about one-quarter of the state's economy. In 2008, California exported $144 billion worth of goods, up from $134 billion in 2007 and $127 billion in 2006. Computers and electronic products are California's top export, accounting for 42 percent of all the state's exports in 2008.",
"title": "Economy"
},
{
"paragraph_id": 142,
"text": "Agriculture is an important sector in California's economy. According to the USDA in 2011, the three largest California agricultural products by value were milk and cream, shelled almonds, and grapes. Farming-related sales more than quadrupled over the past three decades, from $7.3 billion in 1974 to nearly $31 billion in 2004. This increase has occurred despite a 15 percent decline in acreage devoted to farming during the period, and water supply suffering from chronic instability. Factors contributing to the growth in sales-per-acre include more intensive use of active farmlands and technological improvements in crop production. In 2008, California's 81,500 farms and ranches generated $36.2 billion products revenue. In 2011, that number grew to $43.5 billion products revenue. The agriculture sector accounts for two percent of the state's GDP and employs around three percent of its total workforce.",
"title": "Economy"
},
{
"paragraph_id": 143,
"text": "Per capita GDP in 2007 was $38,956, ranking eleventh in the nation. Per capita income varies widely by geographic region and profession. The Central Valley is the most impoverished, with migrant farm workers making less than minimum wage. According to a 2005 report by the Congressional Research Service, the San Joaquin Valley was characterized as one of the most economically depressed regions in the United States, on par with the region of Appalachia.",
"title": "Economy"
},
{
"paragraph_id": 144,
"text": "Using the supplemental poverty measure, California has a poverty rate of 23.5%, the highest of any state in the country. However, using the official measure the poverty rate was only 13.3% as of 2017. Many coastal cities include some of the wealthiest per-capita areas in the United States. The high-technology sectors in Northern California, specifically Silicon Valley, in Santa Clara and San Mateo counties, have emerged from the economic downturn caused by the dot-com bust.",
"title": "Economy"
},
{
"paragraph_id": 145,
"text": "In 2019, there were 1,042,027 millionaire households in the state, more than any other state in the nation. In 2010, California residents were ranked first among the states with the best average credit score of 754.",
"title": "Economy"
},
{
"paragraph_id": 146,
"text": "State spending increased from $56 billion in 1998 to $127 billion in 2011. California has the third highest per capita spending on welfare among the states, as well as the highest spending on welfare at $6.67 billion. In January 2011, California's total debt was at least $265 billion. On June 27, 2013, Governor Jerry Brown signed a balanced budget (no deficit) for the state, its first in decades; however, the state's debt remains at $132 billion.",
"title": "Economy"
},
{
"paragraph_id": 147,
"text": "With the passage of Proposition 30 in 2012 and Proposition 55 in 2016, California now levies a 13.3% maximum marginal income tax rate with ten tax brackets, ranging from 1% at the bottom tax bracket of $0 annual individual income to 13.3% for annual individual income over $1,000,000 (though the top brackets are only temporary until Proposition 55 expires at the end of 2030). While Proposition 30 also enacted a minimum state sales tax of 7.5%, this sales tax increase was not extended by Proposition 55 and reverted to a previous minimum state sales tax rate of 7.25% in 2017. Local governments can and do levy additional sales taxes in addition to this minimum rate.",
"title": "Economy"
},
{
"paragraph_id": 148,
"text": "All real property is taxable annually; the ad valorem tax is based on the property's fair market value at the time of purchase or the value of new construction. Property tax increases are capped at 2% annually or the rate of inflation (whichever is lower), per Proposition 13.",
"title": "Economy"
},
{
"paragraph_id": 149,
"text": "Because it is the most populous state in the United States, California is one of the country's largest users of energy. The state has extensive hydro-electric energy generation facilities, however, moving water is the single largest energy use in the state. Also, due to high energy rates, conservation mandates, mild weather in the largest population centers and strong environmental movement, its per capita energy use is one of the smallest of any state in the United States. Due to the high electricity demand, California imports more electricity than any other state, primarily hydroelectric power from states in the Pacific Northwest (via Path 15 and Path 66) and coal- and natural gas-fired production from the desert Southwest via Path 46.",
"title": "Infrastructure"
},
{
"paragraph_id": 150,
"text": "The state's crude oil and natural gas deposits are located in the Central Valley and along the coast, including the large Midway-Sunset Oil Field. Natural gas-fired power plants typically account for more than one-half of state electricity generation.",
"title": "Infrastructure"
},
{
"paragraph_id": 151,
"text": "As a result of the state's strong environmental movement, California has some of the most aggressive renewable energy goals in the United States. Senate Bill SB 1020 (the Clean Energy, Jobs and Affordability Act of 2022) commits the state to running its operations on clean, renewable energy resources by 2035, and SB 1203 also requires the state to achieve net-zero operations for all agencies. Currently, several solar power plants such as the Solar Energy Generating Systems facility are located in the Mojave Desert. California's wind farms include Altamont Pass, San Gorgonio Pass, and Tehachapi Pass. The Tehachapi area is also where the Tehachapi Energy Storage Project is located. Several dams across the state provide hydro-electric power. It would be possible to convert the total supply to 100% renewable energy, including heating, cooling and mobility, by 2050.",
"title": "Infrastructure"
},
{
"paragraph_id": 152,
"text": "California has one major nuclear power plant (Diablo Canyon) in operation. The San Onofre nuclear plant was shut down in 2013. More than 1,700 tons of radioactive waste are stored at San Onofre, and sit on the coast where there is a record of past tsunamis. Voters banned the approval of new nuclear power plants since the late 1970s because of concerns over radioactive waste disposal. In addition, several cities such as Oakland, Berkeley and Davis have declared themselves as nuclear-free zones.",
"title": "Infrastructure"
},
{
"paragraph_id": 153,
"text": "California's vast terrain is connected by an extensive system of controlled-access highways ('freeways'), limited-access roads ('expressways'), and highways. California is known for its car culture, giving California's cities a reputation for severe traffic congestion. Construction and maintenance of state roads and statewide transportation planning are primarily the responsibility of the California Department of Transportation, nicknamed \"Caltrans\". The rapidly growing population of the state is straining all of its transportation networks, and California has some of the worst roads in the United States. The Reason Foundation's 19th Annual Report on the Performance of State Highway Systems ranked California's highways the third-worst of any state, with Alaska second, and Rhode Island first.",
"title": "Infrastructure"
},
{
"paragraph_id": 154,
"text": "The state has been a pioneer in road construction. One of the state's more visible landmarks, the Golden Gate Bridge, was the longest suspension bridge main span in the world at 4,200 feet (1,300 m) between 1937 (when it opened) and 1964. With its orange paint and panoramic views of the bay, this highway bridge is a popular tourist attraction and also accommodates pedestrians and bicyclists. The San Francisco–Oakland Bay Bridge (often abbreviated the \"Bay Bridge\"), completed in 1936, transports about 280,000 vehicles per day on two-decks. Its two sections meet at Yerba Buena Island through the world's largest diameter transportation bore tunnel, at 76 feet (23 m) wide by 58 feet (18 m) high. The Arroyo Seco Parkway, connecting Los Angeles and Pasadena, opened in 1940 as the first freeway in the Western United States. It was later extended south to the Four Level Interchange in downtown Los Angeles, regarded as the first stack interchange ever built.",
"title": "Infrastructure"
},
{
"paragraph_id": 155,
"text": "The California Highway Patrol is the largest statewide police agency in the United States in employment with more than 10,000 employees. They are responsible for providing any police-sanctioned service to anyone on California's state-maintained highways and on state property.",
"title": "Infrastructure"
},
{
"paragraph_id": 156,
"text": "By the end of 2021, 30,610,058 people in California held a California Department of Motor Vehicles-issued driver's licenses or state identification card, and there were 36,229,205 registered vehicles, including 25,643,076 automobiles, 853,368 motorcycles, 8,981,787 trucks and trailers, and 121,716 miscellaneous vehicles (including historical vehicles and farm equipment).",
"title": "Infrastructure"
},
{
"paragraph_id": 157,
"text": "Los Angeles International Airport (LAX), the 4th busiest airport in the world in 2018, and San Francisco International Airport (SFO), the 25th busiest airport in the world in 2018, are major hubs for trans-Pacific and transcontinental traffic. There are about a dozen important commercial airports and many more general aviation airports throughout the state.",
"title": "Infrastructure"
},
{
"paragraph_id": 158,
"text": "Inter-city rail travel is provided by Amtrak California; the three routes, the Capitol Corridor, Pacific Surfliner, and San Joaquin, are funded by Caltrans. These services are the busiest intercity rail lines in the United States outside the Northeast Corridor and ridership is continuing to set records. The routes are becoming increasingly popular over flying, especially on the LAX-SFO route. Integrated subway and light rail networks are found in Los Angeles (Los Angeles Metro Rail) and San Francisco (Muni Metro). Light rail systems are also found in San Jose (VTA light rail), San Diego (San Diego Trolley), Sacramento (Sacramento RT Light Rail), and Northern San Diego County (Sprinter). Furthermore, commuter rail networks serve the San Francisco Bay Area (Altamont Corridor Express, Bay Area Rapid Transit, Caltrain, Sonoma–Marin Area Rail Transit), Greater Los Angeles (Metrolink), and San Diego County (Coaster).",
"title": "Infrastructure"
},
{
"paragraph_id": 159,
"text": "The California High-Speed Rail Authority was authorized in 1996 by the state legislature to plan a California High-Speed Rail system to put before the voters. The plan they devised, 2008 California Proposition 1A, connecting all the major population centers in the state, was approved by the voters at the November 2008 general election. The first phase of construction was begun in 2015, and the first segment 171 miles (275 km) long, is planned to be put into operation by the end of 2030. Planning and work on the rest of the system is continuing, with funding for completing it is an ongoing issue. California's 2023 integrated passenger rail master plan includes a high speed rail system.",
"title": "Infrastructure"
},
{
"paragraph_id": 160,
"text": "Nearly all counties operate bus lines, and many cities operate their own city bus lines as well. Intercity bus travel is provided by Greyhound, Megabus, and Amtrak Thruway.",
"title": "Infrastructure"
},
{
"paragraph_id": 161,
"text": "California's interconnected water system is the world's largest, managing over 40,000,000 acre-feet (49 km) of water per year, centered on six main systems of aqueducts and infrastructure projects. Water use and conservation in California is a politically divisive issue, as the state experiences periodic droughts and has to balance the demands of its large agricultural and urban sectors, especially in the arid southern portion of the state. The state's widespread redistribution of water also invites the frequent scorn of environmentalists.",
"title": "Infrastructure"
},
{
"paragraph_id": 162,
"text": "The California Water Wars, a conflict between Los Angeles and the Owens Valley over water rights, is one of the most well-known examples of the struggle to secure adequate water supplies. Former California Governor Arnold Schwarzenegger said: \"We've been in crisis for quite some time because we're now 38 million people and not anymore 18 million people like we were in the late 60s. So it developed into a battle between environmentalists and farmers and between the south and the north and between rural and urban. And everyone has been fighting for the last four decades about water.\"",
"title": "Infrastructure"
},
{
"paragraph_id": 163,
"text": "The capital city of California is Sacramento. The state is organized into three branches of government—the executive branch consisting of the governor and the other independently elected constitutional officers; the legislative branch consisting of the Assembly and Senate; and the judicial branch consisting of the Supreme Court of California and lower courts. The state also allows ballot propositions: direct participation of the electorate by initiative, referendum, recall, and ratification. Before the passage of Proposition 14 in 2010, California allowed each political party to choose whether to have a closed primary or a primary where only party members and independents vote. After June 8, 2010, when Proposition 14 was approved, excepting only the United States president and county central committee offices, all candidates in the primary elections are listed on the ballot with their preferred party affiliation, but they are not the official nominee of that party. At the primary election, the two candidates with the top votes will advance to the general election regardless of party affiliation. If at a special primary election, one candidate receives more than 50% of all the votes cast, they are elected to fill the vacancy and no special general election will be held.",
"title": "Government and politics"
},
{
"paragraph_id": 164,
"text": "The California executive branch consists of the governor and seven other elected constitutional officers: lieutenant governor, attorney general, secretary of state, state controller, state treasurer, insurance commissioner, and state superintendent of public instruction. They serve four-year terms and may be re-elected only once.",
"title": "Government and politics"
},
{
"paragraph_id": 165,
"text": "The many California state agencies that are under the governor's cabinet are grouped together to form cabinet-level entities that are referred to by government officials as \"superagencies\". Those departments that are directly under the other independently elected officers work separately from these superagencies.",
"title": "Government and politics"
},
{
"paragraph_id": 166,
"text": "The California State Legislature consists of a 40-member Senate and 80-member Assembly. Senators serve four-year terms and Assembly members two. Members of the Assembly are subject to term limits of six terms, and members of the Senate are subject to term limits of three terms.",
"title": "Government and politics"
},
{
"paragraph_id": 167,
"text": "California's legal system is explicitly based upon English common law but carries many features from Spanish civil law, such as community property. California's prison population grew from 25,000 in 1980 to over 170,000 in 2007. Capital punishment is a legal form of punishment and the state has the largest \"Death Row\" population in the country (though Oklahoma and Texas are far more active in carrying out executions). California has performed 13 executions since 1976, with the last being in 2006.",
"title": "Government and politics"
},
{
"paragraph_id": 168,
"text": "California's judiciary system is the largest in the United States with a total of 1,600 judges (the federal system has only about 840). At the apex is the seven-member Supreme Court of California, while the California Courts of Appeal serve as the primary appellate courts and the California Superior Courts serve as the primary trial courts. Justices of the Supreme Court and Courts of Appeal are appointed by the governor, but are subject to retention by the electorate every 12 years.",
"title": "Government and politics"
},
{
"paragraph_id": 169,
"text": "The administration of the state's court system is controlled by the Judicial Council, composed of the chief justice of the California Supreme Court, 14 judicial officers, four representatives from the State Bar of California, and one member from each house of the state legislature.",
"title": "Government and politics"
},
{
"paragraph_id": 170,
"text": "In fiscal year 2020–2021, the state judiciary's 2,000 judicial officers and 18,000 judicial branch employees processed approximately 4.4 million cases.",
"title": "Government and politics"
},
{
"paragraph_id": 171,
"text": "California has an extensive system of local government that manages public functions throughout the state. Like most states, California is divided into counties, of which there are 58 (including San Francisco) covering the entire state. Most urbanized areas are incorporated as cities. School districts, which are independent of cities and counties, handle public education. Many other functions, such as fire protection and water supply, especially in unincorporated areas, are handled by special districts.",
"title": "Government and politics"
},
{
"paragraph_id": 172,
"text": "California is divided into 58 counties. Per Article 11, Section 1, of the Constitution of California, they are the legal subdivisions of the state. The county government provides countywide services such as law enforcement, jails, elections and voter registration, vital records, property assessment and records, tax collection, public health, health care, social services, libraries, flood control, fire protection, animal control, agricultural regulations, building inspections, ambulance services, and education departments in charge of maintaining statewide standards. In addition, the county serves as the local government for all unincorporated areas. Each county is governed by an elected board of supervisors.",
"title": "Government and politics"
},
{
"paragraph_id": 173,
"text": "Incorporated cities and towns in California are either charter or general-law municipalities. General-law municipalities owe their existence to state law and are consequently governed by it; charter municipalities are governed by their own city or town charters. Municipalities incorporated in the 19th century tend to be charter municipalities. All ten of the state's most populous cities are charter cities. Most small cities have a council–manager form of government, where the elected city council appoints a city manager to supervise the operations of the city. Some larger cities have a directly elected mayor who oversees the city government. In many council-manager cities, the city council selects one of its members as a mayor, sometimes rotating through the council membership—but this type of mayoral position is primarily ceremonial. The Government of San Francisco is the only consolidated city-county in California, where both the city and county governments have been merged into one unified jurisdiction.",
"title": "Government and politics"
},
{
"paragraph_id": 174,
"text": "About 1,102 school districts, independent of cities and counties, handle California's public education. California school districts may be organized as elementary districts, high school districts, unified school districts combining elementary and high school grades, or community college districts.",
"title": "Government and politics"
},
{
"paragraph_id": 175,
"text": "There are about 3,400 special districts in California. A special district, defined by California Government Code § 16271(d) as \"any agency of the state for the local performance of governmental or proprietary functions within limited boundaries\", provides a limited range of services within a defined geographic area. The geographic area of a special district can spread across multiple cities or counties, or could consist of only a portion of one. Most of California's special districts are single-purpose districts, and provide one service.",
"title": "Government and politics"
},
{
"paragraph_id": 176,
"text": "The state of California sends 52 members to the House of Representatives, the nation's largest congressional state delegation. Consequently, California also has the largest number of electoral votes in national presidential elections, with 54. The former speaker of the House of Representatives is the representative of California's 20th district, Kevin McCarthy.",
"title": "Government and politics"
},
{
"paragraph_id": 177,
"text": "California is represented in the United States Senate by Alex Padilla, a native and former secretary of state of California, and Laphonza Butler, a labor union official who was appointed to the Senate by Governor Gavin Newson to complete the term of Dianne Feinstein, who died on the 29th of September, 2023. Former U.S. senator Kamala Harris, a native, former district attorney from San Francisco, former attorney general of California, resigned on January 18, 2021, to assume her role as the current Vice President of the United States. In the 1992 U.S. Senate election, California became the first state to elect a Senate delegation entirely composed of women, due to the victories of Feinstein and Barbara Boxer. Following the Vice President, Gov. Newsom appointed Secretary of State Alex Padilla to finish the rest of Harris's term which ended in 2022. Padilla successfully ran for a full term that same year. Padilla was sworn in on January 20, 2021, the same day as the inauguration of Joe Biden as well as Harris.",
"title": "Government and politics"
},
{
"paragraph_id": 178,
"text": "In California, as of 2009, the U.S. Department of Defense had a total of 117,806 active duty servicemembers of which 88,370 were Sailors or Marines, 18,339 were Airmen, and 11,097 were Soldiers, with 61,365 Department of Defense civilian employees. Additionally, there were a total of 57,792 Reservists and Guardsman in California.",
"title": "Government and politics"
},
{
"paragraph_id": 179,
"text": "In 2010, Los Angeles County was the largest origin of military recruits in the United States by county, with 1,437 individuals enlisting in the military. However, as of 2002, Californians were relatively under-represented in the military as a proportion to its population.",
"title": "Government and politics"
},
{
"paragraph_id": 180,
"text": "In 2000, California, had 2,569,340 veterans of United States military service: 504,010 served in World War II, 301,034 in the Korean War, 754,682 during the Vietnam War, and 278,003 during 1990–2000 (including the Persian Gulf War). As of 2010, there were 1,942,775 veterans living in California, of which 1,457,875 served during a period of armed conflict, and just over four thousand served before World War II (the largest population of this group of any state).",
"title": "Government and politics"
},
{
"paragraph_id": 181,
"text": "California's military forces consist of the Army and Air National Guard, the naval and state military reserve (militia), and the California Cadet Corps.",
"title": "Government and politics"
},
{
"paragraph_id": 182,
"text": "On August 5, 1950, a nuclear-capable United States Air Force Boeing B-29 Superfortress bomber carrying a nuclear bomb crashed shortly after takeoff from Fairfield-Suisun Air Force Base. Brigadier General Robert F. Travis, command pilot of the bomber, was among the dead.",
"title": "Government and politics"
},
{
"paragraph_id": 183,
"text": "California has an idiosyncratic political culture compared to the rest of the country, and is sometimes regarded as a trendsetter. In socio-cultural mores and national politics, Californians are perceived as more liberal than other Americans, especially those who live in the inland states. In the 2016 United States presidential election, California had the third highest percentage of Democratic votes behind the District of Columbia and Hawaii. In the 2020 United States presidential election, it had the 6th highest behind the District of Columbia, Vermont, Massachusetts, Maryland, and Hawaii. According to the Cook Political Report, California contains five of the 15 most Democratic congressional districts in the United States.",
"title": "Government and politics"
},
{
"paragraph_id": 184,
"text": "Among the political idiosyncrasies, California was the second state to recall their state governor (the first state being North Dakota in 1921), the second state to legalize abortion, and the only state to ban marriage for gay couples twice by vote (including Proposition 8 in 2008). Voters also passed Proposition 71 in 2004 to fund stem cell research, making California the second state to legalize stem cell research after New Jersey, and Proposition 14 in 2010 to completely change the state's primary election process. California has also experienced disputes over water rights; and a tax revolt, culminating with the passage of Proposition 13 in 1978, limiting state property taxes. California voters have rejected affirmative action on multiple occasions, most recently in November 2020.",
"title": "Government and politics"
},
{
"paragraph_id": 185,
"text": "The state's trend towards the Democratic Party and away from the Republican Party can be seen in state elections. From 1899 to 1939, California had Republican governors. Since 1990, California has generally elected Democratic candidates to federal, state and local offices, including current Governor Gavin Newsom; however, the state has elected Republican Governors, though many of its Republican Governors, such as Arnold Schwarzenegger, tend to be considered moderate Republicans and more centrist than the national party.",
"title": "Government and politics"
},
{
"paragraph_id": 186,
"text": "Several political movements have advocated for California independence. The California National Party and the California Freedom Coalition both advocate for California independence along the lines of progressivism and civic nationalism. The Yes California movement attempted to organize an independence referendum via ballot initiative for 2019, which was then postponed.",
"title": "Government and politics"
},
{
"paragraph_id": 187,
"text": "The Democrats also now hold a supermajority in both houses of the state legislature. There are 62 Democrats and 18 Republicans in the Assembly; and 32 Democrats and 8 Republicans in the Senate.",
"title": "Government and politics"
},
{
"paragraph_id": 188,
"text": "The trend towards the Democratic Party is most obvious in presidential elections. From 1952 through 1988, California was a Republican leaning state, with the party carrying the state's electoral votes in nine of ten elections, with 1964 as the exception. Southern California Republicans Richard Nixon and Ronald Reagan were both elected twice as the 37th and 40th U.S. Presidents, respectively. However, Democrats have won all of California's electoral votes for the last eight elections, starting in 1992.",
"title": "Government and politics"
},
{
"paragraph_id": 189,
"text": "In the United States House, the Democrats held a 34–19 edge in the CA delegation of the 110th United States Congress in 2007. As the result of gerrymandering, the districts in California were usually dominated by one or the other party, and few districts were considered competitive. In 2008, Californians passed Proposition 20 to empower a 14-member independent citizen commission to redraw districts for both local politicians and Congress. After the 2012 elections, when the new system took effect, Democrats gained four seats and held a 38–15 majority in the delegation. Following the 2018 midterm House elections, Democrats won 46 out of 53 congressional house seats in California, leaving Republicans with seven.",
"title": "Government and politics"
},
{
"paragraph_id": 190,
"text": "In general, Democratic strength is centered in the populous coastal regions of the Los Angeles metropolitan area and the San Francisco Bay Area. Republican strength is still greatest in eastern parts of the state. Orange County had remained largely Republican until the 2016 and 2018 elections, in which a majority of the county's votes were cast for Democratic candidates. One study ranked Berkeley, Oakland, Inglewood and San Francisco in the top 20 most liberal American cities; and Bakersfield, Orange, Escondido, Garden Grove, and Simi Valley in the top 20 most conservative cities.",
"title": "Government and politics"
},
{
"paragraph_id": 191,
"text": "In October 2022, out of the 26,876,800 people eligible to vote, 21,940,274 people were registered to vote. Of the people registered, the three largest registered groups were Democrats (10,283,258), Republicans (5,232,094), and No Party Preference (4,943,696). Los Angeles County had the largest number of registered Democrats (2,996,565) and Republicans (958,851) of any county in the state.",
"title": "Government and politics"
},
{
"paragraph_id": 192,
"text": "California retains the death penalty, though it has not been used since 2006. There is currently a gubernatorial hold on executions. Authorized methods of execution include the gas chamber.",
"title": "Government and politics"
},
{
"paragraph_id": 193,
"text": "California has region twinning arrangements with:",
"title": "Government and politics"
},
{
"paragraph_id": 194,
"text": "37°N 120°W / 37°N 120°W / 37; -120 (State of California)",
"title": "External links"
}
] | California is a state in the Western United States. With over 38.9 million residents across a total area of approximately 163,696 square miles (423,970 km2), it is the most populous U.S. state, the third-largest U.S. state by area, and the most populated subnational entity in North America. California borders Oregon to the north, Nevada and Arizona to the east, and the Mexican state of Baja California to the south; it has a coastline along the Pacific Ocean to the west. The Greater Los Angeles and San Francisco Bay areas in California are the nation's second and fifth-most populous urban regions respectively. Greater Los Angeles has over 18.7 million residents and the San Francisco Bay Area has over 9.6 million residents. Los Angeles is the state's most populous city and the nation's second-most populous city. San Francisco is the second-most densely populated major city in the country. Los Angeles County is the country's most populous county, and San Bernardino County is the nation's largest county by area. Sacramento is the state's capital. California's economy is the largest of any state within the United States, with a $3.6 trillion gross state product (GSP) as of 2022. It is the largest sub-national economy in the world. If California were a sovereign nation, it would rank as the world's fifth-largest economy as of 2022, behind India and ahead of the United Kingdom, as well as the 37th most populous. The Greater Los Angeles area and the San Francisco area are the nation's second- and fourth-largest urban economies. The San Francisco Bay Area Combined Statistical Area had the nation's highest gross domestic product per capita ($106,757) among large primary statistical areas in 2018, and is home to five of the world's ten largest companies by market capitalization and four of the world's ten richest people. Slightly over 84 percent of the state's residents 25 or older hold a high school degree, the lowest high school education rate of all 50 states. Prior to European colonization, California was one of the most culturally and linguistically diverse areas in pre-Columbian North America, and the indigenous peoples of California constituted the highest Native American population density north of what is now Mexico. European exploration in the 16th and 17th centuries led to the colonization of California by the Spanish Empire. In 1804, it was included in Alta California province within the Viceroyalty of New Spain. The area became a part of Mexico in 1821, following its successful war for independence, but was ceded to the United States in 1848 after the Mexican–American War. The California Gold Rush started in 1848 and led to dramatic social and demographic changes, including the depopulation of indigenous peoples in the California genocide. The western portion of Alta California was then organized and admitted as the 31st state on September 9, 1850, as a free state, following the Compromise of 1850. Notable contributions to popular culture, ranging from entertainment, sports, music, and fashion, have their origins in California. The state also has made substantial contributions in the fields of communication, information, innovation, education, environmentalism, entertainment, economics, politics, technology, and religion. California is the home of Hollywood, the oldest and one of the largest film industries in the world, profoundly influencing global entertainment. 
It is considered the origin of the American film industry, hippie counterculture, beach and car culture, the personal computer, the internet, fast food, diners, burger joints, skateboarding, and the fortune cookie, among other inventions. The San Francisco Bay Area and the Greater Los Angeles Area are widely seen as the centers of the global technology and U.S. film industries, respectively. California's economy is very diverse. California's agricultural industry has the highest output of any U.S. state, and is led by its dairy, almonds, and grapes. With the busiest ports in the country, California plays a pivotal role in the global supply chain, hauling in about 40% of all goods imported to the United States. The state's extremely diverse geography ranges from the Pacific Coast and metropolitan areas in the west to the Sierra Nevada mountains in the east, and from the redwood and Douglas fir forests in the northwest to the Mojave Desert in the southeast. Two-thirds of the nation's earthquake risk lies in California. The Central Valley, a fertile agricultural area, dominates the state's center. California is well known for its warm Mediterranean climate along the coast and monsoon seasonal weather inland. The large size of the state results in climates that vary from moist temperate rainforest in the north to arid desert in the interior, as well as snowy alpine in the mountains. Droughts and wildfires are an ongoing issue for the state. | 2001-11-17T21:38:56Z | 2023-12-21T22:00:31Z | [
"Template:Further",
"Template:Coord",
"Template:About",
"Template:Cite map",
"Template:Infobox region symbols",
"Template:Percentage",
"Template:S-bef",
"Template:S-aft",
"Template:S-end",
"Template:Refn",
"Template:Sfn",
"Template:CongRec",
"Template:Webarchive",
"Template:S-start",
"Template:Convert",
"Template:ISBN",
"Template:Cite magazine",
"Template:S-ttl",
"Template:Cite book",
"Template:Div col",
"Template:US Census population",
"Template:Flagicon",
"Template:Citation",
"Template:Short description",
"Template:Multiple image",
"Template:Bartable",
"Template:Party color cell",
"Template:Harvnb",
"Template:Navboxes",
"Template:As of",
"Template:Nts",
"Template:Pie chart",
"Template:Osmrelation-inline",
"Template:Toclimit",
"Template:Use mdy dates",
"Template:Legend",
"Template:Clear",
"Template:Curlie",
"Template:Pp",
"Template:Lang",
"Template:Cite press release",
"Template:Infobox U.S. state",
"Template:Cite news",
"Template:Cite ngs",
"Template:Sister project links",
"Template:Change",
"Template:Largest cities",
"Template:Break",
"Template:NoteFoot",
"Template:Reflist",
"Template:Cite journal",
"Template:SemiBareRefNeedsTitle",
"Template:Main",
"Template:See also",
"Template:Cite web",
"Template:Doi",
"Template:Portal bar",
"Template:Spaces",
"Template:Authority control",
"Template:Blockquote"
] | https://en.wikipedia.org/wiki/California |
5,408 | Columbia River | The Columbia River (Upper Chinook: Wimahl or Wimal; Sahaptin: Nch’i-Wàna or Nchi wana; Sinixt dialect swah'netk'qhu) is the largest river in the Pacific Northwest region of North America. The river forms in the Rocky Mountains of British Columbia, Canada. It flows northwest and then south into the U.S. state of Washington, then turns west to form most of the border between Washington and the state of Oregon before emptying into the Pacific Ocean. The river is 1,243 miles (2,000 kilometers) long, and its largest tributary is the Snake River. Its drainage basin is roughly the size of France and extends into seven states of the United States and one Canadian province. The fourth-largest river in the United States by volume, the Columbia has the greatest flow of any North American river entering the Pacific. The Columbia has the 36th greatest discharge of any river in the world.
The Columbia and its tributaries have been central to the region's culture and economy for thousands of years. They have been used for transportation since ancient times, linking the region's many cultural groups. The river system hosts many species of anadromous fish, which migrate between freshwater habitats and the saline waters of the Pacific Ocean. These fish—especially the salmon species—provided the core subsistence for native peoples.
The first documented European discovery of the Columbia River occurred when Bruno de Heceta sighted the river's mouth in 1775. On May 11, 1792, a private American ship, Columbia Rediviva, under Captain Robert Gray from Boston became the first non-indigenous vessel to enter the river. Later in 1792, William Robert Broughton of the British Royal Navy commanding HMS Chatham as part of the Vancouver Expedition, navigated past the Oregon Coast Range and 100 miles upriver to what is now Vancouver, Washington. In the following decades, fur-trading companies used the Columbia as a key transportation route. Overland explorers entered the Willamette Valley through the scenic, but treacherous Columbia River Gorge, and pioneers began to settle the valley in increasing numbers. Steamships along the river linked communities and facilitated trade; the arrival of railroads in the late 19th century, many running along the river, supplemented these links.
Since the late 19th century, public and private sectors have extensively developed the river. To aid ship and barge navigation, locks have been built along the lower Columbia and its tributaries, and dredging has opened, maintained, and enlarged shipping channels. Since the early 20th century, dams have been built across the river for power generation, navigation, irrigation, and flood control. The 14 hydroelectric dams on the Columbia's main stem and many more on its tributaries produce more than 44 percent of total U.S. hydroelectric generation. Production of nuclear power has taken place at two sites along the river. Plutonium for nuclear weapons was produced for decades at the Hanford Site, which is now the most contaminated nuclear site in the United States. These developments have greatly altered river environments in the watershed, mainly through industrial pollution and barriers to fish migration.
The Columbia begins its 1,243-mile (2,000 km) journey in the southern Rocky Mountain Trench in British Columbia (BC). Columbia Lake – 2,690 feet (820 meters) above sea level – and the adjoining Columbia Wetlands form the river's headwaters. The trench is a broad, deep, and long glacial valley between the Canadian Rockies and the Columbia Mountains in BC. For its first 200 miles (320 km), the Columbia flows northwest along the trench through Windermere Lake and the town of Invermere, a region known in BC as the Columbia Valley, then northwest to Golden and into Kinbasket Lake. Rounding the northern end of the Selkirk Mountains, the river turns sharply south through a region known as the Big Bend Country, passing through Revelstoke Lake and the Arrow Lakes. Revelstoke, the Big Bend, and the Columbia Valley combined are referred to in BC parlance as the Columbia Country. Below the Arrow Lakes, the Columbia passes the cities of Castlegar, located at the Columbia's confluence with the Kootenay River, and Trail, two major population centers of the West Kootenay region. The Pend Oreille River joins the Columbia about 2 miles (3 km) north of the United States–Canada border.
The Columbia enters eastern Washington flowing south and turning to the west at the Spokane River confluence. It marks the southern and eastern borders of the Colville Indian Reservation and the western border of the Spokane Indian Reservation. The river turns south after the Okanogan River confluence, then southeasterly near the confluence with the Wenatchee River in central Washington. This C-shaped segment of the river is also known as the "Big Bend". During the Missoula Floods 10,000–15,000 years ago, much of the floodwater took a more direct route south, forming the ancient river bed known as the Grand Coulee. After the floods, the river found its present course, and the Grand Coulee was left dry. The construction of the Grand Coulee Dam in the mid-20th century impounded the river, forming Lake Roosevelt, from which water was pumped into the dry coulee, forming the reservoir of Banks Lake.
The river flows past The Gorge Amphitheatre, a prominent concert venue in the Northwest, then through Priest Rapids Dam, and then through the Hanford Nuclear Reservation. Entirely within the reservation is Hanford Reach, the only U.S. stretch of the river that is completely free-flowing, unimpeded by dams, and not a tidal estuary. The Snake River and Yakima River join the Columbia in the Tri-Cities population center. The Columbia makes a sharp bend to the west at the Washington–Oregon border. The river defines that border for the final 309 miles (497 km) of its journey.
The Deschutes River joins the Columbia near The Dalles. Between The Dalles and Portland, the river cuts through the Cascade Range, forming the dramatic Columbia River Gorge. Apart from the Columbia, only the Klamath and Pit rivers completely breach the Cascades; the other rivers that flow through the range originate in or very near the mountains. The headwaters and upper course of the Pit River are on the Modoc Plateau; downstream, the Pit cuts a canyon through the southern reaches of the Cascades. In contrast, the Columbia cuts through the range nearly a thousand miles from its source in the Rocky Mountains. The gorge is known for its strong and steady winds, scenic beauty, and its role as an important transportation link. The river continues west, bending sharply to the north-northwest near Portland and Vancouver, Washington, at the Willamette River confluence. Here the river slows considerably, dropping sediment that might otherwise form a river delta. Near Longview, Washington, and the Cowlitz River confluence, the river turns west again. The Columbia empties into the Pacific Ocean just west of Astoria, Oregon, over the Columbia Bar, a shifting sandbar that makes the river's mouth one of the most hazardous stretches of water to navigate in the world. Because of the danger and the many shipwrecks near the mouth, it acquired a reputation as the "Graveyard of Ships".
The Columbia drains an area of about 258,000 square miles (670,000 square kilometers). Its drainage basin covers nearly all of Idaho, large portions of British Columbia, Oregon, and Washington, and ultimately all of Montana west of the Continental Divide, and small portions of Wyoming, Utah, and Nevada; the total area is similar to the size of France. Roughly 745 miles (1,200 km) of the river's length and 85 percent of its drainage basin are in the US. The Columbia is the twelfth-longest river and has the sixth-largest drainage basin in the United States. In Canada, where the Columbia flows for 498 miles (801 km) and drains 39,700 square miles (103,000 km²), the river ranks 23rd in length, and the Canadian part of its basin ranks 13th in size among Canadian basins. The Columbia shares its name with nearby places, such as British Columbia, as well as with landforms and bodies of water.
With an average flow at the mouth of about 265,000 cubic feet per second (7,500 cubic meters per second), the Columbia is the largest river by discharge flowing into the Pacific from the Americas and is the fourth-largest by volume in the U.S. The average flow where the river crosses the international border between Canada and the United States is 99,000 cubic feet per second (2,790 cubic meters per second) from a drainage basin of 39,700 square miles (102,800 km²). This amounts to about 15 percent of the entire Columbia watershed. The Columbia's highest recorded flow, measured at The Dalles, was 1,240,000 cubic feet per second (35,000 m³/s) in June 1894, before the river was dammed. The lowest flow recorded at The Dalles was 12,100 cubic feet per second (340 m³/s) on April 16, 1968, and was caused by the initial closure of the John Day Dam, 28 miles (45 km) upstream. The Dalles is about 190 miles (310 km) from the mouth; the river at this point drains about 237,000 square miles (610,000 km²), or about 91 percent of the total watershed. Flow rates on the Columbia are affected by many large upstream reservoirs, many diversions for irrigation, and, on the lower stretches, reverse flow from the tides of the Pacific Ocean. The National Ocean Service observes water levels at six tide gauges and issues tide forecasts for twenty-two additional locations along the river between the entrance at the North Jetty and the base of Bonneville Dam, its head of tide.
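The relationships among the figures quoted above are easy to verify. The following Python sketch is purely illustrative: it reuses the rounded values cited in this section (they are not new measurements), converts the discharges from cubic feet per second to cubic meters per second, and computes the Canadian share of the drainage basin. Small differences from the quoted metric equivalents reflect rounding in the source figures.

# Illustrative check of the discharge and drainage-basin figures quoted above.
# All inputs are the rounded values cited in the text, not new data, so results
# differ slightly from the quoted metric equivalents.

CFS_TO_CMS = 0.0283168    # cubic feet per second -> cubic meters per second

mouth_flow_cfs = 265_000      # average discharge at the mouth
border_flow_cfs = 99_000      # average discharge at the Canada-US border
basin_total_sqmi = 258_000    # total drainage basin, square miles
basin_canada_sqmi = 39_700    # drainage area above the international border

print(f"Discharge at the mouth:  {mouth_flow_cfs * CFS_TO_CMS:,.0f} m³/s")    # about 7,500
print(f"Discharge at the border: {border_flow_cfs * CFS_TO_CMS:,.0f} m³/s")   # about 2,800
print(f"Canadian share of basin: {basin_canada_sqmi / basin_total_sqmi:.0%}")  # about 15%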
When the rifting of Pangaea, due to the process of plate tectonics, pushed North America away from Europe and Africa and into the Panthalassic Ocean (ancestor to the modern Pacific Ocean), the Pacific Northwest was not part of the continent. As the North American continent moved westward, the Farallon Plate subducted under its western margin. As the plate subducted, it carried along island arcs which were accreted to the North American continent, resulting in the creation of the Pacific Northwest between 150 and 90 million years ago. The general outline of the Columbia Basin was not complete until between 60 and 40 million years ago, but it lay under a large inland sea later subject to uplift. Between 50 and 20 million years ago, from the Eocene through the Miocene epochs, tremendous volcanic eruptions frequently modified much of the landscape traversed by the Columbia. The lower reaches of the ancestral river passed through a valley near where Mount Hood later arose. Carrying sediments from erosion and erupting volcanoes, it built a 2-mile (3.2 km) thick delta that underlies the foothills on the east side of the Coast Range near Vernonia in northwestern Oregon. Between 17 million and 6 million years ago, huge outpourings of flood basalt lava covered the Columbia River Plateau and forced the lower Columbia into its present course. The modern Cascade Range began to uplift between 5 and 4 million years ago. Cutting through the uplifting mountains, the Columbia River significantly deepened the Columbia River Gorge.
The river and its drainage basin experienced some of the world's greatest known catastrophic floods toward the end of the last ice age. The periodic rupturing of ice dams at Glacial Lake Missoula resulted in the Missoula Floods, with discharges exceeding the combined flow of all the other rivers in the world, dozens of times over thousands of years. The exact number of floods is unknown, but geologists have documented at least 40; evidence suggests that they occurred between about 19,000 and 13,000 years ago.
The floodwaters rushed across eastern Washington, creating the channeled scablands, which are a complex network of dry canyon-like channels, or coulees that are often braided and sharply gouged into the basalt rock underlying the region's deep topsoil. Numerous flat-topped buttes with rich soil stand high above the chaotic scablands. Constrictions at several places caused the floodwaters to pool into large temporary lakes, such as Lake Lewis, in which sediments were deposited. Water depths have been estimated at 1,000 feet (300 m) at Wallula Gap and 400 feet (120 m) over modern Portland, Oregon. Sediments were also deposited when the floodwaters slowed in the broad flats of the Quincy, Othello, and Pasco Basins. The floods' periodic inundation of the lower Columbia River Plateau deposited rich sediments; 21st-century farmers in the Willamette Valley "plow fields of fertile Montana soil and clays from Washington's Palouse".
Over the last several thousand years a series of large landslides have occurred on the north side of the Columbia River Gorge, sending massive amounts of debris south from Table Mountain and Greenleaf Peak into the gorge near the present site of Bonneville Dam. The most recent and significant is known as the Bonneville Slide, which formed a massive earthen dam, filling 3.5 miles (5.6 km) of the river's length. Various studies have placed the date of the Bonneville Slide anywhere between 1060 and 1760 AD; the idea that the landslide debris present today was formed by more than one slide is relatively recent and may explain the large range of estimates. It has been suggested that if the later dates are accurate there may be a link with the 1700 Cascadia earthquake. The pile of debris resulting from the Bonneville Slide blocked the river until rising water finally washed away the sediment. It is not known how long it took the river to break through the barrier; estimates range from several months to several years. Much of the landslide's debris remained, forcing the river about 1.5 miles (2.4 km) south of its previous channel and forming the Cascade Rapids. In 1938, the construction of Bonneville Dam inundated the rapids as well as the remaining trees that could be used to refine the estimated date of the landslide.
In 1980, the eruption of Mount St. Helens deposited large amounts of sediment in the lower Columbia, temporarily reducing the depth of the shipping channel by 26 feet (7.9 m).
Humans have inhabited the Columbia's watershed for more than 15,000 years, with a transition to a sedentary lifestyle based mainly on salmon starting about 3,500 years ago. In 1962, archaeologists found evidence of human activity dating back 11,230 years at the Marmes Rockshelter, near the confluence of the Palouse and Snake rivers in eastern Washington. In 1996 the skeletal remains of a 9,000-year-old prehistoric man (dubbed Kennewick Man) were found near Kennewick, Washington. The discovery rekindled debate in the scientific community over the origins of human habitation in North America and sparked a protracted controversy over whether the scientific or Native American community was entitled to possess and/or study the remains.
Many different Native Americans and First Nations peoples have a historical and continuing presence on the Columbia. South of the Canada–US border, the Colville, Spokane, Coeur d'Alene, Yakama, Nez Perce, Cayuse, Palus, Umatilla, Cowlitz, and the Confederated Tribes of Warm Springs live along the US stretch. Along the upper Snake River and Salmon River, the Shoshone Bannock tribes are present. The Sinixt or Lakes people lived on the lower stretch of the Canadian portion, while above that the Shuswap people (Secwepemc in their own language) reckon the whole of the upper Columbia east to the Rockies as part of their territory. The Canadian portion of the Columbia Basin outlines the traditional homelands of the Canadian Kootenay–Ktunaxa.
The Chinook tribe, which is not federally recognized, who live near the lower Columbia River, call it Wimahl or Wimal in the Upper Chinook (Kiksht) language, and it is Nch’i-Wàna or Nchi wana to the Sahaptin (Ichishkíin Sɨ́nwit)-speaking peoples of its middle course in present-day Washington. The river is known as swah'netk'qhu by the Sinixt people, who live in the area of the Arrow Lakes in the river's upper reaches in Canada. All three terms essentially mean "the big river".
Oral histories describe the formation and destruction of the Bridge of the Gods, a land bridge that connected the Oregon and Washington sides of the river in the Columbia River Gorge. The bridge, which aligns with geological records of the Bonneville Slide, was described in some stories as the result of a battle between gods, represented by Mount Adams and Mount Hood, in their competition for the affection of a goddess, represented by Mount St. Helens. Native American stories about the bridge differ in their details but agree in general that the bridge permitted increased interaction between tribes on the north and south sides of the river.
Horses, originally acquired from Spanish New Mexico, spread widely via native trade networks, reaching the Shoshone of the Snake River Plain by 1700. The Nez Perce, Cayuse, and Flathead people acquired their first horses around 1730. Along with horses came aspects of the emerging plains culture, such as equestrian and horse training skills, greatly increased mobility, hunting efficiency, trade over long distances, intensified warfare, the linking of wealth and prestige to horses and war, and the rise of large and powerful tribal confederacies. The Nez Perce and Cayuse kept large herds and made annual long-distance trips to the Great Plains for bison hunting, adopted the plains culture to a significant degree, and became the main conduit through which horses and the plains culture diffused into the Columbia River region. Other peoples acquired horses and aspects of the plains culture unevenly. The Yakama, Umatilla, Palus, Spokane, and Coeur d'Alene maintained sizable herds of horses and adopted some of the plains cultural characteristics, but fishing and fish-related economies remained important. Less affected groups included the Molala, Klickitat, Wenatchi, Okanagan, and Sinkiuse-Columbia peoples, who owned small numbers of horses and adopted few plains culture features. Some groups remained essentially unaffected, such as the Sanpoil and Nespelem people, whose culture remained centered on fishing.
Natives of the region encountered foreigners at several times and places during the 18th and 19th centuries. European and American vessels explored the coastal area around the mouth of the river in the late 18th century, trading with local natives. The contact would prove devastating to the Indian tribes; a large portion of their population was wiped out by a smallpox epidemic. Canadian explorer Alexander Mackenzie crossed what is now interior British Columbia in 1793. From 1805 to 1806, the Lewis and Clark Expedition entered the Oregon Country along the Clearwater and Snake rivers, and encountered numerous small settlements of natives. Their records recount tales of hospitable traders who were not above stealing small items from the visitors. They also noted brass teakettles, a British musket, and other artifacts that had been obtained in trade with coastal tribes. From the earliest contact with westerners, the natives of the mid- and lower Columbia were not tribal, but instead congregated in social units no larger than a village, and more often at a family level; these units would shift with the season as people moved about, following the salmon catch up and down the river's tributaries.
Sparked by the 1847 Whitman Massacre, a number of violent battles were fought between American settlers and the region's natives. The subsequent Indian Wars, especially the Yakima War, decimated the native population and removed much land from native control. As years progressed, the right of natives to fish along the Columbia became the central issue of contention with the states, commercial fishers, and private property owners. The US Supreme Court upheld fishing rights in landmark cases in 1905 and 1918, as well as the 1974 case United States v. Washington, commonly called the Boldt Decision.
Fish were central to the culture of the region's natives, both as sustenance and as part of their religious beliefs. Natives drew fish from the Columbia at several major sites, which also served as trading posts. Celilo Falls, located east of the modern city of The Dalles, was a vital hub for trade and the interaction of different cultural groups, being used for fishing and trading for 11,000 years. Prior to contact with westerners, villages along this 9-mile (14 km) stretch may have at times had a population as great as 10,000. The site drew traders from as far away as the Great Plains.
The Cascades Rapids of the Columbia River Gorge, and Kettle Falls and Priest Rapids in eastern Washington, were also major fishing and trading sites.
In prehistoric times the Columbia's salmon and steelhead runs numbered an estimated annual average of 10 to 16 million fish. In comparison, the largest run since 1938 was in 1986, with 3.2 million fish entering the Columbia. The annual catch by natives has been estimated at 42 million pounds (19,000 metric tons). The most important and productive native fishing site was located at Celilo Falls, which was perhaps the most productive inland fishing site in North America. The falls were located at the border between Chinookan- and Sahaptian-speaking peoples and served as the center of an extensive trading network across the Pacific Plateau. Celilo was the oldest continuously inhabited community on the North American continent.
Salmon canneries established by white settlers beginning in 1866 had a strong negative impact on the salmon population, and in 1908 US President Theodore Roosevelt observed that the salmon runs were but a fraction of what they had been 25 years prior.
As river development continued in the 20th century, each of these major fishing sites was flooded by a dam, beginning with Cascades Rapids in 1938. The development was accompanied by extensive negotiations between natives and US government agencies. The Confederated Tribes of Warm Springs, a coalition of various tribes, adopted a constitution and incorporated after the 1938 completion of the Bonneville Dam flooded Cascades Rapids; the Yakama were slower to organize, forming a formal government in 1944. Even in the 1930s, some natives still lived along the river and fished year round, moving with the fish's migration patterns through the seasons. In the 21st century, the Yakama, Nez Perce, Umatilla, and Warm Springs tribes all have treaty fishing rights along the Columbia and its tributaries.
In 1957 Celilo Falls was submerged by the construction of The Dalles Dam, and the native fishing community was displaced. The affected tribes received a $26.8 million settlement for the loss of Celilo and other fishing sites submerged by The Dalles Dam. The Confederated Tribes of Warm Springs used part of its $4 million settlement to establish the Kah-Nee-Ta resort south of Mount Hood.
Some historians believe that Japanese or Chinese vessels blown off course reached the Northwest Coast long before Europeans—possibly as early as 219 BCE. Historian Derek Hayes claims that "It is a near certainty that Japanese or Chinese people arrived on the northwest coast long before any European." It is unknown whether they landed near the Columbia. Evidence exists that Spanish castaways reached the shore in 1679 and traded with the Clatsop; if these were the first Europeans to see the Columbia, they failed to send word home to Spain.
In the 18th century, there was strong interest in discovering a Northwest Passage that would permit navigation between the Atlantic (or inland North America) and the Pacific Ocean. Many ships in the area, especially those under Spanish and British command, searched the northwest coast for a large river that might connect to Hudson Bay or the Missouri River. The first documented European discovery of the Columbia River was that of Bruno de Heceta, who in 1775 sighted the river's mouth. On the advice of his officers, he did not explore it, as he was short-staffed and the current was strong. He considered it a bay, and called it Ensenada de Asunción (Assumption Cove). Later Spanish maps, based on his sighting, showed a river, labeled Río de San Roque (The Saint Roch River), or an entrance, called Entrada de Hezeta, named for Bruno de Hezeta, who sailed the region. Following Hezeta's reports, British maritime fur trader Captain John Meares searched for the river in 1788 but concluded that it did not exist. He named Cape Disappointment for the non-existent river, not realizing the cape marks the northern edge of the river's mouth.
What happened next would form the basis for decades of both cooperation and dispute between British and American exploration of, and ownership claim to, the region. Royal Navy commander George Vancouver sailed past the mouth in April 1792 and observed a change in the water's color, but he accepted Meares' report and continued on his journey northward. Later that month, Vancouver encountered the American captain Robert Gray at the Strait of Juan de Fuca. Gray reported that he had seen the entrance to the Columbia and had spent nine days trying but failing to enter.
On May 11, 1792, Gray returned south and crossed the Columbia Bar, becoming the first known explorer of European descent to enter the river. Gray's fur trading mission had been financed by Boston merchants, who outfitted him with a private vessel named Columbia Rediviva; he named the river after the ship on May 18. Gray spent nine days trading near the mouth of the Columbia, then left without having gone beyond 13 miles (21 km) upstream. The farthest point reached was Grays Bay at the mouth of Grays River. Gray's discovery of the Columbia River was later used by the United States to support its claim to the Oregon Country, which was also claimed by Russia, Great Britain, Spain and other nations.
In October 1792, Vancouver sent Lieutenant William Robert Broughton, his second-in-command, up the river. Broughton got as far as the Sandy River at the western end of the Columbia River Gorge, about 100 miles (160 km) upstream, sighting and naming Mount Hood. Broughton formally claimed the river, its drainage basin, and the nearby coast for Britain. In contrast, Gray had not made any formal claims on behalf of the United States.
Because the Columbia was at the same latitude as the headwaters of the Missouri River, there was some speculation that Gray and Vancouver had discovered the long-sought Northwest Passage. A 1798 British map showed a dotted line connecting the Columbia with the Missouri. When the American explorers Meriwether Lewis and William Clark charted the vast, unmapped lands of the American West in their overland expedition (1803–1805), they found no passage between the rivers. After crossing the Rocky Mountains, Lewis and Clark built dugout canoes and paddled down the Snake River, reaching the Columbia near the present-day Tri-Cities, Washington. They explored a few miles upriver, as far as Bateman Island, before heading down the Columbia, concluding their journey at the river's mouth and establishing Fort Clatsop, a short-lived outpost that was occupied for less than three months.
Canadian explorer David Thompson, of the North West Company, spent the winter of 1807–08 at Kootanae House near the source of the Columbia at present-day Invermere, BC. Over the next few years he explored much of the river and its northern tributaries. In 1811 he traveled down the Columbia to the Pacific Ocean, arriving at the mouth just after John Jacob Astor's Pacific Fur Company had founded Astoria. On his return to the north, Thompson explored the one remaining part of the river he had not yet seen, becoming the first Euro-descended person to travel the entire length of the river.
In 1825, the Hudson's Bay Company (HBC) established Fort Vancouver on the bank of the Columbia, in what is now Vancouver, Washington, as the headquarters of the company's Columbia District, which encompassed everything west of the Rocky Mountains, north of California, and south of Russian-claimed Alaska. Chief Factor John McLoughlin, a physician who had been in the fur trade since 1804, was appointed superintendent of the Columbia District. The HBC reoriented its Columbia District operations toward the Pacific Ocean via the Columbia, which became the region's main trunk route. In the early 1840s Americans began to colonize the Oregon country in large numbers via the Oregon Trail, despite the HBC's efforts to discourage American settlement in the region. For many the final leg of the journey involved travel down the lower Columbia River to Fort Vancouver. This part of the Oregon Trail, the treacherous stretch from The Dalles to below the Cascades, could not be traversed by horses or wagons (only watercraft, at great risk). This prompted the 1846 construction of the Barlow Road.
In the Treaty of 1818 the United States and Britain agreed that both nations were to enjoy equal rights in Oregon Country for 10 years. By 1828, when the so-called "joint occupation" was renewed indefinitely, it seemed probable that the lower Columbia River would in time become the border between the two nations. For years the Hudson's Bay Company successfully maintained control of the Columbia River and American attempts to gain a foothold were fended off. In the 1830s, American religious missions were established at several locations in the lower Columbia River region. In the 1840s a mass migration of American settlers undermined British control. The Hudson's Bay Company tried to maintain dominance by shifting from the fur trade, which was in decline, to exporting other goods such as salmon and lumber. Colonization schemes were attempted, but failed to match the scale of American settlement. Americans generally settled south of the Columbia, mainly in the Willamette Valley. The Hudson's Bay Company tried to establish settlements north of the river, but nearly all the British colonists moved south to the Willamette Valley. The hope that the British colonists might dilute the American presence in the valley failed in the face of the overwhelming number of American settlers. These developments rekindled the issue of "joint occupation" and the boundary dispute. While some British interests, especially the Hudson's Bay Company, fought for a boundary along the Columbia River, the Oregon Treaty of 1846 set the boundary at the 49th parallel. As part of the treaty, the British retained all areas north of the line while the United States acquired the south. The Columbia River became much of the border between the U.S. territories of Oregon and Washington. Oregon became a U.S. state in 1859, while Washington later entered into the Union in 1889.
By the turn of the 20th century, the difficulty of navigating the Columbia was seen as an impediment to the economic development of the Inland Empire region east of the Cascades. The dredging and dam building that followed would permanently alter the river, disrupting its natural flow but also providing electricity, irrigation, navigability and other benefits to the region.
American captain Robert Gray and British captain George Vancouver, who explored the river in 1792, proved that it was possible to cross the Columbia Bar. Many of the challenges associated with that feat remain today; even with modern engineering alterations to the mouth of the river, the strong currents and shifting sandbar make it dangerous to pass between the river and the Pacific Ocean.
The use of steamboats along the river, beginning with the British Beaver in 1836 and followed by American vessels in 1850, contributed to the rapid settlement and economic development of the region. Steamboats operated in several distinct stretches of the river: on its lower reaches, from the Pacific Ocean to Cascades Rapids; from the Cascades to the Dalles-Celilo Falls; from Celilo to Priests Rapids; on the Wenatchee Reach of eastern Washington; on British Columbia's Arrow Lakes; and on tributaries like the Willamette, the Snake and Kootenay Lake. The boats, initially powered by burning wood, carried passengers and freight throughout the region for many years. Early railroads served to connect steamboat lines interrupted by waterfalls on the river's lower reaches. In the 1880s, railroads maintained by companies such as the Oregon Railroad and Navigation Company began to supplement steamboat operations as the major transportation links along the river.
As early as 1881, industrialists proposed altering the natural channel of the Columbia to improve navigation. Changes to the river over the years have included the construction of jetties at the river's mouth, dredging, and the construction of canals and navigation locks. Today, ocean freighters can travel upriver as far as Portland and Vancouver, and barges can reach as far inland as Lewiston, Idaho.
The shifting Columbia Bar makes passage between the river and the Pacific Ocean difficult and dangerous, and numerous rapids along the river hinder navigation. Pacific Graveyard, a 1964 book by James A. Gibbs, describes the many shipwrecks near the mouth of the Columbia. Jetties, first constructed in 1886, extend the river's channel into the ocean. Strong currents and the shifting sandbar remain a threat to ships entering the river and necessitate continuous maintenance of the jetties.
In 1891, the Columbia was dredged to enhance shipping. The channel between the ocean and Portland and Vancouver was deepened from 17 feet (5.2 m) to 25 feet (7.6 m). The Columbian called for the channel to be deepened to 40 feet (12 m) as early as 1905, but that depth was not attained until 1976.
Cascade Locks and Canal were first constructed in 1896 around the Cascades Rapids, enabling boats to travel safely through the Columbia River Gorge. The Celilo Canal, bypassing Celilo Falls, opened to river traffic in 1915. In the mid-20th century, the construction of dams along the length of the river submerged the rapids beneath a series of reservoirs. An extensive system of locks allowed ships and barges to pass easily between reservoirs. A navigation channel reaching Lewiston, Idaho, along the Columbia and Snake rivers, was completed in 1975. Among the main commodities are wheat and other grains, mainly for export. As of 2016, the Columbia ranked third, behind the Mississippi and Paraná rivers, among the world's largest export corridors for grain.
The 1980 eruption of Mount St. Helens caused mudslides in the area, which reduced the Columbia's depth by 25 feet (7.6 m) for a 4-mile (6.4 km) stretch, disrupting Portland's economy.
Efforts to maintain and improve the navigation channel have continued to the present day. In 1990 a new round of studies examined the possibility of further dredging on the lower Columbia. The plans were controversial from the start because of economic and environmental concerns.
In 1999, Congress authorized deepening the channel between Portland and Astoria from 40 to 43 feet (12–13 m), making it possible for large container and grain ships to reach Portland and Vancouver. The project met opposition because of concerns about stirring up toxic sediment on the riverbed. Portland-based Northwest Environmental Advocates brought a lawsuit against the Army Corps of Engineers, but it was rejected by the Ninth U.S. Circuit Court of Appeals in August 2006. The project includes measures to mitigate environmental damage; for instance, the US Army Corps of Engineers must restore 12 times the area of wetland damaged by the project. In early 2006, the Corps spilled 50 US gallons (190 L) of hydraulic oil into the Columbia, drawing further criticism from environmental organizations.
Work on the project began in 2005 and concluded in 2010. The project's cost was estimated at $150 million, with the federal government paying 65 percent, Oregon and Washington paying $27 million each, and six local ports also contributing.
In 1902, the United States Bureau of Reclamation was established to aid in the economic development of arid western states. One of its major undertakings was building Grand Coulee Dam to provide irrigation for the 600 thousand acres (2,400 km²) of the Columbia Basin Project in central Washington. With the onset of World War II, the focus of dam construction shifted to production of hydroelectricity. Irrigation efforts resumed after the war.
River development occurred within the structure of the 1909 International Boundary Waters Treaty between the United States and Canada. The United States Congress passed the Rivers and Harbors Act of 1925, which directed the U.S. Army Corps of Engineers and the Federal Power Commission to explore the development of the nation's rivers. This prompted agencies to conduct the first formal financial analysis of hydroelectric development; the reports produced by various agencies were presented in House Document 308. Those reports, and subsequent related reports, are referred to as 308 Reports.
In the late 1920s, political forces in the Northwestern United States generally favored the private development of hydroelectric dams along the Columbia. But the overwhelming victories of gubernatorial candidate George W. Joseph in the 1930 Republican primary, and later of his law partner Julius Meier in the general election, were understood to demonstrate strong public support for public ownership of dams. In 1933, President Franklin D. Roosevelt signed a bill that enabled the construction of the Bonneville and Grand Coulee dams as public works projects. The legislation was attributed to the efforts of Oregon Senator Charles McNary, Washington Senator Clarence Dill, and Oregon Congressman Charles Martin, among others.
In 1948, floods swept through the Columbia watershed, destroying Vanport, then the second largest city in Oregon, and impacting cities as far north as Trail, BC. The flooding prompted the U.S. Congress to pass the Flood Control Act of 1950, authorizing the federal development of additional dams and other flood control mechanisms. By that time local communities had become wary of federal hydroelectric projects, and sought local control of new developments; a public utility district in Grant County, Washington, ultimately began construction of the dam at Priest Rapids.
In the 1960s, the United States and Canada signed the Columbia River Treaty, which focused on flood control and the maximization of downstream power generation. Canada agreed to build dams and provide reservoir storage, and the United States agreed to deliver to Canada one-half of the increase in United States downstream power benefits as estimated five years in advance. Canada's obligation was met by building three dams (two on the Columbia, and one on the Duncan River), the last of which was completed in 1973.
Today the main stem of the Columbia River has fourteen dams, of which three are in Canada and eleven in the United States. Four mainstem dams and four lower Snake River dams contain navigation locks to allow ship and barge passage from the ocean as far as Lewiston, Idaho. The river system as a whole has more than 400 dams for hydroelectricity and irrigation. The dams address a variety of demands, including flood control, navigation, stream flow regulation, storage, and delivery of stored waters, reclamation of public lands and Indian reservations, and the generation of hydroelectric power.
This river may have been shaped by God, or glaciers, or the remnants of the inland sea, or gravity, or a combination of all, but the Army Corps of Engineers controls it now. The Columbia rises and falls, not by the dictates of tide or rainfall, but by a computer-activated, legally arbitrated, federally allocated schedule that changes only when significant litigation is concluded, or a United States Senator nears election time. In that sense, it is reliable.
Timothy Egan, in The Good Rain
The larger U.S. dams are owned and operated by the federal government (some by the Army Corps of Engineers and some by the Bureau of Reclamation), while the smaller dams are operated by public utility districts and private power companies. The federally operated system is known as the Federal Columbia River Power System, which includes 31 dams on the Columbia and its tributaries. The system has altered the seasonal flow of the river to meet higher electricity demands during the winter. At the beginning of the 20th century, roughly 75 percent of the Columbia's flow occurred in the summer, between April and September. By 1980, the summer proportion had been lowered to about 50 percent, essentially eliminating the seasonal pattern.
The installation of dams dramatically altered the landscape and ecosystem of the river. At one time, the Columbia was one of the top salmon-producing river systems in the world. Fishing at previously active sites, such as Celilo Falls in the eastern Columbia River Gorge, has declined sharply over the last century, and salmon populations have been dramatically reduced. Fish ladders have been installed at some dam sites to help the fish journey to spawning waters. Chief Joseph Dam has no fish ladders and completely blocks fish migration to the upper half of the Columbia River system.
The Bureau of Reclamation's Columbia Basin Project focused on the generally dry region of central Washington known as the Columbia Basin, which features rich loess soil. Several groups developed competing proposals, and in 1933, President Franklin D. Roosevelt authorized the Columbia Basin Project. The Grand Coulee Dam was the project's central component; upon completion, it pumped water up from the Columbia to fill the formerly dry Grand Coulee, forming Banks Lake. By 1935, the intended height of the dam was increased from a range between 200 and 300 feet (61 and 91 m) to 500 feet (150 m), a height that would extend the lake impounded by the dam to the Canada–United States border; the project had grown from a local New Deal relief measure to a major national project.
The project's initial purpose was irrigation, but the onset of World War II created a high electricity demand, mainly for aluminum production and for the development of nuclear weapons at the Hanford Site. Irrigation began in 1951. The project provides water to more than 670 thousand acres (2,700 square kilometers) of fertile but arid land in central Washington, transforming the region into a major agricultural center. Important crops include orchard fruit, potatoes, alfalfa, mint, beans, beets, and wine grapes.
Since 1750, the Columbia has experienced six multi-year droughts. The longest, lasting 12 years in the mid‑19th century, reduced the river's flow to 20 percent below average. Scientists have expressed concern that a similar drought would have grave consequences in a region so dependent on the Columbia. In 1992–1993, a lesser drought affected farmers, hydroelectric power producers, shippers, and wildlife managers.
Many farmers in central Washington build dams on their property for irrigation and to control frost on their crops. The Washington Department of Ecology, using new techniques involving aerial photographs, estimated there may be as many as a hundred such dams in the area, most of which are illegal. Six such dams have failed in recent years, causing hundreds of thousands of dollars of damage to crops and public roads. Fourteen farms in the area have gone through the permitting process to build such dams legally.
The Columbia's heavy flow and large elevation drop over a short distance, 2.16 feet per mile (40.9 centimeters per kilometer), give it tremendous capacity for hydroelectricity generation. In comparison, the Mississippi drops less than 0.65 feet per mile (12.3 cm/km). The Columbia alone possesses one-third of the United States's hydroelectric potential. In 2012, the river and its tributaries accounted for 29 GW of hydroelectric generating capacity, contributing 44 percent of the total hydroelectric generation in the nation.
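The connection between flow, elevation drop, and generating capacity can be illustrated with the standard hydropower relation P = η·ρ·g·Q·H. The short Python sketch below is a rough, hypothetical example: the discharge is the average flow at the mouth quoted earlier, but the 100-meter head and 90 percent turbine efficiency are assumed values chosen only to show the arithmetic, not the characteristics of any actual dam on the river.

# Rough estimate of hydroelectric power from flow and head, P = eta * rho * g * Q * H.
# The head and efficiency below are assumed, generic values for illustration only.

RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_power_mw(discharge_m3s: float, head_m: float, efficiency: float = 0.9) -> float:
    """Return hydroelectric power in megawatts for a given discharge and head."""
    return efficiency * RHO * G * discharge_m3s * head_m / 1e6

# Example: the Columbia's average flow of about 7,500 m^3/s through an assumed 100 m head
print(f"{hydro_power_mw(7500, 100):,.0f} MW")   # roughly 6,600 MW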
The two largest of the 150 hydroelectric projects, the Grand Coulee and Chief Joseph dams, are also the largest in the United States. As of 2017, Grand Coulee is the fifth largest hydroelectric plant in the world.
Inexpensive hydropower supported the location of a large aluminum industry in the region, because aluminum's reduction from bauxite requires large amounts of electricity. Until 2000, the Northwestern United States produced up to 17 percent of the world's aluminum and 40 percent of the aluminum produced in the United States. The commoditization of power in the early 21st century, coupled with a drought that reduced the generation capacity of the river, damaged the industry, and by 2001 Columbia River aluminum producers had idled 80 percent of their production capacity. By 2003, the entire United States produced only 15 percent of the world's aluminum, and many smelters along the Columbia had gone dormant or out of business.
Power remains relatively inexpensive along the Columbia, and since the mid-2000s several global enterprises have moved server farm operations into the area to avail themselves of cheap power. Downriver of Grand Coulee, each dam's reservoir is closely regulated by the Bonneville Power Administration (BPA), the U.S. Army Corps of Engineers, and various Washington public utility districts to ensure flow, flood control, and power generation objectives are met. Increasingly, hydropower operations are required to meet standards under the U.S. Endangered Species Act and other agreements to minimize impacts on salmon and other fish, and some conservation and fishing groups support removing four dams on the lower Snake River, the largest tributary of the Columbia.
In 1941, the BPA hired Oklahoma folksinger Woody Guthrie to write songs for a documentary film promoting the benefits of hydropower. In the month he spent traveling the region Guthrie wrote 26 songs, which have become an important part of the cultural history of the region.
The Columbia supports several species of anadromous fish that migrate between the Pacific Ocean and freshwater tributaries of the river. Sockeye salmon, Coho and Chinook ("king") salmon, and steelhead, all of the genus Oncorhynchus, are ocean fish that migrate up the rivers at the end of their life cycles to spawn. White sturgeon, which take 15 to 25 years to mature, typically migrate between the ocean and the upstream habitat several times during their lives.
Salmon populations declined dramatically after the establishment of canneries in 1867. In 1879 it was reported that 545,450 salmon, with an average weight of 22 pounds (10.0 kg), had been caught in a recent season and mainly canned for export to England. A can weighing 1 pound (0.45 kg) could be sold for 8d or 9d. By 1908, there was widespread concern about the decline of salmon and sturgeon. In that year, the people of Oregon passed two laws under their newly instituted program of citizens' initiatives limiting fishing on the Columbia and other rivers. Then in 1948, another initiative banned the use of seine nets (devices already used by Native Americans, and refined by later settlers) altogether.
Dams interrupt the migration of anadromous fish. Salmon and steelhead return to the streams in which they were born to spawn; where dams prevent their return, entire populations of salmon die. Some of the Columbia and Snake River dams employ fish ladders, which are effective to varying degrees at allowing these fish to travel upstream. Another problem exists for the juvenile salmon headed downstream to the ocean. Previously, this journey would have taken two to three weeks. With river currents slowed by the dams, and the Columbia converted from a wild river to a series of slackwater pools, the journey can take several months, which increases the mortality rate. In some cases, the Army Corps of Engineers transports juvenile fish downstream by truck or river barge. The Chief Joseph Dam and several dams on the Columbia's tributaries entirely block migration, and there are no migrating fish on the river above these dams. Sturgeons have different migration habits and can survive without ever visiting the ocean. In many upstream areas cut off from the ocean by dams, sturgeon simply live upstream of the dam.
Not all fish have suffered from the modifications to the river; the northern pikeminnow (formerly known as the squawfish) thrives in the warmer, slower water created by the dams. Research in the mid-1980s found that juvenile salmon were suffering substantially from the predatory pikeminnow, and in 1990, in the interest of protecting salmon, a "bounty" program was established to reward anglers for catching pikeminnow.
In 1994, the salmon catch was smaller than usual in the rivers of Oregon, Washington, and British Columbia, causing concern among commercial fishermen, government agencies, and tribal leaders. US government intervention, to which the states of Alaska, Idaho, and Oregon objected, included an 11-day closure of an Alaska fishery. In April 1994 the Pacific Fisheries Management Council unanimously approved the strictest regulations in 18 years, banning all commercial salmon fishing for that year from Cape Falcon north to the Canada–US border. In the winter of 1994, the return of coho salmon far exceeded expectations, which was attributed in part to the fishing ban.
Also in 1994, United States Secretary of the Interior Bruce Babbitt proposed the removal of several Pacific Northwest dams because of their impact on salmon spawning. The Northwest Power Planning Council approved a plan that provided more water for fish and less for electricity, irrigation, and transportation. Environmental advocates have called for the removal of certain dams in the Columbia system in the years since. Of the 227 major dams in the Columbia River drainage basin, the four Washington dams on the lower Snake River are often identified for removal, for example in an ongoing lawsuit concerning a Bush administration plan for salmon recovery. These dams and reservoirs limit the recovery of upriver salmon runs to Idaho's Salmon and Clearwater rivers. Historically, the Snake produced over 1.5 million spring and summer Chinook salmon, a number that has dwindled to several thousand in recent years. Idaho Power Company's Hells Canyon dams have no fish ladders (and do not pass juvenile salmon downstream), and thus allow no steelhead or salmon to migrate above Hells Canyon. In 2007, the destruction of the Marmot Dam on the Sandy River was the first dam removal in the system. Other Columbia Basin dams that have been removed include Condit Dam on Washington's White Salmon River, and the Milltown Dam on the Clark Fork in Montana.
In southeastern Washington, a 50-mile (80 km) stretch of the river passes through the Hanford Site, established in 1943 as part of the Manhattan Project. The site served as a plutonium production complex, with nine nuclear reactors and related facilities along the banks of the river. From 1944 to 1971, pump systems drew cooling water from the river and, after treating this water for use by the reactors, returned it to the river. Before being released back into the river, the used water was held in large tanks known as retention basins for up to six hours. Longer-lived isotopes were not affected by this retention, and several terabecquerels entered the river every day. By 1957, the eight plutonium production reactors at Hanford dumped a daily average of 50,000 curies of radioactive material into the Columbia. These releases were kept secret by the federal government until the release of declassified documents in the late 1980s. Radiation was measured downstream as far west as the Washington and Oregon coasts.
The nuclear reactors were decommissioned at the end of the Cold War, and the Hanford Site is the focus of one of the world's largest environmental cleanups, managed by the Department of Energy under the oversight of the Washington Department of Ecology and the Environmental Protection Agency. Nearby aquifers contain an estimated 270 billion US gallons (1 billion m³) of groundwater contaminated by high-level nuclear waste that has leaked out of Hanford's underground storage tanks. As of 2008, 1 million US gallons (3,785 m³) of highly radioactive waste was traveling through groundwater toward the Columbia River. This waste is expected to reach the river in 12 to 50 years if cleanup does not proceed on schedule.
In addition to concerns about nuclear waste, numerous other pollutants are found in the river. These include chemical pesticides, bacteria, arsenic, dioxins, and polychlorinated biphenyls (PCB).
Studies have also found significant levels of toxins in fish and the waters they inhabit within the basin. Accumulation of toxins in fish threatens the survival of fish species, and human consumption of these fish can lead to health problems. Water quality is also an important factor in the survival of other wildlife and plants that grow in the Columbia River drainage basin. The states, Indian tribes, and federal government are all engaged in efforts to restore and improve the water, land, and air quality of the Columbia River drainage basin and have committed to work together to accomplish critical ecosystem restoration efforts. Several cleanup efforts are underway, including Superfund projects at Portland Harbor, Hanford, and Lake Roosevelt.
Timber industry activity further contaminates river water, for example in the increased sediment runoff that results from clearcuts. The Northwest Forest Plan, a federal forest-management plan adopted in 1994, mandated that timber companies consider the environmental impacts of their practices on rivers like the Columbia.
On July 1, 2003, Christopher Swain became the first person to swim the Columbia River's entire length, to raise public awareness about the river's environmental health.
Both natural and anthropogenic processes are involved in the cycling of nutrients in the Columbia River basin. Natural processes in the system include estuarine mixing of fresh and ocean waters, and climate variability patterns such as the Pacific Decadal Oscillation and the El Niño–Southern Oscillation (both climatic cycles that affect the amount of regional snowpack and river discharge). Natural sources of nutrients in the Columbia River include weathering, leaf litter, salmon carcasses, runoff from its tributaries, and ocean estuary exchange. Major anthropogenic impacts on nutrients in the basin are due to fertilizers from agriculture, sewage systems, logging, and the construction of dams.
Nutrient dynamics vary in the river basin from the headwaters to the main river and dams, to finally reaching the Columbia River estuary and ocean. Upstream in the headwaters, salmon runs are the main source of nutrients. Dams along the river impact nutrient cycling by increasing residence time of nutrients, and reducing the transport of silicate to the estuary, which directly impacts diatoms, a type of phytoplankton. The dams are also a barrier to salmon migration and can increase the amount of methane locally produced. The Columbia River estuary exports high rates of nutrients into the Pacific, except for nitrogen, which is delivered into the estuary by ocean upwelling sources.
Most of the Columbia's drainage basin (which, at 258,000 square miles or 670,000 square kilometres, is about the size of France) lies roughly between the Rocky Mountains on the east and the Cascade Mountains on the west. In the United States and Canada the term watershed is often used to mean drainage basin. The term Columbia Basin is used to refer not only to the entire drainage basin but also to subsets of the river's watershed, such as the relatively flat and unforested area in eastern Washington bounded by the Cascades, the Rocky Mountains, and the Blue Mountains. Within the watershed are diverse landforms including mountains, arid plateaus, river valleys, rolling uplands, and deep gorges. Grand Teton National Park lies in the watershed, as well as parts of Yellowstone National Park, Glacier National Park, Mount Rainier National Park, and North Cascades National Park. Canadian National Parks in the watershed include Kootenay National Park, Yoho National Park, Glacier National Park, and Mount Revelstoke National Park. Hells Canyon, the deepest gorge in North America, and the Columbia Gorge are in the watershed. Vegetation varies widely, ranging from western hemlock and western redcedar in the moist regions to sagebrush in the arid regions. The watershed provides habitat for 609 known fish and wildlife species, including the bull trout, bald eagle, gray wolf, grizzly bear, and Canada lynx.
The World Wide Fund for Nature (WWF) divides the waters of the Columbia and its tributaries into three freshwater ecoregions: Columbia Glaciated, Columbia Unglaciated, and Upper Snake. The Columbia Glaciated ecoregion, about a third of the total watershed, lies in the north and was covered with ice sheets during the Pleistocene. The ecoregion includes the mainstem Columbia north of the Snake River and tributaries such as the Yakima, Okanagan, Pend Oreille, Clark Fork, and Kootenay rivers. The effects of glaciation include a number of large lakes and a relatively low diversity of freshwater fish. The Upper Snake ecoregion is defined as the Snake River watershed above Shoshone Falls, which totally blocks fish migration. This region has 14 species of fish, many of which are endemic. The Columbia Unglaciated ecoregion makes up the rest of the watershed. It includes the mainstem Columbia below the Snake River and tributaries such as the Salmon, John Day, Deschutes, and lower Snake Rivers. Of the three ecoregions it is the richest in terms of freshwater species diversity. There are 35 species of fish, of which four are endemic. There are also high levels of mollusk endemism.
In 2016, over eight million people lived within the Columbia's drainage basin. Of this total about 3.5 million people lived in Oregon, 2.1 million in Washington, 1.7 million in Idaho, half a million in British Columbia, and 0.4 million in Montana. Population in the watershed has been rising for many decades and is projected to rise to about 10 million by 2030. The highest population densities are found west of the Cascade Mountains along the I-5 corridor, especially in the Portland-Vancouver urban area. High densities are also found around Spokane, Washington, and Boise, Idaho. Although much of the watershed is rural and sparsely populated, areas with recreational and scenic values are growing rapidly. The central Oregon county of Deschutes is the fastest-growing in the state. Populations have also been growing just east of the Cascades in central Washington around the city of Yakima and the Tri-Cities area. Projections for the coming decades assume growth throughout the watershed. The Canadian part of the Okanagan subbasin is also growing rapidly.
Climate varies greatly within the watershed. Elevation ranges from sea level at the river mouth to more than 14,000 feet (4,300 m) in the mountains, and temperatures vary with elevation. The highest peak is Mount Rainier, at 14,411 feet (4,392 m). High elevations have cold winters and short cool summers; interior regions are subject to great temperature variability and severe droughts. Over some of the watershed, especially west of the Cascade Mountains, precipitation maximums occur in winter, when Pacific storms come ashore. Atmospheric conditions block the flow of moisture in summer, which is generally dry except for occasional thunderstorms in the interior. In some of the eastern parts of the watershed, especially shrub-steppe regions with Continental climate patterns, precipitation maximums occur in early summer. Annual precipitation varies from more than 100 inches (250 cm) a year in the Cascades to less than 8 inches (20 cm) in the interior. Much of the watershed gets less than 12 inches (30 cm) a year.
Several major North American drainage basins and many minor ones border the Columbia River's drainage basin. To the east, in northern Wyoming and Montana, the Continental Divide separates the Columbia watershed from the Mississippi-Missouri watershed, which empties into the Gulf of Mexico. To the northeast, mostly along the southern border between British Columbia and Alberta, the Continental Divide separates the Columbia watershed from the Nelson-Lake Winnipeg-Saskatchewan watershed, which empties into Hudson Bay. The Mississippi and Nelson watersheds are separated by the Laurentian Divide, which meets the Continental Divide at Triple Divide Peak near the headwaters of the Columbia's Flathead River tributary. This point marks the meeting of three of North America's main drainage patterns, to the Pacific Ocean, to Hudson Bay, and to the Atlantic Ocean via the Gulf of Mexico.
Further north along the Continental Divide, a short portion of the combined Continental and Laurentian divides separates the Columbia watershed from the Mackenzie-Slave-Athabasca watershed, which empties into the Arctic Ocean. The Nelson and Mackenzie watersheds are separated by a divide between streams flowing to the Arctic Ocean and those of the Hudson Bay watershed. This divide meets the Continental Divide at Snow Dome (also known as Dome), near the northernmost bend of the Columbia River.
To the southeast, in western Wyoming, another divide separates the Columbia watershed from the Colorado–Green watershed, which empties into the Gulf of California. The Columbia, Colorado, and Mississippi watersheds meet at Three Waters Mountain in the Wind River Range of Wyoming. To the south, in Oregon, Nevada, Utah, Idaho, and Wyoming, the Columbia watershed is divided from the Great Basin, whose several watersheds are endorheic, not emptying into any ocean but rather drying up or sinking into sumps. Great Basin watersheds that share a border with the Columbia watershed include Harney Basin, Humboldt River, and Great Salt Lake. The associated triple divide points are Commissary Ridge North, Wyoming, and Sproats Meadow Northwest, Oregon. To the north, mostly in British Columbia, the Columbia watershed borders the Fraser River watershed. To the west and southwest the Columbia watershed borders a number of smaller watersheds that drain to the Pacific Ocean, such as the Klamath River in Oregon and California and the Puget Sound Basin in Washington.
The Columbia receives more than 60 significant tributaries. The four largest that empty directly into the Columbia (measured either by discharge or by size of watershed) are the Snake River (mostly in Idaho), the Willamette River (in northwest Oregon), the Kootenay River (mostly in British Columbia), and the Pend Oreille River (mostly in northern Washington and Idaho, also known as the lower part of the Clark Fork). Each of these four averages more than 20,000 cubic feet per second (570 m³/s) and drains an area of more than 20,000 square miles (52,000 km²).
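The metric figures in parentheses are rounded conversions of the imperial values. Purely as an illustration of how those rounded equivalents arise, here is a minimal Python sketch using the standard conversion factors (1 cubic foot ≈ 0.0283168 m³; 1 square mile ≈ 2.58999 km²); the 20,000 thresholds are the ones quoted above:

```python
# Minimal sketch: reproduce the rounded metric equivalents quoted above
# from standard conversion factors.
CUBIC_FOOT_TO_M3 = 0.0283168   # 1 cubic foot in cubic meters
SQUARE_MILE_TO_KM2 = 2.58999   # 1 square mile in square kilometers

discharge_cfs = 20_000  # cubic feet per second (threshold quoted above)
area_sq_mi = 20_000     # square miles (threshold quoted above)

print(f"{discharge_cfs:,} cfs   ≈ {discharge_cfs * CUBIC_FOOT_TO_M3:,.0f} m³/s")  # ≈ 566 m³/s, quoted as 570
print(f"{area_sq_mi:,} sq mi ≈ {area_sq_mi * SQUARE_MILE_TO_KM2:,.0f} km²")       # ≈ 51,800 km², quoted as 52,000
```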
The Snake is by far the largest tributary. Its watershed of 108,000 square miles (280,000 km²) is larger than the state of Idaho. Its discharge is roughly a third of the Columbia's at the rivers' confluence, but compared to the Columbia upstream of the confluence, the Snake is longer (113%) and has a larger drainage basin (104%).
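The length and basin percentages are simple ratios of the Snake's figures to those of the Columbia upstream of their confluence. The sketch below only illustrates that arithmetic; the upstream-Columbia basin area is an assumed round figure chosen to be consistent with the stated 104 percent, not a sourced value:

```python
# Illustrative ratio arithmetic for the tributary comparison above.
# snake_basin_sq_mi is taken from the text; columbia_upstream_sq_mi is an
# assumption (a round figure consistent with the stated 104% ratio).
def percent_of(part: float, whole: float) -> float:
    """Return part as a percentage of whole."""
    return 100.0 * part / whole

snake_basin_sq_mi = 108_000
columbia_upstream_sq_mi = 104_000  # assumed for illustration only

print(f"Snake basin ≈ {percent_of(snake_basin_sq_mi, columbia_upstream_sq_mi):.0f}% "
      f"of the Columbia's basin upstream of the confluence")  # prints ≈ 104%
```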
The Pend Oreille River system (including its main tributaries, the Clark Fork and Flathead rivers) is also similar in size to the Columbia at their confluence. Compared to the Columbia River above the two rivers' confluence, the Pend Oreille-Clark-Flathead is nearly as long (about 86%), its basin about three-fourths as large (76%), and its discharge over a third (37%). | [
{
"paragraph_id": 0,
"text": "The Columbia River (Upper Chinook: Wimahl or Wimal; Sahaptin: Nch’i-Wàna or Nchi wana; Sinixt dialect swah'netk'qhu) is the largest river in the Pacific Northwest region of North America. The river forms in the Rocky Mountains of British Columbia, Canada. It flows northwest and then south into the U.S. state of Washington, then turns west to form most of the border between Washington and the state of Oregon before emptying into the Pacific Ocean. The river is 1,243 miles (2,000 kilometers) long, and its largest tributary is the Snake River. Its drainage basin is roughly the size of France and extends into seven states of the United States and one Canadian province. The fourth-largest river in the United States by volume, the Columbia has the greatest flow of any North American river entering the Pacific. The Columbia has the 36th greatest discharge of any river in the world.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Columbia and its tributaries have been central to the region's culture and economy for thousands of years. They have been used for transportation since ancient times, linking the region's many cultural groups. The river system hosts many species of anadromous fish, which migrate between freshwater habitats and the saline waters of the Pacific Ocean. These fish—especially the salmon species—provided the core subsistence for native peoples.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The first documented European discovery of the Columbia River occurred when Bruno de Heceta sighted the river's mouth in 1775. On May 11, 1792, a private American ship, Columbia Rediviva, under Captain Robert Gray from Boston became the first non-indigenous vessel to enter the river. Later in 1792, William Robert Broughton of the British Royal Navy commanding HMS Chatham as part of the Vancouver Expedition, navigated past the Oregon Coast Range and 100 miles upriver to what is now Vancouver, Washington. In the following decades, fur-trading companies used the Columbia as a key transportation route. Overland explorers entered the Willamette Valley through the scenic, but treacherous Columbia River Gorge, and pioneers began to settle the valley in increasing numbers. Steamships along the river linked communities and facilitated trade; the arrival of railroads in the late 19th century, many running along the river, supplemented these links.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Since the late 19th century, public and private sectors have extensively developed the river. To aid ship and barge navigation, locks have been built along the lower Columbia and its tributaries, and dredging has opened, maintained, and enlarged shipping channels. Since the early 20th century, dams have been built across the river for power generation, navigation, irrigation, and flood control. The 14 hydroelectric dams on the Columbia's main stem and many more on its tributaries produce more than 44 percent of total U.S. hydroelectric generation. Production of nuclear power has taken place at two sites along the river. Plutonium for nuclear weapons was produced for decades at the Hanford Site, which is now the most contaminated nuclear site in the United States. These developments have greatly altered river environments in the watershed, mainly through industrial pollution and barriers to fish migration.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The Columbia begins its 1,243-mile (2,000 km) journey in the southern Rocky Mountain Trench in British Columbia (BC). Columbia Lake – 2,690 feet (820 meters) above sea level – and the adjoining Columbia Wetlands form the river's headwaters. The trench is a broad, deep, and long glacial valley between the Canadian Rockies and the Columbia Mountains in BC. For its first 200 miles (320 km), the Columbia flows northwest along the trench through Windermere Lake and the town of Invermere, a region known in BC as the Columbia Valley, then northwest to Golden and into Kinbasket Lake. Rounding the northern end of the Selkirk Mountains, the river turns sharply south through a region known as the Big Bend Country, passing through Revelstoke Lake and the Arrow Lakes. Revelstoke, the Big Bend, and the Columbia Valley combined are referred to in BC parlance as the Columbia Country. Below the Arrow Lakes, the Columbia passes the cities of Castlegar, located at the Columbia's confluence with the Kootenay River, and Trail, two major population centers of the West Kootenay region. The Pend Oreille River joins the Columbia about 2 miles (3 km) north of the United States–Canada border.",
"title": "Course"
},
{
"paragraph_id": 5,
"text": "The Columbia enters eastern Washington flowing south and turning to the west at the Spokane River confluence. It marks the southern and eastern borders of the Colville Indian Reservation and the western border of the Spokane Indian Reservation. The river turns south after the Okanogan River confluence, then southeasterly near the confluence with the Wenatchee River in central Washington. This C-shaped segment of the river is also known as the \"Big Bend\". During the Missoula Floods 10–15,000 years ago, much of the floodwater took a more direct route south, forming the ancient river bed known as the Grand Coulee. After the floods, the river found its present course, and the Grand Coulee was left dry. The construction of the Grand Coulee Dam in the mid-20th century impounded the river, forming Lake Roosevelt, from which water was pumped into the dry coulee, forming the reservoir of Banks Lake.",
"title": "Course"
},
{
"paragraph_id": 6,
"text": "The river flows past The Gorge Amphitheatre, a prominent concert venue in the Northwest, then through Priest Rapids Dam, and then through the Hanford Nuclear Reservation. Entirely within the reservation is Hanford Reach, the only U.S. stretch of the river that is completely free-flowing, unimpeded by dams, and not a tidal estuary. The Snake River and Yakima River join the Columbia in the Tri-Cities population center. The Columbia makes a sharp bend to the west at the Washington–Oregon border. The river defines that border for the final 309 miles (497 km) of its journey.",
"title": "Course"
},
{
"paragraph_id": 7,
"text": "The Deschutes River joins the Columbia near The Dalles. Between The Dalles and Portland, the river cuts through the Cascade Range, forming the dramatic Columbia River Gorge. No other rivers except for the Klamath and Pit River completely breach the Cascades – the other rivers that flow through the range also originate in or very near the mountains. The headwaters and upper course of the Pit River are on the Modoc Plateau; downstream, the Pit cuts a canyon through the southern reaches of the Cascades. In contrast, the Columbia cuts through the range nearly a thousand miles from its source in the Rocky Mountains. The gorge is known for its strong and steady winds, scenic beauty, and its role as an important transportation link. The river continues west, bending sharply to the north-northwest near Portland and Vancouver, Washington, at the Willamette River confluence. Here the river slows considerably, dropping sediment that might otherwise form a river delta. Near Longview, Washington and the Cowlitz River confluence, the river turns west again. The Columbia empties into the Pacific Ocean just west of Astoria, Oregon, over the Columbia Bar, a shifting sandbar that makes the river's mouth one of the most hazardous stretches of water to navigate in the world. Because of the danger and the many shipwrecks near the mouth, it acquired a reputation as the \"Graveyard of Ships\".",
"title": "Course"
},
{
"paragraph_id": 8,
"text": "The Columbia drains an area of about 258,000 square miles (670,000 square kilometers). Its drainage basin covers nearly all of Idaho, large portions of British Columbia, Oregon, and Washington, and ultimately all of Montana west of the Continental Divide, and small portions of Wyoming, Utah, and Nevada; the total area is similar to the size of France. Roughly 745 miles (1,200 km) of the river's length and 85 percent of its drainage basin are in the US. The Columbia is the twelfth-longest river and has the sixth-largest drainage basin in the United States. In Canada, where the Columbia flows for 498 miles (801 km) and drains 39,700 square miles (103,000 km), the river ranks 23rd in length, and the Canadian part of its basin ranks 13th in size among Canadian basins. The Columbia shares its name with nearby places, such as British Columbia, as well as with landforms and bodies of water.",
"title": "Course"
},
{
"paragraph_id": 9,
"text": "With an average flow at the mouth of about 265,000 cubic feet per second (7,500 cubic meters per second), the Columbia is the largest river by discharge flowing into the Pacific from the Americas and is the fourth-largest by volume in the U.S. The average flow where the river crosses the international border between Canada and the United States is 99,000 cubic feet per second (2,790 cubic meters per second) from a drainage basin of 39,700 square miles (102,800 km). This amounts to about 15 percent of the entire Columbia watershed. The Columbia's highest recorded flow, measured at The Dalles, was 1,240,000 cubic feet per second (35,000 m/s) in June 1894, before the river was dammed. The lowest flow recorded at The Dalles was 12,100 cubic feet per second (340 m/s) on April 16, 1968, and was caused by the initial closure of the John Day Dam, 28 miles (45 km) upstream. The Dalles is about 190 miles (310 km) from the mouth; the river at this point drains about 237,000 square miles (610,000 km) or about 91 percent of the total watershed. Flow rates on the Columbia are affected by many large upstream reservoirs, many diversions for irrigation, and, on the lower stretches, reverse flow from the tides of the Pacific Ocean. The National Ocean Service observes water levels at six tide gauges and issues tide forecasts for twenty-two additional locations along the river between the entrance at the North Jetty and the base of Bonneville Dam, its head of tide.",
"title": "Course"
},
{
"paragraph_id": 10,
"text": "The Columbia River multiannual average discharge:",
"title": "Course"
},
{
"paragraph_id": 11,
"text": "",
"title": "Course"
},
{
"paragraph_id": 12,
"text": "",
"title": "Course"
},
{
"paragraph_id": 13,
"text": "When the rifting of Pangaea, due to the process of plate tectonics, pushed North America away from Europe and Africa and into the Panthalassic Ocean (ancestor to the modern Pacific Ocean), the Pacific Northwest was not part of the continent. As the North American continent moved westward, the Farallon Plate subducted under its western margin. As the plate subducted, it carried along island arcs which were accreted to the North American continent, resulting in the creation of the Pacific Northwest between 150 and 90 million years ago. The general outline of the Columbia Basin was not complete until between 60 and 40 million years ago, but it lay under a large inland sea later subject to uplift. Between 50 and 20 million years ago, from the Eocene through the Miocene eras, tremendous volcanic eruptions frequently modified much of the landscape traversed by the Columbia. The lower reaches of the ancestral river passed through a valley near where Mount Hood later arose. Carrying sediments from erosion and erupting volcanoes, it built a 2-mile (3.2 km) thick delta that underlies the foothills on the east side of the Coast Range near Vernonia in northwestern Oregon. Between 17 million and 6 million years ago, huge outpourings of flood basalt lava covered the Columbia River Plateau and forced the lower Columbia into its present course. The modern Cascade Range began to uplift 5 to 4 million years ago. Cutting through the uplifting mountains, the Columbia River significantly deepened the Columbia River Gorge.",
"title": "Geology"
},
{
"paragraph_id": 14,
"text": "The river and its drainage basin experienced some of the world's greatest known catastrophic floods toward the end of the last ice age. The periodic rupturing of ice dams at Glacial Lake Missoula resulted in the Missoula Floods, with discharges exceeding the combined flow of all the other rivers in the world, dozens of times over thousands of years. The exact number of floods is unknown, but geologists have documented at least 40; evidence suggests that they occurred between about 19,000 and 13,000 years ago.",
"title": "Geology"
},
{
"paragraph_id": 15,
"text": "The floodwaters rushed across eastern Washington, creating the channeled scablands, which are a complex network of dry canyon-like channels, or coulees that are often braided and sharply gouged into the basalt rock underlying the region's deep topsoil. Numerous flat-topped buttes with rich soil stand high above the chaotic scablands. Constrictions at several places caused the floodwaters to pool into large temporary lakes, such as Lake Lewis, in which sediments were deposited. Water depths have been estimated at 1,000 feet (300 m) at Wallula Gap and 400 feet (120 m) over modern Portland, Oregon. Sediments were also deposited when the floodwaters slowed in the broad flats of the Quincy, Othello, and Pasco Basins. The floods' periodic inundation of the lower Columbia River Plateau deposited rich sediments; 21st-century farmers in the Willamette Valley \"plow fields of fertile Montana soil and clays from Washington's Palouse\".",
"title": "Geology"
},
{
"paragraph_id": 16,
"text": "Over the last several thousand years a series of large landslides have occurred on the north side of the Columbia River Gorge, sending massive amounts of debris south from Table Mountain and Greenleaf Peak into the gorge near the present site of Bonneville Dam. The most recent and significant is known as the Bonneville Slide, which formed a massive earthen dam, filling 3.5 miles (5.6 km) of the river's length. Various studies have placed the date of the Bonneville Slide anywhere between 1060 and 1760 AD; the idea that the landslide debris present today was formed by more than one slide is relatively recent and may explain the large range of estimates. It has been suggested that if the later dates are accurate there may be a link with the 1700 Cascadia earthquake. The pile of debris resulting from the Bonneville Slide blocked the river until rising water finally washed away the sediment. It is not known how long it took the river to break through the barrier; estimates range from several months to several years. Much of the landslide's debris remained, forcing the river about 1.5 miles (2.4 km) south of its previous channel and forming the Cascade Rapids. In 1938, the construction of Bonneville Dam inundated the rapids as well as the remaining trees that could be used to refine the estimated date of the landslide.",
"title": "Geology"
},
{
"paragraph_id": 17,
"text": "In 1980, the eruption of Mount St. Helens deposited large amounts of sediment in the lower Columbia, temporarily reducing the depth of the shipping channel by 26 feet (7.9 m).",
"title": "Geology"
},
{
"paragraph_id": 18,
"text": "Humans have inhabited the Columbia's watershed for more than 15,000 years, with a transition to a sedentary lifestyle based mainly on salmon starting about 3,500 years ago. In 1962, archaeologists found evidence of human activity dating back 11,230 years at the Marmes Rockshelter, near the confluence of the Palouse and Snake rivers in eastern Washington. In 1996 the skeletal remains of a 9,000-year-old prehistoric man (dubbed Kennewick Man) were found near Kennewick, Washington. The discovery rekindled debate in the scientific community over the origins of human habitation in North America and sparked a protracted controversy over whether the scientific or Native American community was entitled to possess and/or study the remains.",
"title": "Indigenous peoples"
},
{
"paragraph_id": 19,
"text": "Many different Native Americans and First Nations peoples have a historical and continuing presence on the Columbia. South of the Canada–US border, the Colville, Spokane, Coeur d'Alene, Yakama, Nez Perce, Cayuse, Palus, Umatilla, Cowlitz, and the Confederated Tribes of Warm Springs live along the US stretch. Along the upper Snake River and Salmon River, the Shoshone Bannock tribes are present. The Sinixt or Lakes people lived on the lower stretch of the Canadian portion, while above that the Shuswap people (Secwepemc in their own language) reckon the whole of the upper Columbia east to the Rockies as part of their territory. The Canadian portion of the Columbia Basin outlines the traditional homelands of the Canadian Kootenay–Ktunaxa.",
"title": "Indigenous peoples"
},
{
"paragraph_id": 20,
"text": "The Chinook tribe, which is not federally recognized, who live near the lower Columbia River, call it Wimahl or Wimal in the Upper Chinook (Kiksht) language, and it is Nch’i-Wàna or Nchi wana to the Sahaptin (Ichishkíin Sɨ́nwit)-speaking peoples of its middle course in present-day Washington. The river is known as swah'netk'qhu by the Sinixt people, who live in the area of the Arrow Lakes in the river's upper reaches in Canada. All three terms essentially mean \"the big river\".",
"title": "Indigenous peoples"
},
{
"paragraph_id": 21,
"text": "Oral histories describe the formation and destruction of the Bridge of the Gods, a land bridge that connected the Oregon and Washington sides of the river in the Columbia River Gorge. The bridge, which aligns with geological records of the Bonneville Slide, was described in some stories as the result of a battle between gods, represented by Mount Adams and Mount Hood, in their competition for the affection of a goddess, represented by Mount St. Helens. Native American stories about the bridge differ in their details but agree in general that the bridge permitted increased interaction between tribes on the north and south sides of the river.",
"title": "Indigenous peoples"
},
{
"paragraph_id": 22,
"text": "Horses, originally acquired from Spanish New Mexico, spread widely via native trade networks, reaching the Shoshone of the Snake River Plain by 1700. The Nez Perce, Cayuse, and Flathead people acquired their first horses around 1730. Along with horses came aspects of the emerging plains culture, such as equestrian and horse training skills, greatly increased mobility, hunting efficiency, trade over long distances, intensified warfare, the linking of wealth and prestige to horses and war, and the rise of large and powerful tribal confederacies. The Nez Perce and Cayuse kept large herds and made annual long-distance trips to the Great Plains for bison hunting, adopted the plains culture to a significant degree, and became the main conduit through which horses and the plains culture diffused into the Columbia River region. Other peoples acquired horses and aspects of the plains culture unevenly. The Yakama, Umatilla, Palus, Spokane, and Coeur d'Alene maintained sizable herds of horses and adopted some of the plains cultural characteristics, but fishing and fish-related economies remained important. Less affected groups included the Molala, Klickitat, Wenatchi, Okanagan, and Sinkiuse-Columbia peoples, who owned small numbers of horses and adopted few plains culture features. Some groups remained essentially unaffected, such as the Sanpoil and Nespelem people, whose culture remained centered on fishing.",
"title": "Indigenous peoples"
},
{
"paragraph_id": 23,
"text": "Natives of the region encountered foreigners at several times and places during the 18th and 19th centuries. European and American vessels explored the coastal area around the mouth of the river in the late 18th century, trading with local natives. The contact would prove devastating to the Indian tribes; a large portion of their population was wiped out by a smallpox epidemic. Canadian explorer Alexander Mackenzie crossed what is now interior British Columbia in 1793. From 1805 to 1806, the Lewis and Clark Expedition entered the Oregon Country along the Clearwater and Snake rivers, and encountered numerous small settlements of natives. Their records recount tales of hospitable traders who were not above stealing small items from the visitors. They also noted brass teakettles, a British musket, and other artifacts that had been obtained in trade with coastal tribes. From the earliest contact with westerners, the natives of the mid- and lower Columbia were not tribal, but instead congregated in social units no larger than a village, and more often at a family level; these units would shift with the season as people moved about, following the salmon catch up and down the river's tributaries.",
"title": "Indigenous peoples"
},
{
"paragraph_id": 24,
"text": "Sparked by the 1847 Whitman Massacre, a number of violent battles were fought between American settlers and the region's natives. The subsequent Indian Wars, especially the Yakima War, decimated the native population and removed much land from native control. As years progressed, the right of natives to fish along the Columbia became the central issue of contention with the states, commercial fishers, and private property owners. The US Supreme Court upheld fishing rights in landmark cases in 1905 and 1918, as well as the 1974 case United States v. Washington, commonly called the Boldt Decision.",
"title": "Indigenous peoples"
},
{
"paragraph_id": 25,
"text": "Fish were central to the culture of the region's natives, both as sustenance and as part of their religious beliefs. Natives drew fish from the Columbia at several major sites, which also served as trading posts. Celilo Falls, located east of the modern city of The Dalles, was a vital hub for trade and the interaction of different cultural groups, being used for fishing and trading for 11,000 years. Prior to contact with westerners, villages along this 9-mile (14 km) stretch may have at times had a population as great as 10,000. The site drew traders from as far away as the Great Plains.",
"title": "Indigenous peoples"
},
{
"paragraph_id": 26,
"text": "The Cascades Rapids of the Columbia River Gorge, and Kettle Falls and Priest Rapids in eastern Washington, were also major fishing and trading sites.",
"title": "Indigenous peoples"
},
{
"paragraph_id": 27,
"text": "In prehistoric times the Columbia's salmon and steelhead runs numbered an estimated annual average of 10 to 16 million fish. In comparison, the largest run since 1938 was in 1986, with 3.2 million fish entering the Columbia. The annual catch by natives has been estimated at 42 million pounds (19,000 metric tons). The most important and productive native fishing site was located at Celilo Falls, which was perhaps the most productive inland fishing site in North America. The falls were located at the border between Chinookan- and Sahaptian-speaking peoples and served as the center of an extensive trading network across the Pacific Plateau. Celilo was the oldest continuously inhabited community on the North American continent.",
"title": "Indigenous peoples"
},
{
"paragraph_id": 28,
"text": "Salmon canneries established by white settlers beginning in 1866 had a strong negative impact on the salmon population, and in 1908 US President Theodore Roosevelt observed that the salmon runs were but a fraction of what they had been 25 years prior.",
"title": "Indigenous peoples"
},
{
"paragraph_id": 29,
"text": "As river development continued in the 20th century, each of these major fishing sites was flooded by a dam, beginning with Cascades Rapids in 1938. The development was accompanied by extensive negotiations between natives and US government agencies. The Confederated Tribes of Warm Springs, a coalition of various tribes, adopted a constitution and incorporated after the 1938 completion of the Bonneville Dam flooded Cascades Rapids; Still, in the 1930s, there were natives who lived along the river and fished year round, moving along with the fish's migration patterns throughout the seasons. The Yakama were slower to do so, organizing a formal government in 1944. In the 21st century, the Yakama, Nez Perce, Umatilla, and Warm Springs tribes all have treaty fishing rights along the Columbia and its tributaries.",
"title": "Indigenous peoples"
},
{
"paragraph_id": 30,
"text": "In 1957 Celilo Falls was submerged by the construction of The Dalles Dam, and the native fishing community was displaced. The affected tribes received a $26.8 million settlement for the loss of Celilo and other fishing sites submerged by The Dalles Dam. The Confederated Tribes of Warm Springs used part of its $4 million settlement to establish the Kah-Nee-Ta resort south of Mount Hood.",
"title": "Indigenous peoples"
},
{
"paragraph_id": 31,
"text": "Some historians believe that Japanese or Chinese vessels blown off course reached the Northwest Coast long before Europeans—possibly as early as 219 BCE. Historian Derek Hayes claims that \"It is a near certainty that Japanese or Chinese people arrived on the northwest coast long before any European.\" It is unknown whether they landed near the Columbia. Evidence exists that Spanish castaways reached the shore in 1679 and traded with the Clatsop; if these were the first Europeans to see the Columbia, they failed to send word home to Spain.",
"title": "New waves of explorers"
},
{
"paragraph_id": 32,
"text": "In the 18th century, there was strong interest in discovering a Northwest Passage that would permit navigation between the Atlantic (or inland North America) and the Pacific Ocean. Many ships in the area, especially those under Spanish and British command, searched the northwest coast for a large river that might connect to Hudson Bay or the Missouri River. The first documented European discovery of the Columbia River was that of Bruno de Heceta, who in 1775 sighted the river's mouth. On the advice of his officers, he did not explore it, as he was short-staffed and the current was strong. He considered it a bay, and called it Ensenada de Asunción (Assumption Cove). Later Spanish maps, based on his sighting, showed a river, labeled Río de San Roque (The Saint Roch River), or an entrance, called Entrada de Hezeta, named for Bruno de Hezeta, who sailed the region. Following Hezeta's reports, British maritime fur trader Captain John Meares searched for the river in 1788 but concluded that it did not exist. He named Cape Disappointment for the non-existent river, not realizing the cape marks the northern edge of the river's mouth.",
"title": "New waves of explorers"
},
{
"paragraph_id": 33,
"text": "What happened next would form the basis for decades of both cooperation and dispute between British and American exploration of, and ownership claim to, the region. Royal Navy commander George Vancouver sailed past the mouth in April 1792 and observed a change in the water's color, but he accepted Meares' report and continued on his journey northward. Later that month, Vancouver encountered the American captain Robert Gray at the Strait of Juan de Fuca. Gray reported that he had seen the entrance to the Columbia and had spent nine days trying but failing to enter.",
"title": "New waves of explorers"
},
{
"paragraph_id": 34,
"text": "On May 12, 1792, Gray returned south and crossed the Columbia Bar, becoming the first known explorer of European descent to enter the river. Gray's fur trading mission had been financed by Boston merchants, who outfitted him with a private vessel named Columbia Rediviva; he named the river after the ship on May 18. Gray spent nine days trading near the mouth of the Columbia, then left without having gone beyond 13 miles (21 km) upstream. The farthest point reached was Grays Bay at the mouth of Grays River. Gray's discovery of the Columbia River was later used by the United States to support its claim to the Oregon Country, which was also claimed by Russia, Great Britain, Spain and other nations.",
"title": "New waves of explorers"
},
{
"paragraph_id": 35,
"text": "In October 1792, Vancouver sent Lieutenant William Robert Broughton, his second-in-command, up the river. Broughton got as far as the Sandy River at the western end of the Columbia River Gorge, about 100 miles (160 km) upstream, sighting and naming Mount Hood. Broughton formally claimed the river, its drainage basin, and the nearby coast for Britain. In contrast, Gray had not made any formal claims on behalf of the United States.",
"title": "New waves of explorers"
},
{
"paragraph_id": 36,
"text": "Because the Columbia was at the same latitude as the headwaters of the Missouri River, there was some speculation that Gray and Vancouver had discovered the long-sought Northwest Passage. A 1798 British map showed a dotted line connecting the Columbia with the Missouri. When the American explorers Meriwether Lewis and William Clark charted the vast, unmapped lands of the American West in their overland expedition (1803–1805), they found no passage between the rivers. After crossing the Rocky Mountains, Lewis and Clark built dugout canoes and paddled down the Snake River, reaching the Columbia near the present-day Tri-Cities, Washington. They explored a few miles upriver, as far as Bateman Island, before heading down the Columbia, concluding their journey at the river's mouth and establishing Fort Clatsop, a short-lived establishment that was occupied for less than three months.",
"title": "New waves of explorers"
},
{
"paragraph_id": 37,
"text": "Canadian explorer David Thompson, of the North West Company, spent the winter of 1807–08 at Kootanae House near the source of the Columbia at present-day Invermere, BC. Over the next few years he explored much of the river and its northern tributaries. In 1811 he traveled down the Columbia to the Pacific Ocean, arriving at the mouth just after John Jacob Astor's Pacific Fur Company had founded Astoria. On his return to the north, Thompson explored the one remaining part of the river he had not yet seen, becoming the first Euro-descended person to travel the entire length of the river.",
"title": "New waves of explorers"
},
{
"paragraph_id": 38,
"text": "In 1825, the Hudson's Bay Company (HBC) established Fort Vancouver on the bank of the Columbia, in what is now Vancouver, Washington, as the headquarters of the company's Columbia District, which encompassed everything west of the Rocky Mountains, north of California, and south of Russian-claimed Alaska. Chief Factor John McLoughlin, a physician who had been in the fur trade since 1804, was appointed superintendent of the Columbia District. The HBC reoriented its Columbia District operations toward the Pacific Ocean via the Columbia, which became the region's main trunk route. In the early 1840s Americans began to colonize the Oregon country in large numbers via the Oregon Trail, despite the HBC's efforts to discourage American settlement in the region. For many the final leg of the journey involved travel down the lower Columbia River to Fort Vancouver. This part of the Oregon Trail, the treacherous stretch from The Dalles to below the Cascades, could not be traversed by horses or wagons (only watercraft, at great risk). This prompted the 1846 construction of the Barlow Road.",
"title": "New waves of explorers"
},
{
"paragraph_id": 39,
"text": "In the Treaty of 1818 the United States and Britain agreed that both nations were to enjoy equal rights in Oregon Country for 10 years. By 1828, when the so-called \"joint occupation\" was renewed indefinitely, it seemed probable that the lower Columbia River would in time become the border between the two nations. For years the Hudson's Bay Company successfully maintained control of the Columbia River and American attempts to gain a foothold were fended off. In the 1830s, American religious missions were established at several locations in the lower Columbia River region. In the 1840s a mass migration of American settlers undermined British control. The Hudson's Bay Company tried to maintain dominance by shifting from the fur trade, which was in decline, to exporting other goods such as salmon and lumber. Colonization schemes were attempted, but failed to match the scale of American settlement. Americans generally settled south of the Columbia, mainly in the Willamette Valley. The Hudson's Bay Company tried to establish settlements north of the river, but nearly all the British colonists moved south to the Willamette Valley. The hope that the British colonists might dilute the American presence in the valley failed in the face of the overwhelming number of American settlers. These developments rekindled the issue of \"joint occupation\" and the boundary dispute. While some British interests, especially the Hudson's Bay Company, fought for a boundary along the Columbia River, the Oregon Treaty of 1846 set the boundary at the 49th parallel. As part of the treaty, the British retained all areas north of the line while the United States acquired the south. The Columbia River became much of the border between the U.S. territories of Oregon and Washington. Oregon became a U.S. state in 1859, while Washington later entered into the Union in 1889.",
"title": "New waves of explorers"
},
{
"paragraph_id": 40,
"text": "By the turn of the 20th century, the difficulty of navigating the Columbia was seen as an impediment to the economic development of the Inland Empire region east of the Cascades. The dredging and dam building that followed would permanently alter the river, disrupting its natural flow but also providing electricity, irrigation, navigability and other benefits to the region.",
"title": "New waves of explorers"
},
{
"paragraph_id": 41,
"text": "American captain Robert Gray and British captain George Vancouver, who explored the river in 1792, proved that it was possible to cross the Columbia Bar. Many of the challenges associated with that feat remain today; even with modern engineering alterations to the mouth of the river, the strong currents and shifting sandbar make it dangerous to pass between the river and the Pacific Ocean.",
"title": "Navigation"
},
{
"paragraph_id": 42,
"text": "The use of steamboats along the river, beginning with the British Beaver in 1836 and followed by American vessels in 1850, contributed to the rapid settlement and economic development of the region. Steamboats operated in several distinct stretches of the river: on its lower reaches, from the Pacific Ocean to Cascades Rapids; from the Cascades to the Dalles-Celilo Falls; from Celilo to Priests Rapids; on the Wenatchee Reach of eastern Washington; on British Columbia's Arrow Lakes; and on tributaries like the Willamette, the Snake and Kootenay Lake. The boats, initially powered by burning wood, carried passengers and freight throughout the region for many years. Early railroads served to connect steamboat lines interrupted by waterfalls on the river's lower reaches. In the 1880s, railroads maintained by companies such as the Oregon Railroad and Navigation Company began to supplement steamboat operations as the major transportation links along the river.",
"title": "Navigation"
},
{
"paragraph_id": 43,
"text": "As early as 1881, industrialists proposed altering the natural channel of the Columbia to improve navigation. Changes to the river over the years have included the construction of jetties at the river's mouth, dredging, and the construction of canals and navigation locks. Today, ocean freighters can travel upriver as far as Portland and Vancouver, and barges can reach as far inland as Lewiston, Idaho.",
"title": "Navigation"
},
{
"paragraph_id": 44,
"text": "The shifting Columbia Bar makes passage between the river and the Pacific Ocean difficult and dangerous, and numerous rapids along the river hinder navigation. Pacific Graveyard, a 1964 book by James A. Gibbs, describes the many shipwrecks near the mouth of the Columbia. Jetties, first constructed in 1886, extend the river's channel into the ocean. Strong currents and the shifting sandbar remain a threat to ships entering the river and necessitate continuous maintenance of the jetties.",
"title": "Navigation"
},
{
"paragraph_id": 45,
"text": "In 1891, the Columbia was dredged to enhance shipping. The channel between the ocean and Portland and Vancouver was deepened from 17 feet (5.2 m) to 25 feet (7.6 m). The Columbian called for the channel to be deepened to 40 feet (12 m) as early as 1905, but that depth was not attained until 1976.",
"title": "Navigation"
},
{
"paragraph_id": 46,
"text": "Cascade Locks and Canal were first constructed in 1896 around the Cascades Rapids, enabling boats to travel safely through the Columbia River Gorge. The Celilo Canal, bypassing Celilo Falls, opened to river traffic in 1915. In the mid-20th century, the construction of dams along the length of the river submerged the rapids beneath a series of reservoirs. An extensive system of locks allowed ships and barges to pass easily between reservoirs. A navigation channel reaching Lewiston, Idaho, along the Columbia and Snake rivers, was completed in 1975. Among the main commodities are wheat and other grains, mainly for export. As of 2016, the Columbia ranked third, behind the Mississippi and Paraná rivers, among the world's largest export corridors for grain.",
"title": "Navigation"
},
{
"paragraph_id": 47,
"text": "The 1980 eruption of Mount St. Helens caused mudslides in the area, which reduced the Columbia's depth by 25 feet (7.6 m) for a 4-mile (6.4 km) stretch, disrupting Portland's economy.",
"title": "Navigation"
},
{
"paragraph_id": 48,
"text": "Efforts to maintain and improve the navigation channel have continued to the present day. In 1990 a new round of studies examined the possibility of further dredging on the lower Columbia. The plans were controversial from the start because of economic and environmental concerns.",
"title": "Navigation"
},
{
"paragraph_id": 49,
"text": "In 1999, Congress authorized deepening the channel between Portland and Astoria from 40 to 43 feet (12–13 m), which will make it possible for large container and grain ships to reach Portland and Vancouver. The project has met opposition because of concerns about stirring up toxic sediment on the riverbed. Portland-based Northwest Environmental Advocates brought a lawsuit against the Army Corps of Engineers, but it was rejected by the Ninth U.S. Circuit Court of Appeals in August 2006. The project includes measures to mitigate environmental damage; for instance, the US Army Corps of Engineers must restore 12 times the area of wetland damaged by the project. In early 2006, the Corps spilled 50 US gallons (190 L) of hydraulic oil into the Columbia, drawing further criticism from environmental organizations.",
"title": "Navigation"
},
{
"paragraph_id": 50,
"text": "Work on the project began in 2005 and concluded in 2010. The project's cost is estimated at $150 million. The federal government is paying 65 percent, Oregon and Washington are paying $27 million each, and six local ports are also contributing to the cost.",
"title": "Navigation"
},
{
"paragraph_id": 51,
"text": "In 1902, the United States Bureau of Reclamation was established to aid in the economic development of arid western states. One of its major undertakings was building Grand Coulee Dam to provide irrigation for the 600 thousand acres (2,400 km) of the Columbia Basin Project in central Washington. With the onset of World War II, the focus of dam construction shifted to production of hydroelectricity. Irrigation efforts resumed after the war.",
"title": "Dams"
},
{
"paragraph_id": 52,
"text": "River development occurred within the structure of the 1909 International Boundary Waters Treaty between the United States and Canada. The United States Congress passed the Rivers and Harbors Act of 1925, which directed the U.S. Army Corps of Engineers and the Federal Power Commission to explore the development of the nation's rivers. This prompted agencies to conduct the first formal financial analysis of hydroelectric development; the reports produced by various agencies were presented in House Document 308. Those reports, and subsequent related reports, are referred to as 308 Reports.",
"title": "Dams"
},
{
"paragraph_id": 53,
"text": "In the late 1920s, political forces in the Northwestern United States generally favored the private development of hydroelectric dams along the Columbia. But the overwhelming victories of gubernatorial candidate George W. Joseph in the 1930 Republican primary, and later his law partner Julius Meier, were understood to demonstrate strong public support for public ownership of dams. In 1933, President Franklin D. Roosevelt signed a bill that enabled the construction of the Bonneville and Grand Coulee dams as public works projects. The legislation was attributed to the efforts of Oregon Senator Charles McNary, Washington Senator Clarence Dill, and Oregon Congressman Charles Martin, among others.",
"title": "Dams"
},
{
"paragraph_id": 54,
"text": "In 1948, floods swept through the Columbia watershed, destroying Vanport, then the second largest city in Oregon, and impacting cities as far north as Trail, BC. The flooding prompted the U.S. Congress to pass the Flood Control Act of 1950, authorizing the federal development of additional dams and other flood control mechanisms. By that time local communities had become wary of federal hydroelectric projects, and sought local control of new developments; a public utility district in Grant County, Washington, ultimately began construction of the dam at Priest Rapids.",
"title": "Dams"
},
{
"paragraph_id": 55,
"text": "In the 1960s, the United States and Canada signed the Columbia River Treaty, which focused on flood control and the maximization of downstream power generation. Canada agreed to build dams and provide reservoir storage, and the United States agreed to deliver to Canada one-half of the increase in United States downstream power benefits as estimated five years in advance. Canada's obligation was met by building three dams (two on the Columbia, and one on the Duncan River), the last of which was completed in 1973.",
"title": "Dams"
},
{
"paragraph_id": 56,
"text": "Today the main stem of the Columbia River has fourteen dams, of which three are in Canada and eleven in the United States. Four mainstem dams and four lower Snake River dams contain navigation locks to allow ship and barge passage from the ocean as far as Lewiston, Idaho. The river system as a whole has more than 400 dams for hydroelectricity and irrigation. The dams address a variety of demands, including flood control, navigation, stream flow regulation, storage, and delivery of stored waters, reclamation of public lands and Indian reservations, and the generation of hydroelectric power.",
"title": "Dams"
},
{
"paragraph_id": 57,
"text": "This river may have been shaped by God, or glaciers, or the remnants of the inland sea, or gravity, or a combination of all, but the Army Corps of Engineers controls it now. The Columbia rises and falls, not by the dictates of tide or rainfall, but by a computer-activated, legally arbitrated, federally allocated schedule that changes only when significant litigation is concluded, or a United States Senator nears election time. In that sense, it is reliable.",
"title": "Dams"
},
{
"paragraph_id": 58,
"text": "Timothy Egan, in The Good Rain",
"title": "Dams"
},
{
"paragraph_id": 59,
"text": "The larger U.S. dams are owned and operated by the federal government (some by the Army Corps of Engineers and some by the Bureau of Reclamation), while the smaller dams are operated by public utility districts and private power companies. The federally operated system is known as the Federal Columbia River Power System, which includes 31 dams on the Columbia and its tributaries. The system has altered the seasonal flow of the river to meet higher electricity demands during the winter. At the beginning of the 20th century, roughly 75 percent of the Columbia's flow occurred in the summer, between April and September. By 1980, the summer proportion had been lowered to about 50 percent, essentially eliminating the seasonal pattern.",
"title": "Dams"
},
{
"paragraph_id": 60,
"text": "The installation of dams dramatically altered the landscape and ecosystem of the river. At one time, the Columbia was one of the top salmon-producing river systems in the world. Previously active fishing sites, such as Celilo Falls in the eastern Columbia River Gorge, have exhibited a sharp decline in fishing along the Columbia in the last century, and salmon populations have been dramatically reduced. Fish ladders have been installed at some dam sites to help the fish journey to spawning waters. Chief Joseph Dam has no fish ladders and completely blocks fish migration to the upper half of the Columbia River system.",
"title": "Dams"
},
{
"paragraph_id": 61,
"text": "The Bureau of Reclamation's Columbia Basin Project focused on the generally dry region of central Washington known as the Columbia Basin, which features rich loess soil. Several groups developed competing proposals, and in 1933, President Franklin D. Roosevelt authorized the Columbia Basin Project. The Grand Coulee Dam was the project's central component; upon completion, it pumped water up from the Columbia to fill the formerly dry Grand Coulee, forming Banks Lake. By 1935, the intended height of the dam was increased from a range between 200 and 300 feet (61 and 91 m) to 500 feet (150 m), a height that would extend the lake impounded by the dam to the Canada–United States border; the project had grown from a local New Deal relief measure to a major national project.",
"title": "Dams"
},
{
"paragraph_id": 62,
"text": "The project's initial purpose was irrigation, but the onset of World War II created a high electricity demand, mainly for aluminum production and for the development of nuclear weapons at the Hanford Site. Irrigation began in 1951. The project provides water to more than 670 thousand acres (2,700 square kilometers) of fertile but arid land in central Washington, transforming the region into a major agricultural center. Important crops include orchard fruit, potatoes, alfalfa, mint, beans, beets, and wine grapes.",
"title": "Dams"
},
{
"paragraph_id": 63,
"text": "Since 1750, the Columbia has experienced six multi-year droughts. The longest, lasting 12 years in the mid‑19th century, reduced the river's flow to 20 percent below average. Scientists have expressed concern that a similar drought would have grave consequences in a region so dependent on the Columbia. In 1992–1993, a lesser drought affected farmers, hydroelectric power producers, shippers, and wildlife managers.",
"title": "Dams"
},
{
"paragraph_id": 64,
"text": "Many farmers in central Washington build dams on their property for irrigation and to control frost on their crops. The Washington Department of Ecology, using new techniques involving aerial photographs, estimated there may be as many as a hundred such dams in the area, most of which are illegal. Six such dams have failed in recent years, causing hundreds of thousands of dollars of damage to crops and public roads. Fourteen farms in the area have gone through the permitting process to build such dams legally.",
"title": "Dams"
},
{
"paragraph_id": 65,
"text": "The Columbia's heavy flow and large elevation drop over a short distance, 2.16 feet per mile (40.9 centimeters per kilometer), give it tremendous capacity for hydroelectricity generation. In comparison, the Mississippi drops less than 0.65 feet per mile (12.3 cm/km). The Columbia alone possesses one-third of the United States's hydroelectric potential. In 2012, the river and its tributaries accounted for 29 GW of hydroelectric generating capacity, contributing 44 percent of the total hydroelectric generation in the nation.",
"title": "Dams"
},
{
"paragraph_id": 66,
"text": "The largest of the 150 hydroelectric projects, the Grand Coulee Dam and Chief Joseph Dam are also the largest in the United States. As of 2017, Grand Coulee is the fifth largest hydroelectric plant in the world.",
"title": "Dams"
},
{
"paragraph_id": 67,
"text": "Inexpensive hydropower supported the location of a large aluminum industry in the region because its reduction from bauxite requires large amounts of electricity. Until 2000, the Northwestern United States produced up to 17 percent of the world's aluminum and 40 percent of the aluminum produced in the United States. The commoditization of power in the early 21st century, coupled with a drought that reduced the generation capacity of the river, damaged the industry and by 2001, Columbia River aluminum producers had idled 80 percent of its production capacity. By 2003, the entire United States produced only 15 percent of the world's aluminum and many smelters along the Columbia had gone dormant or out of business.",
"title": "Dams"
},
{
"paragraph_id": 68,
"text": "Power remains relatively inexpensive along the Columbia, and since the mid-2000 several global enterprises have moved server farm operations into the area to avail themselves of cheap power. Downriver of Grand Coulee, each dam's reservoir is closely regulated by the Bonneville Power Administration (BPA), the U.S. Army Corps of Engineers, and various Washington public utility districts to ensure flow, flood control, and power generation objectives are met. Increasingly, hydro-power operations are required to meet standards under the U.S. Endangered Species Act and other agreements to manage operations to minimize impacts on salmon and other fish, and some conservation and fishing groups support removing four dams on the lower Snake River, the largest tributary of the Columbia.",
"title": "Dams"
},
{
"paragraph_id": 69,
"text": "In 1941, the BPA hired Oklahoma folksinger Woody Guthrie to write songs for a documentary film promoting the benefits of hydropower. In the month he spent traveling the region Guthrie wrote 26 songs, which have become an important part of the cultural history of the region.",
"title": "Dams"
},
{
"paragraph_id": 70,
"text": "The Columbia supports several species of anadromous fish that migrate between the Pacific Ocean and freshwater tributaries of the river. Sockeye salmon, Coho and Chinook (\"king\") salmon, and steelhead, all of the genus Oncorhynchus, are ocean fish that migrate up the rivers at the end of their life cycles to spawn. White sturgeon, which take 15 to 25 years to mature, typically migrate between the ocean and the upstream habitat several times during their lives.",
"title": "Ecology and environment"
},
{
"paragraph_id": 71,
"text": "Salmon populations declined dramatically after the establishment of canneries in 1867. In 1879 it was reported that 545,450 salmon, with an average weight of 22 pounds (10.0 kg) were caught (in a recent season) and mainly canned for export to England. A can weighing 1 pound (0.45 kg) could be sold for 8d or 9d. By 1908, there was widespread concern about the decline of salmon and sturgeon. In that year, the people of Oregon passed two laws under their newly instituted program of citizens' initiatives limiting fishing on the Columbia and other rivers. Then in 1948, another initiative banned the use of seine nets (devices already used by Native Americans, and refined by later settlers) altogether.",
"title": "Ecology and environment"
},
{
"paragraph_id": 72,
"text": "Dams interrupt the migration of anadromous fish. Salmon and steelhead return to the streams in which they were born to spawn; where dams prevent their return, entire populations of salmon die. Some of the Columbia and Snake River dams employ fish ladders, which are effective to varying degrees at allowing these fish to travel upstream. Another problem exists for the juvenile salmon headed downstream to the ocean. Previously, this journey would have taken two to three weeks. With river currents slowed by the dams, and the Columbia converted from a wild river to a series of slackwater pools, the journey can take several months, which increases the mortality rate. In some cases, the Army Corps of Engineers transports juvenile fish downstream by truck or river barge. The Chief Joseph Dam and several dams on the Columbia's tributaries entirely block migration, and there are no migrating fish on the river above these dams. Sturgeons have different migration habits and can survive without ever visiting the ocean. In many upstream areas cut off from the ocean by dams, sturgeon simply live upstream of the dam.",
"title": "Ecology and environment"
},
{
"paragraph_id": 73,
"text": "Not all fish have suffered from the modifications to the river; the northern pikeminnow (formerly known as the squawfish) thrives in the warmer, slower water created by the dams. Research in the mid-1980s found that juvenile salmon were suffering substantially from the predatory pikeminnow, and in 1990, in the interest of protecting salmon, a \"bounty\" program was established to reward anglers for catching pikeminnow.",
"title": "Ecology and environment"
},
{
"paragraph_id": 74,
"text": "In 1994, the salmon catch was smaller than usual in the rivers of Oregon, Washington, and British Columbia, causing concern among commercial fishermen, government agencies, and tribal leaders. US government intervention, to which the states of Alaska, Idaho, and Oregon objected, included an 11-day closure of an Alaska fishery. In April 1994 the Pacific Fisheries Management Council unanimously approved the strictest regulations in 18 years, banning all commercial salmon fishing for that year from Cape Falcon north to the Canada–US border. In the winter of 1994, the return of coho salmon far exceeded expectations, which was attributed in part to the fishing ban.",
"title": "Ecology and environment"
},
{
"paragraph_id": 75,
"text": "Also in 1994, United States Secretary of the Interior Bruce Babbitt proposed the removal of several Pacific Northwest dams because of their impact on salmon spawning. The Northwest Power Planning Council approved a plan that provided more water for fish and less for electricity, irrigation, and transportation. Environmental advocates have called for the removal of certain dams in the Columbia system in the years since. Of the 227 major dams in the Columbia River drainage basin, the four Washington dams on the lower Snake River are often identified for removal, for example in an ongoing lawsuit concerning a Bush administration plan for salmon recovery. These dams and reservoirs limit the recovery of upriver salmon runs to Idaho's Salmon and Clearwater rivers. Historically, the Snake produced over 1.5 million spring and summer Chinook salmon, a number that has dwindled to several thousand in recent years. Idaho Power Company's Hells Canyon dams have no fish ladders (and do not pass juvenile salmon downstream), and thus allow no steelhead or salmon to migrate above Hells Canyon. In 2007, the destruction of the Marmot Dam on the Sandy River was the first dam removal in the system. Other Columbia Basin dams that have been removed include Condit Dam on Washington's White Salmon River, and the Milltown Dam on the Clark Fork in Montana.",
"title": "Ecology and environment"
},
{
"paragraph_id": 76,
"text": "In southeastern Washington, a 50-mile (80 km) stretch of the river passes through the Hanford Site, established in 1943 as part of the Manhattan Project. The site served as a plutonium production complex, with nine nuclear reactors and related facilities along the banks of the river. From 1944 to 1971, pump systems drew cooling water from the river and, after treating this water for use by the reactors, returned it to the river. Before being released back into the river, the used water was held in large tanks known as retention basins for up to six hours. Longer-lived isotopes were not affected by this retention, and several terabecquerels entered the river every day. By 1957, the eight plutonium production reactors at Hanford dumped a daily average of 50,000 curies of radioactive material into the Columbia. These releases were kept secret by the federal government until the release of declassified documents in the late 1980s. Radiation was measured downstream as far west as the Washington and Oregon coasts.",
"title": "Ecology and environment"
},
{
"paragraph_id": 77,
"text": "The nuclear reactors were decommissioned at the end of the Cold War, and the Hanford site is the focus of one of the world's largest environmental cleanup, managed by the Department of Energy under the oversight of the Washington Department of Ecology and the Environmental Protection Agency. Nearby aquifers contain an estimated 270 billion US gallons (1 billion m) of groundwater contaminated by high-level nuclear waste that has leaked out of Hanford's underground storage tanks. As of 2008, 1 million US gallons (3,785 m) of highly radioactive waste is traveling through groundwater toward the Columbia River. This waste is expected to reach the river in 12 to 50 years if cleanup does not proceed on schedule.",
"title": "Ecology and environment"
},
{
"paragraph_id": 78,
"text": "In addition to concerns about nuclear waste, numerous other pollutants are found in the river. These include chemical pesticides, bacteria, arsenic, dioxins, and polychlorinated biphenyls (PCB).",
"title": "Ecology and environment"
},
{
"paragraph_id": 79,
"text": "Studies have also found significant levels of toxins in fish and the waters they inhabit within the basin. Accumulation of toxins in fish threatens the survival of fish species, and human consumption of these fish can lead to health problems. Water quality is also an important factor in the survival of other wildlife and plants that grow in the Columbia River drainage basin. The states, Indian tribes, and federal government are all engaged in efforts to restore and improve the water, land, and air quality of the Columbia River drainage basin and have committed to work together to accomplish critical ecosystem restoration efforts. Several cleanup efforts are underway, including Superfund projects at Portland Harbor, Hanford, and Lake Roosevelt.",
"title": "Ecology and environment"
},
{
"paragraph_id": 80,
"text": "Timber industry activity further contaminates river water, for example in the increased sediment runoff that results from clearcuts. The Northwest Forest Plan, a piece of federal legislation from 1994, mandated that timber companies consider the environmental impacts of their practices on rivers like the Columbia.",
"title": "Ecology and environment"
},
{
"paragraph_id": 81,
"text": "On July 1, 2003, Christopher Swain became the first person to swim the Columbia River's entire length, to raise public awareness about the river's environmental health.",
"title": "Ecology and environment"
},
{
"paragraph_id": 82,
"text": "Both natural and anthropogenic processes are involved in the cycling of nutrients in the Columbia River basin. Natural processes in the system include estuarine mixing of fresh and ocean waters, and climate variability patterns such as the Pacific Decadal Oscillation and the El Nino Southern Oscillation (both climatic cycles that affect the amount of regional snowpack and river discharge). Natural sources of nutrients in the Columbia River include weathering, leaf litter, salmon carcasses, runoff from its tributaries, and ocean estuary exchange. Major anthropogenic impacts on nutrients in the basin are due to fertilizers from agriculture, sewage systems, logging, and the construction of dams.",
"title": "Ecology and environment"
},
{
"paragraph_id": 83,
"text": "Nutrient dynamics vary in the river basin from the headwaters to the main river and dams, to finally reaching the Columbia River estuary and ocean. Upstream in the headwaters, salmon runs are the main source of nutrients. Dams along the river impact nutrient cycling by increasing residence time of nutrients, and reducing the transport of silicate to the estuary, which directly impacts diatoms, a type of phytoplankton. The dams are also a barrier to salmon migration and can increase the amount of methane locally produced. The Columbia River estuary exports high rates of nutrients into the Pacific, except for nitrogen, which is delivered into the estuary by ocean upwelling sources.",
"title": "Ecology and environment"
},
{
"paragraph_id": 84,
"text": "Most of the Columbia's drainage basin (which, at 258,000 square miles or 670,000 square kilometres, is about the size of France) lies roughly between the Rocky Mountains on the east and the Cascade Mountains on the west. In the United States and Canada the term watershed is often used to mean drainage basin. The term Columbia Basin is used to refer not only to the entire drainage basin but also to subsets of the river's watershed, such as the relatively flat and unforested area in eastern Washington bounded by the Cascades, the Rocky Mountains, and the Blue Mountains. Within the watershed are diverse landforms including mountains, arid plateaus, river valleys, rolling uplands, and deep gorges. Grand Teton National Park lies in the watershed, as well as parts of Yellowstone National Park, Glacier National Park, Mount Rainier National Park, and North Cascades National Park. Canadian National Parks in the watershed include Kootenay National Park, Yoho National Park, Glacier National Park, and Mount Revelstoke National Park. Hells Canyon, the deepest gorge in North America, and the Columbia Gorge are in the watershed. Vegetation varies widely, ranging from western hemlock and western redcedar in the moist regions to sagebrush in the arid regions. The watershed provides habitat for 609 known fish and wildlife species, including the bull trout, bald eagle, gray wolf, grizzly bear, and Canada lynx.",
"title": "Watershed"
},
{
"paragraph_id": 85,
"text": "The World Wide Fund for Nature (WWF) divides the waters of the Columbia and its tributaries into three freshwater ecoregions: Columbia Glaciated, Columbia Unglaciated, and Upper Snake. The Columbia Glaciated ecoregion, about a third of the total watershed, lies in the north and was covered with ice sheets during the Pleistocene. The ecoregion includes the mainstem Columbia north of the Snake River and tributaries such as the Yakima, Okanagan, Pend Oreille, Clark Fork, and Kootenay rivers. The effects of glaciation include a number of large lakes and a relatively low diversity of freshwater fish. The Upper Snake ecoregion is defined as the Snake River watershed above Shoshone Falls, which totally blocks fish migration. This region has 14 species of fish, many of which are endemic. The Columbia Unglaciated ecoregion makes up the rest of the watershed. It includes the mainstem Columbia below the Snake River and tributaries such as the Salmon, John Day, Deschutes, and lower Snake Rivers. Of the three ecoregions it is the richest in terms of freshwater species diversity. There are 35 species of fish, of which four are endemic. There are also high levels of mollusk endemism.",
"title": "Watershed"
},
{
"paragraph_id": 86,
"text": "In 2016, over eight million people lived within the Columbia's drainage basin. Of this total about 3.5 million people lived in Oregon, 2.1 million in Washington, 1.7 million in Idaho, half a million in British Columbia, and 0.4 million in Montana. Population in the watershed has been rising for many decades and is projected to rise to about 10 million by 2030. The highest population densities are found west of the Cascade Mountains along the I-5 corridor, especially in the Portland-Vancouver urban area. High densities are also found around Spokane, Washington, and Boise, Idaho. Although much of the watershed is rural and sparsely populated, areas with recreational and scenic values are growing rapidly. The central Oregon county of Deschutes is the fastest-growing in the state. Populations have also been growing just east of the Cascades in central Washington around the city of Yakima and the Tri-Cities area. Projections for the coming decades assume growth throughout the watershed. The Canadian part of the Okanagan subbasin is also growing rapidly.",
"title": "Watershed"
},
{
"paragraph_id": 87,
"text": "Climate varies greatly within the watershed. Elevation ranges from sea level at the river mouth to more than 14,000 feet (4,300 m) in the mountains, and temperatures vary with elevation. The highest peak is Mount Rainier, at 14,411 feet (4,392 m). High elevations have cold winters and short cool summers; interior regions are subject to great temperature variability and severe droughts. Over some of the watershed, especially west of the Cascade Mountains, precipitation maximums occur in winter, when Pacific storms come ashore. Atmospheric conditions block the flow of moisture in summer, which is generally dry except for occasional thunderstorms in the interior. In some of the eastern parts of the watershed, especially shrub-steppe regions with Continental climate patterns, precipitation maximums occur in early summer. Annual precipitation varies from more than 100 inches (250 cm) a year in the Cascades to less than 8 inches (20 cm) in the interior. Much of the watershed gets less than 12 inches (30 cm) a year.",
"title": "Watershed"
},
{
"paragraph_id": 88,
"text": "Several major North American drainage basins and many minor ones border the Columbia River's drainage basin. To the east, in northern Wyoming and Montana, the Continental Divide separates the Columbia watershed from the Mississippi-Missouri watershed, which empties into the Gulf of Mexico. To the northeast, mostly along the southern border between British Columbia and Alberta, the Continental Divide separates the Columbia watershed from the Nelson-Lake Winnipeg-Saskatchewan watershed, which empties into Hudson Bay. The Mississippi and Nelson watersheds are separated by the Laurentian Divide, which meets the Continental Divide at Triple Divide Peak near the headwaters of the Columbia's Flathead River tributary. This point marks the meeting of three of North America's main drainage patterns, to the Pacific Ocean, to Hudson Bay, and to the Atlantic Ocean via the Gulf of Mexico.",
"title": "Watershed"
},
{
"paragraph_id": 89,
"text": "Further north along the Continental Divide, a short portion of the combined Continental and Laurentian divides separate the Columbia watershed from the MacKenzie-Slave-Athabasca watershed, which empties into the Arctic Ocean. The Nelson and Mackenzie watersheds are separated by a divide between streams flowing to the Arctic Ocean and those of the Hudson Bay watershed. This divide meets the Continental Divide at Snow Dome (also known as Dome), near the northernmost bend of the Columbia River.",
"title": "Watershed"
},
{
"paragraph_id": 90,
"text": "To the southeast, in western Wyoming, another divide separates the Columbia watershed from the Colorado–Green watershed, which empties into the Gulf of California. The Columbia, Colorado, and Mississippi watersheds meet at Three Waters Mountain in the Wind River Range of Wyoming. To the south, in Oregon, Nevada, Utah, Idaho, and Wyoming, the Columbia watershed is divided from the Great Basin, whose several watersheds are endorheic, not emptying into any ocean but rather drying up or sinking into sumps. Great Basin watersheds that share a border with the Columbia watershed include Harney Basin, Humboldt River, and Great Salt Lake. The associated triple divide points are Commissary Ridge North, Wyoming, and Sproats Meadow Northwest, Oregon. To the north, mostly in British Columbia, the Columbia watershed borders the Fraser River watershed. To the west and southwest the Columbia watershed borders a number of smaller watersheds that drain to the Pacific Ocean, such as the Klamath River in Oregon and California and the Puget Sound Basin in Washington.",
"title": "Watershed"
},
{
"paragraph_id": 91,
"text": "The Columbia receives more than 60 significant tributaries. The four largest that empty directly into the Columbia (measured either by discharge or by size of watershed) are the Snake River (mostly in Idaho), the Willamette River (in northwest Oregon), the Kootenay River (mostly in British Columbia), and the Pend Oreille River (mostly in northern Washington and Idaho, also known as the lower part of the Clark Fork). Each of these four averages more than 20,000 cubic feet per second (570 m/s) and drains an area of more than 20,000 square miles (52,000 km).",
"title": "Watershed"
},
{
"paragraph_id": 92,
"text": "The Snake is by far the largest tributary. Its watershed of 108,000 square miles (280,000 km) is larger than the state of Idaho. Its discharge is roughly a third of the Columbia's at the rivers' confluence but compared to the Columbia upstream of the confluence the Snake is longer (113%) and has a larger drainage basin (104%).",
"title": "Watershed"
},
{
"paragraph_id": 93,
"text": "The Pend Oreille River system (including its main tributaries, the Clark Fork and Flathead rivers) is also similar in size to the Columbia at their confluence. Compared to the Columbia River above the two rivers' confluence, the Pend Oreille-Clark-Flathead is nearly as long (about 86%), its basin about three-fourths as large (76%), and its discharge over a third (37%).",
"title": "Watershed"
},
{
"paragraph_id": 94,
"text": "",
"title": "External links"
}
] | The Columbia River is the largest river in the Pacific Northwest region of North America. The river forms in the Rocky Mountains of British Columbia, Canada. It flows northwest and then south into the U.S. state of Washington, then turns west to form most of the border between Washington and the state of Oregon before emptying into the Pacific Ocean. The river is 1,243 miles long, and its largest tributary is the Snake River. Its drainage basin is roughly the size of France and extends into seven states of the United States and one Canadian province. The fourth-largest river in the United States by volume, the Columbia has the greatest flow of any North American river entering the Pacific. The Columbia has the 36th greatest discharge of any river in the world. The Columbia and its tributaries have been central to the region's culture and economy for thousands of years. They have been used for transportation since ancient times, linking the region's many cultural groups. The river system hosts many species of anadromous fish, which migrate between freshwater habitats and the saline waters of the Pacific Ocean. These fish—especially the salmon species—provided the core subsistence for native peoples. The first documented European discovery of the Columbia River occurred when Bruno de Heceta sighted the river's mouth in 1775. On May 11, 1792, a private American ship, Columbia Rediviva, under Captain Robert Gray from Boston became the first non-indigenous vessel to enter the river. Later in 1792, William Robert Broughton of the British Royal Navy commanding HMS Chatham as part of the Vancouver Expedition, navigated past the Oregon Coast Range and 100 miles upriver to what is now Vancouver, Washington. In the following decades, fur-trading companies used the Columbia as a key transportation route. Overland explorers entered the Willamette Valley through the scenic, but treacherous Columbia River Gorge, and pioneers began to settle the valley in increasing numbers. Steamships along the river linked communities and facilitated trade; the arrival of railroads in the late 19th century, many running along the river, supplemented these links. Since the late 19th century, public and private sectors have extensively developed the river. To aid ship and barge navigation, locks have been built along the lower Columbia and its tributaries, and dredging has opened, maintained, and enlarged shipping channels. Since the early 20th century, dams have been built across the river for power generation, navigation, irrigation, and flood control. The 14 hydroelectric dams on the Columbia's main stem and many more on its tributaries produce more than 44 percent of total U.S. hydroelectric generation. Production of nuclear power has taken place at two sites along the river. Plutonium for nuclear weapons was produced for decades at the Hanford Site, which is now the most contaminated nuclear site in the United States. These developments have greatly altered river environments in the watershed, mainly through industrial pollution and barriers to fish migration. | 2001-04-11T23:43:12Z | 2023-11-21T21:49:56Z | [
"Template:Citation needed",
"Template:Legend",
"Template:Cite report",
"Template:Cite video",
"Template:Short description",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite book",
"Template:British Columbia hydrography",
"Template:Ndash",
"Template:Sfn",
"Template:Nowrap",
"Template:Div col",
"Template:Cite EB1911",
"Template:Commons category",
"Template:Featured article",
"Template:Lang",
"Template:Quote box",
"Template:As of",
"Template:Main",
"Template:Cite magazine",
"Template:Cite encyclopedia",
"Template:For",
"Template:Use American English",
"Template:Snds",
"Template:Clear",
"Template:Portal",
"Template:Use mdy dates",
"Template:Convert",
"Template:Cite web",
"Template:Cite news",
"Template:Cite peakbagger",
"Template:Rivers and streams of Portland, Oregon",
"Template:Authority control",
"Template:Infobox river",
"Template:See also",
"Template:Webarchive",
"Template:Internet Archive short film",
"Template:Columbia River",
"Template:Washington",
"Template:Refn",
"Template:Who",
"Template:Cn",
"Template:Div col end",
"Template:Cite NIE"
] | https://en.wikipedia.org/wiki/Columbia_River |
5,409 | Commelinales | Commelinales is an order of flowering plants. It comprises five families: Commelinaceae, Haemodoraceae, Hanguanaceae, Philydraceae, and Pontederiaceae. All the families combined contain over 885 species in about 70 genera; the majority of species are in the Commelinaceae. Plants in the order share a number of synapomorphies that tie them together, such as a lack of mycorrhizal associations and tapetal raphides. Estimates differ as to when the Commelinales evolved, but most suggest an origin and diversification sometime during the mid- to late Cretaceous. Depending on the methods used, studies suggest a range of origin between 123 and 73 million years, with diversification occurring within the group 110 to 66 million years ago. The order's closest relatives are in the Zingiberales, which includes ginger, bananas, cardamom, and others.
According to the most recent classification scheme, the APG IV of 2016, the order includes five families:
This is unchanged from the APG III of 2009 and the APG II of 2003, but different from the older APG system of 1998, which did not include Hanguanaceae.
The older Cronquist system of 1981, which was based purely on morphological data, placed the order in subclass Commelinidae of class Liliopsida and included the families Commelinaceae, Mayacaceae, Rapateaceae and Xyridaceae. These families are now known to be only distantly related. In the classification system of Dahlgren the Commelinales were one of four orders in the superorder Commeliniflorae (also called Commelinanae), and contained five families, of which only Commelinaceae has been retained by the Angiosperm Phylogeny Group. | [
{
"paragraph_id": 0,
"text": "Commelinales is an order of flowering plants. It comprises five families: Commelinaceae, Haemodoraceae, Hanguanaceae, Philydraceae, and Pontederiaceae. All the families combined contain over 885 species in about 70 genera; the majority of species are in the Commelinaceae. Plants in the order share a number of synapomorphies that tie them together, such as a lack of mycorrhizal associations and tapetal raphides. Estimates differ as to when the Commelinales evolved, but most suggest an origin and diversification sometime during the mid- to late Cretaceous. Depending on the methods used, studies suggest a range of origin between 123 and 73 million years, with diversification occurring within the group 110 to 66 million years ago. The order's closest relatives are in the Zingiberales, which includes ginger, bananas, cardamom, and others.",
"title": ""
},
{
"paragraph_id": 1,
"text": "According to the most recent classification scheme, the APG IV of 2016, the order includes five families:",
"title": "Taxonomy"
},
{
"paragraph_id": 2,
"text": "This is unchanged from the APG III of 2009 and the APG II of 2003, but different from the older APG system of 1998, which did not include Hanguanaceae.",
"title": "Taxonomy"
},
{
"paragraph_id": 3,
"text": "The older Cronquist system of 1981, which was based purely on morphological data, placed the order in subclass Commelinidae of class Liliopsida and included the families Commelinaceae, Mayacaceae, Rapateaceae and Xyridaceae. These families are now known to be only distantly related. In the classification system of Dahlgren the Commelinales were one of four orders in the superorder Commeliniflorae (also called Commelinanae), and contained five families, of which only Commelinaceae has been retained by the Angiosperm Phylogeny Group.",
"title": "Taxonomy"
}
] | Commelinales is an order of flowering plants. It comprises five families: Commelinaceae, Haemodoraceae, Hanguanaceae, Philydraceae, and Pontederiaceae. All the families combined contain over 885 species in about 70 genera; the majority of species are in the Commelinaceae. Plants in the order share a number of synapomorphies that tie them together, such as a lack of mycorrhizal associations and tapetal raphides. Estimates differ as to when the Commelinales evolved, but most suggest an origin and diversification sometime during the mid- to late Cretaceous. Depending on the methods used, studies suggest a range of origin between 123 and 73 million years, with diversification occurring within the group 110 to 66 million years ago. The order's closest relatives are in the Zingiberales, which includes ginger, bananas, cardamom, and others. | 2002-02-25T15:43:11Z | 2023-10-26T17:26:49Z | [
"Template:Wikispecies-inline",
"Template:Commons category-inline",
"Template:Monocotyledons",
"Template:Taxonbar",
"Template:Short description",
"Template:Expand Spanish",
"Template:Automatic taxobox",
"Template:Reflist"
] | https://en.wikipedia.org/wiki/Commelinales |
5,411 | Cucurbitales | The Cucurbitales are an order of flowering plants, included in the rosid group of dicotyledons. This order mostly belongs to tropical areas, with limited presence in subtropical and temperate regions. The order includes shrubs and trees, together with many herbs and climbers. One major characteristic of the Cucurbitales is the presence of unisexual flowers, mostly pentacyclic, with thick pointed petals (whenever present). The pollination is usually performed by insects, but wind pollination is also present (in Coriariaceae and Datiscaceae).
The order consists of roughly 2600 species in eight families. The largest families are Begoniaceae (begonia family) with around 1500 species and Cucurbitaceae (gourd family) with around 900 species. These two families include the only economically important plants. Specifically, the Cucurbitaceae (gourd family) include some food species, such as squash, pumpkin (both from Cucurbita), watermelon (Citrullus vulgaris), and cucumber and melons (Cucumis). The Begoniaceae are known for their horticultural species, of which there are over 130 with many more varieties.
The Cucurbitales are an order of plants with a cosmopolitan distribution, particularly diverse in the tropics. Most are herbs, climber herbs, woody lianas or shrubs but some genera include canopy-forming evergreen lauroid trees. Members of the Cucurbitales form an important component of low to montane tropical forest with greater representation in terms of the number of species. Although not known with certainty the total number of species in the order, conservative estimates indicate about 2600 species worldwide, distributed in 109 genera. Compared to other flowering plant orders, the taxonomy is poorly understood due to their great diversity, difficulty in identification, and limited study.
The order Cucurbitales in the eurosid I clade comprises almost 2600 species in 109 or 110 genera in eight families, tropical and temperate, of very different sizes, morphology, and ecology. It is a case of divergent evolution. In contrast, there is convergent evolution with other groups not related due to ecological or physical drivers toward a similar solution, including analogous structures. Some species are trees that have similar foliage to the true laurels due to convergent evolution.
The patterns of speciation in the Cucurbitales are diversified in a high number of species. They have a pantropical distribution with centers of diversity in Africa, South America, and Southeast Asia. They most likely originated in West Gondwana 67–107 million years ago, so the oldest split could relate to the break-up of Gondwana in the middle Eocene to late Oligocene, 45–24 million years ago. The group reached their current distribution by multiple intercontinental dispersal events. One factor was product of aridification, other groups responded to favorable climatic periods and expanded across the available habitat, occurring as opportunistic species across wide distribution; other groups diverged over long periods within isolated areas.
The Cucurbitales comprise the families: Apodanthaceae, Anisophylleaceae, Begoniaceae, Coriariaceae, Corynocarpaceae, Cucurbitaceae, Tetramelaceae, and Datiscaceae. Some of the synapomorphies of the order are: leaves in spiral with secondary veins palmated, calyx or perianth valvate, and the elevated stomatal calyx/perianth bearing separate styles. The two whorls are similar in texture.
Tetrameles nudiflora is a tree of immense proportions of height and width; Tetramelaceae, Anisophylleaceae, and Corynocarpaceae are tall canopy trees in temperate and tropical forests. The genus Dendrosicyos, with the only species being the cucumber tree, is adapted to the arid semidesert island of Socotra. Deciduous perennial Cucurbitales lose all of their leaves for part of the year depending on variations in rainfall. The leaf loss coincides with the dry season in tropical, subtropical and arid regions. In temperate or polar climates, the dry season is due to the inability of the plant to absorb water available in the form of ice. Apodanthaceae are obligatory endoparasites that only emerge once a year in the form of small flowers that develop into small berries, however taxonomists have not agreed on the exact placement of this family within the Cucurbitales. Over half of the known members of this order belong to the greatly diverse begonia family Begoniaceae, with around 1500 species in two genera. Before modern DNA-molecular classifications, some Cucurbitales species were assigned to orders as diverse as Ranunculales, Malpighiales, Violales, and Rafflesiales. Early molecular studies revealed several surprises, such as the nonmonophyly of the traditional Datiscaceae, including Tetrameles and Octomeles, but the exact relationships among the families remain unclear. The lack of knowledge about the order in general is due to many species being found in countries with limited economic means or unstable political environments, factors unsuitable for plant collection and detailed study. Thus the vast majority of species remain poorly determined, and a future increase in the number of species is expected.
Under the Cronquist system, the families Begoniaceae, Cucurbitaceae, and Datiscaceae were placed in the order Violales, within the subclass Dilleniidae, with the Tetramelaceae subsumed into the Datiscaceae. Corynocarpaceae was placed in order Celastrales, and Anisophylleaceae in order Rosales, both under subclass Rosidae. Coriariaceae was placed in Ranunculaceae, subclass Magnoliidae. Apodanthaceae was not recognised as a family, its genera being assigned to another parasitic plant family, the Rafflesiaceae. The present classification is due to APG III (2009).
Modern molecular phylogenetics suggest the following relationships: | [
{
"paragraph_id": 0,
"text": "The Cucurbitales are an order of flowering plants, included in the rosid group of dicotyledons. This order mostly belongs to tropical areas, with limited presence in subtropical and temperate regions. The order includes shrubs and trees, together with many herbs and climbers. One major characteristic of the Cucurbitales is the presence of unisexual flowers, mostly pentacyclic, with thick pointed petals (whenever present). The pollination is usually performed by insects, but wind pollination is also present (in Coriariaceae and Datiscaceae).",
"title": ""
},
{
"paragraph_id": 1,
"text": "The order consists of roughly 2600 species in eight families. The largest families are Begoniaceae (begonia family) with around 1500 species and Cucurbitaceae (gourd family) with around 900 species. These two families include the only economically important plants. Specifically, the Cucurbitaceae (gourd family) include some food species, such as squash, pumpkin (both from Cucurbita), watermelon (Citrullus vulgaris), and cucumber and melons (Cucumis). The Begoniaceae are known for their horticultural species, of which there are over 130 with many more varieties.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Cucurbitales are an order of plants with a cosmopolitan distribution, particularly diverse in the tropics. Most are herbs, climber herbs, woody lianas or shrubs but some genera include canopy-forming evergreen lauroid trees. Members of the Cucurbitales form an important component of low to montane tropical forest with greater representation in terms of the number of species. Although not known with certainty the total number of species in the order, conservative estimates indicate about 2600 species worldwide, distributed in 109 genera. Compared to other flowering plant orders, the taxonomy is poorly understood due to their great diversity, difficulty in identification, and limited study.",
"title": "Overview"
},
{
"paragraph_id": 3,
"text": "The order Cucurbitales in the eurosid I clade comprises almost 2600 species in 109 or 110 genera in eight families, tropical and temperate, of very different sizes, morphology, and ecology. It is a case of divergent evolution. In contrast, there is convergent evolution with other groups not related due to ecological or physical drivers toward a similar solution, including analogous structures. Some species are trees that have similar foliage to the true laurels due to convergent evolution.",
"title": "Overview"
},
{
"paragraph_id": 4,
"text": "The patterns of speciation in the Cucurbitales are diversified in a high number of species. They have a pantropical distribution with centers of diversity in Africa, South America, and Southeast Asia. They most likely originated in West Gondwana 67–107 million years ago, so the oldest split could relate to the break-up of Gondwana in the middle Eocene to late Oligocene, 45–24 million years ago. The group reached their current distribution by multiple intercontinental dispersal events. One factor was product of aridification, other groups responded to favorable climatic periods and expanded across the available habitat, occurring as opportunistic species across wide distribution; other groups diverged over long periods within isolated areas.",
"title": "Overview"
},
{
"paragraph_id": 5,
"text": "The Cucurbitales comprise the families: Apodanthaceae, Anisophylleaceae, Begoniaceae, Coriariaceae, Corynocarpaceae, Cucurbitaceae, Tetramelaceae, and Datiscaceae. Some of the synapomorphies of the order are: leaves in spiral with secondary veins palmated, calyx or perianth valvate, and the elevated stomatal calyx/perianth bearing separate styles. The two whorls are similar in texture.",
"title": "Overview"
},
{
"paragraph_id": 6,
"text": "Tetrameles nudiflora is a tree of immense proportions of height and width; Tetramelaceae, Anisophylleaceae, and Corynocarpaceae are tall canopy trees in temperate and tropical forests. The genus Dendrosicyos, with the only species being the cucumber tree, is adapted to the arid semidesert island of Socotra. Deciduous perennial Cucurbitales lose all of their leaves for part of the year depending on variations in rainfall. The leaf loss coincides with the dry season in tropical, subtropical and arid regions. In temperate or polar climates, the dry season is due to the inability of the plant to absorb water available in the form of ice. Apodanthaceae are obligatory endoparasites that only emerge once a year in the form of small flowers that develop into small berries, however taxonomists have not agreed on the exact placement of this family within the Cucurbitales. Over half of the known members of this order belong to the greatly diverse begonia family Begoniaceae, with around 1500 species in two genera. Before modern DNA-molecular classifications, some Cucurbitales species were assigned to orders as diverse as Ranunculales, Malpighiales, Violales, and Rafflesiales. Early molecular studies revealed several surprises, such as the nonmonophyly of the traditional Datiscaceae, including Tetrameles and Octomeles, but the exact relationships among the families remain unclear. The lack of knowledge about the order in general is due to many species being found in countries with limited economic means or unstable political environments, factors unsuitable for plant collection and detailed study. Thus the vast majority of species remain poorly determined, and a future increase in the number of species is expected.",
"title": "Overview"
},
{
"paragraph_id": 7,
"text": "Under the Cronquist system, the families Begoniaceae, Cucurbitaceae, and Datiscaceae were placed in the order Violales, within the subclass Dilleniidae, with the Tetramelaceae subsumed into the Datiscaceae. Corynocarpaceae was placed in order Celastrales, and Anisophylleaceae in order Rosales, both under subclass Rosidae. Coriariaceae was placed in Ranunculaceae, subclass Magnoliidae. Apodanthaceae was not recognised as a family, its genera being assigned to another parasitic plant family, the Rafflesiaceae. The present classification is due to APG III (2009).",
"title": "Classification"
},
{
"paragraph_id": 8,
"text": "Modern molecular phylogenetics suggest the following relationships:",
"title": "Systematics"
}
] | The Cucurbitales are an order of flowering plants, included in the rosid group of dicotyledons. This order mostly belongs to tropical areas, with limited presence in subtropical and temperate regions. The order includes shrubs and trees, together with many herbs and climbers. One major characteristic of the Cucurbitales is the presence of unisexual flowers, mostly pentacyclic, with thick pointed petals. The pollination is usually performed by insects, but wind pollination is also present. The order consists of roughly 2600 species in eight families. The largest families are Begoniaceae with around 1500 species and Cucurbitaceae with around 900 species. These two families include the only economically important plants. Specifically, the Cucurbitaceae include some food species, such as squash, pumpkin, watermelon, and cucumber and melons (Cucumis). The Begoniaceae are known for their horticultural species, of which there are over 130 with many more varieties. | 2001-04-12T15:52:50Z | 2023-11-26T18:29:49Z | [
"Template:Short description",
"Template:Clade",
"Template:Cite journal",
"Template:Cite web",
"Template:Cite book",
"Template:Automatic taxobox",
"Template:Reflist",
"Template:Commons category",
"Template:Angiosperm orders",
"Template:Taxonbar",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Cucurbitales |
5,412 | Contra dance | Contra dance (also contradance, contra-dance and other variant spellings) is a form of folk dancing made up of long lines of couples. It has mixed origins from English country dance, Scottish country dance, and French dance styles in the 17th century. Sometimes described as New England folk dance or Appalachian folk dance, contra dances can be found around the world, but are most common in the United States (periodically held in nearly every state), Canada, and other Anglophone countries.
A contra dance event is a social dance that one can attend without a partner. The dancers form couples, and the couples form sets of two couples in long lines starting from the stage and going down the length of the dance hall. Throughout the course of a dance, couples progress up and down these lines, dancing with each other couple in the line. The dance is led by a caller who teaches the sequence of moves, called "figures," in the dance before the music starts. In a single dance, a caller may include anywhere from six to twelve figures, which are repeated as couples progress up and down the lines. Each time through the dance takes 64 beats, after which the pattern is repeated. The essence of the dance is in following the pattern with your set and your line; since there is no required footwork, many people find contra dance easier to learn than other forms of social dancing.
Almost all contra dances are danced to live music. The music played includes, but is not limited to, Irish, Scottish, old-time, bluegrass and French-Canadian folk tunes. The fiddle is considered the core instrument, though other stringed instruments can be used, such as the guitar, banjo, bass and mandolin, as well as the piano, accordion, flute, clarinet and more. Techno contra dances are done to techno music, typically accompanied by DJ lighting. Music in a dance can consist of a single tune or a medley of tunes, and key changes during the course of a dance are common.
Many callers and bands perform for local contra dances, and some are hired to play for dances around the U.S. and Canada. Many dancers travel regionally (or even nationally) to contra dance weekends and week-long contra dance camps, where they can expect to find other dedicated and skilled dancers, callers, and bands.
Contra dance has European origins, and over 100 years of cultural influences from many different sources.
At the end of the 17th century, English country dances were taken up by French dance masters. The French called these dances contredanses (the name being a rough phonetic rendering of the English "country dance"), as indicated in a 1706 dance book called Recueil de Contredances. As time progressed, these dances returned to England and were spread and reinterpreted in the United States, and eventually the French form of the name came to be associated with the American folk dances, where they were alternatively called "country dances" or in some parts of New England such as New Hampshire, "contradances".
Contra dances were fashionable in the United States and were considered one of the most popular social dances across class lines in the late 18th century, though these events were usually referred to as "country dances" until the 1780s, when the term contra dance became more common to describe these events. In the mid-19th century, group dances started to decline in popularity in favor of quadrilles, lancers, and couple dances such as the waltz and polka.
By the late 19th century, contras were mostly confined to rural settings. This began to change with the square dance revival of the 1920s, pioneered by Henry Ford, founder of the Ford Motor Company, in part in opposition to modern jazz influences in the United States. In the 1920s, Ford asked his friend Benjamin Lovett, a dance coordinator in Massachusetts, to come to Michigan to begin a dance program. Initially, Lovett could not, as he was under contract at a local inn; consequently, Ford bought the property rights to the inn. Lovett and Ford initiated a dance program in Dearborn, Michigan, that included several folk dances, including contras. In 1926, Ford also published a book titled Good Morning: After a Sleep of Twenty-Five Years, Old-Fashioned Dancing Is Being Revived, detailing steps for some contra dances.
In the 1930s and 1940s, the popularity of jazz, swing, and big band music caused contra dance to decline in several parts of the US; the tradition carried on primarily in towns within the northeastern portions of North America, such as Ohio, the Maritime provinces of Canada, and particularly in New England. Ralph Page almost single-handedly maintained the New England tradition until it was revitalized in the 1950s and 1960s, particularly by Ted Sannella and Dudley Laufman.
The New England contra dance tradition was also maintained in Vermont by the Ed Larkin Old Time Contra Dancers, formed by Edwin Loyal Larkin in 1934. The group Larkin founded is still performing, teaching the dances, and holding monthly open house dances in Tunbridge, Vermont.
By then, early dance camps, retreats, and weekends had emerged, such as Pinewoods Camp, in Plymouth, Massachusetts, which became primarily a music and dance camp in 1933, and NEFFA, the New England Folk Festival, also in Massachusetts, which began in 1944. Pittsburgh Contra Dance celebrated its 100th anniversary in 2015. These and others continue to be popular and some offer other dances and activities besides contra dancing.
In the 1970s, Sannella and other callers introduced dance moves from English Country Dance, such as heys and gypsies, to the contra dances. New dances, such as Shadrack's Delight by Tony Parkes, featured symmetrical dancing by all couples. (Previously, the actives and inactives – see Progression – had significantly different roles). Double progression dances, popularized by Herbie Gaudreau, added to the aerobic nature of the dances, and one caller, Gene Hubert, wrote a quadruple progression dance, Contra Madness. Becket formation was introduced, with partners starting the dance next to each other in the line instead of opposite each other. The Brattleboro Dawn Dance started in 1976, and continues to run semiannually.
In the early 1980s, Tod Whittemore started the first Saturday dance in the Peterborough Town House, which remains one of the more popular regional dances. The Peterborough dance influenced Bob McQuillen, who became a notable musician in New England. As musicians and callers moved to other locations, they founded contra dances in Michigan, Washington, Oregon, California, Texas, and elsewhere.
Contra dances take place in more than 200 cities and towns across the U.S. (as of 2020), as well as other countries.
Contra dance events are open to all, regardless of experience, unless explicitly labeled otherwise. It is common to see dancers with a wide range of ages, from children to the elderly. Most dancers are white and middle or upper-middle class. Contra dances are family-friendly, and alcohol consumption is not part of the culture. Many events offer beginner-level instructions prior to the dance. A typical evening of contra dance is three hours long, including an intermission. The event consists of a number of individual contra dances, each lasting about 15 minutes, and typically a band intermission with some waltzes, schottisches, polkas, or Swedish hambos. In some places, square dances are thrown into the mix, sometimes at the discretion of the caller. Music for the evening is typically performed by a live band, playing jigs and reels from Ireland, Scotland, Canada, or the USA. The tunes may range from traditional tunes originating a century ago to modern compositions including electric guitar, synth keyboard, and driving percussion – so long as the music fits the timing for contra dance patterns. Sometimes, a rock tune will be woven in.
Generally, a leader, known as a caller, will teach each individual dance just before the music for that dance begins. During this introductory walk-through, participants learn the dance by walking through the steps and formations, following the caller's instructions. The caller gives the instructions orally, and sometimes augments them with demonstrations of steps by experienced dancers in the group. The walk-through usually proceeds in the order of the moves as they will be done with the music; in some dances, the caller may vary the order of moves during the dance, a fact that is usually explained as part of the caller's instructions.
After the walk-through, the music begins and the dancers repeat that sequence some number of times before that dance ends, often for 10 to 15 minutes, depending on the length of the contra lines. Calls are normally given at least the first few times through, and often for the last. At the end of each dance, the dancers thank their partners. The contra dance tradition in North America is to change partners for every dance, while in the United Kingdom typically people dance with the same partner the entire evening. One who attends an evening of contra dances in North America does not need to bring his or her own partner. In the short break between individual dances, the dancers invite each other to dance. Booking ahead, by asking a partner or partners in advance for each individual dance, is common at some venues but has been discouraged by some.
Most contra dances do not have an expected dress code. No special outfits are worn, but comfortable and loose-fitting clothing that does not restrict movement is usually recommended. Women usually wear skirts or dresses as they are cooler than wearing trousers; some men also dance in kilts or skirts. Low heeled, broken-in, soft-soled, non-marking shoes, such as dance shoes, sneakers, or sandals, are recommended and, in some places, required. As dancing can be aerobic, dancers are sometimes encouraged to bring a change of clothes.
As in any social dance, cooperation is vital to contra dancing. Since over the course of any single dance, individuals interact with not just their partners but everyone else in the set, contra dancing might be considered a group activity. As will necessarily be the case when beginners are welcomed in by more practiced dancers, mistakes are made; most dancers are willing to help beginners in learning the steps. However, because the friendly, social nature of the dances can be misinterpreted or even abused, some groups have created anti-harassment policies.
Contra dances are arranged in long lines of couples. A pair of lines is called a set. Sets are generally arranged so they run the length of the hall, with the top of the set being the end closest to the band and caller, and the bottom of the set being the end farthest from the caller.
Couples consist of two people, traditionally one male and one female, though same-sex pairs are increasingly common. Traditionally the dancers are referred to as the lady and gent, though various other terms have been used: some dances have used men and women, rejecting ladies and gents as elitist; others have used gender-neutral role terms including bares and bands, jets and rubies, and larks and ravens or robins. Couples interact primarily with an adjacent couple for each round of the dance. Each sub-group of two interacting couples is known to choreographers as a minor set and to dancers as a foursome or hands four. Couples in the same minor set are neighbors. Minor sets originate at the head of the set, starting with the topmost dancers as the ones (the active couple or actives); the other couple are twos (or inactives). The ones are said to be above their neighboring twos; twos are below. If there is an uneven number of couples dancing, the bottom-most couple will wait out the first time through the dance.
There are four common ways of arranging couples in the minor sets: proper, improper, Becket, and triple formations. Traditionally, most dances were in the proper formation, with all the gents in one line and all the ladies in the other. Until the end of the nineteenth century, minor sets were most commonly triples. In the twentieth century, duple-minor dances became more common. Since the mid-twentieth century, there has been a shift towards improper dances, in which gents and ladies alternate on each side of the set; this is now the most common formation. Triple dances have also lost popularity in modern contras, while Becket formation, in which dancers stand next to their partners, facing another couple, is a modern innovation.
A fundamental aspect of contra dancing is that, during a single dance, each dancer has one partner, but interacts with many different people. During a single dance, the same pattern is repeated over and over (one time through lasts roughly 30 seconds), but each time, a pair of dancers will dance with new neighbors (moving on to new neighbors is called progressing). Dancers do not need to memorize these patterns in advance, since the dance leader, or caller, will generally explain the pattern for this dance before the music begins, and give people a chance to walk through the pattern so dancers can learn the moves. The walk through also helps dancers understand how the dance pattern leads them toward new people each time. Once the music starts, the caller continues to describe each move until the dancers are comfortable with that dance pattern. The dance progression is built into the contra dance pattern as continuous motion with the music, and does not interrupt the dancing. While all dancers in the room are part of the same dance pattern, half of the couples in the room are moving toward the band at any moment and half are moving away, so when everybody steps forward, they find new people to dance with. Once a couple reaches the end of the set, they switch direction, dancing back along the set the other way.
A single dance runs around ten minutes, long enough to progress at least 15–20 times. If the sets are short to medium length the caller often tries to run the dance until each couple has danced with every other couple both as a one and a two and returned to where they started. A typical room of contra dancers may include about 120 people; but this varies from 30 people in smaller towns, to over 300 people in cities like Washington DC, Los Angeles, or New York. With longer sets (more than 60 people), one dance typically does not allow dancing with every dancer in the group.
Contra dance choreography specifies the dance formation, the figures, and the sequence of those figures in a dance. Contra dance figures (with a few exceptions) do not have defined footwork; within the limits of the music and the comfort of their fellow dancers, individuals move according to their own taste.
Most contra dances consist of a sequence of about 6 to 12 individual figures, prompted by the caller in time to the music as the figures are danced. As the sequence repeats, the caller may cut down his or her prompting, and eventually drop out, leaving the dancers to each other and the music.
A figure is a pattern of movement that typically takes eight counts, although figures with four or 16 counts are also common. Each dance is a collection of figures assembled to allow the dancers to progress along the set (see "Progression", above).
A count (as used above) is one half of a musical measure, such as one quarter note in 2/4 time or three eighth notes in 6/8 time. A count may also be called a step, as contra dance is a walking form, and each count of a dance typically matches a single physical step in a figure.
Typical contra dance choreography comprises four parts, each 16 counts (8 measures) long. The parts are called A1, A2, B1 and B2. This nomenclature stems from the music: Most contra dance tunes (as written) have two parts (A and B), each 8 measures long, and each fitting one part of the dance. The A and B parts are each played twice in a row, hence, A1, A2, B1, B2. While the same music is generally played in, for example, parts A1 and A2, distinct choreography is followed in those parts. Thus, a contra dance is typically 64 counts, and goes with a 32 measure tune. Tunes of this form are called "square"; tunes that deviate from this form are called "crooked".
Sample contra dances:
Many modern contra dances have these characteristics:
An event which consists primarily (or solely) of dances in this style is sometimes referred to as a "modern urban contra dance".
The most common contra dance repertoire is rooted in the Anglo-Celtic tradition as it developed in North America. Irish, Scottish, French Canadian, and Old-time tunes are common, and Klezmer tunes have also been used. The old-time repertoire includes very few of the jigs common in the others.
Tunes used for a contra dance are nearly always "square" 64-beat tunes, in which one time through the tune is each of two 16-beat parts played twice (this is notated AABB). However, any 64-beat tune will do; for instance, three 8-beat parts could be played AABB AACC, or two 8-beat parts and one 16-beat part could be played AABB CC. Tunes not 64 beats long are called "crooked" and are almost never used for contra dancing, although a few crooked dances have been written as novelties. Contra tunes are played at a narrow range of tempos, between 108 and 132 bpm.
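As a rough worked example (assuming a mid-range tempo of about 120 bpm, within the 108–132 bpm range noted above): one time through a 64-beat tune takes 64 ÷ 120 ≈ 0.53 minutes, or roughly 32 seconds, so a dance lasting about ten minutes allows on the order of 18–20 repetitions, consistent with the progression counts described earlier.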
Fiddles are considered to be the primary melody instrument in contra dancing, though other stringed instruments can also be used, such as the mandolin or banjo, in addition to a few wind instruments, for example, the accordion. The piano, guitar, and double bass are frequently found in the rhythm section of a contra dance band. Occasionally, percussion instruments are also used in contra dancing, such as the Irish bodhran or less frequently, the dumbek or washboard. The last few years have seen some of the bands incorporate the Quebecois practice of tapping feet on a board while playing an instrument (often the fiddle).
Until the 1970s it was traditional to play a single tune for the duration of a contra dance (about 5 to 10 minutes). Since then, contra dance musicians have typically played tunes in sets of two or three related (and sometimes contrasting) tunes, though single-tune dances are again becoming popular with some northeastern bands. In the Celtic repertoires it is common to change keys with each tune. A set might start with a tune in G, switch to a tune in D, and end with a tune in Bm. Here, D is related to G as its dominant (5th), while D and Bm share a key signature of two sharps. In the old-time tradition the musicians will either play the same tune for the whole dance, or switch to tunes in the same key. This is because the tunings of the five-string banjo are key-specific. An old-time band might play a set of tunes in D, then use the time between dances to retune for a set of tunes in A. (Fiddlers also may take this opportunity to retune; tune- or key-specific fiddle tunings are uncommon in American Anglo-Celtic traditions other than old-time.)
In the Celtic repertoires it is most common for bands to play sets of reels and sets of jigs. However, since the underlying beat structure of jigs and reels is the same (two "counts" per bar) bands will occasionally mix jigs and reels in a set.
Some of the most popular contra dance bands in recent years are Great Bear, Perpetual E-Motion, Buddy System, Crowfoot, Elixir, the Mean Lids, Nor'easter, Nova, Pete's Posse, the Stringrays, the Syncopaths, and Wild Asparagus.
In recent years, younger contra dancers have begun establishing "crossover contra" or "techno contra" – contra dancing to techno, hip-hop, and other modern forms of music. While challenging for DJs and callers, the fusion of contra patterns with moves from hip-hop, tango, and other forms of dance has made this form of contra dance a rising trend since 2008. Techno differs from other contra dancing in that it is usually done to recorded music, although there are some bands that play live for techno dances. Techno has become especially prevalent in Asheville, North Carolina, but regular techno contra dance series are spreading up the East Coast to locales such as Charlottesville, Virginia; Washington, D.C.; Amherst, Massachusetts; Greenfield, Massachusetts; and various North Carolina dance communities, with one-time or annual events cropping up in locations farther west, including California, Portland, Oregon, and Washington state. They also sometimes appear as late night events during contra dance weekends. In response to the demand for techno contra, a number of contra dance callers have developed repertoires of recorded songs to play that go well with particular contra dances; these callers are known as DJs. A kind of techno/traditional contra fusion has arisen, with at least one band, Buddy System, playing live music melded with synth sounds for techno contra dances. | [
Coin collecting

Coin collecting is the collecting of coins or other forms of minted legal tender. Coins of interest to collectors include beautiful, rare, and historically significant pieces. Collectors may be interested, for example, in complete sets of a particular design or denomination, coins that were in circulation for only a brief time, or coins with errors. Coin collecting can be differentiated from numismatics, in that the latter is the systematic study of currency as a whole, though the two disciplines are closely interlinked.
Many factors determine a coin's value including grade, rarity, and popularity. Commercial organizations offer grading services and will grade, authenticate, attribute, and encapsulate most coins.
People have hoarded coins for their bullion value for as long as coins have been minted. However, the collection of coins for their artistic value was a later development. Evidence from the archaeological and historical record of Ancient Rome and medieval Mesopotamia indicates that coins were collected and catalogued by scholars and state treasuries. It also seems probable that individual citizens collected old, exotic or commemorative coins as an affordable, portable form of art. According to Suetonius in his De vita Caesarum (The Lives of the Twelve Caesars), written in the first century AD, the emperor Augustus sometimes presented old and exotic coins to friends and courtiers during festivals and other special occasions. While the literary sources are scarce, it is evident that the collecting of ancient coins persisted in the Western world during the Middle Ages among rulers and high nobility.
Contemporary coin collecting and appreciation began around the fourteenth century. During the Renaissance, it became a fad among some members of the privileged classes, especially kings and queens. The Italian scholar and poet Petrarch is credited with being the pursuit's first and most famous aficionado. Following his lead, many European kings, princes, and other nobility kept collections of ancient coins. Some notable collectors were Pope Boniface VIII, Emperor Maximilian I of the Holy Roman Empire, Louis XIV of France, Ferdinand I of Spain and Holy Roman Emperor, Henry IV of France and Elector Joachim II of Brandenburg, who started the Berlin Coin Cabinet (German: Münzkabinett Berlin). Perhaps because only the very wealthy could afford the pursuit, in Renaissance times coin collecting became known as the "Hobby of Kings."
During the 17th and 18th centuries coin collecting remained a pursuit of the well-to-do. But rational, Enlightenment thinking led to a more systematic approach to accumulation and study. Numismatics as an academic discipline emerged in these centuries at the same time as a growing middle class, eager to prove their wealth and sophistication, began to collect coins. During the 19th and 20th centuries, coin collecting increased further in popularity. The market for coins expanded to include not only antique coins, but foreign or otherwise exotic currency. Coin shows, trade associations, and regulatory bodies emerged during these decades. The first international convention for coin collectors was held 15–18 August 1962, in Detroit, Michigan, and was sponsored by the American Numismatic Association and the Royal Canadian Numismatic Association. Attendance was estimated at 40,000. As one of the oldest and most popular world pastimes, coin collecting is now often referred to as the "King of Hobbies".
The motivations for collecting vary. Possibly the most common type of collectors are the hobbyists, who amass a collection primarily for the pleasure of it without the intention of making a profit.
Another frequent reason for purchasing coins is as an investment. As with stamps, precious metals, or other commodities, coin prices vary based on supply and demand. Prices drop for coins that are not in long-term demand, and increase along with a coin's perceived or intrinsic value. Investors buy with the expectation that the value of their purchase will increase over the long term. As with all types of investment, the principle of caveat emptor applies, and study is recommended before buying. Likewise, as with most collectibles, a coin collection does not produce income until it is sold, and may even incur costs (for example, the cost of safe deposit box storage) in the interim.
Some people collect coins for patriotic reasons and mints from various countries create coins specifically for patriotic collectors. One example of a patriotic coin was minted in 1813 by the United Provinces of the Rio de la Plata. One of the first pieces of legislation the new country enacted (after the revolution that freed it from Spanish rule) was to mint coins to replace the Spanish currency that had been in use. Another example is the U.S. 2022 Purple Heart Commemorative Coin Program.
Some coin collectors are generalists and accumulate examples from a broad variety of historical or geographically significant coins, but most collectors focus on a narrower, specialist interest. For example, some collectors focus on coins based on a common theme, such as coins from a country (often the collector's own), a coin each year from a series, or coins with a common mint mark.
There are also completists who seek an example of every type of coin within a certain category. One of the most famous collectors of this type is Louis E. Eliasberg, the only collector thus far to assemble a complete set of known coins of the United States. Collecting foreign coins is another specialty that some numismatists pursue.
Coin hoarders are similar to investors in the sense that they accumulate coins for potential long-term profit. However, they typically do not take into account aesthetic considerations. This is most common with coins whose metal value exceeds their spending value.
Speculators, be they amateurs or commercial buyers, may purchase coins in bulk or in small batches, and often act with the expectation of delayed profit. They may wish to take advantage of a spike in demand for a particular coin (for example, during the annual release of Canadian numismatic collectibles from the Royal Canadian Mint). The speculator might hope to buy the coin in large lots and sell at a profit within weeks or months. Speculators may also buy common circulation coins for their intrinsic metal value. Coins without collectible value may be melted down or distributed as bullion for commercial purposes. Typically they purchase coins that are composed of rare or precious metals, or coins that have a high purity of a specific metal.
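To make the intrinsic-value arithmetic concrete, here is a minimal sketch (Python) that estimates the melt value of a silver coin from its weight, fineness, and a spot price; the coin specifications and price used in the example are hypothetical assumptions, not quoted figures.

```python
# Minimal melt-value sketch. All numbers below are assumptions for illustration:
# a hypothetical 25.0 g coin of 0.800 fine silver and an assumed spot price.

TROY_OUNCE_GRAMS = 31.1035  # grams per troy ounce

def melt_value(weight_grams: float, fineness: float, spot_per_troy_oz: float) -> float:
    """Estimate the bullion ('melt') value of a coin from weight, fineness, and spot price."""
    silver_grams = weight_grams * fineness
    return (silver_grams / TROY_OUNCE_GRAMS) * spot_per_troy_oz

if __name__ == "__main__":
    # Hypothetical example: 25.0 g coin, 80% silver, assumed spot price of $25 per troy ounce.
    value = melt_value(25.0, 0.800, 25.00)
    print(f"Approximate melt value: ${value:.2f}")  # ~$16.08 under these assumptions
```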
A final type of collector is the inheritor, an accidental collector who acquires coins from another person as part of an inheritance. The inheritor type may not necessarily have an interest in or know anything about numismatics at the time of the acquisition.
In coin collecting, the condition of a coin (its grade) is key to its value; a high-quality example with minimal wear is often worth many times more than a poor example. Collectors have created systems to describe the overall condition of coins. Any damage, such as wear or cleaning, can substantially decrease a coin's value.
By the mid-20th century, with the growing market for rare coins, the American Numismatic Association helped standardize grading for most coins in North America, numbering coins from 1 (poor) to 70 (mint state) and setting aside a separate category for proof coinage. This system is often shunned by coin experts in Europe and elsewhere, who prefer to use adjectival grades. Nevertheless, most grading systems use similar terminology and values, and remain mutually intelligible.
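As a rough illustration of how the numeric and adjectival systems line up, the sketch below (Python) maps a 1–70 grade to a broad adjectival term; the band boundaries are a simplified assumption for illustration, and published standards subdivide them much more finely.

```python
# Simplified mapping from the 1-70 numeric grading scale to broad adjectival grades.
# The cut-off points below are a simplified, illustrative assumption; real grading
# standards define many intermediate grades within each band.

GRADE_BANDS = [
    (60, "Mint State (uncirculated)"),
    (50, "About Uncirculated"),
    (40, "Extremely Fine"),
    (20, "Very Fine"),
    (12, "Fine"),
    (8,  "Very Good"),
    (4,  "Good"),
    (1,  "Poor / Fair / About Good"),
]

def adjectival(grade: int) -> str:
    """Return a broad adjectival description for a numeric grade from 1 to 70."""
    if not 1 <= grade <= 70:
        raise ValueError("numeric grades run from 1 to 70")
    for floor, name in GRADE_BANDS:
        if grade >= floor:
            return name
    return "Poor"

if __name__ == "__main__":
    for g in (3, 20, 45, 65):
        print(g, "->", adjectival(g))
```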
Third-party grading (TPG), also known as coin certification services, emerged in the 1980s with the goals of standardizing grading, exposing alterations, and eliminating counterfeits. For tiered fees, certification services grade, authenticate, attribute, and encapsulate coins in clear plastic holders.
Coin certification has greatly reduced the number of counterfeits and grossly overgraded coins, and improved buyer confidence. Certification services can sometimes be controversial because grading is subjective; coins may be graded differently by different services or even upon resubmission to the same service. The numeric grade alone does not represent all of a coin's characteristics, such as toning, strike, brightness, color, luster, and attractiveness. Due to potentially large differences in value over slight differences in a coin's condition, some submitters will repeatedly resubmit a coin to a grading service in the hope of receiving a higher grade. Because fees are charged for certification, such resubmissions divert money that could otherwise be spent on additional coins.
Coin collector clubs offer a variety of benefits to members. They usually serve as a source of information and a way of bringing together people interested in coins. Collector clubs are popular both offline and online.
Crokinole

Crokinole (/ˈkroʊkɪnoʊl/ KROH-ki-nohl) is a disc-flicking dexterity board game, possibly of Canadian origin, similar to the games of pitchnut, carrom, and pichenotte, with elements of shuffleboard and curling reduced to table-top size. Players take turns shooting discs across the circular playing surface, trying to land their discs in the higher-scoring regions of the board, particularly the recessed center hole worth 20 points, while also attempting to knock opposing discs off the board and into the 'ditch'. In crokinole, the shooting is generally towards the center of the board, unlike carrom and pitchnut, where the shooting is towards the four outer corner pockets, as in pool. Crokinole is also played using cue sticks, and there is a special category for cue stick participants at the World Crokinole Championships in Tavistock, Ontario, Canada.
Board dimensions vary, with a playing surface typically of polished wood or laminate approximately 26 inches (660 mm) in diameter. The layout is three concentric rings worth 5, 10, and 15 points, moving inward from the outside, with a shallow 20-point hole at the center. The inner 15-point ring is guarded by eight small bumpers or posts. The outer ring of the board is divided into four quadrants. The outer edge of the board is raised slightly to keep errant shots from flying out, with a gutter between the playing surface and the edge to collect discarded pieces. Crokinole boards are typically octagonal or round in shape. The wooden discs are roughly checker-sized, slightly smaller in diameter than the board's central hole, and typically have one side slightly concave and one side slightly convex, owing more to the inherent properties of wood than to deliberate design. Alternatively, the game may be played with ring-shaped pieces with a central hole.
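As a sketch of how the concentric rings translate into points, the following (Python) returns the value of the region a disc rests in, given its distance from the board's centre; the radii used here are rough assumptions for a 26-inch board, not official measurements.

```python
# Illustrative sketch: ring value of a resting disc from its distance to the board centre.
# The radii below are rough assumptions for a ~26-inch playing surface, not official
# specifications; a disc fully inside the centre hole is handled by the first entry.

RINGS_INCHES = [       # (outer radius of region in inches, points), checked inside-out
    (0.70, 20),        # recessed centre hole (assumed radius)
    (4.0, 15),         # inner ring, guarded by the eight posts (assumed)
    (8.0, 10),         # middle ring (assumed)
    (12.0, 5),         # outer ring (assumed)
]

def ring_value(distance_from_centre: float) -> int:
    """Return the point value of the region a disc rests in; 0 if off the playfield."""
    for outer_radius, points in RINGS_INCHES:
        if distance_from_centre <= outer_radius:
            return points
    return 0  # in the gutter / off the scoring surface

if __name__ == "__main__":
    print(ring_value(0.3))   # 20 (centre hole)
    print(ring_value(6.5))   # 10
    print(ring_value(13.0))  # 0 (gutter)
```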
The use of any lubricating powder in crokinole is controversial, with some purists reviling the practice.
Powder is sometimes used to ensure pieces slide smoothly on the surface. Boric acid was popular for a long time, but is now considered toxic and has been replaced with safer substitutes. The EU has classified boric acid as a "Serious Health Hazard". In the UK, many players use a version of anti-set-off spray powder from the printing industry, which has specific electrostatic properties, with particles of 50-micrometre diameter (1.97×10⁻³ in). The powder is made of pure food-grade plant/vegetable starch.
The World Crokinole Championships in Tavistock, Ontario, Canada, states: "The WCC waxes boards, as required, with paste wax. On tournament day powdered shuffleboard wax (CAPO fast speed, yellow and white container) is placed in the ditch. Only tournament organizers will apply quality granular shuffleboard wax. Wax will be placed in the ditch area so that players can rub their discs in the wax prior to shooting, if they desire. Contestants are not allowed to apply lubricants of any type to the board. Absolutely no other lubricant will be allowed".
Crokinole is most commonly played by two players, or by four players in teams of two, with partners sitting across the board from each other. Players take turns flicking their discs from the outer edge of their quadrant of the board onto the playfield. Shooting is usually done by flicking the disc with a finger, though sometimes small cue sticks may be used. If there are any enemy discs on the board, a player must make contact, directly or indirectly, with an enemy disc during the shot. If unsuccessful, the shot disc is "fouled" and removed from the board, along with any of the player's other discs that were moved during the shot.
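The contact rule lends itself to a small decision sketch. The following (Python) applies the rule as described above; the data representation (boolean flags and disc labels) is an illustrative assumption rather than any standard notation.

```python
# Sketch of the contact rule: when opposing discs are on the board, a shot that fails
# to contact one, directly or indirectly, is a foul, and the shot disc plus any of the
# shooter's own discs moved by the shot are removed to the ditch.
# The flags and labels used here are illustrative assumptions.

def discs_to_remove(opponent_discs_on_board: bool,
                    touched_opponent_disc: bool,
                    shooters_discs_moved: list[str]) -> list[str]:
    """Return the shooter's discs that must go to the ditch after a shot."""
    if opponent_discs_on_board and not touched_opponent_disc:
        return ["shot disc"] + shooters_discs_moved  # fouled shot
    return []  # valid shot: discs stay where they came to rest

if __name__ == "__main__":
    print(discs_to_remove(True, False, ["shooter's disc #3"]))  # fouled: both removed
    print(discs_to_remove(True, True, ["shooter's disc #3"]))   # valid: nothing removed
```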
When there are no enemy discs on the board, many (but not all) rules also state that a player must shoot for the centre of the board, and a shot disc must finish either completely inside the 15-point guarded ring line, or (depending on the specifics of the rules) be inside or touching this line. This is often called the "no hiding" rule, since it prevents players from placing their first shots where their opponent must traverse completely through the guarded centre ring to hit them and avoid fouling. When playing without this rule, a player may generally make any shot desired, and as long as a disc remains completely inside the outer line of the playfield, it remains on the board. During any shot, any disc that falls completely into the recessed central "20" hole (a.k.a. the "Toad" or "Dukie") is removed from play, and counts as twenty points for the owner of the disc at the end of the round, assuming the shot is valid.
Scoring occurs after all pieces (generally 12 per player or team) have been played, and is differential: i.e., the player or team with higher score is awarded the difference between the higher and lower scores for the round, thus only one team or player each round gains points. Play continues until a predetermined winning score is reached.
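A minimal sketch of this differential scoring (Python) is shown below; the disc values follow the board regions described earlier, while the specific example hands are hypothetical.

```python
# Differential scoring sketch: after all discs are played, only the higher-scoring
# side gains points, and it gains the difference between the two round totals.
# The example disc values below are hypothetical.

def round_result(side_a_discs: list[int], side_b_discs: list[int]) -> tuple[int, int]:
    """Return (points_to_a, points_to_b) for one round of crokinole.

    Each list holds the values of a side's discs still counting at the end of the
    round (5, 10, or 15 for discs on the board, 20 for discs sunk in the centre hole).
    """
    total_a, total_b = sum(side_a_discs), sum(side_b_discs)
    diff = abs(total_a - total_b)
    if total_a > total_b:
        return diff, 0
    if total_b > total_a:
        return 0, diff
    return 0, 0  # tied round: the difference is zero, so neither side gains

if __name__ == "__main__":
    # Hypothetical round: A keeps a 20 and two 10s, B keeps a 15 and a 5.
    print(round_result([20, 10, 10], [15, 5]))  # (20, 0)
```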
After 30 years of research, Wayne Kelly published his assessment of the first origins of crokinole, in The Crokinole Book, Third Edition, page 28, which leaves the door open to future research and discovery of the origins of the game of crokinole: "The earliest American crokinole board and reference to the game is M. B. Ross's patented New York board of 1880. The earliest Canadian reference is 1867 (Sports and Games in Canadian Life: 1700 to the Present by Howell and Howell, Toronto, MacMillan Company of Canada, 1969, p.61), and the oldest piece dated at 1875 by Ekhardt Wettlaufer. Could Ekhardt Wettlaufer have visited friends in New York state, noticed an unusual and entertaining parlour game being played, and upon arrival at home, made an imitation as a gift for his son? After all, he was a talented, and no doubt resourceful, painter and woodworker. Or was it the other way around? Did Mr. M. B. Ross travel to Ontario, take note of a quaint piece of rural folk art, and upon return to New York, put his American entrepreneurial skills to work - complete with patent name - on his new crokinole board? As the trail is more than 100 years old and no other authoritative source can be found, it appears, at the moment, that Eckhardt Wettlaufer or M. B. Ross are as close as we can get to answering the question WHO (made the first crokinole board.)"
The earliest known crokinole board was made by craftsman Eckhardt Wettlaufer in 1876 in Perth County, Ontario, Canada. It is said Wettlaufer crafted the board as a fifth birthday present for his son Adam; the board is now part of the collection at the Joseph Schneider Haus, a national historic site in Kitchener, Ontario, with a focus on Germanic folk art. Several other home-made boards dating from southwestern Ontario in the 1870s have been discovered since the 1990s. A board game similar to crokinole was patented on 20 April 1880 by Joshua K. Ingalls (US Patent No. 226,615).
Crokinole is often believed to be of Mennonite or Amish origin, but there is no factual evidence to support such a claim. The misconception may stem from the game's popularity in Mennonite and Amish groups, where it was viewed as a rather innocuous pastime, unlike diversions such as card playing or dancing, which many 19th-century Protestant groups considered "works of the Devil". The oldest roots of crokinole, from the 1860s, suggest that British and South Asian games, such as carrom, are the most likely antecedents of what became crokinole.
In 2006, a documentary film called Crokinole was released. The world premiere occurred at the Princess Cinema in Waterloo, Ontario, in early 2006. The movie follows some of the competitors of the 2004 World Crokinole Championship as they prepare for the event.
The name "crokinole" derives from croquignole, a French word today designating a kind of small biscuit or bun, as well as a woman's wavy hairstyle.
The word was also formerly used to designate the action of flicking with the finger (Molière, Le malade imaginaire; or Voltaire, Lettre à Frédéric II Roi de Prusse; etc.), and this seems the most likely origin of the name of the game. Croquignole was also a synonym of pichenotte, a word that gave its name to the different but related games of pichenotte and pitchnut.
From The Crokinole Book 3rd Edition by Wayne S. Kelly "Is it possible that the English word 'crokinole' is simply an etymological offspring of the French word 'croquignole'? It would appear so for the following reasons. Going back to the entry for Crokinole in Webster's Third New International Dictionary, within the etymological brackets, it says: [French croquignole, fillip]. This is a major clue. The word fillip, according to Webster's, has two definitions: "1. a blow or gesture made by the sudden forcible release of a finger curled up against the thumb; a short sharp blow. 2. to strike by holding the nail of a finger curled up against the ball of the thumb and then suddenly releasing it from that position". So it seems evident, then, that our game of crokinole derives its name from the verb form (of croquignole) defining the principle action in the game, that of flicking or 'filliping' a playing piece across the board".
The word crokinole is generally acknowledged to have been derived from the French-Canadian word "croquignole", a word with several meanings, such as fillip, snap, biscuit, bun and a woman's wavy hairstyle popular at the turn of the century. The US state of New York shares border crossings with both of the Canadian provinces of Ontario and Quebec, all three of which are popular "hotbeds" of crokinole playing.
Crokinole is called knipsbrat ('flick-board'), or occasionally knipsdesh ('flick-table'), in the Plautdietsch spoken by Russian Mennonites.
The World Crokinole Championship (WCC) tournament has been held annually since 1999 on the first Saturday of June in Tavistock, Ontario. Tavistock was chosen as the host city because it was the home of Eckhardt Wettlaufer, the maker of the earliest known board. The tournament has seen registration from every Canadian province, several American states, Germany, Australia, Spain and the UK.
The WCC singles competition begins with a qualifying round in which competitors play 10 matches against randomly assigned opponents. The qualifying round is played in a large randomly determined competition. At the end of the opening round, the top 16 competitors move on to the playoffs. The top four in the playoffs advance to a final round robin to play each other, and the top two compete in the finals. The WCC doubles competition begins with a qualifying round of 8 matches against randomly assigned opponents with the top six teams advancing to a playoff round robin to determine the champions.
The WCC has multiple divisions, including a singles finger-shooting category for competitive players (adult singles), novices (recreational), and younger players (intermediate, 11–14 yrs; junior, 6–10 yrs), as well as a division for cue-shooters (cues singles). The WCC also awards a prize for the top 20-hole shooter in the qualifying round of competitive singles, recreational singles, cues singles, intermediate singles, and in the junior singles. The tournament also holds doubles divisions for competitive fingers-shooting (competitive doubles), novices (recreational doubles), younger players (youth doubles, 6–16yrs), and cues-shooting (cues doubles).
The official board builder of the World Crokinole Championships is Jeremy Tracey.
The National Crokinole Association (NCA) supports existing crokinole clubs and tournaments and the development of new ones. While the majority of NCA events are based in Ontario, Canada, the NCA has held sanctioned events in the Canadian provinces of Prince Edward Island and British Columbia, as well as in New York State.
The collection of NCA tournaments is referred to as the NCA Tour. Each NCA Tour season begins at the World Crokinole Championship in Tavistock in June and concludes at the Ontario Singles Crokinole Championship in May of the following year. Each tournament awards points to the players, who compete for their season-ending ranking classification.
Capitalism

Capitalism is an economic system based on the private ownership of the means of production and their operation for profit. Central characteristics of capitalism include capital accumulation, competitive markets, price systems, private property, property rights recognition, voluntary exchange, and wage labor. In a market economy, decision-making and investments are determined by owners of wealth, property, or ability to maneuver capital or production ability in capital and financial markets—whereas prices and the distribution of goods and services are mainly determined by competition in goods and services markets.
Economists, historians, political economists, and sociologists have adopted different perspectives in their analyses of capitalism and have recognized various forms of it in practice. These include laissez-faire or free-market capitalism, anarcho-capitalism, state capitalism, and welfare capitalism. Different forms of capitalism feature varying degrees of free markets, public ownership, obstacles to free competition, and state-sanctioned social policies. The degree of competition in markets and the role of intervention and regulation, as well as the scope of state ownership, vary across different models of capitalism. The extent to which different markets are free and the rules defining private property are matters of politics and policy. Most of the existing capitalist economies are mixed economies that combine elements of free markets with state intervention and in some cases economic planning.
Capitalism in its modern form emerged from agrarianism in 16th century England and mercantilist practices by European countries in the 16th to 18th centuries. The Industrial Revolution of the 18th century established capitalism as a dominant mode of production, characterized by factory work and a complex division of labor. Through the process of globalization, capitalism spread across the world in the 19th and 20th centuries, especially before World War I and after the end of the Cold War. During the 19th century, capitalism was largely unregulated by the state, but became more regulated in the post-World War II period through Keynesianism, followed by a return of more unregulated capitalism starting in the 1980s through neoliberalism.
Market economies have existed under many forms of government and in many different times, places and cultures. Modern industrial capitalist societies developed in Western Europe in a process that led to the Industrial Revolution. Economic growth is a characteristic tendency of capitalist economies.
The term "capitalist", meaning an owner of capital, appears earlier than the term "capitalism" and dates to the mid-17th century. "Capitalism" is derived from capital, which evolved from capitale, a late Latin word based on caput, meaning "head"—which is also the origin of "chattel" and "cattle" in the sense of movable property (only much later to refer only to livestock). Capitale emerged in the 12th to 13th centuries to refer to funds, stock of merchandise, sum of money or money carrying interest. By 1283, it was used in the sense of the capital assets of a trading firm and was often interchanged with other words—wealth, money, funds, goods, assets, property and so on.
The Hollantse (German: holländische) Mercurius uses "capitalists" in 1633 and 1654 to refer to owners of capital. In French, Étienne Clavier referred to capitalistes in 1788, four years before its first recorded English usage by Arthur Young in his work Travels in France (1792). In his Principles of Political Economy and Taxation (1817), David Ricardo referred to "the capitalist" many times. English poet Samuel Taylor Coleridge used "capitalist" in his work Table Talk (1823). Pierre-Joseph Proudhon used the term in his first work, What is Property? (1840), to refer to the owners of capital. Benjamin Disraeli used the term in his 1845 work Sybil.
The initial use of the term "capitalism" in its modern sense is attributed to Louis Blanc in 1850 ("What I call 'capitalism' that is to say the appropriation of capital by some to the exclusion of others") and Pierre-Joseph Proudhon in 1861 ("Economic and social regime in which capital, the source of income, does not generally belong to those who make it work through their labor"). Karl Marx frequently referred to "capital" and to the "capitalist mode of production" in Das Kapital (1867). Marx did not use the form capitalism but instead used capital, capitalist and capitalist mode of production, which appear frequently. Because the word was coined by socialist critics of capitalism, economist and historian Robert Hessen stated that the term "capitalism" itself is a term of disparagement and a misnomer for economic individualism. Bernard Harcourt agrees with the statement that the term is a misnomer, adding that it misleadingly suggests that there is such a thing as "capital" that inherently functions in certain ways and is governed by stable economic laws of its own.
In the English language, the term "capitalism" first appears, according to the Oxford English Dictionary (OED), in 1854, in the novel The Newcomes by novelist William Makepeace Thackeray, where the word meant "having ownership of capital". Also according to the OED, Carl Adolph Douai, a German American socialist and abolitionist, used the term "private capitalism" in 1863.
There is no universally agreed upon definition of capitalism; it is unclear whether or not capitalism characterizes an entire society, a specific type of social order, or crucial components or elements of a society. Societies officially founded in opposition to capitalism (such as the Soviet Union) have sometimes been argued to actually exhibit characteristics of capitalism. Nancy Fraser describes usage of the term "capitalism" by many authors as "mainly rhetorical, functioning less as an actual concept than as a gesture toward the need for a concept". Scholars who are uncritical of capitalism rarely actually use the term "capitalism". Some doubt that the term "capitalism" possesses valid scientific dignity, and it is generally not discussed in mainstream economics, with economist Daron Acemoglu suggesting that the term "capitalism" should be abandoned entirely. Consequently, understanding of the concept of capitalism tends to be heavily influenced by opponents of capitalism and by the followers and critics of Karl Marx.
Capitalism, in its modern form, can be traced to the emergence of agrarian capitalism and mercantilism in the early Renaissance, in city-states like Florence. Capital has existed incipiently on a small scale for centuries in the form of merchant, renting and lending activities and occasionally as small-scale industry with some wage labor. Simple commodity exchange and consequently simple commodity production, which is the initial basis for the growth of capital from trade, have a very long history. During the Islamic Golden Age, Arabs promulgated capitalist economic policies such as free trade and banking. Their use of Indo-Arabic numerals facilitated bookkeeping. These innovations migrated to Europe through trade partners in cities such as Venice and Pisa. Italian mathematicians traveled the Mediterranean talking to Arab traders and returned to popularize the use of Indo-Arabic numerals in Europe.
The economic foundations of the feudal agricultural system began to shift substantially in 16th-century England as the manorial system had broken down and land began to become concentrated in the hands of fewer landlords with increasingly large estates. Instead of a serf-based system of labor, workers were increasingly employed as part of a broader and expanding money-based economy. The system put pressure on both landlords and tenants to increase the productivity of agriculture to make profit; the weakened coercive power of the aristocracy to extract peasant surpluses encouraged landlords to try better methods, and the tenants also had incentive to improve their methods in order to flourish in a competitive labor market. Terms of rent for land were becoming subject to economic market forces rather than to the previous stagnant system of custom and feudal obligation.
The economic doctrine prevailing from the 16th to the 18th centuries is commonly called mercantilism. This period, the Age of Discovery, was associated with the geographic exploration of foreign lands by merchant traders, especially from England and the Low Countries. Mercantilism was a system of trade for profit, although commodities were still largely produced by non-capitalist methods. Most scholars consider the era of merchant capitalism and mercantilism as the origin of modern capitalism, although Karl Polanyi argued that the hallmark of capitalism is the establishment of generalized markets for what he called the "fictitious commodities", i.e. land, labor and money. Accordingly, he argued that "not until 1834 was a competitive labor market established in England, hence industrial capitalism as a social system cannot be said to have existed before that date".
England began a large-scale and integrative approach to mercantilism during the Elizabethan Era (1558–1603). A systematic and coherent explanation of balance of trade was made public through Thomas Mun's treatise England's Treasure by Forraign Trade, or the Balance of our Forraign Trade is The Rule of Our Treasure. It was written in the 1620s and published in 1664.
European merchants, backed by state controls, subsidies and monopolies, made most of their profits by buying and selling goods. In the words of Francis Bacon, the purpose of mercantilism was "the opening and well-balancing of trade; the cherishing of manufacturers; the banishing of idleness; the repressing of waste and excess by sumptuary laws; the improvement and husbanding of the soil; the regulation of prices...".
After the period of proto-industrialization, the British East India Company and the Dutch East India Company, after massive contributions from Mughal Bengal, inaugurated an expansive era of commerce and trade. These companies were characterized by their colonial and expansionary powers given to them by nation-states. During this era, merchants, who had traded under the previous stage of mercantilism, invested capital in the East India Companies and other colonies, seeking a return on investment.
In the mid-18th century a group of economic theorists, led by David Hume (1711–1776) and Adam Smith (1723–1790), challenged fundamental mercantilist doctrines—such as the belief that the world's wealth remained constant and that a state could only increase its wealth at the expense of another state.
During the Industrial Revolution, industrialists replaced merchants as a dominant factor in the capitalist system and effected the decline of the traditional handicraft skills of artisans, guilds and journeymen. Industrial capitalism marked the development of the factory system of manufacturing, characterized by a complex division of labor between and within work processes and the routine of work tasks, and it eventually established the domination of the capitalist mode of production.
Industrial Britain eventually abandoned the protectionist policy formerly prescribed by mercantilism. In the 19th century, Richard Cobden (1804–1865) and John Bright (1811–1889), who based their beliefs on the Manchester School, initiated a movement to lower tariffs. In the 1840s Britain adopted a less protectionist policy, with the 1846 repeal of the Corn Laws and the 1849 repeal of the Navigation Acts. Britain reduced tariffs and quotas, in line with David Ricardo's advocacy of free trade.
Broader processes of globalization carried capitalism across the world. By the beginning of the nineteenth century, a series of loosely connected market systems had come together as a relatively integrated global system, in turn intensifying processes of economic and other globalization. Late in the 20th century, capitalism overcame a challenge by centrally-planned economies and is now the encompassing system worldwide, with the mixed economy as its dominant form in the industrialized Western world.
Industrialization allowed cheap production of household items using economies of scale, while rapid population growth created sustained demand for commodities. The imperialism of the 18th century decisively shaped globalization.
After the First and Second Opium Wars (1839–60) and the completion of the British conquest of India by 1858, vast populations of Asia became consumers of European exports. Europeans colonized areas of sub-Saharan Africa and the Pacific islands. Colonisation by Europeans, notably of sub-Saharan Africa, yielded valuable natural resources such as rubber, diamonds and coal and helped fuel trade and investment between the European imperial powers, their colonies and the United States:
The inhabitant of London could order by telephone, sipping his morning tea, the various products of the whole earth, and reasonably expect their early delivery upon his doorstep. Militarism and imperialism of racial and cultural rivalries were little more than the amusements of his daily newspaper. What an extraordinary episode in the economic progress of man was that age which came to an end in August 1914.
From the 1870s to the early 1920s, the global financial system was mainly tied to the gold standard. The United Kingdom first formally adopted this standard in 1821. Soon to follow were Canada in 1853, Newfoundland in 1865, the United States and Germany (de jure) in 1873. New technologies, such as the telegraph, the transatlantic cable, the radiotelephone, the steamship and railways allowed goods and information to move around the world to an unprecedented degree.
In the United States, the term "capitalist" primarily referred to powerful businessmen until the 1920s due to widespread societal skepticism and criticism of capitalism and its most ardent supporters.
Contemporary capitalist societies developed in the West from 1950 to the present and this type of system continues throughout the world—relevant examples started in the United States after the 1950s, France after the 1960s, Spain after the 1970s, Poland after 2015, and others. At this stage capitalist markets are considered developed and characterized by developed private and public markets for equity and debt, a high standard of living (as characterized by the World Bank and the IMF), large institutional investors and a well-funded banking system. A significant managerial class has emerged and decides on a significant proportion of investments and other decisions. A different future than that envisioned by Marx has started to emerge—explored and described by Anthony Crosland in the United Kingdom in his 1956 book The Future of Socialism and by John Kenneth Galbraith in North America in his 1958 book The Affluent Society, 90 years after Marx's research on the state of capitalism in 1867.
The postwar boom ended in the late 1960s and early 1970s and the economic situation grew worse with the rise of stagflation. Monetarism, a modification of Keynesianism that is more compatible with laissez-faire analyses, gained increasing prominence in the capitalist world, especially under the years in office of Ronald Reagan in the United States (1981–1989) and of Margaret Thatcher in the United Kingdom (1979–1990). Public and political interest began shifting away from the so-called collectivist concerns of Keynes's managed capitalism to a focus on individual choice, called "remarketized capitalism".
The end of the Cold War and the dissolution of the Soviet Union allowed for capitalism to become a truly global system in a way not seen since before World War I. The development of the neoliberal global economy would have been impossible without the fall of communism.
Harvard Kennedy School economist Dani Rodrik distinguishes between three historical variants of capitalism:
The relationship between democracy and capitalism is a contentious area in theory and in popular political movements. The extension of adult-male suffrage in 19th-century Britain occurred along with the development of industrial capitalism, and representative democracy became widespread at the same time as capitalism, leading capitalists to posit a causal or mutual relationship between them. However, according to some 20th-century authors, capitalism also accompanied a variety of political formations quite distinct from liberal democracies, including fascist regimes, absolute monarchies and single-party states. Democratic peace theory asserts that democracies seldom fight other democracies, but others suggest this may be because of political similarity or stability, rather than because they are "democratic" or "capitalist". Critics argue that though economic growth under capitalism has led to democracy, it may not do so in the future, as authoritarian régimes have been able to manage economic growth using some of capitalism's competitive principles without making concessions to greater political freedom.
Political scientists Torben Iversen and David Soskice see democracy and capitalism as mutually supportive. Robert Dahl argued in On Democracy that capitalism was beneficial for democracy because economic growth and a large middle class were good for democracy. He also argued that a market economy provided a substitute for government control of the economy, which reduces the risks of tyranny and authoritarianism.
In his book The Road to Serfdom (1944), Friedrich Hayek (1899–1992) asserted that the free-market understanding of economic freedom as present in capitalism is a requisite of political freedom. He argued that the market mechanism is the only way of deciding what to produce and how to distribute the items without using coercion. Milton Friedman and Ronald Reagan also promoted this view. Friedman claimed that centralized economic operations are always accompanied by political repression. In his view, transactions in a market economy are voluntary and that the wide diversity that voluntary activity permits is a fundamental threat to repressive political leaders and greatly diminishes their power to coerce. Some of Friedman's views were shared by John Maynard Keynes, who believed that capitalism was vital for freedom to survive and thrive. Freedom House, an American think-tank that conducts international research on, and advocates for, democracy, political freedom and human rights, has argued that "there is a high and statistically significant correlation between the level of political freedom as measured by Freedom House and economic freedom as measured by the Wall Street Journal/Heritage Foundation survey".
In Capital in the Twenty-First Century (2013), Thomas Piketty of the Paris School of Economics asserted that inequality is the inevitable consequence of economic growth in a capitalist economy and the resulting concentration of wealth can destabilize democratic societies and undermine the ideals of social justice upon which they are built.
States with capitalistic economic systems have thrived under political regimes deemed to be authoritarian or oppressive. Singapore has a successful open market economy as a result of its competitive, business-friendly climate and robust rule of law. Nonetheless, it often comes under fire for its style of government which, though democratic and consistently one of the least corrupt, operates largely under one-party rule. Furthermore, it does not vigorously defend freedom of expression, as evidenced by its government-regulated press and its penchant for upholding laws protecting ethnic and religious harmony, judicial dignity and personal reputation. The private (capitalist) sector in the People's Republic of China has grown exponentially and thrived since its inception, despite having an authoritarian government. Augusto Pinochet's rule in Chile led to economic growth and high levels of inequality by using authoritarian means to create a safe environment for investment and capitalism. Similarly, Suharto's authoritarian reign and extirpation of the Communist Party of Indonesia allowed for the expansion of capitalism in Indonesia.
The term "capitalism" in its modern sense is often attributed to Karl Marx. In his Das Kapital, Marx analyzed the "capitalist mode of production" using a method of understanding today known as Marxism. However, Marx himself rarely used the term "capitalism" while it was used twice in the more political interpretations of his work, primarily authored by his collaborator Friedrich Engels. In the 20th century, defenders of the capitalist system often replaced the term "capitalism" with phrases such as free enterprise and private enterprise and replaced "capitalist" with rentier and investor in reaction to the negative connotations associated with capitalism.
In general, capitalism as an economic system and mode of production can be summarized by the following:
In free market and laissez-faire forms of capitalism, markets are used most extensively with minimal or no regulation over the pricing mechanism. In mixed economies, which are almost universal today, markets continue to play a dominant role, but they are regulated to some extent by the state in order to correct market failures, promote social welfare, conserve natural resources, fund defense and public safety or other rationale. In state capitalist systems, markets are relied upon the least, with the state relying heavily on state-owned enterprises or indirect economic planning to accumulate capital.
Competition arises when more than one producer is trying to sell the same or similar products to the same buyers. Adherents of the capitalist theory believe that competition leads to innovation and more affordable prices. Monopolies or cartels can develop, especially if there is no competition. A monopoly occurs when a firm has exclusivity over a market. Hence, the firm can engage in rent seeking behaviors such as limiting output and raising prices because it has no fear of competition.
Governments have implemented legislation for the purpose of preventing the creation of monopolies and cartels. In 1890, the Sherman Antitrust Act became the first legislation passed by the United States Congress to limit monopolies.
Wage labor, usually referred to as paid work, paid employment, or paid labor, refers to the socioeconomic relationship between a worker and an employer in which the worker sells their labor power under a formal or informal employment contract. These transactions usually occur in a labor market where wages or salaries are market-determined.
In exchange for the money paid as wages (usual for short-term work-contracts) or salaries (in permanent employment contracts), the work product generally becomes the undifferentiated property of the employer. A wage laborer is a person whose primary means of income is from the selling of their labor in this way.
The profit motive, in the theory of capitalism, is the desire to earn income in the form of profit. Stated differently, the reason for a business's existence is to turn a profit. The profit motive functions according to rational choice theory, or the theory that individuals tend to pursue what is in their own best interests. Accordingly, businesses seek to benefit themselves and/or their shareholders by maximizing profit.
In capitalist theory, the profit motive is said to ensure that resources are being allocated efficiently. For instance, the Austrian School economist Henry Hazlitt explains: "If there is no profit in making an article, it is a sign that the labor and capital devoted to its production are misdirected: the value of the resources that must be used up in making the article is greater than the value of the article itself".
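Hazlitt's point can be restated as simple arithmetic: an article's profit is its revenue minus the cost of the resources used up in making it, and a persistently negative result signals that those resources are valued more highly elsewhere. The following is a minimal sketch of that bookkeeping, with purely hypothetical figures and a hypothetical helper function (neither comes from the text):

def profit(units_sold, price, resource_cost_per_unit):
    # Profit = revenue minus the cost of the resources used up in production.
    revenue = units_sold * price
    cost = units_sold * resource_cost_per_unit
    return revenue - cost

# Resources worth 12 per unit turned into an article that sells for 9:
# the negative result is the "signal" Hazlitt describes - labor and capital are misdirected.
print(profit(units_sold=1000, price=9.0, resource_cost_per_unit=12.0))   # -3000.0

# The same resources making an article that sells for 15 yield a positive profit,
# indicating the output is worth more than the resources consumed.
print(profit(units_sold=1000, price=15.0, resource_cost_per_unit=12.0))  # 3000.0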
Socialist theorists note that, unlike mercantilists, capitalists accumulate their profits while expecting their profit rates to remain the same. This causes problems as earnings in the rest of society do not increase in the same proportion.
The relationship between the state, its formal mechanisms, and capitalist societies has been debated in many fields of social and political theory, with active discussion since the 19th century. Hernando de Soto is a contemporary Peruvian economist who has argued that an important characteristic of capitalism is the functioning state protection of property rights in a formal property system where ownership and transactions are clearly recorded.
According to de Soto, this is the process by which physical assets are transformed into capital, which in turn may be used in many more ways and much more efficiently in the market economy. A number of Marxian economists have argued that the Enclosure Acts in England and similar legislation elsewhere were an integral part of capitalist primitive accumulation and that specific legal frameworks of private land ownership have been integral to the development of capitalism.
Private property rights are not absolute, as in many countries the state has the power to seize private property, typically for public use, under the powers of eminent domain.
In capitalist economics, market competition is the rivalry among sellers trying to achieve such goals as increasing profits, market share and sales volume by varying the elements of the marketing mix: price, product, distribution and promotion. Merriam-Webster defines competition in business as "the effort of two or more parties acting independently to secure the business of a third party by offering the most favourable terms". It was described by Adam Smith in The Wealth of Nations (1776) and later economists as allocating productive resources to their most highly valued uses and encouraging efficiency. Smith and other classical economists before Antoine Augustine Cournot were referring to price and non-price rivalry among producers to sell their goods on best terms by bidding of buyers, not necessarily to a large number of sellers nor to a market in final equilibrium. Competition is widespread throughout the market process. It is a condition where "buyers tend to compete with other buyers, and sellers tend to compete with other sellers". In offering goods for exchange, buyers competitively bid to purchase specific quantities of specific goods which are available, or might be available if sellers were to choose to offer such goods. Similarly, sellers bid against other sellers in offering goods on the market, competing for the attention and exchange resources of buyers. Competition results from scarcity, as it is not possible to satisfy all conceivable human wants, and occurs as people try to meet the criteria being used to determine allocation.
In the works of Adam Smith, the idea of capitalism is made possible through competition, which creates growth. Although capitalism had not entered mainstream economics at the time of Smith, it is vital to the construction of his ideal society. One of the foundational blocks of capitalism is competition. Smith believed that a prosperous society is one where "everyone should be free to enter and leave the market and change trades as often as he pleases." He believed that the freedom to act in one's self-interest is essential for the success of a capitalist society. The fear arises that if all participants focus on their own goals, society's well-being will be neglected. Smith maintains that despite the concerns of intellectuals, "global trends will hardly be altered if they refrain from pursuing their personal ends." He insisted that the actions of a few participants cannot alter the course of society, and that participants should therefore focus on personal progress, which will result in overall growth of the whole.
Competition between participants, "who are all endeavoring to justle one another out of employment, obliges every man to endeavor to execute his work", and it is this rivalry that, for Smith, drives growth.
The capitalist mode of production refers to the systems of organising production and distribution within capitalist societies. Private money-making in various forms (renting, banking, merchant trade, production for profit and so on) preceded the development of the capitalist mode of production as such.
The term capitalist mode of production is defined by private ownership of the means of production, extraction of surplus value by the owning class for the purpose of capital accumulation, wage-based labor and, at least as far as commodities are concerned, being market-based.
Capitalism in the form of money-making activity has existed in the shape of merchants and money-lenders who acted as intermediaries between consumers and producers engaging in simple commodity production (hence the reference to "merchant capitalism") since the beginnings of civilisation. What is specific about the "capitalist mode of production" is that most of the inputs and outputs of production are supplied through the market (i.e. they are commodities) and essentially all production is in this mode. By contrast, in flourishing feudalism most or all of the factors of production, including labor, are owned by the feudal ruling class outright, and the products may also be consumed without a market of any kind, as production is for use within the feudal social unit and for limited trade. This has the important consequence that, under capitalism, the whole organisation of the production process is reshaped and re-organised to conform with economic rationality as bounded by capitalism, which is expressed in price relationships between inputs and outputs (wages, non-labor factor costs, sales and profits) rather than the larger rational context faced by society overall—that is, the whole process is organised and re-shaped in order to conform to "commercial logic". Essentially, capital accumulation comes to define economic rationality in capitalist production.
A society, region or nation is capitalist if the predominant source of incomes and products being distributed is capitalist activity; even so, this does not necessarily mean that the capitalist mode of production is dominant in that society.
Mixed economies rely on the state to provide some goods or services, while the free market produces and maintains the rest.
Government agencies regulate the standards of service in many industries, such as airlines and broadcasting, as well as financing a wide range of programs. In addition, the government regulates the flow of capital and uses financial tools such as the interest rate to control such factors as inflation and unemployment.
In capitalist economic structures, supply and demand is an economic model of price determination in a market. It postulates that in a perfectly competitive market, the unit price for a particular good will vary until it settles at a point where the quantity demanded by consumers (at the current price) will equal the quantity supplied by producers (at the current price), resulting in an economic equilibrium for price and quantity.
The "basic laws" of supply and demand, as described by David Besanko and Ronald Braeutigam, are the following four:
A supply schedule is a table that shows the relationship between the price of a good and the quantity supplied.
A demand schedule, depicted graphically as the demand curve, represents the amount of some goods that buyers are willing and able to purchase at various prices, assuming all determinants of demand other than the price of the good in question, such as income, tastes and preferences, the price of substitute goods and the price of complementary goods, remain the same. According to the law of demand, the demand curve is almost always represented as downward sloping, meaning that as price decreases, consumers will buy more of the good.
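As a purely illustrative example of such schedules (the prices and quantities below are assumed for the illustration and are not taken from any cited source), a supply schedule and a demand schedule for the same hypothetical good might be tabulated together as follows:

Price    Quantity demanded    Quantity supplied
10       80                   40
15       70                   50
20       60                   60
25       50                   70
30       40                   80

Reading down the columns, quantity demanded falls and quantity supplied rises as the price increases; at a price of 20 the two quantities coincide, which is the equilibrium discussed below.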
Just like the supply curves reflect marginal cost curves, demand curves are determined by marginal utility curves.
In the context of supply and demand, economic equilibrium refers to a state where economic forces such as supply and demand are balanced and in the absence of external influences the (equilibrium) values of economic variables will not change. For example, in the standard text-book model of perfect competition equilibrium occurs at the point at which quantity demanded and quantity supplied are equal. Market equilibrium, in this case, refers to a condition where a market price is established through competition such that the amount of goods or services sought by buyers is equal to the amount of goods or services produced by sellers. This price is often called the competitive price or market clearing price and will tend not to change unless demand or supply changes.
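A minimal computational sketch of how a market-clearing price can be found, using linear curves consistent with the illustrative schedule above (all coefficients and function names are assumptions made for this example, not taken from any source), might look like this:

# Illustrative linear curves: Qd = 100 - 2P and Qs = 20 + 2P (coefficients assumed).

def quantity_demanded(price, intercept=100.0, slope=2.0):
    # Law of demand: quantity demanded falls as the price rises.
    return intercept - slope * price

def quantity_supplied(price, intercept=20.0, slope=2.0):
    # Quantity supplied rises as the price rises.
    return intercept + slope * price

def clearing_price(d_intercept=100.0, d_slope=2.0, s_intercept=20.0, s_slope=2.0):
    # Setting Qd = Qs and solving for P gives the market-clearing (equilibrium) price.
    return (d_intercept - s_intercept) / (d_slope + s_slope)

p_star = clearing_price()            # 20.0
q_star = quantity_demanded(p_star)   # 60.0; quantity_supplied(p_star) is also 60.0
print(p_star, q_star)

# If demand increases (intercept shifts from 100 to 120) while supply is unchanged,
# both the equilibrium price and quantity rise - the behaviour described by the
# first of the "basic laws" above.
p_shifted = clearing_price(d_intercept=120.0)                     # 25.0
print(p_shifted, quantity_demanded(p_shifted, intercept=120.0))   # 25.0 70.0

Any change in demand or supply simply moves this computed equilibrium, which is why the market price "will tend not to change unless demand or supply changes".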
Partial equilibrium, as the name suggests, takes into consideration only a part of the market in order to attain equilibrium. Jain, following George Stigler, proposes: "A partial equilibrium is one which is based on only a restricted range of data, a standard example is price of a single product, the prices of all other products being held fixed during the analysis".
According to Hamid S. Hosseini, the "power of supply and demand" was discussed to some extent by several early Muslim scholars, such as fourteenth century Mamluk scholar Ibn Taymiyyah, who wrote: "If desire for goods increases while its availability decreases, its price rises. On the other hand, if availability of the good increases and the desire for it decreases, the price comes down".
John Locke's 1691 work Some Considerations on the Consequences of the Lowering of Interest and the Raising of the Value of Money includes an early and clear description of supply and demand and their relationship. In this description, demand is rent: "The price of any commodity rises or falls by the proportion of the number of buyer and sellers" and "that which regulates the price... [of goods] is nothing else but their quantity in proportion to their rent".
David Ricardo titled one chapter of his 1817 work Principles of Political Economy and Taxation "On the Influence of Demand and Supply on Price". In Principles of Political Economy and Taxation, Ricardo more rigorously laid down the idea of the assumptions that were used to build his ideas of supply and demand.
In his 1870 essay "On the Graphical Representation of Supply and Demand", Fleeming Jenkin in the course of "introduc[ing] the diagrammatic method into the English economic literature" published the first drawing of supply and demand curves therein, including comparative statics from a shift of supply or demand and application to the labor market. The model was further developed and popularized by Alfred Marshall in the 1890 textbook Principles of Economics.
There are many variants of capitalism in existence that differ according to country and region. They vary in their institutional makeup and by their economic policies. The common features among all the different forms of capitalism are that they are predominantly based on the private ownership of the means of production and the production of goods and services for profit; the market-based allocation of resources; and the accumulation of capital.
They include advanced capitalism, corporate capitalism, finance capitalism, free-market capitalism, mercantilism, social capitalism, state capitalism and welfare capitalism. Other theoretical variants of capitalism include anarcho-capitalism, community capitalism, humanistic capitalism, neo-capitalism, state monopoly capitalism, and technocapitalism.
Advanced capitalism is the situation that pertains to a society in which the capitalist model has been integrated and developed deeply and extensively for a prolonged period. Various writers identify Antonio Gramsci as an influential early theorist of advanced capitalism, even if he did not use the term himself. In his writings, Gramsci sought to explain how capitalism had adapted to avoid the revolutionary overthrow that had seemed inevitable in the 19th century. At the heart of his explanation was the decline of raw coercion as a tool of class power, replaced by use of civil society institutions to manipulate public ideology in the capitalists' favour.
Jürgen Habermas has been a major contributor to the analysis of advanced-capitalistic societies. Habermas observed four general features that characterise advanced capitalism:
Corporate capitalism is a free or mixed-market capitalist economy characterized by the dominance of hierarchical, bureaucratic corporations.
Finance capitalism is the subordination of processes of production to the accumulation of money profits in a financial system. In their critique of capitalism, Marxism and Leninism both emphasise the role of finance capital as the determining and ruling-class interest in capitalist society, particularly in the latter stages.
Rudolf Hilferding is credited with first bringing the term finance capitalism into prominence through Finance Capital, his 1910 study of the links between German trusts, banks and monopolies—a study subsumed by Vladimir Lenin into Imperialism, the Highest Stage of Capitalism (1917), his analysis of the imperialist relations of the great world powers. Lenin concluded that the banks at that time operated as "the chief nerve centres of the whole capitalist system of national economy". For the Comintern (founded in 1919), the phrase "dictatorship of finance capitalism" became a regular one.
Fernand Braudel would later point to two earlier periods when finance capitalism had emerged in human history—with the Genoese in the 16th century and with the Dutch in the 17th and 18th centuries—although at those points it developed from commercial capitalism. Giovanni Arrighi extended Braudel's analysis to suggest that a predominance of finance capitalism is a recurring, long-term phenomenon, whenever a previous phase of commercial/industrial capitalist expansion reaches a plateau.
A capitalist free-market economy is an economic system where prices for goods and services are set entirely by the forces of supply and demand and are expected, by its adherents, to reach their point of equilibrium without intervention by government policy. It typically entails support for highly competitive markets and private ownership of the means of production. Laissez-faire capitalism is a more extensive form of this free-market economy, but one in which the role of the state is limited to protecting property rights. In anarcho-capitalist theory, property rights are protected by private firms and market-generated law. According to anarcho-capitalists, this entails property rights without statutory law through market-generated tort, contract and property law, and self-sustaining private industry.
Fernand Braudel argued that free market exchange and capitalism are to some degree opposed; free market exchange involves transparent public transactions and a large number of equal competitors, while capitalism involves a small number of participants using their capital to control the market via private transactions, control of information, and limitation of competition.
Mercantilism is a nationalist form of early capitalism that came into existence approximately in the late 16th century. It is characterized by the intertwining of national business interests with state-interest and imperialism. Consequently, the state apparatus is used to advance national business interests abroad. An example of this is colonists living in America who were only allowed to trade with and purchase goods from their respective mother countries (e.g., Britain, France and Portugal). Mercantilism was driven by the belief that the wealth of a nation is increased through a positive balance of trade with other nations—it corresponds to the phase of capitalist development sometimes called the primitive accumulation of capital.
A social market economy is a free-market or mixed-market capitalist system, sometimes classified as a coordinated market economy, where government intervention in price formation is kept to a minimum, but the state provides significant services in areas such as social security, health care, unemployment benefits and the recognition of labor rights through national collective bargaining arrangements.
This model is prominent in Western and Northern European countries as well as Japan, albeit in slightly different configurations. The vast majority of enterprises are privately owned in this economic model.
Rhine capitalism is the contemporary model of capitalism and adaptation of the social market model that exists in continental Western Europe today.
State capitalism is a capitalist market economy dominated by state-owned enterprises, where the state enterprises are organized as commercial, profit-seeking businesses. The designation has been used broadly throughout the 20th century to designate a number of different economic forms, ranging from state-ownership in market economies to the command economies of the former Eastern Bloc. According to Aldo Musacchio, a professor at Harvard Business School, state capitalism is a system in which governments, whether democratic or autocratic, exercise a widespread influence on the economy either through direct ownership or various subsidies. Musacchio notes a number of differences between today's state capitalism and its predecessors. In his opinion, gone are the days when governments appointed bureaucrats to run companies: the world's largest state-owned enterprises are now traded on the public markets and kept in good health by large institutional investors. Contemporary state capitalism is associated with the East Asian model of capitalism, dirigisme and the economy of Norway. Alternatively, Merriam-Webster defines state capitalism as "an economic system in which private capitalism is modified by a varying degree of government ownership and control".
In Socialism: Utopian and Scientific, Friedrich Engels argued that state-owned enterprises would characterize the final stage of capitalism, consisting of ownership and management of large-scale production and communication by the bourgeois state. In his writings, Vladimir Lenin characterized the economy of Soviet Russia as state capitalist, believing state capitalism to be an early step toward the development of socialism.
Some economists and left-wing academics including Richard D. Wolff and Noam Chomsky, as well as many Marxist philosophers and revolutionaries such as Raya Dunayevskaya and C.L.R. James, argue that the economies of the former Soviet Union and Eastern Bloc represented a form of state capitalism because their internal organization within enterprises and the system of wage labor remained intact.
The term is not used by Austrian School economists to describe state ownership of the means of production. The economist Ludwig von Mises argued that the designation of state capitalism was a new label for the old labels of state socialism and planned economy and differed only in non-essentials from these earlier designations.
Welfare capitalism is capitalism that includes social welfare policies. Today, welfare capitalism is most often associated with the models of capitalism found in Central Mainland and Northern Europe such as the Nordic model, social market economy and Rhine capitalism. In some cases, welfare capitalism exists within a mixed economy, but welfare states can and do exist independently of policies common to mixed economies such as state interventionism and extensive regulation.
A mixed economy is a largely market-based capitalist economy consisting of both private and public ownership of the means of production and economic interventionism through macroeconomic policies intended to correct market failures, reduce unemployment and keep inflation low. The degree of intervention in markets varies among different countries. Some mixed economies such as France under dirigisme also featured a degree of indirect economic planning over a largely capitalist-based economy.
Most modern capitalist economies are defined as mixed economies to some degree; however, French economist Thomas Piketty states that capitalist economies might shift to a much more laissez-faire approach in the near future.
Eco-capitalism, also known as "environmental capitalism" or (sometimes) "green capitalism", is the view that capital exists in nature as "natural capital" (ecosystems that have ecological yield) on which all wealth depends. Therefore, governments should use market-based policy-instruments (such as a carbon tax) to resolve environmental problems.
The term "Blue Greens" is often applied to those who espouse eco-capitalism. Eco-capitalism can be thought of as the right-wing equivalent to Red Greens.
Sustainable capitalism is a conceptual form of capitalism based upon sustainable practices that seek to preserve humanity and the planet, while reducing externalities and bearing a resemblance to capitalist economic policy. A capitalistic economy must expand to survive and find new markets to support this expansion. Capitalist systems are often destructive to the environment as well as to certain individuals without access to proper representation. Sustainability, however, implies quite the opposite: not only a continuation, but a replenishing of resources. Sustainability is often thought of as related to environmentalism, and sustainable capitalism applies sustainable principles to economic governance and to the social aspects of capitalism as well.
The importance of sustainable capitalism has been recognized more recently, but the concept is not new. Changes to the current economic model would have heavy social, environmental and economic implications and would require the efforts of individuals, as well as the compliance of local, state and federal governments. Controversy surrounds the concept, as it requires an increase in sustainable practices and a marked decrease in current consumptive behaviors.
Such a concept of capitalism is described in Al Gore and David Blood's manifesto for Generation Investment Management, which proposes a long-term political, economic and social structure that would mitigate current threats to the planet and society. According to their manifesto, sustainable capitalism would integrate environmental, social and governance (ESG) aspects into risk assessment in an attempt to limit externalities. Most of the ideas they list are related to economic changes and social aspects, but strikingly few are explicitly related to any environmental policy change.
The accumulation of capital is the process of "making money" or growing an initial sum of money through investment in production. Capitalism is based on the accumulation of capital, whereby financial capital is invested in order to make a profit and then reinvested into further production in a continuous process of accumulation. In Marxian economic theory, this dynamic is called the law of value. Capital accumulation forms the basis of capitalism, where economic activity is structured around the accumulation of capital, defined as investment in order to realize a financial profit. In this context, "capital" is defined as money or a financial asset invested for the purpose of making more money (whether in the form of profit, rent, interest, royalties, capital gain or some other kind of return).
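The continuous reinvestment described in this paragraph can be pictured as simple compounding; the sketch below is illustrative only, and the initial capital, rate of return and number of periods are invented assumptions rather than figures from the text.

```python
# Minimal sketch (illustrative only): capital accumulation through reinvestment.
# Each period the capital stock K earns a return r; the profit r*K is reinvested,
# so the stock grows geometrically: K_t = K_0 * (1 + r) ** t.

def accumulate(initial_capital: float, rate_of_return: float, periods: int) -> float:
    """Reinvest all profit each period and return the final capital stock."""
    capital = initial_capital
    for _ in range(periods):
        profit = capital * rate_of_return  # profit earned this period
        capital += profit                  # profit ploughed back into production
    return capital

# Hypothetical figures: 1,000 units of capital, a 5% return, 10 periods.
print(round(accumulate(1_000.0, 0.05, 10), 2))  # -> 1628.89
```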
In mainstream economics, accounting and Marxian economics, capital accumulation is often equated with investment of profit income or savings, especially in real capital goods. The concentration and centralisation of capital are two of the results of such accumulation. In modern macroeconomics and econometrics, the phrase "capital formation" is often used in preference to "accumulation", though the United Nations Conference on Trade and Development (UNCTAD) refers nowadays to "accumulation". The term "accumulation" is occasionally used in national accounts.
Wage labor refers to the sale of labor to an employer under a formal or informal employment contract. These transactions usually occur in a labor market where wages are market-determined. In Marxist economics, the owners of the means of production and suppliers of capital are generally called capitalists. The description of the role of the capitalist has shifted, first referring to a useless intermediary between producers, then to an employer of producers, and finally to the owners of the means of production. Labor includes all physical and mental human resources, including entrepreneurial capacity and management skills, which are required to produce products and services. Production is the act of making goods or services by applying labor power.
Criticism of capitalism comes from various political and philosophical approaches, including anarchist, socialist, religious and nationalist viewpoints. Of those who oppose it or want to modify it, some believe that capitalism should be removed through revolution while others believe that it should be changed slowly through political reforms.
Prominent critiques of capitalism allege that it is inherently exploitative, alienating, unstable, unsustainable, and economically inefficient—and that it creates massive economic inequality, commodifies people, degrades the environment, is anti-democratic, and leads to an erosion of human rights because of its incentivization of imperialist expansion and war.
Other critics argue that such inequities are due not to the ethically neutral construct of the economic system commonly known as capitalism, but to the ethics of those who shape and execute the system. For example, some contend that Milton Friedman's (human) ethic of 'maximizing shareholder value' creates a harmful form of capitalism, while a Millard Fuller or John Bogle (human) ethic of 'enough' creates a sustainable form. Equitable ethics and unified ethical decision-making are theorized to create a less damaging form of capitalism. | [
{
"paragraph_id": 0,
"text": "Capitalism is an economic system based on the private ownership of the means of production and their operation for profit. Central characteristics of capitalism include capital accumulation, competitive markets, price systems, private property, property rights recognition, voluntary exchange, and wage labor. In a market economy, decision-making and investments are determined by owners of wealth, property, or ability to maneuver capital or production ability in capital and financial markets—whereas prices and the distribution of goods and services are mainly determined by competition in goods and services markets.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Economists, historians, political economists, and sociologists have adopted different perspectives in their analyses of capitalism and have recognized various forms of it in practice. These include laissez-faire or free-market capitalism, anarcho-capitalism, state capitalism, and welfare capitalism. Different forms of capitalism feature varying degrees of free markets, public ownership, obstacles to free competition, and state-sanctioned social policies. The degree of competition in markets and the role of intervention and regulation, as well as the scope of state ownership, vary across different models of capitalism. The extent to which different markets are free and the rules defining private property are matters of politics and policy. Most of the existing capitalist economies are mixed economies that combine elements of free markets with state intervention and in some cases economic planning.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Capitalism in its modern form emerged from agrarianism in 16th century England and mercantilist practices by European countries in the 16th to 18th centuries. The Industrial Revolution of the 18th century established capitalism as a dominant mode of production, characterized by factory work and a complex division of labor. Through the process of globalization, capitalism spread across the world in the 19th and 20th centuries, especially before World War I and after the end of the Cold War. During the 19th century, capitalism was largely unregulated by the state, but became more regulated in the post-World War II period through Keynesianism, followed by a return of more unregulated capitalism starting in the 1980s through neoliberalism.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Market economies have existed under many forms of government and in many different times, places and cultures. Modern industrial capitalist societies developed in Western Europe in a process that led to the Industrial Revolution. Economic growth is a characteristic tendency of capitalist economies.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The term \"capitalist\", meaning an owner of capital, appears earlier than the term \"capitalism\" and dates to the mid-17th century. \"Capitalism\" is derived from capital, which evolved from capitale, a late Latin word based on caput, meaning \"head\"—which is also the origin of \"chattel\" and \"cattle\" in the sense of movable property (only much later to refer only to livestock). Capitale emerged in the 12th to 13th centuries to refer to funds, stock of merchandise, sum of money or money carrying interest. By 1283, it was used in the sense of the capital assets of a trading firm and was often interchanged with other words—wealth, money, funds, goods, assets, property and so on.",
"title": "Etymology"
},
{
"paragraph_id": 5,
"text": "The Hollantse (German: holländische) Mercurius uses \"capitalists\" in 1633 and 1654 to refer to owners of capital. In French, Étienne Clavier referred to capitalistes in 1788, four years before its first recorded English usage by Arthur Young in his work Travels in France (1792). In his Principles of Political Economy and Taxation (1817), David Ricardo referred to \"the capitalist\" many times. English poet Samuel Taylor Coleridge used \"capitalist\" in his work Table Talk (1823). Pierre-Joseph Proudhon used the term in his first work, What is Property? (1840), to refer to the owners of capital. Benjamin Disraeli used the term in his 1845 work Sybil.",
"title": "Etymology"
},
{
"paragraph_id": 6,
"text": "The initial use of the term \"capitalism\" in its modern sense is attributed to Louis Blanc in 1850 (\"What I call 'capitalism' that is to say the appropriation of capital by some to the exclusion of others\") and Pierre-Joseph Proudhon in 1861 (\"Economic and social regime in which capital, the source of income, does not generally belong to those who make it work through their labor\"). Karl Marx frequently referred to the \"capital\" and to the \"capitalist mode of production\" in Das Kapital (1867). Marx did not use the form capitalism but instead used capital, capitalist and capitalist mode of production, which appear frequently. Due to the word being coined by socialist critics of capitalism, economist and historian Robert Hessen stated that the term \"capitalism\" itself is a term of disparagement and a misnomer for economic individualism. Bernard Harcourt agrees with the statement that the term is a misnomer, adding that it misleadingly suggests that there is such as a thing as \"capital\" that inherently functions in certain ways and is governed by stable economic laws of its own.",
"title": "Etymology"
},
{
"paragraph_id": 7,
"text": "In the English language, the term \"capitalism\" first appears, according to the Oxford English Dictionary (OED), in 1854, in the novel The Newcomes by novelist William Makepeace Thackeray, where the word meant \"having ownership of capital\". Also according to the OED, Carl Adolph Douai, a German American socialist and abolitionist, used the term \"private capitalism\" in 1863.",
"title": "Etymology"
},
{
"paragraph_id": 8,
"text": "There is no universally agreed upon definition of capitalism; it is unclear whether or not capitalism characterizes an entire society, a specific type of social order, or crucial components or elements of a society. Societies officially founded in opposition to capitalism (such as the Soviet Union) have sometimes been argued to actually exhibit characteristics of capitalism. Nancy Fraser describes usage of the term \"capitalism\" by many authors as \"mainly rhetorical, functioning less as an actual concept than as a gesture toward the need for a concept\". Scholars who are uncritical of capitalism rarely actually use the term \"capitalism\". Some doubt that the term \"capitalism\" possesses valid scientific dignity, and it is generally not discussed in mainstream economics, with economist Daron Acemoglu suggesting that the term \"capitalism\" should be abandoned entirely. Consequently, understanding of the concept of capitalism tends to be heavily influenced by opponents of capitalism and by the followers and critics of Karl Marx.",
"title": "Definition"
},
{
"paragraph_id": 9,
"text": "Capitalism, in its modern form, can be traced to the emergence of agrarian capitalism and mercantilism in the early Renaissance, in city-states like Florence. Capital has existed incipiently on a small scale for centuries in the form of merchant, renting and lending activities and occasionally as small-scale industry with some wage labor. Simple commodity exchange and consequently simple commodity production, which is the initial basis for the growth of capital from trade, have a very long history. During the Islamic Golden Age, Arabs promulgated capitalist economic policies such as free trade and banking. Their use of Indo-Arabic numerals facilitated bookkeeping. These innovations migrated to Europe through trade partners in cities such as Venice and Pisa. Italian mathematicians traveled the Mediterranean talking to Arab traders and returned to popularize the use of Indo-Arabic numerals in Europe.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The economic foundations of the feudal agricultural system began to shift substantially in 16th-century England as the manorial system had broken down and land began to become concentrated in the hands of fewer landlords with increasingly large estates. Instead of a serf-based system of labor, workers were increasingly employed as part of a broader and expanding money-based economy. The system put pressure on both landlords and tenants to increase the productivity of agriculture to make profit; the weakened coercive power of the aristocracy to extract peasant surpluses encouraged them to try better methods, and the tenants also had incentive to improve their methods in order to flourish in a competitive labor market. Terms of rent for land were becoming subject to economic market forces rather than to the previous stagnant system of custom and feudal obligation.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The economic doctrine prevailing from the 16th to the 18th centuries is commonly called mercantilism. This period, the Age of Discovery, was associated with the geographic exploration of foreign lands by merchant traders, especially from England and the Low Countries. Mercantilism was a system of trade for profit, although commodities were still largely produced by non-capitalist methods. Most scholars consider the era of merchant capitalism and mercantilism as the origin of modern capitalism, although Karl Polanyi argued that the hallmark of capitalism is the establishment of generalized markets for what he called the \"fictitious commodities\", i.e. land, labor and money. Accordingly, he argued that \"not until 1834 was a competitive labor market established in England, hence industrial capitalism as a social system cannot be said to have existed before that date\".",
"title": "History"
},
{
"paragraph_id": 12,
"text": "England began a large-scale and integrative approach to mercantilism during the Elizabethan Era (1558–1603). A systematic and coherent explanation of balance of trade was made public through Thomas Mun's argument England's Treasure by Forraign Trade, or the Balance of our Forraign Trade is The Rule of Our Treasure. It was written in the 1620s and published in 1664.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "European merchants, backed by state controls, subsidies and monopolies, made most of their profits by buying and selling goods. In the words of Francis Bacon, the purpose of mercantilism was \"the opening and well-balancing of trade; the cherishing of manufacturers; the banishing of idleness; the repressing of waste and excess by sumptuary laws; the improvement and husbanding of the soil; the regulation of prices...\".",
"title": "History"
},
{
"paragraph_id": 14,
"text": "After the period of the proto-industrialization, the British East India Company and the Dutch East India Company, after massive contributions from the Mughal Bengal, inaugurated an expansive era of commerce and trade. These companies were characterized by their colonial and expansionary powers given to them by nation-states. During this era, merchants, who had traded under the previous stage of mercantilism, invested capital in the East India Companies and other colonies, seeking a return on investment.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In the mid-18th century a group of economic theorists, led by David Hume (1711–1776) and Adam Smith (1723–1790), challenged fundamental mercantilist doctrines—such as the belief that the world's wealth remained constant and that a state could only increase its wealth at the expense of another state.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "During the Industrial Revolution, industrialists replaced merchants as a dominant factor in the capitalist system and effected the decline of the traditional handicraft skills of artisans, guilds and journeymen. Industrial capitalism marked the development of the factory system of manufacturing, characterized by a complex division of labor between and within work process and the routine of work tasks; and eventually established the domination of the capitalist mode of production.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Industrial Britain eventually abandoned the protectionist policy formerly prescribed by mercantilism. In the 19th century, Richard Cobden (1804–1865) and John Bright (1811–1889), who based their beliefs on the Manchester School, initiated a movement to lower tariffs. In the 1840s Britain adopted a less protectionist policy, with the 1846 repeal of the Corn Laws and the 1849 repeal of the Navigation Acts. Britain reduced tariffs and quotas, in line with David Ricardo's advocacy of free trade.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Broader processes of globalization carried capitalism across the world. By the beginning of the nineteenth century, a series of loosely connected market systems had come together as a relatively integrated global system, in turn intensifying processes of economic and other globalization. Late in the 20th century, capitalism overcame a challenge by centrally-planned economies and is now the encompassing system worldwide, with the mixed economy as its dominant form in the industrialized Western world.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Industrialization allowed cheap production of household items using economies of scale, while rapid population growth created sustained demand for commodities. The imperialism of the 18th-century decisively shaped globalization.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "After the First and Second Opium Wars (1839–60) and the completion of the British conquest of India by 1858, vast populations of Asia became consumers of European exports. Europeans colonized areas of sub-Saharan Africa and the Pacific islands. Colonisation by Europeans, notably of sub-Saharan Africa, yielded valuable natural resources such as rubber, diamonds and coal and helped fuel trade and investment between the European imperial powers, their colonies and the United States:",
"title": "History"
},
{
"paragraph_id": 21,
"text": "The inhabitant of London could order by telephone, sipping his morning tea, the various products of the whole earth, and reasonably expect their early delivery upon his doorstep. Militarism and imperialism of racial and cultural rivalries were little more than the amusements of his daily newspaper. What an extraordinary episode in the economic progress of man was that age which came to an end in August 1914.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "From the 1870s to the early 1920s, the global financial system was mainly tied to the gold standard. The United Kingdom first formally adopted this standard in 1821. Soon to follow were Canada in 1853, Newfoundland in 1865, the United States and Germany (de jure) in 1873. New technologies, such as the telegraph, the transatlantic cable, the radiotelephone, the steamship and railways allowed goods and information to move around the world to an unprecedented degree.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "In the United States, the term \"capitalist\" primarily referred to powerful businessmen until the 1920s due to widespread societal skepticism and criticism of capitalism and its most ardent supporters.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Contemporary capitalist societies developed in the West from 1950 to the present and this type of system continues throughout the world—relevant examples started in the United States after the 1950s, France after the 1960s, Spain after the 1970s, Poland after 2015, and others. At this stage capitalist markets are considered developed and characterized by developed private and public markets for equity and debt, a high standard of living (as characterized by the World Bank and the IMF), large institutional investors and a well-funded banking system. A significant managerial class has emerged and decides on a significant proportion of investments and other decisions. A different future than that envisioned by Marx has started to emerge—explored and described by Anthony Crosland in the United Kingdom in his 1956 book The Future of Socialism and by John Kenneth Galbraith in North America in his 1958 book The Affluent Society, 90 years after Marx's research on the state of capitalism in 1867.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "The postwar boom ended in the late 1960s and early 1970s and the economic situation grew worse with the rise of stagflation. Monetarism, a modification of Keynesianism that is more compatible with laissez-faire analyses, gained increasing prominence in the capitalist world, especially under the years in office of Ronald Reagan in the United States (1981–1989) and of Margaret Thatcher in the United Kingdom (1979–1990). Public and political interest began shifting away from the so-called collectivist concerns of Keynes's managed capitalism to a focus on individual choice, called \"remarketized capitalism\".",
"title": "History"
},
{
"paragraph_id": 26,
"text": "The end of the Cold War and the dissolution of the Soviet Union allowed for capitalism to become a truly global system in a way not seen since before World War I. The development of the neoliberal global economy would have been impossible without the fall of communism.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "Harvard Kennedy School economist Dani Rodrik distinguishes between three historical variants of capitalism:",
"title": "History"
},
{
"paragraph_id": 28,
"text": "The relationship between democracy and capitalism is a contentious area in theory and in popular political movements. The extension of adult-male suffrage in 19th-century Britain occurred along with the development of industrial capitalism and representative democracy became widespread at the same time as capitalism, leading capitalists to posit a causal or mutual relationship between them. However, according to some authors in the 20th-century, capitalism also accompanied a variety of political formations quite distinct from liberal democracies, including fascist regimes, absolute monarchies and single-party states. Democratic peace theory asserts that democracies seldom fight other democracies, but others suggest this may be because of political similarity or stability, rather than because they are \"democratic\" or \"capitalist\". Critics argue that though economic growth under capitalism has led to democracy, it may not do so in the future as authoritarian régimes have been able to manage economic growth using some of capitalism's competitive principles without making concessions to greater political freedom.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "Political scientists Torben Iversen and David Soskice see democracy and capitalism as mutually supportive. Robert Dahl argued in On Democracy that capitalism was beneficial for democracy because economic growth and a large middle class were good for democracy. He also argued that a market economy provided a substitute for government control of the economy, which reduces the risks of tyranny and authoritarianism.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "In his book The Road to Serfdom (1944), Friedrich Hayek (1899–1992) asserted that the free-market understanding of economic freedom as present in capitalism is a requisite of political freedom. He argued that the market mechanism is the only way of deciding what to produce and how to distribute the items without using coercion. Milton Friedman and Ronald Reagan also promoted this view. Friedman claimed that centralized economic operations are always accompanied by political repression. In his view, transactions in a market economy are voluntary and that the wide diversity that voluntary activity permits is a fundamental threat to repressive political leaders and greatly diminishes their power to coerce. Some of Friedman's views were shared by John Maynard Keynes, who believed that capitalism was vital for freedom to survive and thrive. Freedom House, an American think-tank that conducts international research on, and advocates for, democracy, political freedom and human rights, has argued that \"there is a high and statistically significant correlation between the level of political freedom as measured by Freedom House and economic freedom as measured by the Wall Street Journal/Heritage Foundation survey\".",
"title": "History"
},
{
"paragraph_id": 31,
"text": "In Capital in the Twenty-First Century (2013), Thomas Piketty of the Paris School of Economics asserted that inequality is the inevitable consequence of economic growth in a capitalist economy and the resulting concentration of wealth can destabilize democratic societies and undermine the ideals of social justice upon which they are built.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "States with capitalistic economic systems have thrived under political regimes deemed to be authoritarian or oppressive. Singapore has a successful open market economy as a result of its competitive, business-friendly climate and robust rule of law. Nonetheless, it often comes under fire for its style of government which, though democratic and consistently one of the least corrupt, operates largely under a one-party rule. Furthermore, it does not vigorously defend freedom of expression as evidenced by its government-regulated press, and its penchant for upholding laws protecting ethnic and religious harmony, judicial dignity and personal reputation. The private (capitalist) sector in the People's Republic of China has grown exponentially and thrived since its inception, despite having an authoritarian government. Augusto Pinochet's rule in Chile led to economic growth and high levels of inequality by using authoritarian means to create a safe environment for investment and capitalism. Similarly, Suharto's authoritarian reign and extirpation of the Communist Party of Indonesia allowed for the expansion of capitalism in Indonesia.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "The term \"capitalism\" in its modern sense is often attributed to Karl Marx. In his Das Kapital, Marx analyzed the \"capitalist mode of production\" using a method of understanding today known as Marxism. However, Marx himself rarely used the term \"capitalism\" while it was used twice in the more political interpretations of his work, primarily authored by his collaborator Friedrich Engels. In the 20th century, defenders of the capitalist system often replaced the term \"capitalism\" with phrases such as free enterprise and private enterprise and replaced \"capitalist\" with rentier and investor in reaction to the negative connotations associated with capitalism.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "In general, capitalism as an economic system and mode of production can be summarized by the following:",
"title": "Characteristics"
},
{
"paragraph_id": 35,
"text": "In free market and laissez-faire forms of capitalism, markets are used most extensively with minimal or no regulation over the pricing mechanism. In mixed economies, which are almost universal today, markets continue to play a dominant role, but they are regulated to some extent by the state in order to correct market failures, promote social welfare, conserve natural resources, fund defense and public safety or other rationale. In state capitalist systems, markets are relied upon the least, with the state relying heavily on state-owned enterprises or indirect economic planning to accumulate capital.",
"title": "Characteristics"
},
{
"paragraph_id": 36,
"text": "Competition arises when more than one producer is trying to sell the same or similar products to the same buyers. Adherents of the capitalist theory believe that competition leads to innovation and more affordable prices. Monopolies or cartels can develop, especially if there is no competition. A monopoly occurs when a firm has exclusivity over a market. Hence, the firm can engage in rent seeking behaviors such as limiting output and raising prices because it has no fear of competition.",
"title": "Characteristics"
},
{
"paragraph_id": 37,
"text": "Governments have implemented legislation for the purpose of preventing the creation of monopolies and cartels. In 1890, the Sherman Antitrust Act became the first legislation passed by the United States Congress to limit monopolies.",
"title": "Characteristics"
},
{
"paragraph_id": 38,
"text": "Wage labor, usually referred to as paid work, paid employment, or paid labor, refers to the socioeconomic relationship between a worker and an employer in which the worker sells their labor power under a formal or informal employment contract. These transactions usually occur in a labor market where wages or salaries are market-determined.",
"title": "Characteristics"
},
{
"paragraph_id": 39,
"text": "In exchange for the money paid as wages (usual for short-term work-contracts) or salaries (in permanent employment contracts), the work product generally becomes the undifferentiated property of the employer. A wage laborer is a person whose primary means of income is from the selling of their labor in this way.",
"title": "Characteristics"
},
{
"paragraph_id": 40,
"text": "The profit motive, in the theory of capitalism, is the desire to earn income in the form of profit. Stated differently, the reason for a business's existence is to turn a profit. The profit motive functions according to rational choice theory, or the theory that individuals tend to pursue what is in their own best interests. Accordingly, businesses seek to benefit themselves and/or their shareholders by maximizing profit.",
"title": "Characteristics"
},
{
"paragraph_id": 41,
"text": "In capitalist theoretics, the profit motive is said to ensure that resources are being allocated efficiently. For instance, Austrian economist Henry Hazlitt explains: \"If there is no profit in making an article, it is a sign that the labor and capital devoted to its production are misdirected: the value of the resources that must be used up in making the article is greater than the value of the article itself\".",
"title": "Characteristics"
},
{
"paragraph_id": 42,
"text": "Socialist theorists note that, unlike merchantilists, capitalists accumulate their profits while expecting their profit rates to remain the same. This causes problems as earnings in the rest of society do not increase in the same proportion.",
"title": "Characteristics"
},
{
"paragraph_id": 43,
"text": "The relationship between the state, its formal mechanisms, and capitalist societies has been debated in many fields of social and political theory, with active discussion since the 19th century. Hernando de Soto is a contemporary Peruvian economist who has argued that an important characteristic of capitalism is the functioning state protection of property rights in a formal property system where ownership and transactions are clearly recorded.",
"title": "Characteristics"
},
{
"paragraph_id": 44,
"text": "According to de Soto, this is the process by which physical assets are transformed into capital, which in turn may be used in many more ways and much more efficiently in the market economy. A number of Marxian economists have argued that the Enclosure Acts in England and similar legislation elsewhere were an integral part of capitalist primitive accumulation and that specific legal frameworks of private land ownership have been integral to the development of capitalism.",
"title": "Characteristics"
},
{
"paragraph_id": 45,
"text": "Private property rights are not absolute, as in many countries the state has the power to seize private property, typically for public use, under the powers of eminent domain.",
"title": "Characteristics"
},
{
"paragraph_id": 46,
"text": "In capitalist economics, market competition is the rivalry among sellers trying to achieve such goals as increasing profits, market share and sales volume by varying the elements of the marketing mix: price, product, distribution and promotion. Merriam-Webster defines competition in business as \"the effort of two or more parties acting independently to secure the business of a third party by offering the most favourable terms\". It was described by Adam Smith in The Wealth of Nations (1776) and later economists as allocating productive resources to their most highly valued uses and encouraging efficiency. Smith and other classical economists before Antoine Augustine Cournot were referring to price and non-price rivalry among producers to sell their goods on best terms by bidding of buyers, not necessarily to a large number of sellers nor to a market in final equilibrium. Competition is widespread throughout the market process. It is a condition where \"buyers tend to compete with other buyers, and sellers tend to compete with other sellers\". In offering goods for exchange, buyers competitively bid to purchase specific quantities of specific goods which are available, or might be available if sellers were to choose to offer such goods. Similarly, sellers bid against other sellers in offering goods on the market, competing for the attention and exchange resources of buyers. Competition results from scarcity, as it is not possible to satisfy all conceivable human wants, and occurs as people try to meet the criteria being used to determine allocation.",
"title": "Characteristics"
},
{
"paragraph_id": 47,
"text": "In the works of Adam Smith, the idea of capitalism is made possible through competition which creates growth. Although capitalism has not entered mainstream economics at the time of Smith, it is vital to the construction of his ideal society. One of the foundational blocks of capitalism is competition. Smith believed that a prosperous society is one where \"everyone should be free to enter and leave the market and change trades as often as he pleases.\" He believed that the freedom to act in one's self-interest is essential for the success of a capitalist society. The fear arises that if all participants focus on their own goals, society's well-being will be water under the bridge. Smith maintains that despite the concerns of intellectuals, \"global trends will hardly be altered if they refrain from pursuing their personal ends.\" He insisted that the actions of a few participants cannot alter the course of society. Instead, Smith maintained that they should focus on personal progress instead and that this will result in overall growth to the whole.",
"title": "Characteristics"
},
{
"paragraph_id": 48,
"text": "Competition between participants, \"who are all endeavoring to justle one another out of employment, obliges every man to endeavor to execute his work\" through competition towards growth.",
"title": "Characteristics"
},
{
"paragraph_id": 49,
"text": "Economic growth is a characteristic tendency of capitalist economies.",
"title": "Characteristics"
},
{
"paragraph_id": 50,
"text": "The capitalist mode of production refers to the systems of organising production and distribution within capitalist societies. Private money-making in various forms (renting, banking, merchant trade, production for profit and so on) preceded the development of the capitalist mode of production as such.",
"title": "Characteristics"
},
{
"paragraph_id": 51,
"text": "The term capitalist mode of production is defined by private ownership of the means of production, extraction of surplus value by the owning class for the purpose of capital accumulation, wage-based labor and, at least as far as commodities are concerned, being market-based.",
"title": "Characteristics"
},
{
"paragraph_id": 52,
"text": "Capitalism in the form of money-making activity has existed in the shape of merchants and money-lenders who acted as intermediaries between consumers and producers engaging in simple commodity production (hence the reference to \"merchant capitalism\") since the beginnings of civilisation. What is specific about the \"capitalist mode of production\" is that most of the inputs and outputs of production are supplied through the market (i.e. they are commodities) and essentially all production is in this mode. By contrast, in flourishing feudalism most or all of the factors of production, including labor, are owned by the feudal ruling class outright and the products may also be consumed without a market of any kind, it is production for use within the feudal social unit and for limited trade. This has the important consequence that, under capitalism, the whole organisation of the production process is reshaped and re-organised to conform with economic rationality as bounded by capitalism, which is expressed in price relationships between inputs and outputs (wages, non-labor factor costs, sales and profits) rather than the larger rational context faced by society overall—that is, the whole process is organised and re-shaped in order to conform to \"commercial logic\". Essentially, capital accumulation comes to define economic rationality in capitalist production.",
"title": "Characteristics"
},
{
"paragraph_id": 53,
"text": "A society, region or nation is capitalist if the predominant source of incomes and products being distributed is capitalist activity, but even so this does not yet mean necessarily that the capitalist mode of production is dominant in that society.",
"title": "Characteristics"
},
{
"paragraph_id": 54,
"text": "Mixed economies rely on the nation they are in to provide some goods or services, while the free market produces and maintains the rest.",
"title": "Characteristics"
},
{
"paragraph_id": 55,
"text": "Government agencies regulate the standards of service in many industries, such as airlines and broadcasting, as well as financing a wide range of programs. In addition, the government regulates the flow of capital and uses financial tools such as the interest rate to control such factors as inflation and unemployment.",
"title": "Characteristics"
},
{
"paragraph_id": 56,
"text": "In capitalist economic structures, supply and demand is an economic model of price determination in a market. It postulates that in a perfectly competitive market, the unit price for a particular good will vary until it settles at a point where the quantity demanded by consumers (at the current price) will equal the quantity supplied by producers (at the current price), resulting in an economic equilibrium for price and quantity.",
"title": "Supply and demand"
},
{
"paragraph_id": 57,
"text": "The \"basic laws\" of supply and demand, as described by David Besanko and Ronald Braeutigam, are the following four:",
"title": "Supply and demand"
},
{
"paragraph_id": 58,
"text": "A supply schedule is a table that shows the relationship between the price of a good and the quantity supplied.",
"title": "Supply and demand"
},
{
"paragraph_id": 59,
"text": "A demand schedule, depicted graphically as the demand curve, represents the amount of some goods that buyers are willing and able to purchase at various prices, assuming all determinants of demand other than the price of the good in question, such as income, tastes and preferences, the price of substitute goods and the price of complementary goods, remain the same. According to the law of demand, the demand curve is almost always represented as downward sloping, meaning that as price decreases, consumers will buy more of the good.",
"title": "Supply and demand"
},
{
"paragraph_id": 60,
"text": "Just like the supply curves reflect marginal cost curves, demand curves are determined by marginal utility curves.",
"title": "Supply and demand"
},
{
"paragraph_id": 61,
"text": "In the context of supply and demand, economic equilibrium refers to a state where economic forces such as supply and demand are balanced and in the absence of external influences the (equilibrium) values of economic variables will not change. For example, in the standard text-book model of perfect competition equilibrium occurs at the point at which quantity demanded and quantity supplied are equal. Market equilibrium, in this case, refers to a condition where a market price is established through competition such that the amount of goods or services sought by buyers is equal to the amount of goods or services produced by sellers. This price is often called the competitive price or market clearing price and will tend not to change unless demand or supply changes.",
"title": "Supply and demand"
},
{
"paragraph_id": 62,
"text": "Partial equilibrium, as the name suggests, takes into consideration only a part of the market to attain equilibrium. Jain proposes (attributed to George Stigler): \"A partial equilibrium is one which is based on only a restricted range of data, a standard example is price of a single product, the prices of all other products being held fixed during the analysis\".",
"title": "Supply and demand"
},
{
"paragraph_id": 63,
"text": "According to Hamid S. Hosseini, the \"power of supply and demand\" was discussed to some extent by several early Muslim scholars, such as fourteenth century Mamluk scholar Ibn Taymiyyah, who wrote: \"If desire for goods increases while its availability decreases, its price rises. On the other hand, if availability of the good increases and the desire for it decreases, the price comes down\".",
"title": "Supply and demand"
},
{
"paragraph_id": 64,
"text": "John Locke's 1691 work Some Considerations on the Consequences of the Lowering of Interest and the Raising of the Value of Money includes an early and clear description of supply and demand and their relationship. In this description, demand is rent: \"The price of any commodity rises or falls by the proportion of the number of buyer and sellers\" and \"that which regulates the price... [of goods] is nothing else but their quantity in proportion to their rent\".",
"title": "Supply and demand"
},
{
"paragraph_id": 65,
"text": "David Ricardo titled one chapter of his 1817 work Principles of Political Economy and Taxation \"On the Influence of Demand and Supply on Price\". In Principles of Political Economy and Taxation, Ricardo more rigorously laid down the idea of the assumptions that were used to build his ideas of supply and demand.",
"title": "Supply and demand"
},
{
"paragraph_id": 66,
"text": "In his 1870 essay \"On the Graphical Representation of Supply and Demand\", Fleeming Jenkin in the course of \"introduc[ing] the diagrammatic method into the English economic literature\" published the first drawing of supply and demand curves therein, including comparative statics from a shift of supply or demand and application to the labor market. The model was further developed and popularized by Alfred Marshall in the 1890 textbook Principles of Economics.",
"title": "Supply and demand"
},
{
"paragraph_id": 67,
"text": "There are many variants of capitalism in existence that differ according to country and region. They vary in their institutional makeup and by their economic policies. The common features among all the different forms of capitalism are that they are predominantly based on the private ownership of the means of production and the production of goods and services for profit; the market-based allocation of resources; and the accumulation of capital.",
"title": "Types"
},
{
"paragraph_id": 68,
"text": "They include advanced capitalism, corporate capitalism, finance capitalism, free-market capitalism, mercantilism, social capitalism, state capitalism and welfare capitalism. Other theoretical variants of capitalism include anarcho-capitalism, community capitalism, humanistic capitalism, neo-capitalism, state monopoly capitalism, and technocapitalism.",
"title": "Types"
},
{
"paragraph_id": 69,
"text": "Advanced capitalism is the situation that pertains to a society in which the capitalist model has been integrated and developed deeply and extensively for a prolonged period. Various writers identify Antonio Gramsci as an influential early theorist of advanced capitalism, even if he did not use the term himself. In his writings, Gramsci sought to explain how capitalism had adapted to avoid the revolutionary overthrow that had seemed inevitable in the 19th century. At the heart of his explanation was the decline of raw coercion as a tool of class power, replaced by use of civil society institutions to manipulate public ideology in the capitalists' favour.",
"title": "Types"
},
{
"paragraph_id": 70,
"text": "Jürgen Habermas has been a major contributor to the analysis of advanced-capitalistic societies. Habermas observed four general features that characterise advanced capitalism:",
"title": "Types"
},
{
"paragraph_id": 71,
"text": "Corporate capitalism is a free or mixed-market capitalist economy characterized by the dominance of hierarchical, bureaucratic corporations.",
"title": "Types"
},
{
"paragraph_id": 72,
"text": "Finance capitalism is the subordination of processes of production to the accumulation of money profits in a financial system. In their critique of capitalism, Marxism and Leninism both emphasise the role of finance capital as the determining and ruling-class interest in capitalist society, particularly in the latter stages.",
"title": "Types"
},
{
"paragraph_id": 73,
"text": "Rudolf Hilferding is credited with first bringing the term finance capitalism into prominence through Finance Capital, his 1910 study of the links between German trusts, banks and monopolies—a study subsumed by Vladimir Lenin into Imperialism, the Highest Stage of Capitalism (1917), his analysis of the imperialist relations of the great world powers. Lenin concluded that the banks at that time operated as \"the chief nerve centres of the whole capitalist system of national economy\". For the Comintern (founded in 1919), the phrase \"dictatorship of finance capitalism\" became a regular one.",
"title": "Types"
},
{
"paragraph_id": 74,
"text": "Fernand Braudel would later point to two earlier periods when finance capitalism had emerged in human history—with the Genoese in the 16th century and with the Dutch in the 17th and 18th centuries—although at those points it developed from commercial capitalism. Giovanni Arrighi extended Braudel's analysis to suggest that a predominance of finance capitalism is a recurring, long-term phenomenon, whenever a previous phase of commercial/industrial capitalist expansion reaches a plateau.",
"title": "Types"
},
{
"paragraph_id": 75,
"text": "A capitalist free-market economy is an economic system where prices for goods and services are set entirely by the forces of supply and demand and are expected, by its adherents, to reach their point of equilibrium without intervention by government policy. It typically entails support for highly competitive markets and private ownership of the means of production. Laissez-faire capitalism is a more extensive form of this free-market economy, but one in which the role of the state is limited to protecting property rights. In anarcho-capitalist theory, property rights are protected by private firms and market-generated law. According to anarcho-capitalists, this entails property rights without statutory law through market-generated tort, contract and property law, and self-sustaining private industry.",
"title": "Types"
},
{
"paragraph_id": 76,
"text": "Fernand Braudel argued that free market exchange and capitalism are to some degree opposed; free market exchange involves transparent public transactions and a large number of equal competitors, while capitalism involves a small number of participants using their capital to control the market via private transactions, control of information, and limitation of competition.",
"title": "Types"
},
{
"paragraph_id": 77,
"text": "Mercantilism is a nationalist form of early capitalism that came into existence approximately in the late 16th century. It is characterized by the intertwining of national business interests with state-interest and imperialism. Consequently, the state apparatus is used to advance national business interests abroad. An example of this is colonists living in America who were only allowed to trade with and purchase goods from their respective mother countries (e.g., Britain, France and Portugal). Mercantilism was driven by the belief that the wealth of a nation is increased through a positive balance of trade with other nations—it corresponds to the phase of capitalist development sometimes called the primitive accumulation of capital.",
"title": "Types"
},
{
"paragraph_id": 78,
"text": "A social market economy is a free-market or mixed-market capitalist system, sometimes classified as a coordinated market economy, where government intervention in price formation is kept to a minimum, but the state provides significant services in areas such as social security, health care, unemployment benefits and the recognition of labor rights through national collective bargaining arrangements.",
"title": "Types"
},
{
"paragraph_id": 79,
"text": "This model is prominent in Western and Northern European countries as well as Japan, albeit in slightly different configurations. The vast majority of enterprises are privately owned in this economic model.",
"title": "Types"
},
{
"paragraph_id": 80,
"text": "Rhine capitalism is the contemporary model of capitalism and adaptation of the social market model that exists in continental Western Europe today.",
"title": "Types"
},
{
"paragraph_id": 81,
"text": "State capitalism is a capitalist market economy dominated by state-owned enterprises, where the state enterprises are organized as commercial, profit-seeking businesses. The designation has been used broadly throughout the 20th century to designate a number of different economic forms, ranging from state-ownership in market economies to the command economies of the former Eastern Bloc. According to Aldo Musacchio, a professor at Harvard Business School, state capitalism is a system in which governments, whether democratic or autocratic, exercise a widespread influence on the economy either through direct ownership or various subsidies. Musacchio notes a number of differences between today's state capitalism and its predecessors. In his opinion, gone are the days when governments appointed bureaucrats to run companies: the world's largest state-owned enterprises are now traded on the public markets and kept in good health by large institutional investors. Contemporary state capitalism is associated with the East Asian model of capitalism, dirigisme and the economy of Norway. Alternatively, Merriam-Webster defines state capitalism as \"an economic system in which private capitalism is modified by a varying degree of government ownership and control\".",
"title": "Types"
},
{
"paragraph_id": 82,
"text": "In Socialism: Utopian and Scientific, Friedrich Engels argued that state-owned enterprises would characterize the final stage of capitalism, consisting of ownership and management of large-scale production and communication by the bourgeois state. In his writings, Vladimir Lenin characterized the economy of Soviet Russia as state capitalist, believing state capitalism to be an early step toward the development of socialism.",
"title": "Types"
},
{
"paragraph_id": 83,
"text": "Some economists and left-wing academics including Richard D. Wolff and Noam Chomsky, as well as many Marxist philosophers and revolutionaries such as Raya Dunayevskaya and C.L.R. James, argue that the economies of the former Soviet Union and Eastern Bloc represented a form of state capitalism because their internal organization within enterprises and the system of wage labor remained intact.",
"title": "Types"
},
{
"paragraph_id": 84,
"text": "The term is not used by Austrian School economists to describe state ownership of the means of production. The economist Ludwig von Mises argued that the designation of state capitalism was a new label for the old labels of state socialism and planned economy and differed only in non-essentials from these earlier designations.",
"title": "Types"
},
{
"paragraph_id": 85,
"text": "Welfare capitalism is capitalism that includes social welfare policies. Today, welfare capitalism is most often associated with the models of capitalism found in Central Mainland and Northern Europe such as the Nordic model, social market economy and Rhine capitalism. In some cases, welfare capitalism exists within a mixed economy, but welfare states can and do exist independently of policies common to mixed economies such as state interventionism and extensive regulation.",
"title": "Types"
},
{
"paragraph_id": 86,
"text": "A mixed economy is a largely market-based capitalist economy consisting of both private and public ownership of the means of production and economic interventionism through macroeconomic policies intended to correct market failures, reduce unemployment and keep inflation low. The degree of intervention in markets varies among different countries. Some mixed economies such as France under dirigisme also featured a degree of indirect economic planning over a largely capitalist-based economy.",
"title": "Types"
},
{
"paragraph_id": 87,
"text": "Most modern capitalist economies are defined as mixed economies to some degree, however French economist Thomas Piketty state that capitalist economies might shift to a much more laissez-faire approach in the near future.",
"title": "Types"
},
{
"paragraph_id": 88,
"text": "Eco-capitalism, also known as \"environmental capitalism\" or (sometimes) \"green capitalism\", is the view that capital exists in nature as \"natural capital\" (ecosystems that have ecological yield) on which all wealth depends. Therefore, governments should use market-based policy-instruments (such as a carbon tax) to resolve environmental problems.",
"title": "Types"
},
{
"paragraph_id": 89,
"text": "The term \"Blue Greens\" is often applied to those who espouse eco-capitalism. Eco-capitalism can be thought of as the right-wing equivalent to Red Greens.",
"title": "Types"
},
{
"paragraph_id": 90,
"text": "Sustainable capitalism is a conceptual form of capitalism based upon sustainable practices that seek to preserve humanity and the planet, while reducing externalities and bearing a resemblance of capitalist economic policy. A capitalistic economy must expand to survive and find new markets to support this expansion. Capitalist systems are often destructive to the environment as well as certain individuals without access to proper representation. However, sustainability provides quite the opposite; it implies not only a continuation, but a replenishing of resources. Sustainability is often thought of to be related to environmentalism, and sustainable capitalism applies sustainable principles to economic governance and social aspects of capitalism as well.",
"title": "Types"
},
{
"paragraph_id": 91,
"text": "The importance of sustainable capitalism has been more recently recognized, but the concept is not new. Changes to the current economic model would have heavy social environmental and economic implications and require the efforts of individuals, as well as compliance of local, state and federal governments. Controversy surrounds the concept as it requires an increase in sustainable practices and a marked decrease in current consumptive behaviors.",
"title": "Types"
},
{
"paragraph_id": 92,
"text": "This is a concept of capitalism described in Al Gore and David Blood's manifesto for the Generation Investment Management to describe a long-term political, economic and social structure which would mitigate current threats to the planet and society. According to their manifesto, sustainable capitalism would integrate the environmental, social and governance (ESG) aspects into risk assessment in attempt to limit externalities. Most of the ideas they list are related to economic changes, and social aspects, but strikingly few are explicitly related to any environmental policy change.",
"title": "Types"
},
{
"paragraph_id": 93,
"text": "The accumulation of capital is the process of \"making money\" or growing an initial sum of money through investment in production. Capitalism is based on the accumulation of capital, whereby financial capital is invested in order to make a profit and then reinvested into further production in a continuous process of accumulation. In Marxian economic theory, this dynamic is called the law of value. Capital accumulation forms the basis of capitalism, where economic activity is structured around the accumulation of capital, defined as investment in order to realize a financial profit. In this context, \"capital\" is defined as money or a financial asset invested for the purpose of making more money (whether in the form of profit, rent, interest, royalties, capital gain or some other kind of return).",
"title": "Capital accumulation"
},
{
"paragraph_id": 94,
"text": "In mainstream economics, accounting and Marxian economics, capital accumulation is often equated with investment of profit income or savings, especially in real capital goods. The concentration and centralisation of capital are two of the results of such accumulation. In modern macroeconomics and econometrics, the phrase \"capital formation\" is often used in preference to \"accumulation\", though the United Nations Conference on Trade and Development (UNCTAD) refers nowadays to \"accumulation\". The term \"accumulation\" is occasionally used in national accounts.",
"title": "Capital accumulation"
},
{
"paragraph_id": 95,
"text": "Wage labor refers to the sale of labor under a formal or informal employment contract to an employer. These transactions usually occur in a labor market where wages are market determined. In Marxist economics, these owners of the means of production and suppliers of capital are generally called capitalists. The description of the role of the capitalist has shifted, first referring to a useless intermediary between producers, then to an employer of producers, and finally to the owners of the means of production. Labor includes all physical and mental human resources, including entrepreneurial capacity and management skills, which are required to produce products and services. Production is the act of making goods or services by applying labor power.",
"title": "Wage labor"
},
{
"paragraph_id": 96,
"text": "Criticism of capitalism comes from various political and philosophical approaches, including anarchist, socialist, religious and nationalist viewpoints. Of those who oppose it or want to modify it, some believe that capitalism should be removed through revolution while others believe that it should be changed slowly through political reforms.",
"title": "Criticism"
},
{
"paragraph_id": 97,
"text": "Prominent critiques of capitalism allege that it is inherently exploitative, alienating, unstable, unsustainable, and economically inefficient—and that it creates massive economic inequality, commodifies people, degrades the environment, is anti-democratic, and leads to an erosion of human rights because of its incentivization of imperialist expansion and war.",
"title": "Criticism"
},
{
"paragraph_id": 98,
"text": "Other critics argue that such inequities are not due to the ethic-neutral construct of the economic system commonly known as capitalism, but to the ethics of those who shape and execute the system. For example, some contend that Milton Friedman's (human) ethic of 'maximizing shareholder value' creates a harmful form of capitalism, while a Millard Fuller or John Bogle (human) ethic of 'enough' creates a sustainable form. Equitable ethics and unified ethical decision-making is theorized to create a less damaging form of capitalism.",
"title": "Criticism"
}
] | Capitalism is an economic system based on the private ownership of the means of production and their operation for profit. Central characteristics of capitalism include capital accumulation, competitive markets, price systems, private property, property rights recognition, voluntary exchange, and wage labor. In a market economy, decision-making and investments are determined by owners of wealth, property, or ability to maneuver capital or production ability in capital and financial markets—whereas prices and the distribution of goods and services are mainly determined by competition in goods and services markets. Economists, historians, political economists, and sociologists have adopted different perspectives in their analyses of capitalism and have recognized various forms of it in practice. These include laissez-faire or free-market capitalism, anarcho-capitalism, state capitalism, and welfare capitalism. Different forms of capitalism feature varying degrees of free markets, public ownership, obstacles to free competition, and state-sanctioned social policies. The degree of competition in markets and the role of intervention and regulation, as well as the scope of state ownership, vary across different models of capitalism. The extent to which different markets are free and the rules defining private property are matters of politics and policy. Most of the existing capitalist economies are mixed economies that combine elements of free markets with state intervention and in some cases economic planning. Capitalism in its modern form emerged from agrarianism in 16th century England and mercantilist practices by European countries in the 16th to 18th centuries. The Industrial Revolution of the 18th century established capitalism as a dominant mode of production, characterized by factory work and a complex division of labor. Through the process of globalization, capitalism spread across the world in the 19th and 20th centuries, especially before World War I and after the end of the Cold War. During the 19th century, capitalism was largely unregulated by the state, but became more regulated in the post-World War II period through Keynesianism, followed by a return of more unregulated capitalism starting in the 1980s through neoliberalism. Market economies have existed under many forms of government and in many different times, places and cultures. Modern industrial capitalist societies developed in Western Europe in a process that led to the Industrial Revolution. Economic growth is a characteristic tendency of capitalist economies. | 2001-10-14T11:16:39Z | 2023-12-31T20:21:38Z | [
"Template:ISBN",
"Template:Wikiquote",
"Template:Political ideologies",
"Template:Authority control",
"Template:Cite magazine",
"Template:Webarchive",
"Template:Clarify",
"Template:Harvnb",
"Template:Refend",
"Template:Commons category",
"Template:Lang",
"Template:Lang-de",
"Template:By whom",
"Template:Reflist",
"Template:Use American English",
"Template:Economics sidebar",
"Template:In Our Time",
"Template:Aspects of capitalism",
"Template:Western culture",
"Template:Economic systems sidebar",
"Template:Page needed",
"Template:Cite news",
"Template:Portal bar",
"Template:Colend",
"Template:Cite book",
"Template:Doi",
"Template:Cite encyclopedia",
"Template:Primary source inline",
"Template:Cite journal",
"Template:Dead link",
"Template:Refbegin",
"Template:Cols",
"Template:Use dmy dates",
"Template:Wiktionary",
"Template:Pp-pc",
"Template:Neoliberalism sidebar",
"Template:When",
"Template:Request quotation",
"Template:Abbr",
"Template:EB1922 Poster",
"Template:Liberalism sidebar",
"Template:Citation",
"Template:Marxist & Communist phraseology",
"Template:Rp",
"Template:Capitalism sidebar",
"Template:Blockquote",
"Template:More citations needed section",
"Template:Cite web",
"Template:Library resources box",
"Template:Short description",
"Template:Main",
"Template:Further",
"Template:Weasel inline",
"Template:Redirect",
"Template:More citations needed",
"Template:Circa",
"Template:Expand section",
"Template:See also",
"Template:About"
] | https://en.wikipedia.org/wiki/Capitalism |
5,420 | Cross ownership | Cross ownership is a method of reinforcing business relationships by owning stock in the companies with which a given company does business. Heavy cross ownership is referred to as circular ownership.
In the US, "cross ownership" also refers to a type of investment in different mass-media properties in one market.
Countries noted to have high levels of cross ownership include:
Positives of cross ownership:
Cross ownership of shares is criticized for:
A major factor in perpetuating cross ownership of shares is a high capital gains tax rate. A company has less incentive to sell cross owned shares if taxes are high because of the immediate reduction in the value of the assets.
For example, suppose a company holds $1000 of stock in another company that was originally purchased for $200. If the capital gains tax rate is 25% (as in Germany), the $800 profit would incur a tax of $200, so selling would immediately reduce the value the company realizes from the holding by $200.
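To make the arithmetic above concrete, here is a minimal illustrative sketch (not from the source); the function name and parameters are assumptions chosen for this example, and the 25% rate is the hypothetical German rate quoted above.

def proceeds_after_capital_gains_tax(market_value, cost_basis, tax_rate):
    """Cash kept after paying capital gains tax on the sale of a cross-owned stake."""
    gain = market_value - cost_basis      # $1000 - $200 = $800 taxable gain
    tax = gain * tax_rate                 # $800 * 0.25 = $200 owed
    return market_value - tax             # $1000 - $200 = $800 kept, $200 below the pre-sale holding value

print(proceeds_after_capital_gains_tax(1000, 200, 0.25))  # 800.0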
Long term cross ownership of shares combined with a high capital tax rate greatly increases periods of asset deflation both in time and in severity.
Cross ownership also refers to a type of media ownership in which one type of communications (say a newspaper) owns or is the sister company of another type of medium (such as a radio or TV station). One example is The New York Times's former ownership of WQXR Radio and the Chicago Tribune's similar relationship with WGN Radio (WGN-AM) and Television (WGN-TV).
The Federal Communications Commission generally does not allow cross ownership, to keep any one license holder from having too much local media ownership, unless the license holder obtains a waiver, such as those News Corporation and the Tribune Company hold in New York.
The mid-1970s cross-ownership guidelines grandfathered already-existing cross ownerships, such as Tribune-WGN, New York Times-WQXR and the New York Daily News ownership of WPIX Television and Radio. | [
{
"paragraph_id": 0,
"text": "Cross ownership is a method of reinforcing business relationships by owning stock in the companies with which a given company does business. Heavy cross ownership is referred to as circular ownership.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In the US, \"cross ownership\" also refers to a type of investment in different mass-media properties in one market.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Countries noted to have high levels of cross ownership include:",
"title": "Cross ownership of stock"
},
{
"paragraph_id": 3,
"text": "Positives of cross ownership:",
"title": "Cross ownership of stock"
},
{
"paragraph_id": 4,
"text": "Cross ownership of shares is criticized for:",
"title": "Cross ownership of stock"
},
{
"paragraph_id": 5,
"text": "A major factor in perpetuating cross ownership of shares is a high capital gains tax rate. A company has less incentive to sell cross owned shares if taxes are high because of the immediate reduction in the value of the assets.",
"title": "Cross ownership of stock"
},
{
"paragraph_id": 6,
"text": "For example, a company owns $1000 of stock in another company that was originally purchased for $200. If the capital gains tax rate is 25% (like in Germany), the profit of $800 would be taxed for $200, causing the company to take a $200 loss on the sale.",
"title": "Cross ownership of stock"
},
{
"paragraph_id": 7,
"text": "Long term cross ownership of shares combined with a high capital tax rate greatly increases periods of asset deflation both in time and in severity.",
"title": "Cross ownership of stock"
},
{
"paragraph_id": 8,
"text": "Cross ownership also refers to a type of media ownership in which one type of communications (say a newspaper) owns or is the sister company of another type of medium (such as a radio or TV station). One example is The New York Times's former ownership of WQXR Radio and the Chicago Tribune's similar relationship with WGN Radio (WGN-AM) and Television (WGN-TV).",
"title": "Media cross ownership"
},
{
"paragraph_id": 9,
"text": "The Federal Communications Commission generally does not allow cross ownership, to keep from one license holder having too much local media ownership, unless the license holder obtains a waiver, such as News Corporation and the Tribune Company have in New York.",
"title": "Media cross ownership"
},
{
"paragraph_id": 10,
"text": "The mid-1970s cross-ownership guidelines grandfathered already-existing cross ownerships, such as Tribune-WGN, New York Times-WQXR and the New York Daily News ownership of WPIX Television and Radio.",
"title": "Media cross ownership"
}
] | Cross ownership is a method of reinforcing business relationships by owning stock in the companies with which a given company does business. Heavy cross ownership is referred to as circular ownership. In the US, "cross ownership" also refers to a type of investment in different mass-media properties in one market. | 2023-05-26T01:35:27Z | [
"Template:More citations needed",
"Template:Citation needed",
"Template:Main",
"Template:'",
"Template:Cite web",
"Template:Cite book",
"Template:Short description"
] | https://en.wikipedia.org/wiki/Cross_ownership |
|
5,421 | Cardiology | Cardiology (from Ancient Greek καρδίᾱ (kardiā) 'heart', and -λογία (-logia) 'study') is the study of the heart. Cardiology is a branch of medicine that deals with disorders of the heart and the cardiovascular system. The field includes medical diagnosis and treatment of congenital heart defects, coronary artery disease, heart failure, valvular heart disease, and electrophysiology. Physicians who specialize in this field of medicine are called cardiologists, a specialty of internal medicine. Pediatric cardiologists are pediatricians who specialize in cardiology. Physicians who specialize in cardiac surgery are called cardiothoracic surgeons or cardiac surgeons, a specialty of general surgery.
All cardiologists in the branch of medicine study the disorders of the heart, but the study of adult and child heart disorders each require different training pathways. Therefore, an adult cardiologist (often simply called "cardiologist") is inadequately trained to take care of children, and pediatric cardiologists are not trained to treat adult heart disease. Surgical aspects are not included in cardiology and are in the domain of cardiothoracic surgery. For example, coronary artery bypass surgery (CABG), cardiopulmonary bypass and valve replacement are surgical procedures performed by surgeons, not cardiologists. However, some minimally invasive procedures such as cardiac catheterization and pacemaker implantation are performed by cardiologists who have additional training in non-surgical interventions (interventional cardiology and electrophysiology respectively).
Cardiology is a specialty of internal medicine. To be a cardiologist in the United States, a three-year residency in internal medicine is followed by a three-year fellowship in cardiology. It is possible to specialize further in a sub-specialty. Recognized sub-specialties in the U.S. by the Accreditation Council for Graduate Medical Education are cardiac electrophysiology, echocardiography, interventional cardiology, and nuclear cardiology. Recognized subspecialties in the U.S. by the American Osteopathic Association Bureau of Osteopathic Specialists include clinical cardiac electrophysiology and interventional cardiology. In India, a three-year residency in General Medicine or Pediatrics after M.B.B.S. and then three years of residency in cardiology are needed to be a D.M./Diplomate of National Board (DNB) in Cardiology.
Per Doximity, adult cardiologists earn an average of $436,849 per year in the U.S.
Cardiac electrophysiology is the science of elucidating, diagnosing, and treating the electrical activities of the heart. The term is usually used to describe studies of such phenomena by invasive (intracardiac) catheter recording of spontaneous activity as well as of cardiac responses to programmed electrical stimulation (PES). These studies are performed to assess complex arrhythmias, elucidate symptoms, evaluate abnormal electrocardiograms, assess risk of developing arrhythmias in the future, and design treatment. These procedures increasingly include therapeutic methods (typically radiofrequency ablation, or cryoablation) in addition to diagnostic and prognostic procedures. Other therapeutic modalities employed in this field include antiarrhythmic drug therapy and implantation of pacemakers and automatic implantable cardioverter-defibrillators (AICD).
The cardiac electrophysiology study typically measures the response of the injured or cardiomyopathic myocardium to PES on specific pharmacological regimens in order to assess the likelihood that the regimen will successfully prevent potentially fatal sustained ventricular tachycardia (VT) or ventricular fibrillation (VF) in the future. Sometimes a series of electrophysiology-study drug trials must be conducted to enable the cardiologist to select the one regimen for long-term treatment that best prevents or slows the development of VT or VF following PES. Such studies may also be conducted in the presence of a newly implanted or newly replaced cardiac pacemaker or AICD.
Clinical cardiac electrophysiology is a branch of the medical specialty of cardiology and is concerned with the study and treatment of rhythm disorders of the heart. Cardiologists with expertise in this area are usually referred to as electrophysiologists. Electrophysiologists are trained in the mechanism, function, and performance of the electrical activities of the heart. Electrophysiologists work closely with other cardiologists and cardiac surgeons to assist or guide therapy for heart rhythm disturbances (arrhythmias). They are trained to perform interventional and surgical procedures to treat cardiac arrhythmia.
The training required to become an electrophysiologist is long, requiring eight years after medical school (within the U.S.): three years of internal medicine residency, three years of cardiology fellowship, and two years of clinical cardiac electrophysiology fellowship.
Cardiogeriatrics, or geriatric cardiology, is the branch of cardiology and geriatric medicine that deals with the cardiovascular disorders in elderly people.
Cardiac disorders such as coronary heart disease, including myocardial infarction, heart failure, cardiomyopathy, and arrhythmias such as atrial fibrillation, are common and are a major cause of mortality in elderly people. Vascular disorders such as atherosclerosis and peripheral arterial disease cause significant morbidity and mortality in aged people.
Cardiac imaging includes echocardiography (echo), cardiac magnetic resonance imaging (CMR), and computed tomography of the heart. Those who specialize in cardiac imaging may undergo more training in all imaging modes or focus on a single imaging modality.
Echocardiography (or "echo") uses standard two-dimensional, three-dimensional, and Doppler ultrasound to create images of the heart. Those who specialize in echo may spend a significant amount of their clinical time reading echos and performing transesophageal echo, in particular using the latter during procedures such as insertion of a left atrial appendage occlusion device.
Cardiac MRI utilizes special protocols to image heart structure and function with specific sequences for certain diseases such as hemochromatosis and amyloidosis.
Cardiac CT utilizes special protocols to image heart structure and function with particular emphasis on coronary arteries.
Interventional cardiology is a branch of cardiology that deals specifically with the catheter based treatment of structural heart diseases. A large number of procedures can be performed on the heart by catheterization, including angiogram, angioplasty, atherectomy, and stent implantation. These procedures all involve insertion of a sheath into the femoral artery or radial artery (but, in practice, any large peripheral artery or vein) and cannulating the heart under X-ray visualization (most commonly fluoroscopy). This cannulation allows indirect access to the heart, bypassing the trauma caused by surgical opening of the chest.
The main advantages of the interventional cardiology or radiology approach are the avoidance of scars, pain, and a long post-operative recovery. Additionally, the interventional cardiology procedure of primary angioplasty is now the gold standard of care for an acute myocardial infarction. This procedure can also be done proactively, when areas of the vascular system become occluded from atherosclerosis. The cardiologist threads a sheath through the vascular system to access the heart. This sheath has a balloon and a tiny wire mesh tube wrapped around it, and if the cardiologist finds a blockage or stenosis, they can inflate the balloon at the occlusion site to flatten or compress the plaque against the vascular wall. Once that is complete, a stent is placed as a type of scaffold to hold the vasculature open permanently.
Specializing within general cardiology in the cardiomyopathies leads to also specializing in heart transplantation and pulmonary hypertension. Cardiomyopathy is a disease of the heart muscle in which the muscle can become enlarged, thickened, or rigid.
A recent specialization of cardiology is that of cardiooncology. This area specializes in the cardiac management of those with cancer and, in particular, those with plans for chemotherapy or who have experienced cardiac complications of chemotherapy.
In recent times, the focus is gradually shifting to preventive cardiology due to the increased cardiovascular disease burden at an early age. According to the WHO, 37% of all premature deaths are due to cardiovascular diseases, and of these, 82% are in low and middle income countries. Clinical cardiology is the subspecialty of cardiology which looks after preventive cardiology and cardiac rehabilitation. Preventive cardiology also deals with routine preventive checkups through noninvasive tests, specifically electrocardiography, fasegraphy, stress tests, lipid profiles and general physical examination, to detect any cardiovascular diseases at an early age, while cardiac rehabilitation is the upcoming branch of cardiology which helps a person regain their overall strength and live a normal life after a cardiovascular event. A subspecialty of preventive cardiology is sports cardiology. Because heart disease is the leading cause of death in the world, including the United States (cdc.gov), national health campaigns and randomized controlled research have been developed to improve heart health.
Helen B. Taussig is known as the founder of pediatric cardiology. She became famous through her work on tetralogy of Fallot, a congenital heart defect in which oxygenated and deoxygenated blood mix and enter the circulatory system as a result of a ventricular septal defect (VSD) right beneath the aorta. This condition causes newborns to have a bluish tint (cyanosis) and a deficiency of oxygen to their tissues (hypoxemia). She worked with Alfred Blalock and Vivien Thomas at the Johns Hopkins Hospital, where they experimented with dogs to work out how to surgically treat these "blue babies". They eventually did so by anastomosing a systemic artery to the pulmonary artery, a procedure now called the Blalock-Taussig shunt.
Tetralogy of Fallot, pulmonary atresia, double outlet right ventricle, transposition of the great arteries, persistent truncus arteriosus, and Ebstein's anomaly are various congenital cyanotic heart diseases, in which the blood of the newborn is not oxygenated efficiently, due to the heart defect.
As more children with congenital heart disease survive into adulthood, a hybrid of adult and pediatric cardiology called adult congenital heart disease (ACHD) has emerged. This field can be entered from either adult or pediatric cardiology training. ACHD specializes in congenital defects in the setting of adult diseases (e.g., coronary artery disease, COPD, diabetes), a combination that is otherwise atypical for either adult or pediatric cardiology.
As the central focus of cardiology, the heart has numerous anatomical features (e.g., atria, ventricles, heart valves) and numerous physiological features (e.g., systole, heart sounds, afterload) that have been encyclopedically documented for many centuries. The heart is located in the middle of the chest, with its apex pointing slightly towards the left side of the chest.
Disorders of the heart lead to heart disease and cardiovascular disease and can lead to a significant number of deaths: cardiovascular disease is the leading cause of death in the U.S. and caused 24.95% of total deaths in 2008.
The primary responsibility of the heart is to pump blood throughout the body. It pumps blood from the body — called the systemic circulation — through the lungs — called the pulmonary circulation — and then back out to the body. This means that the heart is connected to and affects the entirety of the body. Simplified, the heart is a circuit of the circulation. While plenty is known about the healthy heart, the bulk of study in cardiology is in disorders of the heart and the restoration, where possible, of its function.
The heart is a muscle that squeezes blood and functions like a pump. The heart's systems can be classified as either electrical or mechanical, and both of these systems are susceptible to failure or dysfunction.
The electrical system of the heart is centered on the periodic contraction (squeezing) of the muscle cells that is caused by the cardiac pacemaker located in the sinoatrial node. The study of the electrical aspects is a sub-field of electrophysiology called cardiac electrophysiology and is epitomized with the electrocardiogram (ECG/EKG). The action potentials generated in the pacemaker propagate throughout the heart in a specific pattern. The system that carries this potential is called the electrical conduction system. Dysfunction of the electrical system manifests in many ways and may include Wolff–Parkinson–White syndrome, ventricular fibrillation, and heart block.
The mechanical system of the heart is centered on the fluidic movement of blood and the functionality of the heart as a pump. The mechanical part is ultimately the purpose of the heart and many of the disorders of the heart disrupt the ability to move blood. Heart failure is one condition in which the mechanical properties of the heart have failed or are failing, which means insufficient blood is being circulated. Failure to move a sufficient amount of blood through the body can cause damage or failure of other organs and may result in death if severe.
Coronary circulation is the circulation of blood in the blood vessels of the heart muscle (the myocardium). The vessels that deliver oxygen-rich blood to the myocardium are known as coronary arteries. The vessels that remove the deoxygenated blood from the heart muscle are known as cardiac veins. These include the great cardiac vein, the middle cardiac vein, the small cardiac vein and the anterior cardiac veins.
As the left and right coronary arteries run on the surface of the heart, they can be called epicardial coronary arteries. These arteries, when healthy, are capable of autoregulation to maintain coronary blood flow at levels appropriate to the needs of the heart muscle. These relatively narrow vessels are commonly affected by atherosclerosis and can become blocked, causing angina or myocardial infarction (a.k.a. a heart attack). The coronary arteries that run deep within the myocardium are referred to as subendocardial.
The coronary arteries are classified as "end circulation", since they represent the only source of blood supply to the myocardium; there is very little redundant blood supply, which is why blockage of these vessels can be so critical.
The cardiac examination (also called the "precordial exam"), is performed as part of a physical examination, or when a patient presents with chest pain suggestive of a cardiovascular pathology. It would typically be modified depending on the indication and integrated with other examinations especially the respiratory examination.
Like all medical examinations, the cardiac examination follows the standard structure of inspection, palpation and auscultation.
Cardiology is concerned with the normal functionality of the heart and the deviation from a healthy heart. Many disorders involve the heart itself, but some are outside of the heart and in the vascular system. Collectively, the two are jointly termed the cardiovascular system, and diseases of one part tend to affect the other.
Coronary artery disease, also known as "ischemic heart disease", is a group of diseases that includes stable angina, unstable angina, and myocardial infarction, and it is one of the causes of sudden cardiac death. It is within the group of cardiovascular diseases of which it is the most common type. A common symptom is chest pain or discomfort which may travel into the shoulder, arm, back, neck, or jaw. Occasionally it may feel like heartburn. Usually symptoms occur with exercise or emotional stress, last less than a few minutes, and get better with rest. Shortness of breath may also occur and sometimes no symptoms are present. The first sign is occasionally a heart attack. Other complications include heart failure or an irregular heartbeat.
Risk factors include: high blood pressure, smoking, diabetes, lack of exercise, obesity, high blood cholesterol, poor diet, and excessive alcohol, among others. Other risks include depression. The underlying mechanism involves atherosclerosis of the arteries of the heart. A number of tests may help with diagnoses including: electrocardiogram, cardiac stress testing, coronary computed tomographic angiography, and coronary angiogram, among others.
Prevention is by eating a healthy diet, regular exercise, maintaining a healthy weight and not smoking. Sometimes medication for diabetes, high cholesterol, or high blood pressure is also used. There is limited evidence for screening people who are at low risk and do not have symptoms. Treatment involves the same measures as prevention. Additional medications such as antiplatelets including aspirin, beta blockers, or nitroglycerin may be recommended. Procedures such as percutaneous coronary intervention (PCI) or coronary artery bypass surgery (CABG) may be used in severe disease. In those with stable CAD, it is unclear if PCI or CABG in addition to the other treatments improves life expectancy or decreases heart attack risk.
In 2013 CAD was the most common cause of death globally, resulting in 8.14 million deaths (16.8%) up from 5.74 million deaths (12%) in 1990. The risk of death from CAD for a given age has decreased between 1980 and 2010 especially in developed countries. The number of cases of CAD for a given age has also decreased between 1990 and 2010. In the U.S. in 2010 about 20% of those over 65 had CAD, while it was present in 7% of those 45 to 64, and 1.3% of those 18 to 45. Rates are higher among men than women of a given age.
Heart failure is the impaired pumping function of the heart, and there are numerous causes and forms of heart failure, including the cardiomyopathies.
Cardiac arrhythmia, also known as "cardiac dysrhythmia" or "irregular heartbeat", is a group of conditions in which the heartbeat is too fast, too slow, or irregular in its rhythm. A heart rate that is too fast – above 100 beats per minute in adults – is called tachycardia. A heart rate that is too slow – below 60 beats per minute – is called bradycardia. Many types of arrhythmia present no symptoms. When symptoms are present, they may include palpitations, or feeling a pause between heartbeats. More serious symptoms may include lightheadedness, passing out, shortness of breath, or chest pain. While most types of arrhythmia are not serious, some predispose a person to complications such as stroke or heart failure. Others may result in cardiac arrest.
There are four main types of arrhythmia: extra beats, supraventricular tachycardias, ventricular arrhythmias, and bradyarrhythmias. Extra beats include premature atrial contractions, premature ventricular contractions, and premature junctional contractions. Supraventricular tachycardias include atrial fibrillation, atrial flutter, and paroxysmal supraventricular tachycardia. Ventricular arrhythmias include ventricular fibrillation and ventricular tachycardia. Arrhythmias are due to problems with the electrical conduction system of the heart. Arrhythmias may occur in children; however, the normal range for the heart rate is different and depends on age. A number of tests can help diagnose arrhythmia, including an electrocardiogram and Holter monitor.
Most arrhythmias can be effectively treated. Treatments may include medications, medical procedures such as a pacemaker, and surgery. Medications for a fast heart rate may include beta blockers or agents that attempt to restore a normal heart rhythm such as procainamide. This latter group may have more significant side effects, especially if taken for a long period of time. Pacemakers are often used for slow heart rates. Those with an irregular heartbeat are often treated with blood thinners to reduce the risk of complications. Those who have severe symptoms from an arrhythmia may receive urgent treatment with a jolt of electricity in the form of cardioversion or defibrillation.
Arrhythmia affects millions of people. In Europe and North America, as of 2014, atrial fibrillation affects about 2% to 3% of the population. Atrial fibrillation and atrial flutter resulted in 112,000 deaths in 2013, up from 29,000 in 1990. Sudden cardiac death is the cause of about half of deaths due to cardiovascular disease or about 15% of all deaths globally. About 80% of sudden cardiac death is the result of ventricular arrhythmias. Arrhythmias may occur at any age but are more common among older people.
Cardiac arrest is a sudden stop in effective blood flow due to the failure of the heart to contract effectively. Symptoms include loss of consciousness and abnormal or absent breathing. Some people may have chest pain, shortness of breath, or nausea before this occurs. If not treated within minutes, death usually occurs.
The most common cause of cardiac arrest is coronary artery disease. Less common causes include major blood loss, lack of oxygen, very low potassium, heart failure, and intense physical exercise. A number of inherited disorders may also increase the risk, including long QT syndrome. The initial heart rhythm is most often ventricular fibrillation. The diagnosis is confirmed by finding no pulse. While a cardiac arrest may be caused by a heart attack or heart failure, these are not the same.
Prevention includes not smoking, physical activity, and maintaining a healthy weight. Treatment for cardiac arrest is immediate cardiopulmonary resuscitation (CPR) and, if a shockable rhythm is present, defibrillation. Among those who survive, targeted temperature management may improve outcomes. An implantable cardiac defibrillator may be placed to reduce the chance of death from recurrence.
In the United States, cardiac arrest outside of hospital occurs in about 13 per 10,000 people per year (326,000 cases). In-hospital cardiac arrest occurs in an additional 209,000 cases per year. Cardiac arrest becomes more common with age. It affects males more often than females. The percentage of people who survive with treatment is about 8%. Many who survive have significant disability. Many U.S. television shows, however, have portrayed unrealistically high survival rates of 67%.
Hypertension, also known as "high blood pressure", is a long term medical condition in which the blood pressure in the arteries is persistently elevated. High blood pressure usually does not cause symptoms. Long term high blood pressure, however, is a major risk factor for coronary artery disease, stroke, heart failure, peripheral vascular disease, vision loss, and chronic kidney disease.
Lifestyle factors can increase the risk of hypertension. These include excess salt in the diet, excess body weight, smoking, and alcohol consumption. Hypertension can also be caused by other diseases, or occur as a side-effect of drugs.
Blood pressure is expressed by two measurements, the systolic and diastolic pressures, which are the maximum and minimum pressures, respectively. Normal blood pressure when at rest is within the range of 100–140 millimeters of mercury (mmHg) systolic and 60–90 mmHg diastolic. High blood pressure is present if the resting blood pressure is persistently at or above 140/90 mmHg for most adults. Different numbers apply to children. When diagnosing high blood pressure, ambulatory blood pressure monitoring over a 24-hour period appears to be more accurate than "in-office" blood pressure measurement at a physician's office or other blood pressure screening location.
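As a minimal illustrative sketch (not from the source), the adult resting cutoffs quoted above can be expressed as a simple classification rule; the function name and category labels are assumptions made for this example only.

def classify_resting_blood_pressure(systolic_mmhg, diastolic_mmhg):
    """Apply the adult resting cutoffs quoted above: normal 100-140 / 60-90 mmHg, high at or above 140/90 mmHg."""
    if systolic_mmhg >= 140 or diastolic_mmhg >= 90:
        return "high blood pressure range"
    if systolic_mmhg < 100 or diastolic_mmhg < 60:
        return "below the quoted normal resting range"
    return "within the quoted normal resting range"

print(classify_resting_blood_pressure(120, 80))  # within the quoted normal resting range
print(classify_resting_blood_pressure(150, 85))  # high blood pressure range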
Lifestyle changes and medications can lower blood pressure and decrease the risk of health complications. Lifestyle changes include weight loss, decreased salt intake, physical exercise, and a healthy diet. If changes in lifestyle are insufficient, blood pressure medications may be used. A regimen of up to three medications effectively controls blood pressure in 90% of people. The treatment of moderate to severe high arterial blood pressure (defined as >160/100 mmHg) with medication is associated with an improved life expectancy and reduced morbidity. The effect of treatment for blood pressure between 140/90 mmHg and 160/100 mmHg is less clear, with some studies finding benefits while others do not. High blood pressure affects between 16% and 37% of the population globally. In 2010, hypertension was believed to have been a factor in 18% (9.4 million) deaths.
Essential hypertension is the form of hypertension that by definition has no identifiable cause. It is the most common type of hypertension, affecting 95% of hypertensive patients; it tends to be familial and is likely to be the consequence of an interaction between environmental and genetic factors. The prevalence of essential hypertension increases with age, and individuals with relatively high blood pressure at younger ages are at increased risk for the subsequent development of hypertension. Hypertension can increase the risk of cerebral, cardiac, and renal events.
Secondary hypertension is a type of hypertension which is caused by an identifiable underlying secondary cause. It is much less common than essential hypertension, affecting only 5% of hypertensive patients. It has many different causes including endocrine diseases, kidney diseases, and tumors. It also can be a side effect of many medications.
Complications of hypertension are clinical outcomes that result from persistent elevation of blood pressure. Hypertension is a risk factor for all clinical manifestations of atherosclerosis since it is a risk factor for atherosclerosis itself. It is an independent predisposing factor for heart failure, coronary artery disease, stroke, renal disease, and peripheral arterial disease. It is the most important risk factor for cardiovascular morbidity and mortality in industrialized countries.
A congenital heart defect, also known as a "congenital heart anomaly" or "congenital heart disease", is a problem in the structure of the heart that is present at birth. Signs and symptoms depend on the specific type of problem. Symptoms can vary from none to life-threatening. When present they may include rapid breathing, bluish skin, poor weight gain, and feeling tired. It does not cause chest pain. Most congenital heart problems do not occur with other diseases. Complications that can result from heart defects include heart failure.
The cause of a congenital heart defect is often unknown. Certain cases may be due to infections during pregnancy such as rubella, use of certain medications or drugs such as alcohol or tobacco, parents being closely related, or poor nutritional status or obesity in the mother. Having a parent with a congenital heart defect is also a risk factor. A number of genetic conditions are associated with heart defects including Down syndrome, Turner syndrome, and Marfan syndrome. Congenital heart defects are divided into two main groups: cyanotic heart defects and non-cyanotic heart defects, depending on whether the child has the potential to turn bluish in color. The problems may involve the interior walls of the heart, the heart valves, or the large blood vessels that lead to and from the heart.
Congenital heart defects are partly preventable through rubella vaccination, the adding of iodine to salt, and the adding of folic acid to certain food products. Some defects do not need treatment. Others may be effectively treated with catheter-based procedures or heart surgery. Occasionally a number of operations may be needed. Occasionally heart transplantation is required. With appropriate treatment, outcomes, even with complex problems, are generally good.
Heart defects are the most common birth defect. In 2013 they were present in 34.3 million people globally. They affect between 4 and 75 per 1,000 live births depending upon how they are diagnosed. About 6 to 19 per 1,000 cause a moderate to severe degree of problems. Congenital heart defects are the leading cause of birth defect-related deaths. In 2013 they resulted in 323,000 deaths down from 366,000 deaths in 1990.
Tetralogy of Fallot is the most common cyanotic congenital heart disease, arising in 1–3 cases per 1,000 births. The cause of this defect is a ventricular septal defect (VSD) and an overriding aorta. These two defects combined cause deoxygenated blood to bypass the lungs and go right back into the circulatory system. The modified Blalock-Taussig shunt is usually used to fix the circulation. This procedure is done by placing a graft between the subclavian artery and the ipsilateral pulmonary artery to restore the correct blood flow.
Pulmonary atresia happens in 7–8 per 100,000 births and is characterized by the aorta branching out of the right ventricle. This causes the deoxygenated blood to bypass the lungs and enter the circulatory system. Surgeries can fix this by redirecting the aorta and fixing the right ventricle and pulmonary artery connection.
There are two types of pulmonary atresia, classified by whether or not the baby also has a ventricular septal defect.
Double outlet right ventricle (DORV) is when both great arteries, the pulmonary artery and the aorta, are connected to the right ventricle. There is usually a VSD in different particular places depending on the variation of DORV; typically about 50% are subaortic and 30% are subpulmonary. The surgeries that can be done to fix this defect can vary due to the different physiology and blood flow in the defective heart. One way it can be cured is by a VSD closure and placing conduits to restart the blood flow between the left ventricle and the aorta and between the right ventricle and the pulmonary artery. Another way is a systemic-to-pulmonary artery shunt in cases associated with pulmonary stenosis. Also, a balloon atrial septostomy can be done to relieve hypoxemia caused by DORV with the Taussig-Bing anomaly while surgical correction is awaited.
There are two different types of transposition of the great arteries, Dextro-transposition of the great arteries and Levo-transposition of the great arteries, depending on where the chambers and vessels connect. Dextro-transposition happens in about 1 in 4,000 newborns and is when the right ventricle pumps blood into the aorta and deoxygenated blood enters the bloodstream. The temporary procedure is to create an atrial septal defect. A permanent fix is more complicated and involves redirecting the pulmonary return to the right atrium and the systemic return to the left atrium, which is known as the Senning procedure. The Rastelli procedure can also be done by rerouting the left ventricular outflow, dividing the pulmonary trunk, and placing a conduit in between the right ventricle and pulmonary trunk. Levo-transposition happens in about 1 in 13,000 newborns and is characterized by the left ventricle pumping blood into the lungs and the right ventricle pumping the blood into the aorta. This may not produce problems at the beginning, but will eventually due to the different pressures each ventricle uses to pump blood. Switching the left ventricle to be the systemic ventricle and the right ventricle to pump blood into the pulmonary artery can repair levo-transposition.
Persistent truncus arteriosus is when the truncus arteriosus fails to split into the aorta and pulmonary trunk. This occurs in about 1 in 11,000 live births and allows both oxygenated and deoxygenated blood into the body. The repair consists of a VSD closure and the Rastelli procedure.
Ebstein's anomaly is characterized by a right atrium that is significantly enlarged and a heart that is shaped like a box. This is very rare and happens in less than 1% of congenital heart disease cases. The surgical repair varies depending on the severity of the disease.
Pediatric cardiology is a sub-specialty of pediatrics. To become a pediatric cardiologist in the U.S., one must complete a three-year residency in pediatrics, followed by a three-year fellowship in pediatric cardiology. Per Doximity, pediatric cardiologists make an average of $303,917 in the U.S.
Diagnostic tests in cardiology are the methods of identifying heart conditions associated with healthy versus unhealthy, pathologic heart function. The starting point is obtaining a medical history, followed by auscultation. Then blood tests, electrophysiological procedures, and cardiac imaging can be ordered for further analysis. Electrophysiological procedures include electrocardiogram, cardiac monitoring, cardiac stress testing, and the electrophysiology study.
Cardiology is known for randomized controlled trials that guide clinical treatment of cardiac diseases. While dozens are published every year, there are landmark trials that shift treatment significantly. Trials often have an acronym of the trial name, and this acronym is used to reference the trial and its results. Some of these landmark trials include: | [
{
"paragraph_id": 0,
"text": "Cardiology (from Ancient Greek καρδίᾱ (kardiā) 'heart', and -λογία (-logia) 'study') is the study of the heart. Cardiology is a branch of medicine that deals with disorders of the heart and the cardiovascular system. The field includes medical diagnosis and treatment of congenital heart defects, coronary artery disease, heart failure, valvular heart disease, and electrophysiology. Physicians who specialize in this field of medicine are called cardiologists, a specialty of internal medicine. Pediatric cardiologists are pediatricians who specialize in cardiology. Physicians who specialize in cardiac surgery are called cardiothoracic surgeons or cardiac surgeons, a specialty of general surgery.",
"title": ""
},
{
"paragraph_id": 1,
"text": "All cardiologists in the branch of medicine study the disorders of the heart, but the study of adult and child heart disorders each require different training pathways. Therefore, an adult cardiologist (often simply called \"cardiologist\") is inadequately trained to take care of children, and pediatric cardiologists are not trained to treat adult heart disease. Surgical aspects are not included in cardiology and are in the domain of cardiothoracic surgery. For example, coronary artery bypass surgery (CABG), cardiopulmonary bypass and valve replacement are surgical procedures performed by surgeons, not cardiologists. However, some minimally invasive procedures such as cardiac catheterization and pacemaker implantation are performed by cardiologists who have additional training in non-surgical interventions (interventional cardiology and electrophysiology respectively).",
"title": "Specializations"
},
{
"paragraph_id": 2,
"text": "Cardiology is a specialty of internal medicine. To be a cardiologist in the United States, a three-year residency in internal medicine is followed by a three-year fellowship in cardiology. It is possible to specialize further in a sub-specialty. Recognized sub-specialties in the U.S. by the Accreditation Council for Graduate Medical Education are cardiac electrophysiology, echocardiography, interventional cardiology, and nuclear cardiology. Recognized subspecialties in the U.S. by the American Osteopathic Association Bureau of Osteopathic Specialists include clinical cardiac electrophysiology and interventional cardiology. In India, a three-year residency in General Medicine or Pediatrics after M.B.B.S. and then three years of residency in cardiology are needed to be a D.M./Diplomate of National Board (DNB) in Cardiology.",
"title": "Specializations"
},
{
"paragraph_id": 3,
"text": "Per Doximity, adult cardiologists earn an average of $436,849 per year in the U.S.",
"title": "Specializations"
},
{
"paragraph_id": 4,
"text": "Cardiac electrophysiology is the science of elucidating, diagnosing, and treating the electrical activities of the heart. The term is usually used to describe studies of such phenomena by invasive (intracardiac) catheter recording of spontaneous activity as well as of cardiac responses to programmed electrical stimulation (PES). These studies are performed to assess complex arrhythmias, elucidate symptoms, evaluate abnormal electrocardiograms, assess risk of developing arrhythmias in the future, and design treatment. These procedures increasingly include therapeutic methods (typically radiofrequency ablation, or cryoablation) in addition to diagnostic and prognostic procedures. Other therapeutic modalities employed in this field include antiarrhythmic drug therapy and implantation of pacemakers and automatic implantable cardioverter-defibrillators (AICD).",
"title": "Specializations"
},
{
"paragraph_id": 5,
"text": "The cardiac electrophysiology study typically measures the response of the injured or cardiomyopathic myocardium to PES on specific pharmacological regimens in order to assess the likelihood that the regimen will successfully prevent potentially fatal sustained ventricular tachycardia (VT) or ventricular fibrillation (VF) in the future. Sometimes a series of electrophysiology-study drug trials must be conducted to enable the cardiologist to select the one regimen for long-term treatment that best prevents or slows the development of VT or VF following PES. Such studies may also be conducted in the presence of a newly implanted or newly replaced cardiac pacemaker or AICD.",
"title": "Specializations"
},
{
"paragraph_id": 6,
"text": "Clinical cardiac electrophysiology is a branch of the medical specialty of cardiology and is concerned with the study and treatment of rhythm disorders of the heart. Cardiologists with expertise in this area are usually referred to as electrophysiologists. Electrophysiologists are trained in the mechanism, function, and performance of the electrical activities of the heart. Electrophysiologists work closely with other cardiologists and cardiac surgeons to assist or guide therapy for heart rhythm disturbances (arrhythmias). They are trained to perform interventional and surgical procedures to treat cardiac arrhythmia.",
"title": "Specializations"
},
{
"paragraph_id": 7,
"text": "The training required to become an electrophysiologist is long and requires 8 years after medical school (within the U.S.). Three years of internal medicine residency, three years of cardiology fellowship, and two years of clinical cardiac electrophysiology.",
"title": "Specializations"
},
{
"paragraph_id": 8,
"text": "Cardiogeriatrics, or geriatric cardiology, is the branch of cardiology and geriatric medicine that deals with the cardiovascular disorders in elderly people.",
"title": "Specializations"
},
{
"paragraph_id": 9,
"text": "Cardiac disorders such as coronary heart disease, including myocardial infarction, heart failure, cardiomyopathy, and arrhythmias such as atrial fibrillation, are common and are a major cause of mortality in elderly people. Vascular disorders such as atherosclerosis and peripheral arterial disease cause significant morbidity and mortality in aged people.",
"title": "Specializations"
},
{
"paragraph_id": 10,
"text": "Cardiac imaging includes echocardiography (echo), cardiac magnetic resonance imaging (CMR), and computed tomography of the heart. Those who specialize in cardiac imaging may undergo more training in all imaging modes or focus on a single imaging modality.",
"title": "Specializations"
},
{
"paragraph_id": 11,
"text": "Echocardiography (or \"echo\") uses standard two-dimensional, three-dimensional, and Doppler ultrasound to create images of the heart. Those who specialize in echo may spend a significant amount of their clinical time reading echos and performing transesophageal echo, in particular using the latter during procedures such as insertion of a left atrial appendage occlusion device.",
"title": "Specializations"
},
{
"paragraph_id": 12,
"text": "Cardiac MRI utilizes special protocols to image heart structure and function with specific sequences for certain diseases such as hemochromatosis and amyloidosis.",
"title": "Specializations"
},
{
"paragraph_id": 13,
"text": "Cardiac CT utilizes special protocols to image heart structure and function with particular emphasis on coronary arteries.",
"title": "Specializations"
},
{
"paragraph_id": 14,
"text": "Interventional cardiology is a branch of cardiology that deals specifically with the catheter based treatment of structural heart diseases. A large number of procedures can be performed on the heart by catheterization, including angiogram, angioplasty, atherectomy, and stent implantation. These procedures all involve insertion of a sheath into the femoral artery or radial artery (but, in practice, any large peripheral artery or vein) and cannulating the heart under X-ray visualization (most commonly fluoroscopy). This cannulation allows indirect access to the heart, bypassing the trauma caused by surgical opening of the chest.",
"title": "Specializations"
},
{
"paragraph_id": 15,
"text": "The main advantages of using the interventional cardiology or radiology approach are the avoidance of the scars and pain, and long post-operative recovery. Additionally, interventional cardiology procedure of primary angioplasty is now the gold standard of care for an acute myocardial infarction. This procedure can also be done proactively, when areas of the vascular system become occluded from atherosclerosis. The Cardiologist will thread this sheath through the vascular system to access the heart. This sheath has a balloon and a tiny wire mesh tube wrapped around it, and if the cardiologist finds a blockage or stenosis, they can inflate the balloon at the occlusion site in the vascular system to flatten or compress the plaque against the vascular wall. Once that is complete a stent is placed as a type of scaffold to hold the vasculature open permanently.",
"title": "Specializations"
},
{
"paragraph_id": 16,
"text": "Specialization of general cardiology to just that of the cardiomyopathies leads to also specializing in heart transplant and pulmonary hypertension. Cardiomyopathy is a heart disease of the heart muscle, where the heart muscle becomes inflamed and thick.",
"title": "Specializations"
},
{
"paragraph_id": 17,
"text": "A recent specialization of cardiology is that of cardiooncology. This area specializes in the cardiac management in those with cancer and, in particular, those with plans for chemotherapy or whom have experienced cardiac complications of chemotherapy.",
"title": "Specializations"
},
{
"paragraph_id": 18,
"text": "In recent times, the focus is gradually shifting to preventive cardiology due to increased cardiovascular disease burden at an early age. According to the WHO, 37% of all premature deaths are due to cardiovascular diseases and out of this, 82% are in low and middle income countries. Clinical cardiology is the sub specialty of cardiology which looks after preventive cardiology and cardiac rehabilitation. Preventive cardiology also deals with routine preventive checkup though noninvasive tests, specifically electrocardiography, fasegraphy, stress tests, lipid profile and general physical examination to detect any cardiovascular diseases at an early age, while cardiac rehabilitation is the upcoming branch of cardiology which helps a person regain their overall strength and live a normal life after a cardiovascular event. A subspecialty of preventive cardiology is sports cardiology. Because heart disease is the leading cause of death in the world including United States (cdc.gov), national health campaigns and randomized control research has developed to improve heart health.",
"title": "Specializations"
},
{
"paragraph_id": 19,
"text": "Helen B. Taussig is known as the founder of pediatric cardiology. She became famous through her work with Tetralogy congenital heart defect in which oxygenated and deoxygenated blood enters the circulatory system resulting from a ventricular septal defect (VSD) right beneath the aorta. This condition causes newborns to have a bluish-tint, cyanosis, and have a deficiency of oxygen to their tissues, hypoxemia. She worked with Alfred Blalock and Vivien Thomas at the Johns Hopkins Hospital where they experimented with dogs to look at how they would attempt to surgically cure these \"blue babies\". They eventually figured out how to do just that by the anastomosis of the systemic artery to the pulmonary artery and called this the Blalock-Taussig Shunt.",
"title": "Specializations"
},
{
"paragraph_id": 20,
"text": "Tetralogy of Fallot, pulmonary atresia, double outlet right ventricle, transposition of the great arteries, persistent truncus arteriosus, and Ebstein's anomaly are various congenital cyanotic heart diseases, in which the blood of the newborn is not oxygenated efficiently, due to the heart defect.",
"title": "Specializations"
},
{
"paragraph_id": 21,
"text": "As more children with congenital heart disease are surviving into adulthood, a hybrid of adult & pediatric cardiology has emerged called adult congenital heart disease (ACHD). This field can be entered as either adult or pediatric cardiology. ACHD specializes in congenital diseases in the setting of adult diseases (e.g., coronary artery disease, COPD, diabetes) that is, otherwise, atypical for adult or pediatric cardiology.",
"title": "Specializations"
},
{
"paragraph_id": 22,
"text": "As the center focus of cardiology, the heart has numerous anatomical features (e.g., atria, ventricles, heart valves) and numerous physiological features (e.g., systole, heart sounds, afterload) that have been encyclopedically documented for many centuries. The heart is located in the middle of the abdomen with its tip slightly towards the left side of the abdomen.",
"title": "The heart"
},
{
"paragraph_id": 23,
"text": "Disorders of the heart lead to heart disease and cardiovascular disease and can lead to a significant number of deaths: cardiovascular disease is the leading cause of death in the U.S. and caused 24.95% of total deaths in 2008.",
"title": "The heart"
},
{
"paragraph_id": 24,
"text": "The primary responsibility of the heart is to pump blood throughout the body. It pumps blood from the body — called the systemic circulation — through the lungs — called the pulmonary circulation — and then back out to the body. This means that the heart is connected to and affects the entirety of the body. Simplified, the heart is a circuit of the circulation. While plenty is known about the healthy heart, the bulk of study in cardiology is in disorders of the heart and restoration, and where possible, of function.",
"title": "The heart"
},
{
"paragraph_id": 25,
"text": "The heart is a muscle that squeezes blood and functions like a pump. The heart's systems can be classified as either electrical or mechanical, and both of these systems are susceptible to failure or dysfunction.",
"title": "The heart"
},
{
"paragraph_id": 26,
"text": "The electrical system of the heart is centered on the periodic contraction (squeezing) of the muscle cells that is caused by the cardiac pacemaker located in the sinoatrial node. The study of the electrical aspects is a sub-field of electrophysiology called cardiac electrophysiology and is epitomized with the electrocardiogram (ECG/EKG). The action potentials generated in the pacemaker propagate throughout the heart in a specific pattern. The system that carries this potential is called the electrical conduction system. Dysfunction of the electrical system manifests in many ways and may include Wolff–Parkinson–White syndrome, ventricular fibrillation, and heart block.",
"title": "The heart"
},
{
"paragraph_id": 27,
"text": "The mechanical system of the heart is centered on the fluidic movement of blood and the functionality of the heart as a pump. The mechanical part is ultimately the purpose of the heart and many of the disorders of the heart disrupt the ability to move blood. Heart failure is one condition in which the mechanical properties of the heart have failed or are failing, which means insufficient blood is being circulated. Failure to move a sufficient amount of blood through the body can cause damage or failure of other organs and may result in death if severe.",
"title": "The heart"
},
{
"paragraph_id": 28,
"text": "Coronary circulation is the circulation of blood in the blood vessels of the heart muscle (the myocardium). The vessels that deliver oxygen-rich blood to the myocardium are known as coronary arteries. The vessels that remove the deoxygenated blood from the heart muscle are known as cardiac veins. These include the great cardiac vein, the middle cardiac vein, the small cardiac vein and the anterior cardiac veins.",
"title": "The heart"
},
{
"paragraph_id": 29,
"text": "As the left and right coronary arteries run on the surface of the heart, they can be called epicardial coronary arteries. These arteries, when healthy, are capable of autoregulation to maintain coronary blood flow at levels appropriate to the needs of the heart muscle. These relatively narrow vessels are commonly affected by atherosclerosis and can become blocked, causing angina or myocardial infarction (a.k.a a heart attack). The coronary arteries that run deep within the myocardium are referred to as subendocardial.",
"title": "The heart"
},
{
"paragraph_id": 30,
"text": "The coronary arteries are classified as \"end circulation\", since they represent the only source of blood supply to the myocardium; there is very little redundant blood supply, which is why blockage of these vessels can be so critical.",
"title": "The heart"
},
{
"paragraph_id": 31,
"text": "The cardiac examination (also called the \"precordial exam\"), is performed as part of a physical examination, or when a patient presents with chest pain suggestive of a cardiovascular pathology. It would typically be modified depending on the indication and integrated with other examinations especially the respiratory examination.",
"title": "The heart"
},
{
"paragraph_id": 32,
"text": "Like all medical examinations, the cardiac examination follows the standard structure of inspection, palpation and auscultation.",
"title": "The heart"
},
{
"paragraph_id": 33,
"text": "Cardiology is concerned with the normal functionality of the heart and the deviation from a healthy heart. Many disorders involve the heart itself, but some are outside of the heart and in the vascular system. Collectively, the two are jointly termed the cardiovascular system, and diseases of one part tend to affect the other.",
"title": "Heart disorders"
},
{
"paragraph_id": 34,
"text": "Coronary artery disease, also known as \"ischemic heart disease\", is a group of diseases that includes: stable angina, unstable angina, myocardial infarction, and is one of the causes of sudden cardiac death. It is within the group of cardiovascular diseases of which it is the most common type. A common symptom is chest pain or discomfort which may travel into the shoulder, arm, back, neck, or jaw. Occasionally it may feel like heartburn. Usually symptoms occur with exercise or emotional stress, last less than a few minutes, and get better with rest. Shortness of breath may also occur and sometimes no symptoms are present. The first sign is occasionally a heart attack. Other complications include heart failure or an irregular heartbeat.",
"title": "Heart disorders"
},
{
"paragraph_id": 35,
"text": "Risk factors include: high blood pressure, smoking, diabetes, lack of exercise, obesity, high blood cholesterol, poor diet, and excessive alcohol, among others. Other risks include depression. The underlying mechanism involves atherosclerosis of the arteries of the heart. A number of tests may help with diagnoses including: electrocardiogram, cardiac stress testing, coronary computed tomographic angiography, and coronary angiogram, among others.",
"title": "Heart disorders"
},
{
"paragraph_id": 36,
"text": "Prevention is by eating a healthy diet, regular exercise, maintaining a healthy weight and not smoking. Sometimes medication for diabetes, high cholesterol, or high blood pressure are also used. There is limited evidence for screening people who are at low risk and do not have symptoms. Treatment involves the same measures as prevention. Additional medications such as antiplatelets including aspirin, beta blockers, or nitroglycerin may be recommended. Procedures such as percutaneous coronary intervention (PCI) or coronary artery bypass surgery (CABG) may be used in severe disease. In those with stable CAD it is unclear if PCI or CABG in addition to the other treatments improve life expectancy or decreases heart attack risk.",
"title": "Heart disorders"
},
{
"paragraph_id": 37,
"text": "In 2013 CAD was the most common cause of death globally, resulting in 8.14 million deaths (16.8%) up from 5.74 million deaths (12%) in 1990. The risk of death from CAD for a given age has decreased between 1980 and 2010 especially in developed countries. The number of cases of CAD for a given age has also decreased between 1990 and 2010. In the U.S. in 2010 about 20% of those over 65 had CAD, while it was present in 7% of those 45 to 64, and 1.3% of those 18 to 45. Rates are higher among men than women of a given age.",
"title": "Heart disorders"
},
{
"paragraph_id": 38,
"text": "Heart failure or formally cardiomyopathy, is the impaired function of the heart and there are numerous causes and forms of heart failure.",
"title": "Heart disorders"
},
{
"paragraph_id": 39,
"text": "Cardiac arrhythmia, also known as \"cardiac dysrhythmia\" or \"irregular heartbeat\", is a group of conditions in which the heartbeat is too fast, too slow, or irregular in its rhythm. A heart rate that is too fast – above 100 beats per minute in adults – is called tachycardia. A heart rate that is too slow – below 60 beats per minute – is called bradycardia. Many types of arrhythmia present no symptoms. When symptoms are present, they may include palpitations, or feeling a pause between heartbeats. More serious symptoms may include lightheadedness, passing out, shortness of breath, or chest pain. While most types of arrhythmia are not serious, some predispose a person to complications such as stroke or heart failure. Others may result in cardiac arrest.",
"title": "Heart disorders"
},
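The adult heart-rate thresholds quoted in the paragraph above (tachycardia above 100 beats per minute, bradycardia below 60) reduce to a simple comparison. A minimal Python sketch of that arithmetic; the function name classify_heart_rate is a hypothetical illustration, not part of any clinical tool or library, and rate alone says nothing about rhythm regularity:

```python
# Illustrative only: applies the adult resting-heart-rate thresholds quoted above.
# classify_heart_rate is a hypothetical helper, not a diagnostic instrument.

def classify_heart_rate(bpm: float) -> str:
    """Label an adult resting heart rate using the >100 / <60 bpm thresholds."""
    if bpm > 100:
        return "tachycardia (too fast)"
    if bpm < 60:
        return "bradycardia (too slow)"
    return "within the normal rate range"

if __name__ == "__main__":
    for rate in (45, 72, 118):
        print(f"{rate} bpm -> {classify_heart_rate(rate)}")
```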
{
"paragraph_id": 40,
"text": "There are four main types of arrhythmia: extra beats, supraventricular tachycardias, ventricular arrhythmias, and bradyarrhythmias. Extra beats include premature atrial contractions, premature ventricular contractions, and premature junctional contractions. Supraventricular tachycardias include atrial fibrillation, atrial flutter, and paroxysmal supraventricular tachycardia. Ventricular arrhythmias include ventricular fibrillation and ventricular tachycardia. Arrhythmias are due to problems with the electrical conduction system of the heart. Arrhythmias may occur in children; however, the normal range for the heart rate is different and depends on age. A number of tests can help diagnose arrhythmia, including an electrocardiogram and Holter monitor.",
"title": "Heart disorders"
},
{
"paragraph_id": 41,
"text": "Most arrhythmias can be effectively treated. Treatments may include medications, medical procedures such as a pacemaker, and surgery. Medications for a fast heart rate may include beta blockers or agents that attempt to restore a normal heart rhythm such as procainamide. This later group may have more significant side effects especially if taken for a long period of time. Pacemakers are often used for slow heart rates. Those with an irregular heartbeat are often treated with blood thinners to reduce the risk of complications. Those who have severe symptoms from an arrhythmia may receive urgent treatment with a jolt of electricity in the form of cardioversion or defibrillation.",
"title": "Heart disorders"
},
{
"paragraph_id": 42,
"text": "Arrhythmia affects millions of people. In Europe and North America, as of 2014, atrial fibrillation affects about 2% to 3% of the population. Atrial fibrillation and atrial flutter resulted in 112,000 deaths in 2013, up from 29,000 in 1990. Sudden cardiac death is the cause of about half of deaths due to cardiovascular disease or about 15% of all deaths globally. About 80% of sudden cardiac death is the result of ventricular arrhythmias. Arrhythmias may occur at any age but are more common among older people.",
"title": "Heart disorders"
},
{
"paragraph_id": 43,
"text": "Cardiac arrest is a sudden stop in effective blood flow due to the failure of the heart to contract effectively. Symptoms include loss of consciousness and abnormal or absent breathing. Some people may have chest pain, shortness of breath, or nausea before this occurs. If not treated within minutes, death usually occurs.",
"title": "Heart disorders"
},
{
"paragraph_id": 44,
"text": "The most common cause of cardiac arrest is coronary artery disease. Less common causes include major blood loss, lack of oxygen, very low potassium, heart failure, and intense physical exercise. A number of inherited disorders may also increase the risk including long QT syndrome. The initial heart rhythm is most often ventricular fibrillation. The diagnosis is confirmed by finding no pulse. While a cardiac arrest may be caused by heart attack or heart failure these are not the same.",
"title": "Heart disorders"
},
{
"paragraph_id": 45,
"text": "Prevention includes not smoking, physical activity, and maintaining a healthy weight. Treatment for cardiac arrest is immediate cardiopulmonary resuscitation (CPR) and, if a shockable rhythm is present, defibrillation. Among those who survive targeted temperature management may improve outcomes. An implantable cardiac defibrillator may be placed to reduce the chance of death from recurrence.",
"title": "Heart disorders"
},
{
"paragraph_id": 46,
"text": "In the United States, cardiac arrest outside of hospital occurs in about 13 per 10,000 people per year (326,000 cases). In hospital cardiac arrest occurs in an additional 209,000 Cardiac arrest becomes more common with age. It affects males more often than females. The percentage of people who survive with treatment is about 8%. Many who survive have significant disability. Many U.S. television shows, however, have portrayed unrealistically high survival rates of 67%.",
"title": "Heart disorders"
},
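The incidence figures above are plain rate arithmetic (cases per year equal the rate per 10,000 people, divided by 10,000, times the population). A small sketch of that calculation; the population base is back-calculated from the article's own numbers (326,000 out-of-hospital cases at 13 per 10,000) and is not an external statistic:

```python
# Rate arithmetic for the out-of-hospital cardiac arrest figures quoted above.
# The implied population base is derived from the article's own numbers and is
# shown only to illustrate the formula, not as an epidemiological claim.

RATE_PER_10K = 13          # out-of-hospital arrests per 10,000 people per year
REPORTED_CASES = 326_000   # annual cases quoted in the text

def yearly_cases(rate_per_10k: float, population: float) -> float:
    """cases/year = rate per 10,000 people scaled up to the whole population."""
    return rate_per_10k / 10_000 * population

implied_population = REPORTED_CASES / RATE_PER_10K * 10_000
print(f"Implied population base: {implied_population:,.0f}")   # roughly 250.8 million
print(f"Cases/year at that base: {yearly_cases(RATE_PER_10K, implied_population):,.0f}")
```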
{
"paragraph_id": 47,
"text": "Hypertension, also known as \"high blood pressure\", is a long term medical condition in which the blood pressure in the arteries is persistently elevated. High blood pressure usually does not cause symptoms. Long term high blood pressure, however, is a major risk factor for coronary artery disease, stroke, heart failure, peripheral vascular disease, vision loss, and chronic kidney disease.",
"title": "Heart disorders"
},
{
"paragraph_id": 48,
"text": "Lifestyle factors can increase the risk of hypertension. These include excess salt in the diet, excess body weight, smoking, and alcohol consumption. Hypertension can also be caused by other diseases, or occur as a side-effect of drugs.",
"title": "Heart disorders"
},
{
"paragraph_id": 49,
"text": "Blood pressure is expressed by two measurements, the systolic and diastolic pressures, which are the maximum and minimum pressures, respectively. Normal blood pressure when at rest is within the range of 100–140 millimeters mercury (mmHg) systolic and 60–90 mmHg diastolic. High blood pressure is present if the resting blood pressure is persistently at or above 140/90 mmHg for most adults. Different numbers apply to children. When diagnosing high blood pressure, ambulatory blood pressure monitoring over a 24-hour period appears to be more accurate than \"in-office\" blood pressure measurement at a physician's office or other blood pressure screening location.",
"title": "Heart disorders"
},
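The resting ranges and the 140/90 mmHg adult threshold quoted above translate to a simple check on a pair of numbers. A minimal sketch, assuming a single clinic-style adult reading; classify_blood_pressure is a hypothetical name, one reading cannot establish persistent elevation, and pediatric thresholds are ignored:

```python
# Illustrative only: applies the adult resting thresholds quoted above
# (persistent readings at or above 140/90 mmHg count as high blood pressure).
# One reading cannot show persistence, so this is a sketch of the arithmetic,
# not a diagnostic rule.

def classify_blood_pressure(systolic: float, diastolic: float) -> str:
    if systolic >= 140 or diastolic >= 90:
        return "high (at or above 140/90 mmHg)"
    if 100 <= systolic <= 140 and 60 <= diastolic <= 90:
        return "within the quoted normal resting range"
    return "outside the quoted ranges (e.g. low readings)"

if __name__ == "__main__":
    for systolic, diastolic in ((118, 76), (142, 88), (96, 58)):
        print(f"{systolic}/{diastolic} mmHg -> {classify_blood_pressure(systolic, diastolic)}")
```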
{
"paragraph_id": 50,
"text": "Lifestyle changes and medications can lower blood pressure and decrease the risk of health complications. Lifestyle changes include weight loss, decreased salt intake, physical exercise, and a healthy diet. If changes in lifestyle are insufficient, blood pressure medications may be used. A regimen of up to three medications effectively controls blood pressure in 90% of people. The treatment of moderate to severe high arterial blood pressure (defined as >160/100 mmHg) with medication is associated with an improved life expectancy and reduced morbidity. The effect of treatment for blood pressure between 140/90 mmHg and 160/100 mmHg is less clear, with some studies finding benefits while others do not. High blood pressure affects between 16% and 37% of the population globally. In 2010, hypertension was believed to have been a factor in 18% (9.4 million) deaths.",
"title": "Heart disorders"
},
{
"paragraph_id": 51,
"text": "Essential hypertension is the form of hypertension that by definition has no identifiable cause. It is the most common type of hypertension, affecting 95% of hypertensive patients, it tends to be familial and is likely to be the consequence of an interaction between environmental and genetic factors. Prevalence of essential hypertension increases with age, and individuals with relatively high blood pressure at younger ages are at increased risk for the subsequent development of hypertension. Hypertension can increase the risk of cerebral, cardiac, and renal events.",
"title": "Heart disorders"
},
{
"paragraph_id": 52,
"text": "Secondary hypertension is a type of hypertension which is caused by an identifiable underlying secondary cause. It is much less common than essential hypertension, affecting only 5% of hypertensive patients. It has many different causes including endocrine diseases, kidney diseases, and tumors. It also can be a side effect of many medications.",
"title": "Heart disorders"
},
{
"paragraph_id": 53,
"text": "Complications of hypertension are clinical outcomes that result from persistent elevation of blood pressure. Hypertension is a risk factor for all clinical manifestations of atherosclerosis since it is a risk factor for atherosclerosis itself. It is an independent predisposing factor for heart failure, coronary artery disease, stroke, renal disease, and peripheral arterial disease. It is the most important risk factor for cardiovascular morbidity and mortality, in industrialized countries.",
"title": "Heart disorders"
},
{
"paragraph_id": 54,
"text": "A congenital heart defect, also known as a \"congenital heart anomaly\" or \"congenital heart disease\", is a problem in the structure of the heart that is present at birth. Signs and symptoms depend on the specific type of problem. Symptoms can vary from none to life-threatening. When present they may include rapid breathing, bluish skin, poor weight gain, and feeling tired. It does not cause chest pain. Most congenital heart problems do not occur with other diseases. Complications that can result from heart defects include heart failure.",
"title": "Heart disorders"
},
{
"paragraph_id": 55,
"text": "The cause of a congenital heart defect is often unknown. Certain cases may be due to infections during pregnancy such as rubella, use of certain medications or drugs such as alcohol or tobacco, parents being closely related, or poor nutritional status or obesity in the mother. Having a parent with a congenital heart defect is also a risk factor. A number of genetic conditions are associated with heart defects including Down syndrome, Turner syndrome, and Marfan syndrome. Congenital heart defects are divided into two main groups: cyanotic heart defects and non-cyanotic heart defects, depending on whether the child has the potential to turn bluish in color. The problems may involve the interior walls of the heart, the heart valves, or the large blood vessels that lead to and from the heart.",
"title": "Heart disorders"
},
{
"paragraph_id": 56,
"text": "Congenital heart defects are partly preventable through rubella vaccination, the adding of iodine to salt, and the adding of folic acid to certain food products. Some defects do not need treatment. Other may be effectively treated with catheter based procedures or heart surgery. Occasionally a number of operations may be needed. Occasionally heart transplantation is required. With appropriate treatment outcomes, even with complex problems, are generally good.",
"title": "Heart disorders"
},
{
"paragraph_id": 57,
"text": "Heart defects are the most common birth defect. In 2013 they were present in 34.3 million people globally. They affect between 4 and 75 per 1,000 live births depending upon how they are diagnosed. About 6 to 19 per 1,000 cause a moderate to severe degree of problems. Congenital heart defects are the leading cause of birth defect-related deaths. In 2013 they resulted in 323,000 deaths down from 366,000 deaths in 1990.",
"title": "Heart disorders"
},
{
"paragraph_id": 58,
"text": "Tetralogy of Fallot is the most common congenital heart disease arising in 1–3 cases per 1,000 births. The cause of this defect is a ventricular septal defect (VSD) and an overriding aorta. These two defects combined causes deoxygenated blood to bypass the lungs and going right back into the circulatory system. The modified Blalock-Taussig shunt is usually used to fix the circulation. This procedure is done by placing a graft between the subclavian artery and the ipsilateral pulmonary artery to restore the correct blood flow.",
"title": "Heart disorders"
},
{
"paragraph_id": 59,
"text": "Pulmonary atresia happens in 7–8 per 100,000 births and is characterized by the aorta branching out of the right ventricle. This causes the deoxygenated blood to bypass the lungs and enter the circulatory system. Surgeries can fix this by redirecting the aorta and fixing the right ventricle and pulmonary artery connection.",
"title": "Heart disorders"
},
{
"paragraph_id": 60,
"text": "There are two types of pulmonary atresia, classified by whether or not the baby also has a ventricular septal defect.",
"title": "Heart disorders"
},
{
"paragraph_id": 61,
"text": "Double outlet right ventricle (DORV) is when both great arteries, the pulmonary artery and the aorta, are connected to the right ventricle. There is usually a VSD in different particular places depending on the variations of DORV, typically 50% are subaortic and 30%. The surgeries that can be done to fix this defect can vary due to the different physiology and blood flow in the defected heart. One way it can be cured is by a VSD closure and placing conduits to restart the blood flow between the left ventricle and the aorta and between the right ventricle and the pulmonary artery. Another way is systemic-to-pulmonary artery shunt in cases associated with pulmonary stenosis. Also, a balloon atrial septostomy can be done to relieve hypoxemia caused by DORV with the Taussig-Bing anomaly while surgical correction is awaited.",
"title": "Heart disorders"
},
{
"paragraph_id": 62,
"text": "There are two different types of transposition of the great arteries, Dextro-transposition of the great arteries and Levo-transposition of the great arteries, depending on where the chambers and vessels connect. Dextro-transposition happens in about 1 in 4,000 newborns and is when the right ventricle pumps blood into the aorta and deoxygenated blood enters the bloodstream. The temporary procedure is to create an atrial septal defect. A permanent fix is more complicated and involves redirecting the pulmonary return to the right atrium and the systemic return to the left atrium, which is known as the Senning procedure. The Rastelli procedure can also be done by rerouting the left ventricular outflow, dividing the pulmonary trunk, and placing a conduit in between the right ventricle and pulmonary trunk. Levo-transposition happens in about 1 in 13,000 newborns and is characterized by the left ventricle pumping blood into the lungs and the right ventricle pumping the blood into the aorta. This may not produce problems at the beginning, but will eventually due to the different pressures each ventricle uses to pump blood. Switching the left ventricle to be the systemic ventricle and the right ventricle to pump blood into the pulmonary artery can repair levo-transposition.",
"title": "Heart disorders"
},
{
"paragraph_id": 63,
"text": "Persistent truncus arteriosus is when the truncus arteriosus fails to split into the aorta and pulmonary trunk. This occurs in about 1 in 11,000 live births and allows both oxygenated and deoxygenated blood into the body. The repair consists of a VSD closure and the Rastelli procedure.",
"title": "Heart disorders"
},
{
"paragraph_id": 64,
"text": "Ebstein's anomaly is characterized by a right atrium that is significantly enlarged and a heart that is shaped like a box. This is very rare and happens in less than 1% of congenital heart disease cases. The surgical repair varies depending on the severity of the disease.",
"title": "Heart disorders"
},
{
"paragraph_id": 65,
"text": "Pediatric cardiology is a sub-specialty of pediatrics. To become a pediatric cardiologist in the U.S., one must complete a three-year residency in pediatrics, followed by a three-year fellowship in pediatric cardiology. Per doximity, pediatric cardiologists make an average of $303,917 in the U.S.",
"title": "Heart disorders"
},
{
"paragraph_id": 66,
"text": "Diagnostic tests in cardiology are the methods of identifying heart conditions associated with healthy vs. unhealthy, pathologic heart function. The starting point is obtaining a medical history, followed by Auscultation. Then blood tests, electrophysiological procedures, and cardiac imaging can be ordered for further analysis. Electrophysiological procedures include electrocardiogram, cardiac monitoring, cardiac stress testing, and the electrophysiology study.",
"title": "Diagnostic tests in cardiology"
},
{
"paragraph_id": 67,
"text": "Cardiology is known for randomized controlled trials that guide clinical treatment of cardiac diseases. While dozens are published every year, there are landmark trials that shift treatment significantly. Trials often have an acronym of the trial name, and this acronym is used to reference the trial and its results. Some of these landmark trials include:",
"title": "Trials"
}
] | Cardiology is the study of the heart. Cardiology is a branch of medicine that deals with disorders of the heart and the cardiovascular system. The field includes medical diagnosis and treatment of congenital heart defects, coronary artery disease, heart failure, valvular heart disease, and electrophysiology. Physicians who specialize in this field of medicine are called cardiologists, a specialty of internal medicine. Pediatric cardiologists are pediatricians who specialize in cardiology. Physicians who specialize in cardiac surgery are called cardiothoracic surgeons or cardiac surgeons, a specialty of general surgery. | 2001-07-27T13:39:51Z | 2023-12-07T12:32:42Z | [
"Template:Heart diseases",
"Template:Short description",
"Template:Citation needed",
"Template:Further",
"Template:Cite book",
"Template:Clear",
"Template:Reflist",
"Template:Cite web",
"Template:Cite news",
"Template:About",
"Template:Infobox medical speciality",
"Template:Main category",
"Template:Portal",
"Template:Medicine",
"Template:Cardiovascular system",
"Template:Infobox Occupation",
"Template:Expand section",
"Template:Citation",
"Template:Wiktionary",
"Template:Cardiovascular system symptoms and signs",
"Template:Cardiac procedures",
"Template:Authority control",
"Template:Ety",
"Template:Main",
"Template:Nowrap",
"Template:Cite journal"
] | https://en.wikipedia.org/wiki/Cardiology |
5,422 | Capcom | Capcom Co., Ltd. (Japanese: 株式会社カプコン, Hepburn: Kabushiki-gaisha Kapukon) is a Japanese video game company. It has created a number of multi-million-selling game franchises, with its most commercially successful being Resident Evil, Monster Hunter, Street Fighter, Mega Man, Devil May Cry, Dead Rising, Ace Attorney, and Marvel vs. Capcom. Mega Man himself serves as the official mascot of the company. Established in 1979, it has become an international enterprise with subsidiaries in East Asia (Hong Kong), Europe (London, England), and North America (San Francisco, California).
Capcom's predecessor, I.R.M. Corporation, was founded on May 30, 1979 by Kenzo Tsujimoto, who was still president of Irem Corporation when he founded I.R.M. He worked concomitantly in both companies until leaving the former in 1983.
The original companies that spawned Capcom's Japan branch were I.R.M. and its subsidiary Japan Capsule Computers Co., Ltd., both of which were devoted to the manufacture and distribution of electronic game machines. The two companies underwent a name change to Sanbi Co., Ltd. in September 1981. On June 11, 1983, Tsujimoto established Capcom Co., Ltd. for the purpose of taking over the internal sales department.
In January 1989, Capcom Co., Ltd. merged with Sanbi Co., Ltd., resulting in the current Japan branch. The name Capcom is a clipped compound of "Capsule Computers", a term coined by the company for the arcade machines it solely manufactured in its early years, designed to set themselves apart from personal computers that were becoming widespread. "Capsule" alludes to how Capcom likened its game software to "a capsule packed to the brim with gaming fun", and to the company's desire to protect its intellectual property with a hard outer shell, preventing illegal copies and inferior imitations.
Capcom's first product was the medal game Little League (1983). It released its first arcade video game, Vulgus (May 1984). Starting with the arcade hit 1942 (1984), they began designing games with international markets in mind. The successful 1985 arcade games Commando and Ghosts 'n Goblins have been credited as the products "that shot [Capcom] to 8-bit silicon stardom" in the mid-1980s. Starting with Commando (late 1985), Capcom began licensing their arcade games for release on home computers, notably to British software houses Elite Systems and U.S. Gold in the late 1980s.
Beginning with a Nintendo Entertainment System port of 1942 (published in Dec. 1985), the company ventured into the market of home console video games, which would eventually become its main business. The Capcom USA division had a brief stint in the late 1980s as a video game publisher for Commodore 64 and IBM PC DOS computers, although development of these arcade ports was handled by other companies. Capcom went on to create 15 multi-million-selling home video game franchises, with the best-selling being Resident Evil (1996). Their highest-grossing is the fighting game Street Fighter II (1991), driven largely by its success in arcades.
In the late 1980s, Capcom was on the verge of bankruptcy when the development of a strip Mahjong game called Mahjong Gakuen started. It outsold Ghouls 'n Ghosts, the eighth highest-grossing arcade game of 1989 in Japan, and is credited with saving the company from financial crisis.
Capcom has been noted as the last major publisher to be committed to 2D games, though it was not entirely by choice. The company's commitment to the Super Nintendo Entertainment System as its platform of choice caused them to lag behind other leading publishers in developing 3D-capable arcade boards. Also, the 2D animated cartoon-style graphics seen in games such as Darkstalkers: The Night Warriors and X-Men: Children of the Atom proved popular, leading Capcom to adopt them as a signature style and use them in more games.
In 1990, Capcom entered the bowling industry with Bowlingo, a coin-operated, electro-mechanical, fully automated mini ten-pin bowling installation. It was smaller and cheaper than a standard bowling alley, designed for amusement arcades. Bowlingo drew significant earnings in North America upon its release in 1990.
In 1994, Capcom adapted its Street Fighter series of fighting games into a film of the same name. While commercially successful, it was critically panned. A 2002 adaptation of its Resident Evil series faced similar criticism but was also successful in theaters. The company sees films as a way to build sales for its video games.
Capcom partnered with Nyu Media in 2011 to publish and distribute the Japanese independent (dōjin soft) games that Nyu localized into English. The company works with the Polish localization company QLOC to port Capcom's games to other platforms; notable examples include DmC: Devil May Cry's PC version and its PlayStation 4 and Xbox One remasters, Dragon's Dogma's PC version, and Dead Rising's version on PlayStation 4, Xbox One, and PC.
In 2012, Capcom came under criticism for controversial sales tactics, such as the implementation of disc-locked content, which requires players to pay for additional content that is already available within the game's files, most notably in Street Fighter X Tekken. The company defended the practice. It has also been criticized for other business decisions, such as not releasing certain games outside of Japan (most notably the Sengoku Basara series), abruptly cancelling anticipated projects (most notably Mega Man Legends 3), and shutting down Clover Studio.
On August 27, 2014, Capcom filed a patent infringement lawsuit against Koei Tecmo Games at the Osaka District Court, seeking 980 million yen in damages. Capcom claimed Koei Tecmo had infringed a patent it obtained in 2002 regarding a play feature in video games.
In 2015, the PlayStation 4 version of Ultra Street Fighter IV was pulled from the Capcom Pro Tour due to numerous technical issues and bugs. In 2016, Capcom released Street Fighter V with very limited single player content. At launch, there were stability issues with the game's network that booted players mid-game even when they were not playing in an online mode. Street Fighter V failed to meet its sales target of 2 million in March 2016.
On 2 November 2020, the company reported that its servers were affected by ransomware, scrambling its data, and the threat actors, the Ragnar Locker hacker group, had allegedly stolen 1TB of sensitive corporate data and were blackmailing Capcom to pay them to remove the ransomware. By mid-November, the group began putting information from the hack online, which included contact information for up to 350,000 of the company's employees and partners, as well as plans for upcoming games, indicating that Capcom opted to not pay the group. Capcom affirmed that no credit-card or other sensitive financial information was obtained in the hack.
In 2021, Capcom removed appearances of the Rising Sun Flag from its rerelease of Street Fighter II. Although Capcom did not provide an official explanation for the removal, it is speculated that, given the controversy surrounding the flag, the change was made to avoid offending segments of the international gaming community.
Artist and author Judy A. Juracek filed a lawsuit in June 2021 against Capcom for copyright infringement. In the court filings, she asserted Capcom had used images from her 1996 book Surfaces in their cover art and other assets for Resident Evil 4, Devil May Cry and other games. This was discovered due to the 2020 Capcom data breach, with several files and images matching those included on the book's companion CD-ROM. The court filings noted that one image file of a metal surface, named ME0009 in Capcom's files, had exactly the same name on the book's CD-ROM. Juracek sought over $12 million in damages and $2,500 to $25,000 for false copyright management for each photograph Capcom used. Before a court date could be set, the matter was settled "amicably" in February 2022. The suit came on the heels of Capcom being accused by Dutch film director Richard Raaphorst of copying the monster design of his movie Frankenstein's Army in their game Resident Evil Village.
In February 2022, it was reported by Bloomberg that Saudi Arabia's Public Investment Fund had purchased a 5% stake in Capcom, for an approximate value of US$332 million.
In July 2023, Capcom acquired Tokyo-based computer graphics studio Swordcanes Studio.
In its first few years, Capcom's Japan branch had three development groups referred to as "Planning Rooms", led by Tokuro Fujiwara, Takashi Nishiyama and Yoshiki Okamoto. Later, games developed internally were created by several numbered "Production Studios", each assigned to different games. Starting in 2002, the development process was reformed to better share technologies and expertise, and the individual studios were gradually restructured into bigger departments responsible for different tasks. While there are self-contained departments for the creation of arcade, pachinko and pachislo, online, and mobile games, the Consumer Games R&D Division is an amalgamation of subsections in charge of game development stages.
Capcom has two internal Consumer Games Development divisions:
In addition to these teams, Capcom commissions outside development studios to ensure a steady output of titles. However, following poor sales of Dark Void and Bionic Commando, its management has decided to limit outsourcing to sequels and newer versions of installments in existing franchises, reserving the development of original titles for its in-house teams. The production of games, budgets, and platform support are decided on in development approval meetings, attended by the company management and the marketing, sales and quality control departments.
Although the company often relies on existing franchises, it has also published and developed several titles for the Xbox 360, PlayStation 3, and Wii based on original intellectual property: Lost Planet: Extreme Condition, Dead Rising, Dragon's Dogma, Asura's Wrath, and Zack and Wiki. During this period, Capcom also helped publish several original titles from up-and-coming Western developers, including Remember Me, Dark Void, and Spyborgs, titles other publishers were not willing to gamble on. Other games of note are the titles Ōkami, Ōkamiden, and Ghost Trick: Phantom Detective.
Capcom Co., Ltd.'s head office building and R&D building are in Chūō-ku, Osaka. The parent company also has a branch office in the Shinjuku Mitsui Building in Nishi-Shinjuku, Shinjuku, Tokyo; and the Ueno Facility, a branch office in Iga, Mie Prefecture.
The international Capcom Group encompasses 12 subsidiaries in Japan, the rest of East Asia, North America, and Europe.
In addition to home, online, mobile, arcade, pachinko, and pachislot games, Capcom publishes strategy guides; maintains its own Plaza Capcom arcade centers in Japan; and licenses its franchise and character properties for tie-in products, movies, television series, and stage performances.
Suleputer, an in-house marketing and music label established in cooperation with Sony Music Entertainment Intermedia in 1998, publishes CDs, DVDs, and other media based on Capcom's games. Captivate (renamed from Gamers Day in 2008), an annual private media summit, is traditionally used for new game and business announcements.
Capcom started its Street Fighter franchise in 1987. The series of fighting games are among the most popular in their genre. Having sold more than 50 million copies, it is one of Capcom's flagship franchises. The company also introduced its Mega Man series in 1987, which has sold more than 40 million copies.
The company released the first entry in its Resident Evil survival horror series in 1996, which became its most successful game series, selling 150 million copies. After releasing the second entry in the Resident Evil series, Capcom began developing a Resident Evil game for the PlayStation 2. As it was significantly different from the existing series' games, Capcom decided to spin it off into its own series, Devil May Cry. The first three entries were exclusively for the PlayStation 2; further entries were released for non-Sony consoles. The entire series has sold 30 million copies. Capcom began its Monster Hunter series in 2004, which has sold more than 90 million copies on a variety of consoles.
Capcom compiles a "Platinum Titles" list, updated quarterly, of its games that have sold over one million copies. It contains over 100 video games. This table shows the top ten titles, by sold copies, as of September 30, 2023. | [
{
"paragraph_id": 0,
"text": "Capcom Co., Ltd. (Japanese: 株式会社カプコン, Hepburn: Kabushiki-gaisha Kapukon) is a Japanese video game company. It has created a number of multi-million-selling game franchises, with its most commercially successful being Resident Evil, Monster Hunter, Street Fighter, Mega Man, Devil May Cry, Dead Rising, Ace Attorney, and Marvel vs. Capcom. Mega Man himself serves as the official mascot of the company. Established in 1979, it has become an international enterprise with subsidiaries in East Asia (Hong Kong), Europe (London, England), and North America (San Francisco, California).",
"title": ""
},
{
"paragraph_id": 1,
"text": "Capcom's predecessor, I.R.M. Corporation, was founded on May 30, 1979 by Kenzo Tsujimoto, who was still president of Irem Corporation when he founded I.R.M. He worked concomitantly in both companies until leaving the former in 1983.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "The original companies that spawned Capcom's Japan branch were I.R.M. and its subsidiary Japan Capsule Computers Co., Ltd., both of which were devoted to the manufacture and distribution of electronic game machines. The two companies underwent a name change to Sanbi Co., Ltd. in September 1981. On June 11, 1983, Tsujimoto established Capcom Co., Ltd. for the purpose of taking over the internal sales department.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "In January 1989, Capcom Co., Ltd. merged with Sanbi Co., Ltd., resulting in the current Japan branch. The name Capcom is a clipped compound of \"Capsule Computers\", a term coined by the company for the arcade machines it solely manufactured in its early years, designed to set themselves apart from personal computers that were becoming widespread. \"Capsule\" alludes to how Capcom likened its game software to \"a capsule packed to the brim with gaming fun\", and to the company's desire to protect its intellectual property with a hard outer shell, preventing illegal copies and inferior imitations.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Capcom's first product was the medal game Little League (1983). It released its first arcade video game, Vulgus (May 1984). Starting with the arcade hit 1942 (1984), they began designing games with international markets in mind. The successful 1985 arcade games Commando and Ghosts 'n Goblins have been credited as the products \"that shot [Capcom] to 8-bit silicon stardom\" in the mid-1980s. Starting with Commando (late 1985), Capcom began licensing their arcade games for release on home computers, notably to British software houses Elite Systems and U.S. Gold in the late 1980s.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Beginning with a Nintendo Entertainment System port of 1942 (published in Dec. 1985), the company ventured into the market of home console video games, which would eventually become its main business. The Capcom USA division had a brief stint in the late 1980s as a video game publisher for Commodore 64 and IBM PC DOS computers, although development of these arcade ports was handled by other companies. Capcom went on to create 15 multi-million-selling home video game franchises, with the best-selling being Resident Evil (1996). Their highest-grossing is the fighting game Street Fighter II (1991), driven largely by its success in arcades.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In the late 1980s, Capcom was on the verge of bankruptcy when the development of a strip Mahjong game called Mahjong Gakuen started. It outsold Ghouls 'n Ghosts, the eighth highest-grossing arcade game of 1989 in Japan, and is credited with saving the company from financial crisis.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Capcom has been noted as the last major publisher to be committed to 2D games, though it was not entirely by choice. The company's commitment to the Super Nintendo Entertainment System as its platform of choice caused them to lag behind other leading publishers in developing 3D-capable arcade boards. Also, the 2D animated cartoon-style graphics seen in games such as Darkstalkers: The Night Warriors and X-Men: Children of the Atom proved popular, leading Capcom to adopt them as a signature style and use them in more games.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In 1990, Capcom entered the bowling industry with Bowlingo. It was a coin-operated, electro-mechanical, fully automated mini ten-pin bowling installation. It was smaller than a standard bowling alley, designed to be smaller and cheaper for amusement arcades. Bowlingo drew significant earnings in North America upon release in 1990.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In 1994, Capcom adapted its Street Fighter series of fighting games into a film of the same name. While commercially successful, it was critically panned. A 2002 adaptation of its Resident Evil series faced similar criticism but was also successful in theaters. The company sees films as a way to build sales for its video games.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Capcom partnered with Nyu Media in 2011 to publish and distribute the Japanese independent (dōjin soft) games that Nyu localized into the English language. The company works with the Polish localization company QLOC to port Capcom's games to other platforms; notably, examples are DmC: Devil May Cry's PC version and its PlayStation 4 and Xbox One remasters, Dragon's Dogma's PC version, and Dead Rising's version on PlayStation 4, Xbox One, and PC.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In 2012, Capcom came under criticism for controversial sales tactics, such as the implementation of disc-locked content, which requires players to pay for additional content that is already available within the game's files, most notably in Street Fighter X Tekken. The company defended the practice. It has also been criticized for other business decisions, such as not releasing certain games outside of Japan (most notably the Sengoku Basara series), abruptly cancelling anticipated projects (most notably Mega Man Legends 3), and shutting down Clover Studio.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "On August 27, 2014, Capcom filed a patent infringement lawsuit against Koei Tecmo Games at the Osaka District Court for 980 million yen in damage. Capcom claimed Koei Tecmo infringed a patent it obtained in 2002 regarding a play feature in video games.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In 2015, the PlayStation 4 version of Ultra Street Fighter IV was pulled from the Capcom Pro Tour due to numerous technical issues and bugs. In 2016, Capcom released Street Fighter V with very limited single player content. At launch, there were stability issues with the game's network that booted players mid-game even when they were not playing in an online mode. Street Fighter V failed to meet its sales target of 2 million in March 2016.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "On 2 November 2020, the company reported that its servers were affected by ransomware, scrambling its data, and the threat actors, the Ragnar Locker hacker group, had allegedly stolen 1TB of sensitive corporate data and were blackmailing Capcom to pay them to remove the ransomware. By mid-November, the group began putting information from the hack online, which included contact information for up to 350,000 of the company's employees and partners, as well as plans for upcoming games, indicating that Capcom opted to not pay the group. Capcom affirmed that no credit-card or other sensitive financial information was obtained in the hack.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In 2021, Capcom removed appearances of the Rising Sun Flag from their rerelease of Street Fighter II. Although Capcom did not provide an official explanation for the flag's removal, due to the flag-related controversy, it is speculated that it was done so to avoid offending segments of the international gaming community.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Artist and author Judy A. Juracek filed a lawsuit in June 2021 against Capcom for copyright infringement. In the court filings, she asserted Capcom had used images from her 1996 book Surfaces in their cover art and other assets for Resident Evil 4, Devil May Cry and other games. This was discovered due to the 2020 Capcom data breach, with several files and images matching those that were included within the book's companion CD-ROM. The court filings noted one image file of a metal surface, named ME0009 in Capcom's files, to have the same exact name on the book's CD-ROM. Juracek was seeking over $12 million in damages and $2,500 to $25,000 in false copyright management for each photograph Capcom used. Before a court date could be made, the matter was settled \"amicably\" in February 2022. It comes on the heels of Capcom being accused by Dutch movie director Richard Raaphorst of copying the monster design of his movie Frankenstein's Army into their game Resident Evil Village.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In February 2022, it was reported by Bloomberg that Saudi Arabia's Public Investment Fund had purchased a 5% stake in Capcom, for an approximate value of US$332 million.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In July 2023, Capcom acquired Tokyo-based computer graphics studio Swordcanes Studio.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In its beginning few years, Capcom's Japan branch had three development groups referred to as \"Planning Rooms\", led by Tokuro Fujiwara, Takashi Nishiyama and Yoshiki Okamoto. Later, games developed internally were created by several numbered \"Production Studios\", each assigned to different games. Starting in 2002, the development process was reformed to better share technologies and expertise, and the individual studios were gradually restructured into bigger departments responsible for different tasks. While there are self-contained departments for the creation of arcade, pachinko and pachislo, online, and mobile games, the Consumer Games R&D Division is an amalgamation of subsections in charge of game development stages.",
"title": "Corporate structure"
},
{
"paragraph_id": 20,
"text": "Capcom has two internal Consumer Games Development divisions:",
"title": "Corporate structure"
},
{
"paragraph_id": 21,
"text": "In addition to these teams, Capcom commissions outside development studios to ensure a steady output of titles. However, following poor sales of Dark Void and Bionic Commando, its management has decided to limit outsourcing to sequels and newer versions of installments in existing franchises, reserving the development of original titles for its in-house teams. The production of games, budgets, and platform support are decided on in development approval meetings, attended by the company management and the marketing, sales and quality control departments.",
"title": "Corporate structure"
},
{
"paragraph_id": 22,
"text": "Although the company often relies on existing franchises, it has also published and developed several titles for the Xbox 360, PlayStation 3, and Wii based on original intellectual property: Lost Planet: Extreme Condition, Dead Rising, Dragon's Dogma, Asura's Wrath, and Zack and Wiki. During this period, Capcom also helped publish several original titles from up-and-coming Western developers, including Remember Me, Dark Void, and Spyborgs, titles other publishers were not willing to gamble on. Other games of note are the titles Ōkami, Ōkamiden, and Ghost Trick: Phantom Detective.",
"title": "Corporate structure"
},
{
"paragraph_id": 23,
"text": "Capcom Co., Ltd.'s head office building and R&D building are in Chūō-ku, Osaka. The parent company also has a branch office in the Shinjuku Mitsui Building in Nishi-Shinjuku, Shinjuku, Tokyo; and the Ueno Facility, a branch office in Iga, Mie Prefecture.",
"title": "Corporate structure"
},
{
"paragraph_id": 24,
"text": "The international Capcom Group encompasses 12 subsidiaries in Japan, rest of East Asia, North America, and Europe.",
"title": "Corporate structure"
},
{
"paragraph_id": 25,
"text": "In addition to home, online, mobile, arcade, pachinko, and pachislot games, Capcom publishes strategy guides; maintains its own Plaza Capcom arcade centers in Japan; and licenses its franchise and character properties for tie-in products, movies, television series, and stage performances.",
"title": "Corporate structure"
},
{
"paragraph_id": 26,
"text": "Suleputer, an in-house marketing and music label established in cooperation with Sony Music Entertainment Intermedia in 1998, publishes CDs, DVDs, and other media based on Capcom's games. Captivate (renamed from Gamers Day in 2008), an annual private media summit, is traditionally used for new game and business announcements.",
"title": "Corporate structure"
},
{
"paragraph_id": 27,
"text": "Capcom started its Street Fighter franchise in 1987. The series of fighting games are among the most popular in their genre. Having sold more than 50 million copies, it is one of Capcom's flagship franchises. The company also introduced its Mega Man series in 1987, which has sold more than 40 million copies.",
"title": "Game sales"
},
{
"paragraph_id": 28,
"text": "The company released the first entry in its Resident Evil survival horror series in 1996, which become its most successful game series, selling 150 million copies. After releasing the second entry in the Resident Evil series, Capcom began a Resident Evil game for PlayStation 2. As it was significantly different from the existing series' games, Capcom decided to spin it into its own series, Devil May Cry. The first three entries were exclusively for PlayStation 2; further entries were released for non-Sony consoles. The entire series has sold 30 million copies. Capcom began its Monster Hunter series in 2004, which has sold more than 90 million copies on a variety of consoles.",
"title": "Game sales"
},
{
"paragraph_id": 29,
"text": "Capcom compiles a \"Platinum Titles\" list, updated quarterly, of its games that have sold over one million copies. It contains over 100 video games. This table shows the top ten titles, by sold copies, as of September 30, 2023.",
"title": "Game sales"
}
] | Capcom Co., Ltd. is a Japanese video game company. It has created a number of multi-million-selling game franchises, with its most commercially successful being Resident Evil, Monster Hunter, Street Fighter, Mega Man, Devil May Cry, Dead Rising, Ace Attorney, and Marvel vs. Capcom. Mega Man himself serves as the official mascot of the company. Established in 1979, it has become an international enterprise with subsidiaries in East Asia, Europe, and North America. | 2001-04-15T17:08:56Z | 2023-12-26T19:52:45Z | [
"Template:For-text",
"Template:Infobox company",
"Template:Dts",
"Template:Cite web",
"Template:Efn",
"Template:Notelist",
"Template:Cite journal",
"Template:Pp-move",
"Template:Nihongo",
"Template:Primary source inline",
"Template:USD",
"Template:Main",
"Template:Webarchive",
"Template:Short description",
"Template:Cite news",
"Template:Cite video game",
"Template:Portal bar",
"Template:Cite book",
"Template:Franchises by Capcom",
"Template:Authority control",
"Template:Use mdy dates",
"Template:'",
"Template:Abbr",
"Template:Reflist",
"Template:Cite magazine"
] | https://en.wikipedia.org/wiki/Capcom |