Controlling blood glucose early in the course of type 1 diabetes yields huge dividends, preserving kidney function for decades. The new finding from a study funded by the National Institutes of Health was published online in the New England Journal of Medicine Nov. 12 to coincide with presentation at a scientific meeting. Compared to conventional therapy, near-normal control of blood glucose beginning soon after diagnosis of type 1 diabetes and continuing an average six and a half years reduced by half the long-term risk of developing kidney disease, according to the Diabetes Control and Complications Trial (DCCT) and Epidemiology of Diabetes Interventions and Complications (EDIC) Research Group. The risk of kidney failure was also halved, but the difference was not statistically significant, perhaps due to the relatively small total number of patients who reached that stage of the disease. Participants entered the DCCT on average six years after onset of diabetes when complications of diabetes were absent or very mild. Half aimed for near-normal glucose control (intensive therapy) and the others received what was then standard glucose control. After an average 22-year follow-up, 24 in the intensive group developed significantly reduced kidney function and 8 progressed to kidney failure requiring dialysis or transplantation. On conventional therapy, 46 developed kidney disease, with kidney failure in 16. The landmark DCCT demonstrated that intensive control reduced early signs of eye, kidney and nerve damage and is the basis for current guidelines for diabetes therapy. However, the initial kidney findings were based on reductions in urine protein, a sign of kidney damage but not a measure of kidney function. Preventing a loss of kidney function and reducing kidney failure had not been proven. Since the DCCT ended in 1993, all participants have tried to maintain excellent diabetes control and have achieved similar glucose levels. The new finding emphasizes the importance of good control of type 1 diabetes soon after diagnosis. "Achieving near-normal glucose levels in type 1 diabetes can be challenging. But our study provides strong evidence that reinforces the benefits of reaching the goal as early as possible to slow or prevent kidney disease and other complications," said first author Ian H. de Boer, M.D., a kidney specialist at the University of Washington, Seattle. He is scheduled to present the findings Nov. 12, 2011, at the American Society of Nephrology's annual meeting in Philadelphia. The DCCT, conducted from 1983 to 1993 in 1,441 people with type 1 diabetes, found that intensive glucose control was superior to conventional control in delaying or preventing complications overall. EDIC continues to follow 1,375 DCCT participants to determine the long-term effects of the therapies beyond the initial treatment period. Other reports have bolstered support for intensive treatment to reduce the risk of heart disease, stroke and eye and nerve damage associated with diabetes. "The DCCT and EDIC studies illustrate the value of long-term studies. The full benefit of treatment may not be seen for decades, especially for complications of diabetes, such as kidney disease, which can progress slowly but have devastating consequences," said Griffin P. Rodgers, M.D., director of the NIH's National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), which oversaw the research. 
"Not only has NIH-sponsored research shown the benefits of early glucose control, it has provided new tools to help people with type 1 diabetes achieve that control and live longer and healthier lives." The DCCT compared intensive to conventional control of blood glucose in people with type 1 diabetes. At the time, conventional treatment was one or two insulin injections a day with daily urine or blood glucose testing. Participants randomly assigned to intensive treatment were asked to keep glucose levels as near normal as possible. That meant trying to keep hemoglobin A1c (A1C) readings at 6 percent or less with at least three insulin injections a day or an insulin pump, guided by frequent self-monitoring of blood glucose. (A1C reflects average blood glucose over the previous two to three months.) ### Nearly 26 million Americans have diabetes. In adults, type 1 diabetes accounts for 5 to 10 percent of all diagnosed cases of the disease. Formerly called juvenile-onset or insulin-dependent diabetes, type 1 diabetes develops when the body's immune system destroys pancreatic beta cells, the only cells in the body that make the hormone insulin that regulates blood glucose. Type 1 diabetes usually arises in children and young adults but can occur at any age. Management involves keeping blood glucose levels as close to normal as possible with three or more insulin injections a day or treatment with an insulin pump, careful monitoring of glucose, and close attention to diet and exercise. Type 2 diabetes, or adult-onset diabetes, accounts for about 90 to 95 percent of all diabetes diagnosed in adults. It usually begins as insulin resistance, a disorder in which the cells do not use insulin properly. As the need for insulin rises, the pancreas gradually loses its ability to produce it. Type 2 diabetes is associated with older age, obesity, family history of diabetes, history of gestational diabetes, impaired glucose metabolism, physical inactivity, and race/ethnicity. African-Americans, Hispanic/Latino-Americans, American Indians, and some Asian-Americans and Native Hawaiians or other Pacific Islanders are at particularly high risk for type 2 diabetes and its complications. Chronic kidney disease can lead to kidney failure, also called end-stage renal disease, requiring dialysis or a kidney transplant for survival. Chronic kidney disease affects more than 10 percent of Americans over age 20 and 35 percent of those over age 20 with diabetes. People with diabetes and chronic kidney disease account for 26.1 percent, or $18 billion, of Medicare costs for diabetes. Diabetes is the leading cause of kidney failure, accounting for nearly 38 percent (215,000) of Americans on dialysis or living with a kidney transplant. Each year 110,000 patients in the United States start treatment for kidney failure. These lifesaving treatments cost $42.5 billion annually. The DCCT is registered as NCT00360815, and EDIC is registered as NCT00360893 in clinicaltrials.gov. NIDDK and other NIH components supporting DCCT/EDIC are the National Eye Institute, the National Institute of Neurological Disorders and Stroke and the National Center for Research Resources. Genentech contributed to the DCCT/EDIC through a Cooperative Research and Development Agreement with the NIDDK. Lifescan, Roche, Aventis, Eli Lilly, Omnipod, Can-Am, B-D, Animas, Medtronic, Medtronic Minimed, Bayer and Omron contributed free or discounted supplies to the DCCT/EDIC. 
The NIDDK, a component of the NIH, conducts and supports research on diabetes and other endocrine and metabolic diseases; digestive diseases, nutrition and obesity; and kidney, urologic and hematologic diseases. Spanning the full spectrum of medicine and afflicting people of all ages and ethnic groups, these diseases encompass some of the most common, severe and disabling conditions affecting Americans. For more information about the NIDDK and its programs, see http://www.niddk.nih.gov. Education programs for diabetes and kidney disease offer information and resources for patients and health professionals.
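For readers who want to see how the "halved risk" summary relates to the raw case counts reported above (24 vs 46 for reduced kidney function, 8 vs 16 for kidney failure), a minimal back-of-the-envelope check is sketched below. It assumes the two DCCT arms were roughly equal in size, consistent with the release's statement that half of the participants aimed for near-normal glucose control; the study itself used formal time-to-event statistics, so this is only an illustrative cross-check.

```python
# Back-of-the-envelope check of the "risk halved" figures in the release above.
# Assumption: the two DCCT arms were roughly equal in size (the release says
# half of the 1,441 participants aimed for near-normal glucose control), so
# comparing raw case counts approximates comparing risks. The published
# analysis uses proper time-to-event statistics; this is illustration only.

cases_intensive = {"reduced kidney function": 24, "kidney failure": 8}
cases_conventional = {"reduced kidney function": 46, "kidney failure": 16}

for outcome, n_intensive in cases_intensive.items():
    ratio = n_intensive / cases_conventional[outcome]
    print(f"{outcome}: crude risk ratio ~{ratio:.2f} "
          f"(~{100 * (1 - ratio):.0f}% relative reduction)")

# Prints roughly:
#   reduced kidney function: crude risk ratio ~0.52 (~48% relative reduction)
#   kidney failure: crude risk ratio ~0.50 (~50% relative reduction)
```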
The Tohoku Project: Sumidagawa presents powerful dramatic readings by professional actors of Sumidagawa, a Noh play from the early 15th Century that timelessly depicts the unique challenges faced by parents in the wake of unimaginable disaster. Each reading is followed by the responses of community panelists, culminating in a lively, facilitated audience discussion. This interactive event promotes healthy, constructive dialogue about the lasting impact of the Tohoku disaster upon individuals, families, and communities—fostering compassion, understanding, awareness, and positive action.

About the play - Sumidagawa by Kanze Motomasa: A Noh play from the early 15th Century in which a grief-stricken woman searches frantically for her son who has been taken by slave traders. As a ferryman transports her across the Sumida river, she notices a memorial service on the opposite bank, and discovers that it is for her son.

Explore Projects
- Gun Violence: Hercules. Drawing from an ancient Greek tragedy about a vicious act of violence committed by an angry man with an invincible weapon, this project aims to generate powerful dialogue between concerned citizens, members of the law enforcement community, victims and perpetrators of gun violence, and the general public.
- Addiction & Substance Abuse: Addiction Performance Project. Designed to raise awareness about opiate addiction and alcohol abuse, the project is intended to promote dialogue about helping those who are struggling with addiction.
- Racism & Social Justice: Frederick Douglass. Frederick Douglass is a project that presents dramatic readings of Douglass' speeches by professional actors as a catalyst for powerful dialogue about racism, inequality, civil rights, education, and the legal system with the objective of fostering compassion, understanding, and positive action.
https://theaterofwar.com/projects/the-tohoku-project
Way back in Module 1, we discussed a pair of views. One view, that of Milton Friedman, argues that businesses have only one social obligation: to increase profits to the benefit of shareholders. The other, from Ed Freeman, argues that businesses have a responsibility to many parties (stakeholders) in society, and so they cannot focus only on increasing profits. Let's take a moment to remind ourselves what these two arguments were before we move on to another pair of arguments which address the idea of corporate social responsibility in different ways. In his argument, Friedman emphasizes the place of the business within the structures of society, and the job of the CEO. The main claim is that a CEO is an employee of a business, and that business is beholden to the stockholders. Social responsibilities generally refer to obligations that the business would owe to society. Friedman denies that there are any obligations like these (aside from making more money for stockholders) because the stockholders are the source of a business's money, and this obligates the business to act in their interests. In fact, Friedman argues that it would be illicit for the CEO of a business to use stockholders' money to address any social obligations that are not aimed at increasing profits. They have given the business that money for the express purpose of increasing their wealth. When the CEO does not act according to these wishes, they are liable to be replaced in the same way that any other employee would be for the same transgression. A related claim is that when a CEO uses his financiers' money in the general social interest, he is spending someone else's money in ways that they have not approved. If those financiers had wanted to spend money on social interests, then they would have done so. Businesses, and their leaders, are intended to make profits. Their skills are better used toward this end than they would be toward the end of solving social problems. He argues that the CEO is not a civil servant, and that the CEO is not any sort of expert on how to do things like lower the unemployment rate or keep inflation down. These things are in the realm of the civil servant, and should be left to those officials. The CEO is best suited to the realm of business, and that is the area in which they have an obligation to their shareholders. Friedman's argument looks at two things: (1) what business is particularly suited for and (2) to whom businesses owe direct obligations. The answer to (1) is "making money," and the answer to (2) is "the stockholders." This has an intuitive appeal because it looks pretty simple. The manager of a business is supposed to use stockholders' money to grow the business because this will make more money for stockholders. All they need to be concerned with are the rules of the game, and this makes the game much easier to play. Stakeholder theory starts from the opposite side. Instead of looking at the strengths of the business to decide what it ought to do, it looks at the world in which the business is situated. That world is a complicated one where the rules aren't nearly as "simple" as following the rules set out by the law and society. The world that every business exists in is home to lots of people and entities who all have a stake in the business's activities.
These stakeholders are those people who interact with the business directly (customers, suppliers, employees, stockholders, and communities) or indirectly (governments, competitors, consumer advocate groups, special interest groups, the media). Each of these groups is impacted in some way by some of the business's decisions, and this gives them each some moral claim on the actions of the business. This "moral claim" on the actions of the business is an inexact measurement. It isn't meant to give all of these competing interests power over the actions of the corporation, but it does mean that the corporation should not act without considering the impact that their actions will have on others. One way to think about this is to say that the purpose of the corporation is to maximize a collective bottom line. This bottom line is the sum of the effects of a business's decision upon all stakeholders. Consider a case in which the decision before the firm is whether or not they should build a paper mill in a location that makes it convenient to discharge their waste materials into the river. In such a case, the first likely consideration is whether this location would maximize the firm's profits and minimize their expenditures. They cannot stop here, however. Their bottom line must also include the ways that this decision will impact the stakeholders in the business's decision. Once this combined bottom line is figured out, the decision can be made in a way that is maximally beneficial. This is a pretty complicated mechanism, because it requires the firm to weigh all of these (often competing) interests against one another. This might look like a strike against the theory because it makes doing business a complicated venture, but it might just be that doing business in a way which impacts a large number of people (or groups of people) should be complicated. In this way, the stakeholder theory limits a business's ability to profit through harming other persons or groups. Of course, it might be the case that people sometimes trade off harms for greater benefits. Perhaps the mill will add waste products to the river which will affect the quality of the community's drinking water. It may well be that the mill would also provide enough jobs to the community that they would be willing to endure the lowered quality of their drinking water. This sort of balancing act would be required by the stakeholder theory, but that is not necessarily a bad quality. It seems that this view is a more accurate picture of the business environment than the stockholder view. Businesses do not act alone, and they cannot survive for long on their own. As Freeman points out using the Responsibility Principle businesses and persons ought to be accountable when their actions affect others, and this view of the business community seems to account for that.
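To make the "collective bottom line" idea concrete, here is a minimal, purely illustrative sketch of the paper mill decision described above. The stakeholder groups, the numeric impact scores, and the simple summation are invented assumptions for the example; stakeholder theory itself does not prescribe any particular scoring scheme.

```python
# Toy illustration of a "collective bottom line": sum a decision's effects on
# every stakeholder group rather than looking only at shareholder profit.
# The groups and scores below are invented for illustration only.

riverside_mill = {        # build by the river and discharge waste into it
    "stockholders": +10,  # higher profit from cheap waste disposal
    "employees": +4,      # jobs created
    "community": -6,      # degraded drinking water
    "customers": +1,      # marginally cheaper product
}

inland_mill = {           # build elsewhere and treat the waste
    "stockholders": +6,   # lower profit because treatment costs money
    "employees": +4,
    "community": +1,      # water quality preserved
    "customers": 0,
}

def collective_bottom_line(impacts: dict[str, int]) -> int:
    """Sum of a decision's effects across all stakeholder groups."""
    return sum(impacts.values())

for name, option in (("riverside", riverside_mill), ("inland", inland_mill)):
    print(name, collective_bottom_line(option))

# A pure stockholder view compares only the "stockholders" entries (10 vs 6)
# and picks the riverside site; the stakeholder view compares the totals
# (9 vs 11) and can reverse that choice once harms to others are counted.
```

Real stakeholder analysis is of course qualitative and deliberative rather than a single sum, but the sketch shows why an option that looks best for stockholders can stop looking best once the community's interest is counted.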
https://philosophia.uncg.edu/phi361-matteson/module-3-social-responsibility-professionalism-and-loyalty/social-responsibility-the-story-so-far/
Georgetown’s New Institute Tackles Urgent Environmental Challenges Georgetown has launched a new institute to accelerate action, research and education on the most pressing environmental and sustainability challenges both locally and globally. The Earth Commons, Georgetown’s Institute for Environment & Sustainability, is building new educational programming for undergraduate and graduate students, expanding research opportunities for faculty members and students and developing scalable solutions for a greener campus and planet. Guided by the university’s Jesuit values, the Earth Commons builds on Georgetown’s commitment to advance environmental sustainability and justice and to care for our common home. “This new Institute builds on the work of our community over many years to expand and deepen our engagement with the environment,” says President John J. DeGioia. “It is with great excitement that we launch this new work, and contribute the knowledge, engagement, and expertise of our community to the urgent environmental challenges facing our world.” A Collective Step Toward Environmental Change Earth Commons will be composed of multiple focal areas that will each focus on a major environmental issue, including environmental justice, climate change and energy transitions, environmental health, food and water security and biodiversity conservation. Applying an interdisciplinary approach, the institute will connect faculty, experts and students across the arts, humanities, sciences, medicine, policy and law to innovate on these environmental challenges. “We can’t engineer ourselves out of all these problems,” says Peter Marra, founding director of the Earth Commons. “These are science problems. These are policy problems. These are moral dilemmas. These are problems that require us to think about environmental issues from multiple lenses.” The Earth Commons will also include the focal area of sustainability, collaborating with the Office of Sustainability that will use Georgetown’s campus as a living laboratory, providing students with hands-on, experiential learning opportunities with faculty and staff to green Georgetown’s built environment and operations. “As a part of the Earth Commons, the Office of Sustainability will create sustainable solutions, empower students to be change agents and invite all members of the Georgetown community to adopt a sustainability mindset in their day-to-day lives on-campus,” says Meghan Chapple, Georgetown’s first vice president of sustainability. Through engagement inclusive of diverse voices, the Office of Sustainability develops solutions on campus, in administrative operations and with local communities to create a sustainable world. How Earth Commons Found Its Home The Earth Commons grew out of decades-long work in environment and sustainability at Georgetown. The university’s Georgetown Environment Initiative (GEI), a university-wide effort established in 2012 to advance the multidisciplinary study of the environment and sustainability, provided hundreds of thousands of dollars in grants to faculty, students and staff and helped fuel environmental projects, such as the Bee Campus, a sustainability education campaign, and environmental justice projects in India. Marra, the Laudato Si’ Professor of Biology and the Environment, left his 20-year career at the Smithsonian Institution in 2019 to helm the GEI. 
In response to urgent environmental challenges, Marra was committed to deepen the initiative’s and Georgetown’s impact and empower the 70-plus full-time scholars studying the environment across disciplines, from the declines of monarch butterflies and birds to climate change’s impact on women and its overall economic costs. “It was clear there was an urgent need, as well as momentum at Georgetown,” says Marra. “The important question was how do we move the GEI to a more impactful level, amplifying and supporting existing faculty and staff and not creating new silos, and do all this in a way that allows us to build things fast, because we are on the clock.” With the COVID-19 pandemic also exposing socio-economic disparities caused by environmental injustice, and, coupled with long-term harm to the environment, “the stakes have never been higher,” Marra said. President DeGioia charged a faculty advisory committee to help advise the activities of an Institute for the Environment and Sustainability. As a result, GEI has now been transformed into the Earth Commons, appointing more faculty, conducting original research and developing educational programs. Experiential Learning Around the Globe In addition to faculty research, the Earth Commons will fuel interdisciplinary learning at Georgetown. In 2020, GEI began developing a master’s degree in environmental and sustainability management in collaboration with the McDonough School of Business and the Graduate School of Arts & Sciences. The program prepares students to be leaders in both sustainable business and environmental practices and is welcoming its first class in the Fall of this year. In the coming years, the Earth Commons will develop additional master’s programs, undergraduate offerings, a Ph.D. program in the Environment and a Postdoctoral Fellowship Program. The institute will provide students with multidisciplinary, experiential learning and hands-on practical experience both at the university and in remote areas around the world. “I want Georgetown to train the next generation of leaders in environment and sustainability, whether they’re from a medical, business or STEM background, so that they can come up with solutions for our planet,” says Marra. “We need to have students leave Georgetown with both the passions in their belly and knowledge in their heads to tackle environment and sustainability challenges like never before.” Shelby Gresch (SFS’22), who’s majoring in science, technology and international affairs (STIA), has interned for GEI since August. She plans to work full-time for the Earth Commons upon graduating. During her internship, Gresch helped design an urban farm for the Georgetown community – a formative experience that solidified her plans to work in sustainable food systems. For Gresch, the Earth Commons represents an opportunity for all students to get involved in environmental issues. “I think it’s incredibly important to students that Georgetown have this Institute because it demonstrates that the university really cares about these issues and understands that they require multidisciplinary solutions — they require all of us,” she says. 
“Climate change and environmental justice are the defining challenges of our generation, and we need as many opportunities to address them as possible.”Shelby Gresch (SFS’22) Environmental Headquarters in DC The Earth Commons will also leverage Georgetown’s Washington, DC, location to partner with NGOs, corporations, federal policy-making institutions and other environmentally-focused organizations for research, policy and academic opportunities, Marra says. Joanna Lewis, the Provost’s Distinguished Associate Professor of Energy and Environment and the director of STIA, serves on the Earth Commons’ faculty advisory committee. Lewis, who researches global climate change, says that the institute’s DC location presents an opportunity for more faculty to connect with policymakers and practitioners. “This is such a pivotal moment in climate policy, biodiversity policy and numerous other areas where Georgetown is well positioned to make a much larger contribution to forging solutions,” she says. “In serving as the hub of environmental research, education and engagement across the university, the Earth Commons is poised to become a headquarters for environmental knowledge in Washington.” A ‘Radically Transparent’ Sustainability Plan Georgetown has continued to build on its commitment to sustainability, including divesting from fossil fuels in February 2020, launching a renewable energy power purchase agreement in October 2020 and establishing an energy partnership in April 2021 that promotes sustainability through energy conservation. Most recently, Georgetown signed the U7+ Statement on Climate Change and Sustainability, along with almost 50 other universities across the globe, and Pope Francis’s Laudato si’ 7-year commitment to implement sustainability in different areas of the Catholic church, including universities. In tandem with the launch of the Earth Commons, Georgetown’s Office of Sustainability plans to launch a “radically transparent” strategic plan for sustainability that incorporates input from the Georgetown community. “Drawing on student ideas, the experience of staff, the expertise of Georgetown faculty, and the wisdom of local communities, the Office of Sustainability is launching an inclusive process to develop ambitious goals and bold solutions that create a relationship with the earth that is regenerative for all,” Chapple says. Students, faculty and staff are invited to participate in a joint Earth Commons and Office of Environment and Sustainability Town Hall on Feb. 22 to learn more about and help shape the planning process. Upcoming Environment and Art Initiatives In keeping with its interdisciplinary focus, on Feb. 15, Earth Commons is also launching the inaugural issue of Common Home, an online quarterly magazine produced by a board of undergraduate editors that examines environmental issues through a cross-disciplinary lens. In March, the Earth Commons will open a campus-wide art installation that features artistic interpretations of climate change and biodiversity data, from droughts and wildfires to climate refugees and arctic sea ice movements. Art will be featured in the Regents Building, Lauinger Library, the ICC and the Car Barn. And on March 18, Earth Commons is launching the inaugural performance of its artist in residence’s series, “We Hear You — A Climate Archive,” a global performance project exploring youth perspectives on climate crisis/chaos. 
Premiering at the Coal + Ice exhibit at the Kennedy Center, in partnership with the Asia Society, Embassy of Sweden in Washington, DC, and the Laboratory for Global Performance and Politics, the project seeks to amplify — and to record for future generations — the ways that today’s young people are experiencing changes on earth. No matter the discipline, founding director Marra hopes that all Georgetown community members will get involved with Earth Commons and the environment. “My goal is to get the green in the blue and the gray at Georgetown,” says Marra. “It must be in our fundamental fabric. We are integral to the environment and it is essential that we get our educational offerings and our research and actions up to speed immediately to make sure we’re part of the solution to repairing our common home – the environment all living things depend on.” This article was originally published by Georgetown University. Please follow the link to read the full story.
https://provost.georgetown.edu/georgetowns-new-institute-tackles-urgent-environmental-challenges/
Introduction ============ Dysphagia (swallowing difficulty) is a growing health concern in our aging population. Age-related changes in swallowing physiology as well as age-related diseases are predisposing factors for dysphagia in the elderly. In the US, dysphagia affects 300,000--600,000 persons yearly.[@b1-cia-7-287] Although the exact prevalence of dysphagia across different settings is unclear, conservative estimates suggest that 15% of the elderly population is affected by dysphagia.[@b2-cia-7-287] Furthermore, according to a single study, dysphagia referral rates among the elderly in a single tertiary teaching hospital increased 20% from 2002--2007; with 70% of referrals for persons above the age of 60.[@b3-cia-7-287] The US Census Bureau indicates that in 2010, the population of persons above the age of 65 was 40 million. Taken together, this suggests that up to 6 million older adults could be considered at risk for dysphagia. Any disruption in the swallowing process may be defined as dysphagia.[@b4-cia-7-287] Persons with anatomical or physiologic deficits in the mouth, pharynx, larynx, and esophagus may demonstrate signs and symptoms of dysphagia.[@b4-cia-7-287] In addition, dysphagia contributes to a variety of negative health status changes; most notably, increased risk of malnutrition and pneumonia. In this review, we will discuss how aging and disease impact swallowing physiology with a focus on nutritional status and pneumonia. We will conclude with a brief overview of dysphagia management approaches and consequences of dysphagia management on nutritional status and pneumonia in the elderly. Aging effects on swallow function ================================= Swallow physiology changes with advancing age. Reductions in muscle mass and connective tissue elasticity result in loss of strength[@b5-cia-7-287] and range of motion.[@b6-cia-7-287] These age-related changes can negatively impact the effective and efficient flow of swallowed materials through the upper aerodigestive tract. In general, a subtle slowing of swallow processes occurs with advancing age. Oral preparation of food requires more time and material transits through the mechanism more slowly. Over time, these subtle but cumulative changes can contribute to increased frequency of swallowed material penetrating into the upper airway and greater post-swallow residue during meals.[@b6-cia-7-287] Beyond subtle motor changes, age-related decrements in oral moisture, taste, and smell acuity may contribute to reduced swallowing performance in the elderly. Though sensorimotor changes related to healthy aging may contribute to voluntary alterations in dietary intake, the presence of age-related disease is the primary factor contributing to clinically significant dysphagia in the elderly. Dysphagia and its sequelae ========================== Disease risk increases with advancing age. Due to the complexity of the swallowing process, many adverse health conditions can influence swallowing function. Neurological diseases, cancers of the head/neck and esophagus, and metabolic deficits are broad categories of diseases that might contribute to dysphagia. [Table 1](#t1-cia-7-287){ref-type="table"} summarizes different categories of diseases and health conditions that negatively impact functional swallowing ability. 
Dysphagia affects up to 68% of elderly nursing home residents,[@b7-cia-7-287] up to 30% of elderly admitted to the hospital,[@b8-cia-7-287] up to 64% of patients after stroke,[@b9-cia-7-287],[@b10-cia-7-287] and 13%--38% of elderly who live independently.[@b11-cia-7-287]--[@b13-cia-7-287] Furthermore, dysphagia has been associated with increased mortality and morbidity.[@b14-cia-7-287] Two prevalent diseases of aging are stroke and dementia. In 2005, 2.6% of all noninstitutionalized adults (over 5 million people) in the US reported that they had previously experienced a stroke.[@b15-cia-7-287] The prevalence of stroke also increases with age, with 8.1% of people older than 65 years reporting having a stroke.[@b15-cia-7-287] Similarly, adults older than 65 years demonstrate an increased prevalence of dementia, with estimates between 6%--14%.[@b16-cia-7-287],[@b17-cia-7-287] Prevalence of dementia increases to over 30% beyond 85 years of age,[@b16-cia-7-287] and over 37% beyond 90 years.[@b17-cia-7-287] Common complications of dysphagia in both stroke and dementia include malnutrition and pneumonia. Dysphagia and nutrition in stroke --------------------------------- Dysphagia is highly prevalent following stroke with estimates ranging 30%--65%.[@b9-cia-7-287],[@b10-cia-7-287],[@b18-cia-7-287],[@b19-cia-7-287] Specific to the US, the Agency for Healthcare Research and Quality estimates that about 300,000--600,000 persons experience dysphagia as a result of stroke or other neurological deficits.[@b20-cia-7-287] Although many patients regain functional swallowing spontaneously within the first month following stroke,[@b10-cia-7-287] some patients maintain difficulty swallowing beyond 6 months.[@b9-cia-7-287],[@b21-cia-7-287] Complications that have been associated with dysphagia post-stroke include pneumonia,[@b22-cia-7-287],[@b23-cia-7-287] malnutrition,[@b24-cia-7-287] dehydration,[@b10-cia-7-287],[@b24-cia-7-287] poorer long-term outcome,[@b10-cia-7-287],[@b21-cia-7-287] increased length of hospital stay,[@b25-cia-7-287] increased rehabilitation time and the need for long-term care assistance,[@b26-cia-7-287] increased mortality,[@b10-cia-7-287],[@b19-cia-7-287],[@b22-cia-7-287] and increased health care costs.[@b10-cia-7-287] These complications impact the physical and social well being of patients, quality of life of both patients and caregivers, and the utilization of health care resources.[@b20-cia-7-287] In the acute phase of stroke, between 40%--60% of patients are reported to have swallowing difficulties.[@b9-cia-7-287],[@b10-cia-7-287] These difficulties may contribute to malnutrition due to limited food and liquid intake. Decreased food and liquid intake may reflect altered level of consciousness, physical weakness, or incoordination in the swallowing mechanism.[@b27-cia-7-287] Although the odds of malnutrition are increased in the presence of dysphagia following stroke,[@b28-cia-7-287] pre-stroke factors should be considered when assessing nutritional status and predicting stroke outcome. For example, upon admission, approximately 16% of stroke patients present with nutritional deficits. 
During acute hospitalization, nutritional deficits may worsen with reported prevalence increasing to 22%--26% at discharge from acute care.[@b29-cia-7-287]--[@b31-cia-7-287] Although nutritional deficits and dysphagia often coexist, malnutrition does not appear to be associated with dysphagia in the acute phase of stroke.[@b32-cia-7-287] Rather, malnutrition is more prevalent during the post acute rehabilitation phase, with a reported prevalence of up to 45%.[@b33-cia-7-287] Reduced food/liquid intake during acute hospitalization associated with dysphagia may be a contributing factor to increased malnutrition rates during subsequent rehabilitation.[@b28-cia-7-287] Dysphagia and pneumonia in stroke --------------------------------- Post-stroke pneumonia is a common adverse infection that affects up to one-third of acute stroke patients.[@b34-cia-7-287],[@b35-cia-7-287] Pneumonia is also a leading cause of mortality after stroke, accounting for nearly 35% of post-stroke deaths.[@b36-cia-7-287] Most stroke-related pneumonias are believed to result from dysphagia and the subsequent aspiration of oropharyngeal material. Aspiration is defined as entry of food or liquid into the airway below the level of the true vocal cords,[@b37-cia-7-287] and aspiration pneumonia is defined as entrance of swallowed materials into the airway that results in lung infection.[@b4-cia-7-287] A recent systematic review reported that stroke patients with dysphagia demonstrate ≥3-fold increase in pneumonia risk with an 11-fold increase in pneumonia risk among patients with confirmed aspiration.[@b22-cia-7-287] Along with this increased risk, the burden of aspiration pneumonia is high. Increased costs associated with longer hospitalization,[@b10-cia-7-287] greater disability at 3 and 6 months,[@b10-cia-7-287],[@b38-cia-7-287] and poor nutritional status during hospitalization[@b10-cia-7-287] characterize aspiration pneumonia in stroke. Dysphagia and dementia ---------------------- Dysphagia is a common symptom in dementia. It has been estimated that up to 45% of patients institutionalized with dementia have some degree of swallowing difficulty.[@b39-cia-7-287] Different clinical presentations of dementia will result in different swallowing or feeding impairments.[@b40-cia-7-287]--[@b43-cia-7-287] Most commonly, patients with dementia demonstrate a slowing of the swallowing process.[@b14-cia-7-287] Slowed swallow processes may increase time taken to finish a meal and subsequently increase the risk for poor nutritional status.[@b14-cia-7-287] Furthermore, patients with dementia often have difficulties self-feeding. These difficulties may relate to cognitive impairment, motor deficits such as weakness or apraxia, loss of appetite, and/or food avoidance. As a result, patients with dementia may experience weight loss and increased dependency for feeding.[@b14-cia-7-287] Subsequently, increased feeding dependency may lead to other dysphagia-related health problems, including pneumonia.[@b14-cia-7-287] Weight loss can reflect decreased nutritional status which increases the patient's risk of opportunistic infections such as pneumonia.[@b44-cia-7-287]--[@b46-cia-7-287] Pneumonia is a common cause of mortality in patients with dementia.[@b47-cia-7-287] Thus, dementia, dysphagia, and related feeding impairments can lead to nutritional deficits which in turn contribute to pneumonia and mortality. 
Among elderly patients in particular, the presence of dementia is associated with higher hospital admission rates and overall higher mortality.[@b48-cia-7-287] Moreover, elderly patients admitted to a hospital with dementia have a higher overall prevalence of both pneumonia and stroke, suggesting that aging significantly increases the risk for these negative health states.[@b48-cia-7-287] Dysphagia and nutrition in community dwelling elderly adults ------------------------------------------------------------ Dysphagia can result in reduced or altered oral intake of food/liquid which, in turn, can contribute to lowered nutritional status. One group which merits more attention in reference to potential relationships between dysphagia and nutritional status is community dwelling elderly adults. Dysphagia can contribute to malnutrition, and malnutrition can further contribute to decreased functional capacity. Thus, dysphagia may trigger or promote the frailty process among elderly persons.[@b46-cia-7-287] In a group of 65--94-year-old community dwelling adults, prevalence of dysphagia was reported to be 37.6%.[@b13-cia-7-287] Of these, 5.2% reported the use of a feeding tube at some point in life, and 12.9% reported the use of nutritional supplements to reach an adequate daily caloric intake.[@b13-cia-7-287] In another cohort of independently living older persons, prevalent cases of malnutrition or those at risk for malnutrition were estimated at 18.6% of elderly adults with dysphagia, and 12.3% of adults without dysphagia. Significant differences in nutritional status were noted between these subgroups at 1-year follow-up.[@b46-cia-7-287] These figures underscore the prevalence and importance of malnutrition and dysphagia among elderly individuals. Moreover, they suggest that dysphagic elderly living in the community are likely to present with an elevated risk of malnutrition. Dysphagia and pneumonia in community dwelling elderly adults ------------------------------------------------------------ The prevalence of community-acquired pneumonia in elderly adults is rising, with a greater risk of infection in those older than 75 years.[@b49-cia-7-287]--[@b51-cia-7-287] In addition, deaths from pneumonitis due to aspiration of solids and liquids (eg, aspiration pneumonia) are increasing and are currently ranked 15th on the CDC list of common causes of mortality.[@b52-cia-7-287] Frequency of pneumonia and its associated mortality increases with advancing age.[@b53-cia-7-287] More specifically, the prevalence of pneumonia in community dwelling persons increases in a direct relationship to aging and the presence of disease.[@b45-cia-7-287] Furthermore, an increased prevalence of dysphagia in the elderly increases the risk for pneumonia.[@b54-cia-7-287] It appears that with the aging population, both dysphagia and pneumonia rates are increasing. However, relationships between dysphagia and pneumonia in community dwelling elderly are poorly understood. 
Cabre and colleagues reported that 55% of 134 community dwelling elderly adults 70 years and older diagnosed with pneumonia upon admission to a geriatric hospital unit, presented with clinical signs of oropharyngeal dysphagia.[@b55-cia-7-287] In this cohort, cases presenting with dysphagia were older, presented with more severe pneumonia, greater decline in functional status, and demonstrated a higher prevalence of malnutrition.[@b55-cia-7-287] These patients also demonstrated increased mortality at 30 days and 1-year follow-up.[@b55-cia-7-287] Also, a recent study evaluated relationships between oropharyngeal dysphagia and the risk for malnutrition and lower respiratory tract infections-community--acquired pneumonia (LRTI-CAP) in a cohort of independently living older persons. Results indicated that 40% of LRTI-CAP cases presented with dysphagia, compared to 21.8% who did not present with dysphagia.[@b46-cia-7-287] These findings highlight the potential relationships among dysphagia, nutritional status, and pneumonia in community dwelling elderly. Dysphagia management ==================== The presence of a strong relationship between swallowing ability, nutritional status, and health outcomes in the elderly suggests a role for dysphagia management in this population. Successful swallowing interventions not only benefit individuals with reference to oral intake of food/liquid, but also have extended benefit to nutritional status and prevention of related morbidities such as pneumonia. A variety of dysphagia management tools are available pending the characteristics of the swallowing impairment and the individual patient. Swallowing management --------------------- Dysphagia management is a 'team event'. Many professionals may contribute to the management of dysphagia symptoms in a given patient. Furthermore, no single strategy is appropriate for all elderly patients with dysphagia. Concerning behavioral management and therapy, speech-language pathologists (SLP) play a central role in the management of patients with dysphagia and related morbidities. SLP clinical assessment is often supplemented with imaging studies (endoscopy and/or fluoroscopy), and these professionals may engage in a wide range of interventions. Some intervention strategies, termed 'compensations', are intended to be utilized for short periods in patients who are anticipated to improve. Compensations are viewed as short-term adjustments to the patient, food and/or liquid, or environment, with the goal of maintaining nutrition and hydration needs until the patient can do so by themselves. Other patients require more direct, intense rehabilitation strategies to improve impaired swallow functions. A brief review of each general strategy with examples follows. Compensatory management ----------------------- Compensatory strategies focus on implementation of techniques to facilitate continued safe oral intake of food and/or liquid; or to provide alternate sources of nutrition for maintenance of nutritional needs. Compensatory strategies are intended to have an immediate benefit on functional swallowing through simple adjustments that allow patients to continue oral diets safely. 
Compensatory strategies include, but are not limited to, postural adjustments of the patient, swallow maneuvers, and diet modifications (foods and/or liquids).[@b14-cia-7-287] ### Postural adjustments Changes in body and/or head posture may be recommended as compensatory techniques to reduce aspiration or residue.[@b56-cia-7-287] Changes in posture may alter the speed and flow direction of a food or liquid bolus, often with the intent of protecting the airway to facilitate a safe swallow.[@b14-cia-7-287] [Table 2](#t2-cia-7-287){ref-type="table"} lists commonly used postural adjustments. In general, these postural adjustments are intended to be utilized short term, and the impact of each may be evaluated during the clinical examination or with imaging studies. Available literature on the benefit of these techniques is variable. For example, while some investigators report reduced aspiration from a chin down technique,[@b56-cia-7-287],[@b57-cia-7-287] others report no significant benefit[@b58-cia-7-287] or no superior benefit to other compensations like thick liquids.[@b57-cia-7-287] Furthermore, these compensatory strategies only impact nutritional status or pneumonia when they allow patients to consume adequate amounts of food/liquid in the absence of airway compromise leading to chest infection. No existing data confirms this potential benefit of postural adjustments and some data suggest that these strategies are inferior to more active rehabilitation efforts in the prevention of nutritional deficits and pneumonia.[@b59-cia-7-287] ### Swallow maneuvers Swallow maneuvers are 'abnormal' variants on the normal swallow intended to improve the safety or efficiency of swallow function. Various swallow maneuvers have been suggested to address different physiologic swallowing deficits.[@b14-cia-7-287] [Table 3](#t3-cia-7-287){ref-type="table"} presents commonly used swallow maneuvers. Swallow maneuvers can be used as short-term compensations but many have also been used as swallow rehabilitative strategies. Different maneuvers are intended to address different aspects of the impaired swallow. For example, the supraglottic and super supraglottic swallow techniques both incorporate a voluntary breath hold and related laryngeal closure to protect the airway during swallowing.[@b14-cia-7-287] The Mendelsohn maneuver is intended to extend opening or more appropriately relaxation of the upper esophageal sphincter.[@b63-cia-7-287] Finally, the effortful or 'hard' swallow is intended to increase swallow forces on bolus materials with the result of less residue or airway compromise.[@b64-cia-7-287],[@b65-cia-7-287] Like postural adjustments, available data on the success of these techniques in patient populations is limited, conflicted, and often comprised of small samples.[@b59-cia-7-287],[@b66-cia-7-287]--[@b68-cia-7-287] Thus, the best advice for clinicians is to verify the impact of these maneuvers using swallowing imaging studies before introducing any of them as compensatory strategies. Also, similar to postural adjustments, no significant research has demonstrated the impact of these maneuvers, when used as compensatory strategies, on nutritional status or pneumonia. 
### Diet modifications: modification of foods/liquids Modifying the consistency of solid food and/or liquid is a mainstay of compensatory intervention for patients with dysphagia.[@b37-cia-7-287] The goal of diet modification is to improve the safety and/or ease of oral consumption and thus maintain safe and adequate oral intake of food/liquid. However, low acceptability and resulting poor adherence with modified foods/liquids can contribute to increased risk of inadequate nutrition in elderly patients with dysphagia. ### Thickened liquids The use of thickened liquids is 'one of the most frequently used compensatory interventions in hospitals and long-term care facilities'.[@b70-cia-7-287] Generally accepted clinical intuition and anecdotal evidence claim that thickened liquids have an effect in helping to control the speed, direction, duration, and clearance of the bolus.[@b70-cia-7-287] However, only scant evidence suggests that thickened liquids result in significant positive health outcomes with regards to nutritional status or pneumonia. Despite the overall lack of evidence supporting the use of thickened liquids, this strategy continues to be a cornerstone in dysphagia management in many facilities.[@b70-cia-7-287] For example, a survey of 145 SLPs by Garcia et al reported that 84.8% of the respondents felt that thickening liquids was an effective management strategy for swallowing disorders with nectar thick liquids being the most frequently used.[@b71-cia-7-287] Unfortunately, the perceptions of these clinicians are not supported by available research. For example, Logemann et al[@b57-cia-7-287] reported that honey thick liquid was more effective in reducing aspiration during fluoroscopic swallow examination than nectar thick liquids (or the chin down technique). But, even this benefit disappeared when honey thick liquids were administered at the end of the examination. Kuhlemeier et al[@b72-cia-7-287] identified 'ultrathick' liquid to have lower aspiration rates than thick or thin liquids, although the manner of presentation (cup vs spoon) modified their results. Thus, available evidence appears discrepant from clinician perceptions regarding use of thick liquids. Beyond this scenario, thick liquids may present pragmatic limitations in clinical practice. ### Limitations of thickened liquids A primary concern with the overuse of thickened liquids is the risk of dehydration in elderly patients with dysphagia. Patient compliance with thickened liquids is often reduced.[@b4-cia-7-287] A recent survey of SLPs suggested that honey thick liquids were strongly disliked by their patients but even nectar thick liquids were poorly accepted by more than one in ten patients.[@b71-cia-7-287] Poor compliance with thickened liquids may lead to reduced fluid intake and an increased risk of dehydration.[@b73-cia-7-287] Beyond patient acceptance, no strong evidence is available supporting the use of thickened liquids as an intervention for patients with dysphagia. Only a single randomized trial has compared treatment outcomes between the chin down technique and nectar or honey thick liquids in patients with dysphagia.[@b74-cia-7-287] The results of this study revealed no significant differences between these strategies on the primary outcome of pneumonia. Consequently, strong evidence for the preferential use of liquid thickening as a strategy in dysphagia intervention is not currently available. 
An alternative approach to thickened liquids has been recommended to counter the risk for dehydration due to reduced fluid intake and dislike for thickened liquids. This approach, the 'Frazier water protocol', utilizes specific water intake guidelines and allows patients with dysphagia to consume water between meals.[@b75-cia-7-287] Although this technique has not yet been objectively assessed, experiences from the Frazier Rehabilitation Institute are impressive. Results suggest low rates of dehydration (2.1%) and chest infection (0.9%) in 234 elderly patients. With additional confirming results, this approach may become more widely used as a dysphagia intervention. ### Modified food diets Solid foods may be modified to accommodate perceived limitations in elderly patients with dysphagia. Solid food modification has been suggested to promote safe swallowing and adequate nutrition. However, no strong and universal clinical guidelines are available to describe the most appropriate modification of foods.[@b14-cia-7-287] At least one study indicated that among nursing home residents, 91% of patients placed on modified diets were placed on overly restrictive diets.[@b76-cia-7-287] Only 5% of these patients were identified to be on an appropriate diet level matching their swallow ability and 4% of patients were placed on diets above their clinically measured swallow ability. More recently, in an attempt to standardize the application of modified diets in patients with dysphagia, the National Dysphagia Diet was proposed.[@b77-cia-7-287] The National Dysphagia Diet is comprised of four levels of food modification with specific food items recommended at each level ([Table 4](#t4-cia-7-287){ref-type="table"}). While this approach is commendable, unfortunately, to date no studies have compared the benefit of using this standardized approach to institution specific diet modification strategies. ### Limitations of modified solids Although recommended to promote safe swallowing and reduce aspiration in patients with dysphagia, modified diets may result in reduced food intake, increasing the risk of malnutrition for some patients with dysphagia.[@b78-cia-7-287] Available literature on the nutritional benefit of modified diets is conflicted.[@b79-cia-7-287],[@b80-cia-7-287] One study evaluated dietary intake over the course of a day in hospitalized patients older than 60 years. The authors compared intake in patients consuming a regular diet to those consuming a texture modified diet and found that patients on the modified diet had a significantly lower nutritional intake in terms of energy and protein. Additionally, 54% of patients on a texture modified diet were recommended a nutritional supplement, compared with 24% of patients on a regular diet.[@b79-cia-7-287] Conversely, Germain et al[@b80-cia-7-287] compared patients consuming a modified diet with greater food choices to patients consuming a 'standard' (more restricted) modified diet over a 12-week period. They found significantly greater nutritional intake in patients consuming the expanded option diet. Additionally, they observed a significant weight gain in the patients consuming the expanded option diet at the end of 12 weeks. ### Feeding dependence and targeted feeding As mentioned previously, elderly patients with dementia and stroke may be dependent on others for feeding due to cognitive and/or physical limitations. 
Feeding dependence poses an increased risk for aspiration and related complications in patients with dysphagia due to factors such as rapid and uncontrolled presentation of food by feeders.[@b81-cia-7-287] This finding poses serious concern for patients with dysphagia on long-term modified diets. Implementing targeted feeding training can compensate for these difficulties and reduce related complications. For example, oral intake by targeted feeding (by trained individuals) in patients with dysphagia resulted in higher energy and protein intake compared to a control condition where no feeding assistance was provided.[@b78-cia-7-287] In combination with specific training on feeding, other strategies to monitor rate and intake of food may help increase safety, decrease fatigue, and improve feedback on successful swallowing for the patients during the course of the meal.[@b37-cia-7-287] Eating in environments without external distractions, especially in skilled nursing or long-term care settings, is essential to this aim. Likewise, the prescription and provision of adaptive equipment like cups without rims and angled utensils, etc, may also support improved outcomes for elderly dysphagic patients.[@b37-cia-7-287] Provision of alternate nutrition -------------------------------- Perhaps the ultimate form of compensation would be the use of alternate nutrition strategies. Non-oral feeding sources can benefit patients with nutritional deficits. This is especially true in the elderly, as malnutrition contributes to a variety of health problems including cardiovascular disease, deterioration of cognitive status and immune system, and poorly healing pressure ulcers and wounds.[@b82-cia-7-287],[@b83-cia-7-287] Patient populations most commonly receiving non-oral feeding support include the general category of dysphagia (64.1%), and patients with stroke (65.1%)[@b84-cia-7-287] or dementia (30%).[@b85-cia-7-287] While non-oral feeding methods provide direct benefit in many clinical situations, they do not benefit all elderly patients with dysphagia or nutritional decline. For example, regarding enteral feeding in patients with advanced dementia, Finucane et al[@b86-cia-7-287] did not find strong evidence to suggest that non-oral feeding prevented aspiration pneumonia, prolonged survival, improved wound healing, or reduced infections. Moreover, a study of >80,000 Medicare beneficiaries over the age of 65 years indicated that the presence of a percutaneous endoscopic gastrostomy (PEG) tube in hospitalized patients had a high mortality rate (23.1%). Mortality increased to 63% in 1 year.[@b87-cia-7-287] In addition, adverse events associated with non-oral feeding sources are common and include local wound complications, leakage around the insertion site, tube occlusion, and increased reflux leading to other complications such as pneumonia. Finally, the presence of alternate feeding methods can also promote a cascade of negative psychosocial features including depression and loss of social interaction associated with feeding.[@b86-cia-7-287],[@b88-cia-7-287] Despite the associated complications and impacts of non-oral feeding, provision of alternate feeding has demonstrated impact on nutritional adequacy and weight maintenance in some elderly populations and is therefore an important option in dysphagia management.[@b37-cia-7-287] Swallow rehabilitation ---------------------- The focus of swallow rehabilitation is to improve physiology of the impaired swallow.
As such, many swallow rehabilitation approaches incorporate some form of exercise.[@b4-cia-7-287] Though the focus and amount of exercise varies widely from one rehabilitation approach to another, in general, exercise-based swallowing interventions have been shown to improve functional swallowing, minimize or prevent dysphagia-related morbidities, and improve impaired swallowing physiology.[@b59-cia-7-287],[@b89-cia-7-287]--[@b92-cia-7-287] [Table 5](#t5-cia-7-287){ref-type="table"} presents examples of recent exercise-based approaches to swallow rehabilitation. Though each of these programs differs in focus and technique, each shares some commonalities. Specifically, each incorporates some component of resistance into the exercise program and each advocates an intensive therapy program monitored by the amount of work completed by patients. Some are specific to comprehensive swallow function, while others focus on strengthening individual swallow subsystems. Yet, each program is novel and shares the common goal of improving impaired swallowing physiology. Perhaps one of the more traditional approaches to swallow rehabilitation is the use of oral motor exercises. Although, there is limited information on the effectiveness of oral motor exercises, recent studies have shown effective strengthening of swallow musculature and hence improved swallowing with the use of lip and tongue resistance exercises.[@b89-cia-7-287],[@b93-cia-7-287],[@b94-cia-7-287] More recent exercise approaches such as expiratory muscle strength training (EMST) or the Shaker head lift exercise focus on the use of resistance to strengthen swallowing subsystems. As implied by the name, EMST attempts to strengthen the respiratory muscles of expiration. However, initial research has indicated potential extended benefits to swallow function. For example, EMST has been shown to increase hyolaryngeal movement and improve airway protection in patients with Parkinson's disease.[@b95-cia-7-287] As indicated in the name, the head lift exercise developed by Shaker incorporates both repetitive and sustained head raises from a lying position.[@b96-cia-7-287] Improvements from this exercise include increased anterior laryngeal excursion and upper esophageal sphincter opening during swallowing, both of which contribute to more functional swallowing ability. These positive physiologic changes have been demonstrated in healthy older adults and also in patients on tube feeding due to abnormal upper esophageal sphincter opening.[@b96-cia-7-287],[@b97-cia-7-287] The McNeill Dysphagia Therapy Program (MDTP) is an exercise-based therapy program, using swallowing as an exercise.[@b91-cia-7-287] From this perspective, MDTP addresses the entire swallow mechanism, not just subsystems as in other approaches. This program is completed in daily sessions for 3 weeks and reports excellent functional improvement in patients with chronic dysphagia. In addition, recent studies have documented physiological improvements in strength, movement, and timing of the swallow.[@b91-cia-7-287],[@b92-cia-7-287] In addition to exercise-based interventions, the use of adjunctive modalities may be useful in swallowing rehabilitation. Application of adjunctive electrical stimulation has been widely debated and studied primarily in small samples. The rationale behind the application of electrical stimulation is that it facilitates increased muscle contraction during swallowing activity. 
Reported gains have included advances in oral diet, reduced aspiration, and reduced dependence on tube feeding.[@b68-cia-7-287],[@b98-cia-7-287],[@b99-cia-7-287] However, other studies have reported no significant differences in outcomes following dysphagia therapy with and without adjunctive neuromuscular electrical stimulation.[@b100-cia-7-287],[@b101-cia-7-287] Currently, the benefit of adding this modality to dyphagia therapy is not well documented; however, several smaller studies have suggested a clinical benefit. Surface electromyography (sEMG) has been demonstrated as a beneficial feedback mechanism in dysphagia rehabilitation. sEMG biofeedback provides immediate information on neuromuscular activity associated with swallowing and is reported to help patients learn novel swallowing maneuvers quickly. Studies have documented that sEMG biofeedback facilitates favorable outcomes with reduced therapy time in patients, even with chronic dysphagia.[@b102-cia-7-287]--[@b104-cia-7-287] Impact of swallow rehabilitation on nutritional status and pneumonia ==================================================================== As presented above, dysphagia, nutritional status, and pneumonia appear to have strong interrelationships in various elderly populations. Recent evidence suggests that successful swallowing rehabilitation and/or early preventative efforts may reduce the frequency of both malnutrition and pneumonia in elderly patients with dysphagia. For example, patients with dysphagia in acute stroke who received an intensive exercise-based swallow rehabilitation program demonstrated less malnutrition and pneumonia compared to patients receiving diet modifications and compensations or those receiving no intervention.[@b59-cia-7-287] Other studies have demonstrated improved functional oral intake following successful swallow rehabilitation measured by the Functional Oral Intake Scale (FOIS), removal of PEG tube, and/or improved nutritional markers.[@b65-cia-7-287],[@b95-cia-7-287]--[@b97-cia-7-287] In the head and neck cancer population, recent clinical research has shown that exercise during the course of chemoradiation treatment helps preserve muscle mass with reduced negative nutritional outcomes common in this population.[@b90-cia-7-287] Finally, one exercise-based swallow rehabilitation program (MDTP) has demonstrated positive nutritional outcomes in patients with chronic dysphagia, including weight gain, removal of feeding tubes, and increased oral intake. These benefits were maintained at a 3-month follow-up evaluation.[@b91-cia-7-287],[@b92-cia-7-287] Collectively, such research suggests that intensive swallow rehabilitation can result in improved nutritional status and a reduction of pneumonia in a variety of elderly populations with dysphagia. Summary ======= A strong relationship appears to exist between dysphagia and the negative health outcomes of malnutrition and pneumonia in patients following stroke, those with dementia, and also in community dwelling elderly adults. This trilogy of deficits, prominent among the elderly, demands more efforts focused on early identification and effective rehabilitation and prevention. Addressing issues such as the most efficient and effective methods to identify dysphagia and malnutrition in high-risk patients and community dwelling elderly adults could result in reduced morbidity in elderly populations. 
Of particular interest are recent studies that implicate benefit from intensive swallowing rehabilitation in preventing nutritional decline and pneumonia in adults with dysphagia. Future research should extend this 'prophylactic' approach to other at-risk populations including community dwelling elderly adults.

**Disclosure**

The authors report no conflicts of interest in this work.

###### Conditions that may contribute to dysphagia[@b4-cia-7-287]

| Category | Conditions |
|---|---|
| Neurologic disease | Stroke; dementia; traumatic brain injury; myasthenia gravis; cerebral palsy; Guillain–Barré syndrome; poliomyelitis; myopathy |
| Progressive disease | Parkinson's disease; Huntington disease; age-related changes |
| Rheumatoid disease | Polydermatomyositis; progressive systemic sclerosis; Sjögren disease |
| Other | Any tumor involving the aerodigestive tract; iatrogenic diagnoses; radiation therapy; chemotherapy; intubation/tracheostomy; medication related; other related diagnoses; severe respiratory compromise |

Adapted from Groher ME, Crary MA. *Dysphagia: Clinical Management in Adults and Children*. Maryland Heights, MO: Mosby Elsevier; 2010.

###### Examples of postural adjustments

| Technique | Performance | Intended outcome | Reported benefit |
|---|---|---|---|
| **Body posture changes** | | | |
| Lying down | Lie down/angled | Reduce impact of gravity during swallow | Increased hypopharyngeal pressure on bolus; increased PES opening[@b60-cia-7-287] |
| Side lying | Lie down on stronger side (lower) | Slows bolus; provides time to adjust/protect airway | Less aspiration[@b56-cia-7-287] |
| **Head posture changes** | | | |
| Head extension/chin up | Raise chin | Propels bolus to back of mouth; widens oropharynx | Reduced aspiration[@b61-cia-7-287]; better bolus transport[@b14-cia-7-287] |
| Head flexion/chin tuck | Tucking chin towards the chest | Improves airway protection | Reduced aspiration[@b56-cia-7-287],[@b57-cia-7-287],[@b61-cia-7-287] |
| Head rotation/head turn | Turning head towards the weaker side | Reduces residue after swallow; reduces aspiration | Less residue[@b62-cia-7-287]; reduced aspiration[@b56-cia-7-287] |

###### Examples of swallow maneuvers

| Swallow maneuver | Performance | Intended outcome | Reported benefit |
|---|---|---|---|
| Supraglottic swallow | Hold breath, swallow, and then gentle cough | Reduce aspiration and increase movement of the larynx | Reduces aspiration[@b66-cia-7-287] |
| Super supraglottic swallow | Hold breath, bear down, swallow, and then gentle cough | | |
| Effortful swallow (also called 'hard' or 'forceful' swallow) | Swallow 'harder' | Increased lingual force on the bolus; less aspiration and pharyngeal residue | Increased pharyngeal pressure and less residue[@b67-cia-7-287],[@b68-cia-7-287] |
| Mendelsohn maneuver | 'Squeeze' swallow at apex | Improve swallow coordination | Reduced residue and aspiration[@b69-cia-7-287] |

###### Levels of modified diet[@b77-cia-7-287]

**Four Levels in the National dysphagia diet**

| Level | Description | Examples of recommended foods |
|---|---|---|
| Level 1: dysphagia pureed | Homogeneous, cohesive, and pudding like. No chewing required, only bolus control | Smooth, homogenous cooked cereals; pureed meats, starches (like mashed potatoes), and vegetables with smooth sauces without lumps; pureed/strained soups; pudding, soufflé, yogurt |
| Level 2: dysphagia mechanically altered | Moist, semi-solid foods, cohesive. Requires chewing ability | Cooked cereals with little texture; moistened ground or cooked meat; moistened, soft, easy to chew canned fruit and vegetables |
| Level 3: dysphagia advanced | Soft-solids. Require more chewing ability | Well moistened breads, rice, and other starches; canned or cooked fruit and vegetables; thin sliced, tender meats/poultry |
| Level 4: regular | No modifications, all foods allowed | No restrictions |

Adapted from Groher ME, Crary MA. *Dysphagia: Clinical Management in Adults and Children*. Maryland Heights, MO: Mosby Elsevier; 2010.
###### Examples of exercise-based swallow rehabilitation approaches

| Program | Focus | Intended outcome | Reported benefit |
|---|---|---|---|
| Lingual resistance | Strengthening tongue with progressively increasing intensity | Increased tongue strength; improved swallow | Increased tongue muscle mass; increased swallow pressure; reduced aspiration[@b76-cia-7-287] |
| Shaker/head-lift | Strengthening suprahyoid muscles; improve elevation of larynx; increasing UES opening | Improve strength of muscles for greater UES opening | Increased larynx elevation; increased UES opening; less post-swallow aspiration[@b80-cia-7-287] |
| EMST (expiratory muscle strength training) | Strengthening submental muscle; improve expiratory pressures for better airway protection | Improve expiratory drive; reduce penetration and aspiration | Better penetration-aspiration scores in Parkinson's disease; increased maximum expiratory pressure; increased submental muscle electromyography activity during swallowing[@b81-cia-7-287],[@b89-cia-7-287] |
| MDTP (McNeill dysphagia therapy program) | Swallow as exercise with progressive resistance | Improve swallowing including strength and timing | Improved swallow strength; improved movement of swallow structures; improved timing; weight gain[@b68-cia-7-287],[@b91-cia-7-287],[@b92-cia-7-287] |

**Abbreviations:** UES, upper esophageal sphincter.
This very illuminating book centers on an important notion in Confucian philosophy, the concept of harmony. Chenyang Li understands Confucian harmony as a property possessed by particular kinds of things. The property comes in degrees (p. 9): something could be (more or less) harmonious, whereas something else might not be harmonious at all. For the sake of convenience, let me use the term "harmony-apt subjects", which does not appear in the book, to designate the things that are capable of being more or less harmonious. Centering on Confucian harmony, Li aims to address four questions. First, what kinds of things are harmony-apt subjects? Second, what is the standard of being (more or less) harmonious? Third, what are the reasons for promoting harmony? Fourth, how is harmony promoted? Although the book isn't organized around these four questions, understanding Li's answers to them is the most efficient way of grasping what he wants to show. To understand Li's answers to the first two questions, it might be helpful to consider a specific scenario. You and your roommate both want to stay in the living room from 8pm to 10pm. But the thing is that you both simply cannot be there at the same time. You want to conduct your academic research. Unfortunately, your roommate wants to watch TV. So you and your roommate must find a way to solve the conflict. Consider the following three possible strategies. First, you discard the virtue of diligence and watch TV together. Second, you completely ignore your roommate's preference and force her to have fun in her own room. Third, the two of you set up a schedule according to which you conduct research in the living room every Monday, Wednesday and Friday, and your roommate watches TV there every Tuesday, Thursday and Saturday. And you flip a coin to decide what will happen on Sunday. Based on my understanding of the book, the interaction between you and your roommate is a harmony-apt subject, that is, the interaction between you two could be more or less harmonious. According to Li, a harmony-apt subject is constituted by at least two parties that interact with each other, and the parties usually possess conflicting dispositions (p. 9). He thinks that various different things could serve as harmony-apt subjects, such as a person, a society, a family, the whole world, and the whole cosmos (p. 17). Moreover, in Li's view, a subject is more harmonious rather than less if it better achieves the state of "adjusting differences and reconciling conflicts" and creating "constructive conditions for the healthy existence of all parties" (p. 10). In our example, the third strategy would lead to a more harmonious outcome than the first two strategies. Now turn to the third question, which concerns the reasons for promoting harmony. On the one hand, Li holds that there are various reasons for promoting different types of harmony. This could be seen from his discussion of each type of harmony. On the other hand, he seems to presuppose that there is a unifying reason for promoting harmony of all types, and the unifying reason is that a harmonious state of affairs has profound aesthetic value. He takes music as the archetype of harmony (p. 39), and spends one whole chapter on the connection between harmony and music. According to him, as with the harmony of music, the harmony of all other types is based on beauty (p. 49). I agree with Li that a harmonious subject usually possesses aesthetic value to a certain extent. 
For example, a person with a harmonious mental state, or inner peace, might make others who observe her life feel comfortable, or even pleasurable. When Confucius was seventy years old, he was able to do what he intended freely without breaking the rules. Picturing in our minds Confucius at this age we can sense the aesthetic value he possessed. Likewise, the state of affairs where a group of persons engage in a process of adjusting their conflicting preferences and seeking peaceful cooperation or coordination possesses some aesthetic value. Similarly, the beauty of the harmony in the cosmos often attracts a great mind. However, I doubt that the aesthetic value could serve as a major reason for promoting harmony of any type. Let me take intra-personal harmony and inter-personal harmony as two examples. Promoting the aesthetic value inherent to a state of inner peace is acceptable or permissible but not very significant. The reason inner peace is worth promoting is not primarily due to the beauty accompanied by inner peace, but rather because of the intrinsic value of inner peace or some other momentous values. By the same token, you and your roommate might consider seeking the aesthetic value of your peaceful co-existence as a motivation for your coordination. But it is odd to say that the aesthetic value provides an important reason for justifying the coordination. Moreover, I doubt that in principle there is any unifying reason for promoting harmony of all types. The reasons for promoting intra-personal harmony could be very different from the reasons for promoting inter-personal harmony, such as familial harmony, national harmony and international harmony. The value of harmony in the cosmos, which is one of the major concerns of environmental philosophy, is different from the value of either intra-personal harmony or inter-personal harmony. For me, harmony is too inclusive to instantiate one unifying value. So far, I've briefly examined Li's answers to the first three questions. Now, I turn to the last question: how to promote harmony. Here I only focus on inter-personal harmony. I agree with Li on the general strategy for reaching harmony: we need to be open-minded and to be as creative as possible (pp. 20-22). In seeking harmony, we need to avoid two tendencies. On the one hand, we shouldn't exercise the tendency to avoid confrontation by surrendering to opponents rather than fighting for good causes. On the other hand, we shouldn't give in to the tendency to go all out to defeat opponents (p. 169). Li is right that seeking harmony is similar to creating a new way of life (pp. 20-22). Take intra-family division of labor as an example. As he says, If one person's income-earning capacity is low but is skilled at and enjoys housework, while the other person is a cardiologist who eschews housework, it may be to the advantage of both persons and their children if their division of labor is arranged accordingly to maximize their family prospect. Such a division of labor does not have to be drawn along sexual lines. (p. 113) Li emphasizes that harmony seeking as creating new ways of life is quite different from seeking conformity with pre-fixed orders (p. 21). He uses a few other terms, such as "a fixed grand scheme of things that pre-exists in the world" (p. 1) and "antecedent patterns" (p. 20) interchangeably with the term "pre-fixed orders." I find all of the terms quite ambiguous. Sometimes, Li seems to use them to refer to a scheme of dogmas (pp. 7-8). 
But at other times he seems to use the terms to mean a scheme of truth as "correspondence with objective fact in the world" (p. 21). It is quite correct to say that creating new ways of life is different from seeking conformity with a scheme of dogmas. But a process of creating new ways of life is usually a process of truth-seeking as well. Let me give a realistic scenario of intra-family division of labor to illustrate the congruence of creativity and truth seeking. Suppose that the wife is a very busy cardiologist. The husband is a freelance writer who is skilled at and enjoys housework. In the not too distant past, both the husband and the wife had held the dogma that women ought to take the responsibility of doing housework, a dogma that at the time was universally accepted by their society. One day, the wife returned home, as she routinely did, at 8pm and immediately went into the kitchen to fix dinner. She didn't even take time to drink some water. Five minutes later, the husband who was watching TV in the living room, heard some unusual noise in the kitchen. He sadly found that his beloved wife had fainted due to exhaustion. At that moment, the husband suddenly realized the fact that he hadn't been a caring husband and the moral truth that intra-family division of labor ought not to be drawn along sexual lines. After the tragedy, the husband took the responsibility of doing almost all of the housework, and the family became more harmonious. In this scenario, the process of reaching harmony is also a process of finding the truth about intra-family division of labor. Besides creativity, according to Li, ritual propriety is conducive to harmony of all types (p. 57). Ritual propriety is another important notion in Confucian philosophy. It is not easy to characterize the concept accurately. By and large, ritual propriety concerns a conventional practice that regulates human behavior (p. 64). Many ritual proprieties could motivate the agents who partake in the practice to express a certain attitude towards others or themselves in an effective and efficient way. We can find many examples of ritual propriety in daily life, say, smiling towards each other and gift giving at Christmas. Many such practices express mutual respect or mutual concern effectively and efficiently. I agree with Li that, for any type of harmony, some proper forms of ritual propriety might be conducive to it. But we must be cautious. Establishing a ritual propriety shouldn't be at the expense of creativity and truth-seeking. Li undertakes a significant and difficult project. Seeking harmony is momentous in our times, as we oftentimes run into all kinds of conflicts. Li suggests we take a profound aspect of Confucian philosophy seriously, that is, we should seek ways of solving the conflicts between ourselves and others without having anyone make unjustified sacrifices. As Confucius says, "cultivated persons seek harmony but not sameness" (Analects 13. 23). I recommend this book to everyone who is interested in seeking a better self, a better life, or a better world, especially those people who are interested in finding the answers from ancient Chinese wisdom.
https://ndpr.nd.edu/news/the-confucian-philosophy-of-harmony/
Honoring International Migrants Day

(AGENPARL) – Fri 17 December 2021

12/17/2021 04:39 PM EST

Antony J. Blinken, Secretary of State

On December 18, International Migrants Day, we recognize the rights, contributions, and struggles of migrants, and reiterate the United States’ commitment to support safe, orderly, and humane migration around the world. Through regional partnerships created by the Department of State’s Bureau of Population, Refugees, and Migration with international humanitarian partners like the International Organization for Migration (IOM), as well as close coordination with governments throughout the world, we are working to enhance cooperation and migration management, to protect migrants in situations of vulnerability, and to address the root causes of irregular migration. In its World Migration Report 2022, the IOM estimates that there were almost 281 million international migrants in 2020, which equates to 3.6 percent of the total global population. The United States underscores the need to discourage irregular migration, which exposes migrants to dangerous smuggling operations and trafficking in persons. At the same time, we encourage governments to improve access to international protection screening, strengthen their asylum capacity, identify and assist victims of trafficking in persons, support returning migrants’ reintegration, and expand alternative legal pathways. Many migrants have faced tremendous hardships or lost their lives in dangerous irregular journeys across the Mediterranean, the Bay of Bengal and the Andaman Sea, the Red Sea, the Darien Gap, and desert conditions near our own southwest border. Instability, economic hardship, and climate change are all factors that can push people into taking these dangerous journeys. The United States recognizes that to achieve safe, orderly, and humane migration, we need comprehensive regional and global plans that address these complex issues. In July, the United States released its Collaborative Migration Management Strategy, as well as a Strategy to Address the Root Causes of Migration in Central America. The United States is the largest single donor to IOM and in Fiscal Year 2021 provided more than $25 million to support regional migration programs. Managing the unprecedented level of migration in the Western Hemisphere is a shared responsibility, and we continue to encourage governments and partners in the region to join us on a bold new regional approach on migration and protection. In honor of International Migrants Day, today we released our revised U.S. national statement on the Global Compact for Safe, Orderly and Regular Migration (GCM). We endorse the vision of the GCM and commit to working with countries to enhance cooperation and manage migration in ways that are safe, orderly and humane.
Ashtamahishi: The Eight Wives of Krishna Krishna, the eternal lover, is believed to have charmed the heart of every woman he came across, and his marriage with 16,100 women is the stuff of numerous ballads that have enthralled us over ages. But who amongst them all did Krishna love? Who ruled his heart and influenced his life? Not one, but there were eight women whom Krishna married solely on the basis of mutual love and respect. Each of these wives—the Ashtabharyas—contributed to making Krishna what he was. While their names figure in the text of the great epic Mahabharata, not much has been discussed about them. Who are these women, and what was that special ‘something’ in each of them that won Krishna over? What were each of those relationships like? Radha Viswanath delves deep into the great Hindu epics, puranas, and other ancient texts, weaving nuggets of information with rich imagination to give us a fascinating picture of Krishna’s life with these eight extraordinary women.
https://www.booksetgo.com/products/ashtamahishi-the-eight-wives-of-krishna
About Lord Hanuman Ji

Hanuman (Sanskrit: हनुमान्, IPA: hʌnʊˈmɑn) is a Hindu deity who was an ardent devotee of Rama according to Hindu legend. He is a central character in the Indian epic Ramayana and its various versions. He also finds mention in several other texts, including the Mahabharata, the various Puranas and some Jain texts. A vanara (ape-like humanoid), Hanuman participated in Rama's war against the demon king Ravana. He is the son of Lord Vayu and an incarnation of Lord Shiva.
http://whopopular.com/Lord-Hanuman-Ji
This isn’t corporate medicine we’re practicing here – it’s personal, custom-designed health care. Our veterinary practice is located just 7 minutes outside of Courtenay in the village of Cumberland, and being small gives us the opportunity to build a relationship with each and every one of our Comox Valley patients – and their families. More than a veterinarian, Dr. Carol Champion is your pet's family doctor; someone who understands your pet's medical history, likes and dislikes and even how her environment may be affecting her health. We recognize that your pet’s health is dependent upon a variety of factors, including age, lifestyle, diet and breed. We take all of these factors into account when making a diagnosis in order to provide the best treatment available. Whether you’re from Comox, Courtenay, or Cumberland, we’d love to help you look after your pet, and show you what makes our clinic so unique. Because we treat only dogs and cats, our training and expertise is focused entirely on canine and feline health. That means that your pet is always assured of the most professional and up-to-date treatment available. See why it's different here. Meet our team or learn about some of the services we offer. Feel free to contact us with any questions you may have.
https://www.championvet.ca/
How accurately can we predict incidental L2 vocabulary learning during activities such as reading and writing? - Do kindergarteners develop awareness of the grammatical structures they acquire? - Listening to songs and singing helped improve the pronunciation of foreign words but not meaning recall - Interdependence in listening comprehension skills across first and second languages in bilingual children - Promoting multisite research in the field of second language acquisition - Mandarin listeners may treat tone errors differently depending on the accent of the speaker who makes them - Understanding others’ intentions supports the learning of adjective meanings - Is it really easier to learn more languages the more you already know? - To develop fluency, what is the best schedule for recycling the same speaking task?
https://oasis-database.org/catalog?f%5Bpublication_journal_name_sim%5D%5B%5D=https%3A%2F%2Fonlinelibrary.wiley.com%2Fjournal%2F14679922&locale=en
This protocol describes methods for increasing and evaluating the efficiency of genome editing based on the CRISPR-Cas9 (clustered regularly interspaced short palindromic repeats-CRISPR-associated 9) system, transcription activator-like effector nucleases (TALENs) or zinc-finger nucleases (ZFNs). First, Indel Detection by Amplicon Analysis (IDAA) determines the size and frequency of insertions and deletions elicited by nucleases in cells, tissues or embryos through analysis of fluorophore-labeled PCR amplicons covering the nuclease target site by capillary electrophoresis in a sequenator. Second, FACS enrichment of cells expressing nucleases linked to fluorescent proteins can be used to maximize knockout or knock-in editing efficiencies or to balance editing efficiency and toxic/off-target effects. The two methods can be combined to form a pipeline for cell-line editing that facilitates the testing of new nuclease reagents and the generation of edited cell pools or clonal cell lines, reducing the number of clones that need to be generated and increasing the ease with which they are screened. The pipeline shortens the time line, but it most prominently reduces the workload of cell-line editing, which may be completed within 4 weeks.
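In practice, the IDAA readout comes down to comparing the sizes of fluorophore-labeled amplicon peaks against the expected wild-type fragment size. The sketch below is a rough illustration of that bookkeeping step rather than code from the protocol: the peak list, the 300 bp wild-type amplicon size, and the 1 bp tolerance are all hypothetical example values.

```python
# Minimal sketch: summarizing hypothetical IDAA fragment-analysis peaks.
# The peak list, expected wild-type amplicon size, and size tolerance are
# illustrative assumptions, not values taken from the published protocol.

WT_AMPLICON_SIZE = 300   # expected wild-type fragment size in bp (hypothetical)
SIZE_TOLERANCE = 1       # peaks within +/- 1 bp of wild type count as unedited

# (fragment size in bp, peak area) pairs, e.g. exported from sequenator software
peaks = [(300, 5200.0), (297, 2100.0), (293, 900.0), (305, 400.0)]

def summarize_indels(peaks, wt_size=WT_AMPLICON_SIZE, tol=SIZE_TOLERANCE):
    """Map indel size (0 = wild type) to its relative frequency by peak area."""
    total_area = sum(area for _, area in peaks)
    summary = {}
    for size, area in peaks:
        delta = size - wt_size            # negative = deletion, positive = insertion
        key = 0 if abs(delta) <= tol else delta
        summary[key] = summary.get(key, 0.0) + area / total_area
    return summary

if __name__ == "__main__":
    for indel, freq in sorted(summarize_indels(peaks).items()):
        label = "wild type" if indel == 0 else f"{indel:+d} bp"
        print(f"{label}: {freq:.1%}")
```

A real analysis would start from the sequenator's exported peak table and would have to handle noise peaks and size-calling uncertainty, but the overall editing frequency is essentially the summed relative area of all non-wild-type peaks.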
https://curis.ku.dk/portal/da/publications/genome-editing-using-facs-enrichment-of-nucleaseexpressing-cells-and-indel-detection-by-amplicon-analysis(cb2b2165-6aad-4d3f-b172-a95e1d8ce4a7).html
Tuscan cuisine has its own features in every province and still has some common peculiarities which distinguish it from other cuisines in the world. One of the unique features is olive oil and bread which are at the same time additional and compulsory ingredients of many recipes. Local savoury herbs (rosemary, thyme, basil, sage, parsley, etc.), garlic and onion are used for dressing. The simplicity of cooking is one of the features of Tuscan cuisine. All dishes are easy to cook and one can always taste the natural flavor of the products. Tuscan table is famous for a variety of vegetables which are served not in salads but “as they are”. Tuscan cuisine is often described as essenziale – substantial, main as it cares for keeping of the essence of each of the product used. Cooks don’t often decorate the dishes. Any dish goes with famous Tuscan wines the presence of which is as obligatory as the presence of bread. Thanks to the geographical position of Tuscany its cuisine includes sea food and country food. Each Tuscan province can surprise the visitors with delicatessen. Tuscan cuisine of the poor and the rich is almost alike. The main ingredients and recipes are used to be the same. The difference is not in quality but in quantity. The dishes which wealthy people could afford on every day basis were cooked by peasants only in case of a holiday. Today Tuscan guests can taste the whole variety of local dishes. According to the Italian proverb salad should be cooked by four chefs: a miser, a philosopher, a prodigal and a painter. The miser should serve the salad with vinegar, the philosopher should add salt; the prodigal adds oil and the painter mixes all the ingredients. The dishes are different for each of the provinces. Pasta, despite the prejudice, is a favorite but not the only national dish. Tuscany is noted for the picturesque olive-woods on the beautiful hillsides. Although the beauty of the landscapes is not less important than the benefit of the oil trees. Olive oil in Tuscany is considered to be the best. High quality of this delicate product is provided thanks to the favorable climate and soil peculiarities. In some places of Tuscany people use the traditional way of production: olives are gathered manually and then pressed mechanically. “Mangiare senza pane e come non mangiare”: “No bread – no food”, say Tuscans. There was a time when bread with onions or crust of bread grilled with vegetables used to be the main dish of peasants. A Tuscan could eat up to 650 grammes of bread which means about 230 kilos a year! Despite the modesty of present figure, 74 kilos a year, bread is always served and treated with great warmth by Tuscans.
https://en.toskana-netz.de/168/tuscan-cuisine.html
THE Malaysian University English Test (MUET) has been improved to ensure it retains its position in testing language proficiency. Carried out by the Malaysian Examinations Council (MEC) since 1999, the MUET syllabus and test specifications have been revised twice – the first in 2008, and the second this year to align MUET with the Common European Framework of Reference for Languages (CEFR). MUET candidates were tested based on the latest revision of the Reading, Writing and Listening syllabus and test specifications for the first time yesterday. The new Speaking test was held in February. MUET Excellent Teacher Shanti Subramaniam from Kolej Tingkatan Enam Sri Istana, Klang, Selangor, said the latest Reading texts can be quite challenging compared to the previous ones. “Students will find some unfamiliar topics and have to infer their answers from the texts. In the old format, they could still look for answers in the passages. “The Listening test is also a little more challenging even though it comprises all multiple-choice questions. “The Writing tasks are much easier with letter and email writing. While Task 2 requires critical thinking skills, the students should be able to give their elaboration. The number of words required for both essays has also been reduced. “For Speaking, the topic discussed in Part 2 is different from that in Part 1. This will help us identify the better students, ” she told StarEdu. The objective of the MUET is to measure the English language proficiency of candidates who intend to pursue first degrees in public and private universities in Malaysia. It helps institutions make better decisions about the readiness of prospective students for academic coursework, and their ability to use and understand English at university. The test can also be used to measure the English proficiency of adult learners of English, including teachers and others who need to use the English language in the workplace. The MUET tests all four language skills: Listening, Speaking, Reading and Writing. Administered three times a year, the MUET sessions are named MUET Session 1, MUET Session 2 and MUET Session 3.The MEC also administers MUET on Demand (MoD), which enables candidates to take the test outside the fixed sessions held each year.
https://www.thestar.com.my/news/education/2021/04/11/muet-enhanced-to-ensure-testing-rigour
Acute Traumatic Compartment Syndrome of the Forearm: Literature Review and Unfavorable Outcomes Risk Analysis of Fasciotomy Treatment. Forearm compartment syndrome is a relatively underreported event compared with compartment syndrome of the lower extremity or trunk. The aim of this review of the literature was to provide insight into the potential consequences of certain treatment modalities in the control of acute compartment syndrome of the forearm based on data presented over the past 44 years. A comprehensive search was conducted across several databases including EMBASE, Ovid MEDLINE, Cochrane Database of Systematic Reviews, and Scopus, capturing studies published from 1973 to 2017 to identify potential articles for inclusion in the review. Outcomes data were evaluated for each of the studies included in this analysis on the basis of treatment utilized (fasciotomy vs. no fasciotomy) and respective outcome (favorable vs. unfavorable). Relative risk (RR) analysis was performed to determine risk factors for unfavorable outcomes from the pooled data. The analysis revealed a statistically significant higher likelihood of unfavorable outcomes resulting from performing fasciotomy in the event of forearm compartment syndrome compared with conservative management (RR = 4.82, p < .01). Fasciotomy treatment was associated with a higher likelihood of patients presenting with forearm compartment syndrome to experience unfavorable outcomes. The results of this study can help guide awareness of potential sequelae of treatment choices in forearm compartment syndrome, and clinical decision-making for wise patient selection for surgical intervention, when necessary.
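For readers less familiar with the statistic, the relative risk in a pooled analysis of this kind is computed from a simple 2x2 table of treatment versus outcome. The short sketch below shows only the form of that calculation; the counts are hypothetical placeholders and are not the data behind the reported RR of 4.82.

```python
# Relative risk from a pooled 2x2 table, with a 95% CI via the log method.
# The counts below are hypothetical placeholders, NOT data from the review.
import math

fasciotomy_unfav, fasciotomy_total = 40, 100       # hypothetical
conservative_unfav, conservative_total = 8, 100    # hypothetical

def relative_risk(a, n1, c, n2):
    """RR of an unfavorable outcome (exposed vs unexposed) with a 95% CI."""
    risk_exposed, risk_unexposed = a / n1, c / n2
    rr = risk_exposed / risk_unexposed
    # standard error of log(RR), then a normal-approximation 95% interval
    se = math.sqrt((1 - risk_exposed) / a + (1 - risk_unexposed) / c)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, lower, upper

rr, lower, upper = relative_risk(fasciotomy_unfav, fasciotomy_total,
                                 conservative_unfav, conservative_total)
print(f"RR = {rr:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```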
Blake Krikorian Leaves Amazon's Board Because He Just Sold His Startup To One Of Its Rivals SAN FRANCISCO (Reuters) – Silicon Valley entrepreneur and investor Blake Krikorian has quit the board of Amazon.com Inc about a year and a half after joining to take up an unspecified role at the buyer of a company he owned. Krikorian, known for co-founding Sling Media in 2004, informed the rest of the board on Wednesday of his intention to resign, Amazon said in a Friday filing. Spokesman Ty Rogers added that the serial entrepreneur, whose latest endeavour is home-automation startup id8 Group R2 Studios Inc, has sold a company and quit in order to take up a position at the acquirer. He did not name the company involved or the buyer. The Wall Street Journal reported last week that Krikorian’s year-old startup was in acquisition discussions with Amazon rivals Apple Inc, Google Inc and Microsoft Corp. It cited sources as saying the trio of tech powerhouses coveted R2 Studios’ home-oriented technology as they expanded their own forays into living-room media entertainment. R2 Studios recently launched a Google Android application to allow users to control home heating and lighting systems from their smartphone. Krikorian’s Sling Media — which was sold to EchoStar Communications in 2007 — made the “Slingbox” for watching TV on computers.
The Verdure Chantilly tapestry is finely woven in France by Jules Pansu, showing a woodland scene of a cottage, a church and a meandering stream with a bridge viewed through the forest trees. In the foreground wild flowers are in bloom. The border features ripened fruits, including grapes, apples, plums, and pears, gracefully entwined around the piece. The word "verdure" comes from the old French word verd, meaning green, itself a corruption of the original Latin word viridis. One notable feature of all verdures is their attention to detail and their striking use of natural colors to create a strong visual impact. Unlike other well known tapestry motifs, such as Mille Fleurs, verdures are works of art in their own right, with the central and ornamental elements making up the focus of the work rather than just a decorative backdrop. This French tapestry is a beautiful example of the warmth of verdure tapestry designs, and contains much detail. The foreground and background features combine to create an illusion of depth that complements the luxury of the woven fabric found in all our wall tapestries. It is a work of art that would look equally good in a traditional home or a more contemporary setting. This verdure tapestry is lined and has a tunnel for ease of hanging - we provide tapestry hanging instructions with your order. Please note that it is non-returnable.
https://www.thetapestryhouse.com/tapestries/view/19/verdure-chantilly
CALL FOR ENTRIES: INNOVATION CHALLENGE Earn a prize for a new technology or idea that will benefit the Wearable Robotics Industry. The competition will open on November 1, 2017 and close February 1, 2018. The Wearable Robotics Association (WearRA) is inviting entries that represent the most innovative new ideas in wearable robotic technology for the Innovation Challenge as part of WearRAcon. The winner of the competition will receive international recognition and a $5,000 cash prize to support development and commercialization of the technology. Upload your concept paper by February 1, 2018. The review committee will select as many as ten (10) finalists. All finalists will be invited to present their product concepts at WearRAcon 18, March 21-23 at the Scottsdale Plaza Resort in Scottsdale, AZ (USA). The winning entry will be selected through live voting by conference attendees and a panel of judges representing industry, government, academia, and corporate executives. The top projects will receive awards. The one considered to be the most innovative will be announced at the conference, and will receive $5,000 to accelerate the new technology. WearRAcon 17 Winning Entry: EDU EXO (STEM Education Exoskeleton Kit) Submitted by: Volker Bartenbach, Switzerland View submission here View all 2017 submissions View all 2016 submissions Submission Details: Submissions are due by February 1, 2018. The proposal must include information about intellectual property and the development outline with milestones for deliverables. Explain the potential impact on the wearable robotics community, as well as significance of this concept as an advancement for the industry. Eligibility: Applicants must demonstrate an ability to move the proposed plan to completion, showing how the prize can accelerate any step along the path to market. Applicants may reside in any country. No specific background or experience is required. Finalists must register and attend WearRAcon 18 Instructions for the Submission: Please submit your proposal by Thursday, February 1, 2018, 11:59 PM (EST) Proposal Content (NTE 4 pages) Paper Layout: 1. Title of proposed project (maximum 50 characters) 2. Leader (name, title, affiliation, address, telephone, email address) 3. Key collaborators (names, titles, affiliations) 4. Brief summary of project goals and plans/approaches (not to exceed 350 characters) a. Focus specifically on what your project would bring to the wearable robotics industry 5. Brief description of what is unique about the project. If it is competitive to any existing product or service, please describe how your project compares to any competitor (500 characters) 6. Detailed description of how you plan to accomplish the overall plan (750 characters) a. Estimated steps/milestones/timeline to reach customers b. Go/No Go decision points 7. Brief description of intellectual property position and potential for commercialization: investment potential and likelihood of ongoing funding support to customers (500 characters) 8. Anticipated total budget and outline of what a prize of $5,000 will allow you to accomplish Proposal Submission: Must not exceed 4 pages, inclusive of all information listed above, minimum 10 point font Convert document to a PDF and send to [email protected] A confirmation email will be replied to upon receipt of your submission. If you have not received confirmation, please call: 602-632- 0999. Review: Proposals will be scored based on the following criteria: 1. 
Potential benefit and appeal to people within the wearable robotics community 2. Likelihood of Development: Investment potential and consideration of ongoing funding support 3. Intellectual Property Strategy and Status: Freedom to operate 4. Timeframe and probability of success 5. Overall Impression Selected Proposals/Finalists: Finalists will be notified by February 10, 2018. As many as ten (10) finalists will be selected to present at WearRAcon 18 (Scottsdale, Arizona, March 21-23, 2018). Each presenter will have five (5) minutes to present the concept, followed by five (5) minutes of questioning. The event will feature live voting among audience members and a panel of judges representative of industry, government, academia, and corporate executives. The winning project considered to be the most innovative will be announced at the conference.
http://www.wearablerobotics.com/wearracon-18/program/innovation-competition/submission-details/
Pakistan is a country that has always been teetering on the precipice. In spite of an abundance of human and natural resources, and spurts of growth and success, it has remained a fragile state. To extricate itself from a permanent state of instability, Pakistan must focus on the symptoms that make it a fragile state. These indicators can be categorized as social, economic and political. Stabilizing Pakistan will require an objective assessment of these Fragile State Indicators (FSIs). This myriad of problems requires a strategic roadmap to reverse the national decline, which will involve four steps:

- Defining the Fragile State Indicators
- Assessing Pakistan's current state vis-a-vis the FSIs
- Root cause analysis of the FSIs
- A comprehensive strategy to mitigate and eventually resolve the FSIs

The first step is to clearly define and comprehend each indicator.

- Social Indicators
  - Demographic Pressures
    - Pressure from a high density of population in comparison to available food, land, water, electricity, transportation and health care.
    - Pressure from land, border or settlement disputes.
  - Massive Movement of Refugees or IDPs
    - Uprooting of large groups due to war, repression, food shortages, disease, lack of clean water, land competition and general turmoil.
    - The uprooting leads to conflict with neighboring states.
  - A Culture of Vengeance Seeking
    - A perceived sense of historic injustice that continues to be an emotional issue for the public.
    - This could include impunity against specific communal groups by dominant groups.
    - Institutionalization of political exclusion, public scapegoating, emergence of hateful media, pamphleteering and stereotypical or nationalistic political rhetoric.
    - Growth of anti-nationalist expatriate groups.
  - Brain Drain
    - The sustained emigration of professionals, intellectuals, politicians and the middle class, often leaving the least productive elements in key community, commercial and political positions.
- Economic Indicators
  - Uneven Economic Development
    - Group-based inequality, whether real or perceived, in education, jobs and economic status, reflected in group-based poverty, infant mortality and education levels.
  - Economic Decline or Stagnancy
    - Sharp economic decline of the country as a whole in per capita income, GNP, debt, business failures and poverty levels.
    - A sudden drop in commodity prices, trade revenue, foreign investment or debt payments.
    - Collapse or devaluation of the national currency.
    - Growth of hidden economies such as the drug trade, smuggling and capital flight.
    - Failure of the state to pay salaries of government employees and armed forces, or to meet obligations to citizens such as pension payments.
- Political Indicators
  - Corrupt Leadership & Governance
    - Endemic corruption or profiteering by the ruling elites and resistance to transparency, accountability and political representation.
    - Loss of public confidence in state institutions and processes.
  - Progressively Deteriorating Public Services
    - Lack of basic state functions to serve the people, such as protection from terrorism and violence, and access to essential government services such as health, education, sanitation and public transportation.
    - The use of the state apparatus for agencies that serve the ruling elites, such as security forces, ministerial staff, central bank, diplomatic services, customs and collection agencies.
  - Widespread Human Rights Violations
    - Emergence of authoritarian, dictatorial or military rule in which constitutional and democratic institutions and processes are suspended or manipulated.
    - Outbreaks of religio-political violence against innocent civilians.
    - Rise of political prisoners or dissidents who are denied due process consistent with international norms and policies.
    - Widespread abuse of legal, political and social rights.
    - Harassment of the press, politicization of the judiciary, military and police, public repression of political opponents, and religious or cultural persecution.
  - Security State Apparatus
    - Emergence of an elite or praetorian guard that operates with impunity.
    - Emergence of state-sponsored private militias that terrorize political opponents, suspected "enemies of the state", or civilians seen to be sympathetic to a particular political point of view.
    - An army that serves the interests of a dominant political group.
    - Emergence of rival militias, guerrilla forces or private armies in an armed and protracted violent campaign against each other or the state security forces.
  - Factionalized Elites
    - Fragmentation of ruling elites and state institutions along group lines.
    - Use of aggressive nationalistic rhetoric by ruling elites, especially destructive forms of communal irredentism or communal solidarity such as ethnic cleansing or "defense of a particular faith".
  - External Intervention
    - Military or paramilitary engagement in the internal affairs of the state at risk by outside armies, states, identity groups or entities that affect the internal balance of power or the resolution of conflict.
    - Intervention by donors, or over-dependence on foreign aid or peacekeeping missions.
  - Rule of Law Deterioration
    - Rampant and blatant criminality and/or violence, general insecurity and the inability of the state to subdue criminal elements.

When we inspect the list of indicators, we unfortunately see a series of factors that could very well be a description of Pakistan.

The second step is to honestly assess Pakistan's current status for each indicator.

| Indicator | Pakistan examples | Intensity | Prevalence |
|---|---|---|---|
| High Density Population to Resources | | ♦♦♦♦♦ | ♦♦♦ |
| Land, Water, Border Disputes | | ♦♦♦♦♦ | ♦♦♦♦♦ |
| Uprooting of Population | | ♦♦♦♦♦ | ♦ |
| Uprooting Causing Conflict w/Neighbor States | | ♦♦♦ | ♦♦♦ |
| Sense of Historic Injustice | | ♦♦♦♦♦ | ♦♦♦♦♦ |
| Persecution of Groups w/Impunity | | ♦♦♦♦♦ | ♦♦♦♦♦ |
| Institutionalized Marginalization | | ♦♦♦♦♦ | ♦♦ |
| Growth of anti-nationalist Expatriates | | ♦♦♦♦♦ | ♦ |
| Emigration due to Economics | | ♦♦♦♦♦ | ♦♦♦♦♦ |
| Emigration due to Persecution | | ♦♦♦♦ | ♦ |
| Group Based Inequality | | ♦♦♦♦♦ | ♦♦♦♦♦ |
| Economic Decline | | ♦♦♦ | ♦♦♦ |

The third step is to understand the root causes and linkages of the indicators to each other.

The fourth step is the most difficult: establishing a long-term strategy to address and mitigate each indicator. The solution must not be tactical but strategic. It must not be myopic but all-encompassing in its scope.

The NexGen Institute will continue to work on discussing these difficult topics and determining solutions to the perplexing challenges faced by Pakistan.
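To make the second-step assessment easier to reproduce and update, the diamond ratings in the table above could be converted to numbers and combined into a rough composite per indicator, as in the sketch below. The 1-5 scale and the intensity-times-prevalence weighting are assumptions of this illustration, not part of the original assessment.

```python
# Illustrative only: ranking indicators by a composite of the diamond ratings.
# The 1-5 scale and intensity * prevalence weighting are assumptions here.

ratings = {
    # indicator: (intensity, prevalence), each 1-5 (number of diamonds)
    "High Density Population to Resources": (5, 3),
    "Land, Water, Border Disputes": (5, 5),
    "Sense of Historic Injustice": (5, 5),
    "Institutionalized Marginalization": (5, 2),
    "Emigration due to Economics": (5, 5),
    "Economic Decline": (3, 3),
}

def composite(intensity, prevalence):
    """How severe the symptom is, times how widespread it is."""
    return intensity * prevalence

for indicator, (i, p) in sorted(ratings.items(),
                                key=lambda kv: composite(*kv[1]),
                                reverse=True):
    print(f"{composite(i, p):>2}  {indicator} (intensity {i}, prevalence {p})")
```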
http://www.nexgenpak.org/blog/fragile-state-indicators-draft
Game Over for Nation States? Technology Is Making Country Borders Less and Less Relevant

August 14, 2017, Marianna Mäki-Teeri

Ray Kurzweil, one of the world’s leading inventors, thinkers, and futurists, stated in a recent interview that we are going to witness the end of the nation state as we’ve known it. He believes that the elementary particles of the classical world order are about to change as technology keeps making our borders less and less relevant. If this means game over for traditional nation states, what’s going to replace them? Kurzweil believes that we’re heading towards a one world society. As he puts it: “We’re building up a world culture, a world legal system. Nations are still powerful, but I think they are going to continue to get less influential”. Kurzweil has an impressive track record of predicting the pace of technology and the world of tomorrow, and growing international interdependence and technological development aren’t the only signs supporting his statement. The decreasing importance of physical country borders also affects the identities of an ever-growing portion of people. Recent studies have shown that more and more people consider themselves global citizens and link their identities primarily to the world around them instead of a single nation state. Last year, the results of a global survey by GlobeScan revealed for the first time in 15 years of tracking that nearly one in two people (49%) perceive themselves more as global citizens than citizens of their country. Similarly, a study published by the World Economic Forum indicated that young people in the 18-35 age group most often define themselves as global citizens (36%). So far the discussion around global citizenship has not spread to widely supported ideas for forming a global state or legislative entity; instead it has concentrated on a global sense of community, cooperation, and activism. However, there already exists at least one option for global citizens who wish to be part of a post-nation-state nation without geographically constraining boundaries: Bitnation. Bitnation is a virtual nation whose decentralized governance relies on an open-source movement and a peer-to-peer system operated by blockchain-based technology. Susanne Tarkowski Tempelhof, CEO of Bitnation, along with many other blockchain enthusiasts, believes that the rise of the blockchain and cryptocurrencies is the beginning of the end of the nation state. Do you think we are going to witness the end of current nation states? And if so, what would be the next best alternative?
https://www.futuresplatform.com/blog/game-over-nation-states
The invention discloses a method for judging the non-stationarity of a hydrologic time series. The method comprises the following steps: selecting a specific ensemble empirical mode decomposition (EEMD) method and decomposing the sequence; using the energy spread function of white noise to identify periodic components in the original sequence; removing those periodic components and taking the residual components as a new sequence; selecting an appropriate model equation and fitting it; carrying out a unit root test on the new sequence using the selected model equation; further analyzing the statistical properties of the new sequence using the autocorrelation and partial autocorrelation coefficient plots; and comparing the autocorrelation (and partial autocorrelation) results with the unit root test result for consistency. If the two results agree and the unit root test statistic fails to reject the null hypothesis, the sequence is judged to be non-stationary; if the two results are inconsistent, the sequence is judged not to be non-stationary. The disclosed method removes the influence of periodic terms in the sequence on the unit root test, yielding an accurate test of sequence non-stationarity.
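The testing stage of such a workflow can be illustrated in a few lines of Python. The sketch below assumes the periodic components have already been identified and removed (the EEMD step is represented by a placeholder function) and uses statsmodels for the augmented Dickey-Fuller unit root test and the autocorrelation/partial autocorrelation diagnostics; it illustrates the general workflow rather than the patented method itself.

```python
# Sketch of the testing stage: unit root test plus ACF/PACF diagnostics on a
# series from which periodic components are assumed to have been removed.
import numpy as np
from statsmodels.tsa.stattools import adfuller, acf, pacf

def remove_periodic_components(x):
    # Placeholder for the EEMD-based step described above: decompose x,
    # identify periodic IMFs against the white-noise energy spread, and
    # subtract them. Returned unchanged here purely for illustration.
    return x

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))   # toy nonstationary (random-walk) series

residual = remove_periodic_components(x)

adf_stat, p_value, *_ = adfuller(residual)
print(f"ADF statistic = {adf_stat:.3f}, p-value = {p_value:.3f}")

# A slowly decaying ACF together with a unit root test that fails to reject
# the null points toward non-stationarity, mirroring the consistency check above.
print("ACF :", np.round(acf(residual, nlags=10), 2))
print("PACF:", np.round(pacf(residual, nlags=10), 2))
```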
Mathematician Michael Frazier of Michigan State University was educated in the tradition that maintains that "real" mathematics by "real" mathematicians is and should be useless. "I never expected to do any applications—I was brought up to believe I should be proud of that," he says. ''You did pure harmonic analysis for its own sake, and anything besides that was impure, by definition." But in the summers of 1990 and 1991 he found himself using a mathematical construction to pick out the pop of a submarine hull from surrounding ocean noise. In St. Louis, Victor Wickerhauser was using the same mathematics to help the Federal Bureau of Investigation store fingerprints more economically, while at Yale University Ronald Coifman used it to coax a battered, indecipherable recording of Brahms playing the piano into yielding its secrets. In France, Yves Meyer of the University of Paris-Dauphine found himself talking to astronomers about how they might use these new techniques to study the large-scale structure of the universe. Over the past decade a number of mathematicians accustomed to the abstractions of pure research have been dirtying their hands—with great enthusiasm—on a surprising range of practical projects. What these tasks have in common is a new mathematical language, its alphabet consisting of identical squiggles called wavelets, appropriately stretched, squeezed, or moved about. A whole range of information—your voice, your fingerprints, a snapshot, x-rays ordered by your doctor, radio signals from outer space, seismic waves—can be translated into this new language, which emerged independently in a number of different fields, and in fact was only recently understood to be a single language. In many cases this transformation into wavelets makes it easier to transmit, compress, and analyze information or to extract information from surrounding "noise"—even to do faster calculations. In their initial excitement some researchers thought wavelets might virtually supplant the much older and very powerful mathematical language of Fourier analysis, which you use every time you talk on the telephone or turn on a television. But now they see the two as complementary and are exploring ways to combine them or even to create more languages "beyond wavelets." Different languages have different strengths and weaknesses, points out Meyer, one of the founders of the field: "French is effective for analyzing things, for precision, but bad for poetry and conveying emotion—perhaps that's why the French like mathematics so much. I'm told by friends who speak Hebrew that it is much more expressive of poetic images. So if we have information, we need to think, is it best expressed in French? Hebrew? English? The Lapps have 15 different words for snow, so if you wanted to talk about snow, that would be a good choice." Some information processing is best done in the language of Fourier; other with wavelets; and yet other tasks might require new languages. For the first time in a great many years—almost two centuries, if one goes back to the very birth of Fourier analysis—there is a choice. A MATHEMATICAL POEM Although wavelets represent a departure from Fourier analysis, they are also a natural extension of it: the two languages clearly belong to the same family. The history of wavelets thus begins with the history of Fourier analysis. In turn the roots of Fourier analysis predate Fourier himself (and much of what is now called Fourier analysis is due to his successors). 
But Fourier is a logical starting point; his influence on mathematics, science, and our daily lives has been incalculable, if to many people invisible. Yet he was not a professional mathematician or scientist; he fit these contributions into an otherwise very busy life. His father's twelfth child, and his mother's ninth, Joseph Fourier was born in 1768 in Auxerre, a town roughly halfway between Paris and Dijon. His mother died when he was nine and his father the following year. Although two younger siblings were abandoned to a foundling hospital after their mother's death, Fourier continued school and in 1780 entered the Royal Military Academy of Auxerre, where at age 13 he became fascinated by mathematics and took to creeping down at night to a classroom where he studied by candlelight. Fourier's academic success won the favor of the local bishop. But when at the end of his studies his application to join the artillery or the army engineers was rejected, he entered the abbey of St. Benoît-sur-Loire. (The popular story that he was rejected by the army because he was not of noble birth—and therefore ineligible "even if he were a second Newton"—is questioned by at least two of Fourier's contemporaries.) The French Revolution erupted before Fourier took his vows. At first indifferent, he became increasingly committed to the cause of establishing "a free government exempt from kings and priests"1 and in 1793 joined the revolutionary committee of Auxerre. Twice he was arrested, once in the bloody days shortly before the fall of Robespierre and again the following year, on charges of terrorism. In defending himself Fourier pointed out that during the Terror no one in Auxerre was condemned to death; a friend related that once, to prevent a man he believed to be innocent from arrest and the guillotine, Fourier invited the agent charged with the arrest to lunch at an inn and, "having exhausted every means of retaining his guest voluntarily," left the room on a pretext, locked the door, and ran to warn the suspect, returning later with excuses. After several years teaching in Paris, Fourier next accompanied Napoleon to Egypt, serving as permanent secretary of the Institute of Egypt that Napoleon had set up, in part to study Egypt's past and natural history. Upon Fourier's return to France, Napoleon appointed him prefect of the department of Isère. He served as prefect, living in Grenoble, for 14 years, earning a reputation as an able administrator; he was responsible for the draining of some 20,000 acres of swamps that | | Mathematical Analysis … mathematical analysis … defines all observable relationships, measures time, space, force, temperature. This difficult science grows slowly but once ground has been gained it is never relinquished. … Analysis brings together the most disparate phenomena and discovers the hidden analogies which unify them. If material escapes us like air or light because of its extreme fineness, if bodies are placed far from us in the immensity of space, if man desires to know the aspect of the heavens for times separated by many centuries, if the effects of gravity and temperature occur in the interior of the solid earth at depths which will remain forever inaccessible, yet mathematical analysis can still grasp the laws governing the phenomena. Analysis makes them actual and measurable and seems to be a faculty of human reason meant to compensate for the brevity of life and the imperfection of our senses. 
… —Joseph Fourier, La Théorie Analytique de la Chaleur had caused annual epidemics of fever. After Waterloo, denied a government pension because he had served under Napoleon, he found a safe haven in the Bureau of Statistics in Paris, and in 1817 (after an initial rebuff by King Louis XVIII) he was elected to the Academy of Sciences. Despite his administrative duties—and his isolation from Paris for many years—Fourier managed to pursue his scientific and mathematical interests. Victor Hugo called him a man "whom posterity has forgotten,"2 but Fourier's name is as familiar to countless scientists, mathematicians, and engineers as the names of their own children. This fame rests on ideas he set forth in a memoir in 1807 and published in 1822 in his book, La Théorie Analytique de la Chaleur (The Analytic Theory of Heat). Physicist James Clark Maxwell called Fourier's book "a great mathematical poem," but the description does not begin to give an idea of its influence. In the seventeenth century Isaac Newton had a new insight: that forces are simpler than the motions they cause, and the way to understand the natural world is to use differential and partial differential equations to describe these forces—gravity, for example. Newton's differential equation showing how the gravitational pull between two objects is determined by their mass and the distance between them replaced countless observations, and predictive science became possible. Theoretically possible, at least. Solving differential equations—actually predicting where we will be taken by forces that themselves depend at each moment on our changing position—is not easy. As Fourier himself wrote in La Théorie Analytique de la Chaleur , although the equations that describe the propagation of heat have a very simple form, "existing methods do not give any general way to integrate them; as a result it is impossible to determine the values of the temperature after a given period of time. This numerical interpretation … is nevertheless essential. … As long as we cannot achieve it … the truth that we wish to discover is as thoroughly hidden in the formulas of analysis as in the physical question itself."3 Some 150 years after Newton, Fourier provided a practical way to solve numerically a whole class of such equations, linear partial differential equations. His ideas dominated mathematical analysis for 100 years and had surprising ramifications even for number theory and probability. Outside mathematics their influence is difficult to exaggerate. Virtually every time scientists or engineers model systems or make predictions, they use Fourier analysis. Fourier's ideas have also found applications in linear programming, in crystallography, and in countless devices from telephones to radios and hospital x-ray machines; they are, in mathematician T. W. Körner's words, "built into the commonsense of our society." A RABBLE OF FUNCTIONS There are two parts to Fourier's contribution: first, a mathematical statement (actually proved later by Dirichlet), and, second, showing why this statement is useful. The mathematical statement is that any periodic (repeating) function can be represented as a sum of sines and cosines (see Figure 7.1). Roughly what this means is that any periodic curve, no matter how irregular (the output of an electrocardiogram, for example), can be expressed as the sum, or superposition, of a series of perfectly regular sine and cosine curves, of different frequencies. 
The irregular curve and the sum of sines and cosines are two different representations of the same object in different "languages." Even a jagged line can be represented as a Fourier series. The trick is to multiply the sines and cosines by a coefficient to change their amplitude (the height of their waves) and to shift them so that they either add or cancel (changing the phase). One can also treat nonperiodic functions this way, using the Fourier transform (see Box on p. 202). Fourier himself found this statement "quite extraordinary," and it met with some hostility. Mathematicians were used to functions that when graphed took the form of regular curves; the function f(x) = x², for example, produces a well-behaved, symmetrical parabola. (A function gives a rule for changing an arbitrary number into something else; f(x) = x² says to square any number x; if x = 2, then f(x) = 4.) The idea that any arbitrary curve could be expressed as a series of sines and cosines and thus treated as a function came as a shock and contributed to a profound and sometimes disturbing change in mathematics; mathematicians spent much of the nineteenth century coming to terms with just what a function was. "We have seen a rabble of functions arise whose only job, it seems, is to look as little as possible like decent and useful functions," wrote French mathematician Henri Poincaré in 1889. "No more continuity, or perhaps continuity but no derivatives. … Yesterday, if a new function was invented it was to serve some practical end; today they are specially invented only to show up the arguments of our fathers, and they will never have any other use."4 Ironically, Poincaré himself was ultimately responsible for showing that seemingly "pathological" functions are essential in describing nature (leading to such fields as chaos and fractals), and this new direction for mathematics proved enormously fruitful, giving new vigor to a discipline that some had found increasingly anemic, if not moribund. In 1810 the French astronomer Jean-Baptiste Delambre had issued a report on mathematics expressing the fear that "the power of our methods is almost exhausted,"5 and some 30 years earlier Lagrange, comparing mathematics to a mine whose riches had all been exploited, wrote that it was "not impossible that the mathematical positions in the academies will one day become what the university chairs in Arabic are now."6 "Looking back," writes Körner in his book Fourier Analysis, we can see Fourier's memoir "as heralding the surge of new mathematical methods and results which were to mark the new century."

The Fourier Transform

The Fourier transform is the mathematical procedure by which a function is split into its different frequencies, like a prism breaking light into its component colors. But the Fourier transform goes further and tells both how much of each frequency the function contains (the amplitude of the frequency) and the phase of the signal at each frequency (the extent to which it is shifted with respect to a chosen origin). The term also describes the result of that operation: the Fourier transform of a particular function (that varies with time) is a new function (that varies with frequency). A Fourier series is a special case of the Fourier transform, representing a periodic, or repeating, function. For functions that vary with time, such as sound recorded as changes in air pressure, or fluctuations in the stock market, frequency is generally measured in hertz or cycles per second. For functions that vary with space, "frequency" is often related to the inverse of a distance. For instance, the Fourier transform of a fingerprint will have large values near the "frequency" 15 ridges per centimeter. The Fourier series of a periodic function of period 2π is given by the formula

f(x) = a_0/2 + Σ (a_n cos nx + b_n sin nx),  the sum running over n = 1, 2, 3, …   (1)

The Fourier coefficients, a_n and b_n, tell how much a signal contains of each frequency n: its amplitude at frequency n is the square root of a_n² + b_n². Calculating the coefficients of the Fourier series of a function f(x) involves integral calculus:

a_n = (1/π) ∫ f(x) cos nx dx,   b_n = (1/π) ∫ f(x) sin nx dx,  the integrals taken from −π to π.

(This means that you multiply the function f(x) by the appropriate sine or cosine and integrate, measuring the area enclosed by the resulting curve; the result is divided by π. To multiply the function by a sine or cosine, the value of each point on the function is multiplied by the value of the corresponding point on the sine or cosine.) The phase can, in principle, be calculated from the coefficients. If you plot the point (a_n, b_n) on a coordinate system, the amplitude is the length of the line from the origin to that point, while the phase is the angle formed by that line and the positive x axis (Figure 7.2). The function can be reconstructed from the coefficients using formula (1) above; the sine or cosine at each frequency is multiplied by its coefficient, and the resulting functions are added together, point by point; the first term, a_0, is divided by 2.
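The recipe in the box can be checked numerically in a few lines. The sketch below (Python with numpy; the square-wave test signal and the ten-term cutoff are arbitrary choices for illustration) approximates the integrals for a_n and b_n by sums and rebuilds the signal from a handful of frequencies:

    import numpy as np

    # Sample one period (length 2π) of a square wave.
    N = 4096
    x = np.linspace(-np.pi, np.pi, N, endpoint=False)
    f = np.sign(np.sin(x))                      # the "irregular" signal to analyze

    # Approximate a_n and b_n by turning the integrals into sums.
    dx = 2 * np.pi / N
    def a(n): return np.sum(f * np.cos(n * x)) * dx / np.pi
    def b(n): return np.sum(f * np.sin(n * x)) * dx / np.pi

    # Rebuild the signal from just the first 10 frequencies, as in formula (1).
    rebuilt = a(0) / 2 + sum(a(n) * np.cos(n * x) + b(n) * np.sin(n * x) for n in range(1, 11))

    # Amplitude at frequency n is sqrt(a_n^2 + b_n^2).
    for n in range(1, 6):
        print(n, round(np.hypot(a(n), b(n)), 3))
    # Most of the remaining error sits right at the jumps (the Gibbs effect).
    print("rms rebuild error with 10 terms:", round(float(np.sqrt(np.mean((rebuilt - f) ** 2))), 3))

For a square wave the even-frequency amplitudes come out essentially zero and the odd ones fall off like 1/n, so a few terms already capture the overall shape.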
THE EXPLANATION OF NATURAL PHENOMENA The German mathematician Carl Jacobi wrote that Fourier believed the chief goal of mathematics to be "the public good and the explanation of natural phenomena," and Fourier showed how his mathematical statement could be used to study natural phenomena such as heat diffusion, by turning a difficult differential equation into a series of simple equations. Suppose, for example, we want to predict the temperature at time t of each point along a metal bar that has been briefly heated at one end. We start by establishing the initial temperature, at time zero, which we consider a function of distance along the bar. (This is why Fourier needed a technique that would work with all functions, even irregular or discontinuous ones: he couldn't expect the initial temperature to be so obliging as to take the form of a regular curve.) When that function is translated into a Fourier series a remarkable thing happens: the intractable differential equation describing the evolution of the temperature decouples, becoming a series of independent differential equations, one for the coefficient of each sine or cosine making up the function. These equations tell how each Fourier coefficient varies as a function of time. The equations, moreover, are very simple—the same as the equation that gives the value of a bank account earning compound interest (negative interest in this case). One by one, we simply plug in the coefficients describing the temperature at time zero and crank out the answers; these are the Fourier coefficients of the temperature at time t, which can be translated back into a new function giving the new temperature at each point on the bar. The procedure is no harder than the one banks use to compute the balance in their clients' accounts each month. Essentially we have made a little detour in Fourier space, where our calculations are immensely easier—as if, faced with the problem of multiplying the Roman numerals LXXXVI and XLI, we translated them into Arabic numerals to calculate 86 × 41 = 3526, and then translated the answer back into Roman numerals: LXXXVI × XLI = MMMDXXVI.
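The "compound interest" remark can be made concrete. For the standard textbook form of the heat equation on a bar (u_t = k·u_xx, essentially the problem Fourier studied), each sine coefficient evolves independently and simply decays exponentially, faster for higher frequencies. A small sketch with made-up numbers:

    import numpy as np

    # Initial temperature along the bar, written as a few Fourier (sine) coefficients.
    # b0[n] is the coefficient of sin(n * x); the values are invented for illustration.
    b0 = {1: 1.0, 3: 0.4, 7: 0.2}
    k = 0.05          # assumed thermal diffusivity of the bar

    # For u_t = k * u_xx, each coefficient obeys its own simple equation:
    # b_n(t) = b_n(0) * exp(-k * n**2 * t), the compound-interest formula with a
    # negative rate.  High frequencies die off fastest.
    def coefficients_at(t):
        return {n: c * np.exp(-k * n**2 * t) for n, c in b0.items()}

    for t in (0.0, 1.0, 10.0):
        print(t, {n: round(c, 4) for n, c in coefficients_at(t).items()})

Plugging the time-zero coefficients in and reading the decayed values out is the whole calculation; translating back from coefficients to temperatures is the return trip out of Fourier space.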
The techniques Fourier invented have of course had an impact well beyond studies of heat or even solutions to differential equations. Real data tend to be very irregular: consider an electrocardiogram or the readings of a seismograph. Such signals often look like "complicated arabesques," to use Yves Meyer's expression—tantalizing curves that contain all the information of the signal but that hide it from our comprehension. Fourier analysis translates these signals into a form that makes sense. In addition, in many cases the sines and cosines making up a Fourier series are not simply a mathematical trick to make calculations easier; they correspond to the frequencies of the actual physical waves making up the signal. When we listen to music or conversation, we hear changes in air pressure caused by sound waves—high sounds having high frequency and low sounds having lower frequency. (In fact, a piano can perform a kind of Fourier analysis: a loud sound near a piano with the damper off will cause certain strings to vibrate, corresponding to the different frequencies making up the sound.) Similarly—although this was not known in Fourier's time—radio waves, microwaves, infrared, visible light, and x-rays are all electromagnetic waves differing only in frequency. Being able to break down sound waves and electromagnetic waves into frequencies has myriad uses, from tuning a radio to your favorite station to interpreting radiation from distant galaxies, using ultrasound to check the health of a developing fetus, and making cheap long-distance telephone calls. With the discovery of quantum mechanics, it became clear that Fourier analysis is the language of nature itself. On the "physical space" side of the Fourier transform, one can talk about an elementary particle's position; on the other side, in "Fourier space" or "momentum space," one can talk about its momentum or think of it as a wave. The modern realization that matter at very small scales behaves differently from matter on a human scale—that at small scales we cannot have precise knowledge of both sides of the transform at once, we cannot know simultaneously both the position and momentum of an elementary particle—is a natural consequence of Fourier analysis. BEING ACADEMIC OR BEING REAL While irregular functions can be expressed as the sum of sines and cosines, those sums usually are infinite. Why translate a complex signal into an endless arithmetic problem, calculating an infinite number of coefficients, and summing an infinite number of waves? Fortunately, a small number of coefficients is often adequate. From the heat diffusion equation, for example, it is clear that the Fourier coefficients of high-frequency sines and cosines rapidly get exceedingly close to zero, and so all but the first few frequencies can safely be ignored. In other cases engineers may assume that a limited number of calculations give a sufficient approximation until proved otherwise. In addition, engineers and scientists using Fourier analysis often don't bother to add up the sines and cosines to reconstruct the signal—they "read" Fourier coefficients (at least the amplitudes; phases are more difficult) to get the information they want, rather the way some musicians can hear music silently, by reading the notes. They may spend hours on end working quite happily in this "Fourier space," rarely emerging into "physical space." But the time it takes to calculate Fourier coefficients is a problem. 
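Reading Fourier coefficients "like notes" is easy to try on a computer. The sketch below (Python with numpy; the two tones and the one-second duration are invented for the example) transforms a signal made of two sine waves and reads off which frequencies carry energy and how much:

    import numpy as np

    fs = 1000                                   # samples per second (assumed)
    t = np.arange(0, 1.0, 1 / fs)
    # A made-up "chord": a strong 50-hertz tone plus a weaker 120-hertz tone.
    signal = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)

    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    amplitude = 2 * np.abs(spectrum) / signal.size   # scaled so a unit sine reads as 1.0

    # "Read the notes": the two largest peaks sit at 50 Hz and 120 Hz.
    peaks = np.argsort(amplitude)[-2:]
    for i in sorted(peaks):
        print(f"{freqs[i]:.0f} Hz  amplitude {amplitude[i]:.2f}")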
In fact, the development of fast computers and fast algorithms has been crucial to the pervasive, if quasi-invisible, use of Fourier analysis in our daily lives in connection with today's digital technology. The basis for digital technology was given by Claude Shannon, a mathematician at Bell Laboratories whose Mathematical Theory of Communication was published in 1948; while he is not well known among the general public, he has been called a "hero to all communicators."7 Among his many contributions to information theory was the sampling theorem (discovered independently by Harry Nyquist and others). This theorem proved that if the range of frequencies of a signal measured in hertz (cycles per second) is n, the signal can be represented with complete accuracy by measuring its amplitude 2n times a second. This result, a direct consequence of Fourier analysis, is simple to state and not very difficult to prove, but it has had enormous implications for the transmission and processing of information. It is not necessary to reproduce an entire signal; a limited number of samples is enough. Since the range of frequencies transmitted by a telephone line is about 4000 hertz, 8000 samples per second are sufficient to reconstitute your voice when you talk on the telephone; when music is recorded on a compact disc, about 44,000 samples a second are used. Measuring the amplitude more often, or trying to reproduce it continuously, as with old-fashioned records, does not gain anything. Another consequence is that, in terms of octaves, more samples are needed in high frequencies than in low frequencies, since the frequency doubles each time you go up an octave: the range of frequency between the two lowest As on a piano is only 28 hertz, while the range of frequency between the two highest As is 1760 hertz. Encoding a piece of music played in the highest octave would require 3520 samples a second; in the lowest octave, 56 would be enough. The sampling theorem opened the door to digital technology: a sampled signal can be expressed as a series of digits and transmitted as a series of on-and-off electrical pulses (creating, on the other hand, round-off errors). Your voice can even be shifted temporarily into different frequencies so that it can share the same telephone line with many other voices, contributing to enormous savings. (In 1915 a 3-minute call from coast to coast cost more than $260 in today's dollars).

The Fast Fourier Transform

An algorithm is a recipe for doing computations. When schoolchildren learn to "carry," "borrow," or multiply two-digit numbers without a calculator, they are learning algorithms. Fast algorithms are mathematical shortcuts for dealing with large computations. The fast Fourier transform, published by J. W. Cooley and J. W. Tukey in 1965, is a prime example. It cuts the number of computations from n² in standard Fourier analysis to n log n. When n is big, this makes a substantial difference: if n = 1,000,000, then n² = 1,000,000,000,000, but n log n = 6,000,000 (log 1,000,000 is 6 since 10⁶ = 1,000,000). The logarithm is roughly the number of digits, so it grows slowly. The larger n is, the more impressive the difference becomes. If n is a billion and a computer can complete a billion calculations a second, this cuts computing time from approximately 32 years to 9 seconds. With the FFT one can compute π to a billion digits in about 45 minutes; without it the job would take almost 10,000 years.
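The n² versus n log n gap described in the box can be felt directly. The sketch below (Python with numpy; n = 4096 is chosen only so the slow version finishes quickly, and the timings of course vary by machine) compares a naive double-loop Fourier transform with numpy's FFT on the same data and checks that they agree:

    import time
    import numpy as np

    n = 4096
    x = np.random.default_rng(1).normal(size=n)

    def slow_dft(x):
        # The n-squared recipe: for every frequency, multiply the whole signal
        # by a complex sinusoid and sum.
        n = x.size
        k = np.arange(n)
        return np.array([np.sum(x * np.exp(-2j * np.pi * k * m / n)) for m in range(n)])

    t0 = time.perf_counter()
    slow = slow_dft(x)
    t1 = time.perf_counter()
    fast = np.fft.fft(x)
    t2 = time.perf_counter()

    print("results agree:", np.allclose(slow, fast))
    print(f"naive transform {t1 - t0:.2f} s, FFT {t2 - t1:.5f} s")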
In 1948 Shannon and his colleagues Bernard Oliver and John Pierce expected digital transmission to "sweep the field of communications;"8 the revolution came, if later than they had expected. Fueling this revolution was the fast Fourier transform (see Box above), a mathematical trick that catapulted the calculation of Fourier coefficients out of horse-and-buggy days into supersonic travel. With it, calculations could be done in seconds that previously were too costly to do at all. "It's the difference," Michael Frazier says, "between being academic and being real." This fast algorithm, known as the FFT, requires computers in order to be useful. "Once the method was established it became clear that it had a long and interesting prehistory going back as far as Gauss," Körner writes. "But until the advent of computing machines it was a solution looking for a problem." On the other hand, the gain in speed from the FFT is greater than the gain in speed from better computers; indeed, significant gains in computer speed have come from such fast algorithms built into computer hardware. DRIVING A CAR HALF A BLOCK It could be argued that the fast Fourier transform was too successful. "Because the FFT is very effective, people have used it in problems where it is not useful—the way Americans use cars to go half a block," says Yves Meyer. "Cars are very useful, but that's a misuse of the car. So the FFT has been misused, because it's so practical." The problem is that Fourier analysis does not work equally well for all kinds of signals or for all kinds of problems. In some cases, scientists using it are like the man looking for a dropped coin under a lamppost, not because that is where he dropped it but because that's where the light is. Fourier analysis works with linear problems. Nonlinear problems tend to be much harder, and the behavior of nonlinear systems is much less predictable: a small change in input can cause a big change in output. The law of gravity is nonlinear and using it to predict the very long term behavior of even three bodies in space is wildly difficult, perhaps impossible; the system is too unstable. (Engineers make clever use of this instability when sending space probes to distant planets: NASA's Pioneer and Voyager spacecraft were both aimed at Jupiter in such a way that Jupiter's gravity accelerated the probes and bent their paths, sending them on to Saturn.) "It is sometimes said," quips Körner, "that the great discovery of the nineteenth century was that the equations of nature were linear, and the great discovery of the twentieth century is that they are not." Engineers faced with a nonlinear problem often resort to the rough-and-ready expedient of treating it like a linear problem and hoping the answer won't be too far off. For example, the engineers charged with defending Venice from the high waves that flood it every year, forcing Venetians to make their way across the Piazza San Marco on sidewalks set on stilts, want to predict the flood waters far enough in advance so that eventually they could raise inflatable dikes to protect the city. Since they can't solve the nonlinear partial differential equation that determines the behavior of the waves (an equation involving winds, position of the moon, atmospheric pressure, and so on), they simply reduce it to a linear equation and solve it with Fourier analysis. Despite some progress, they are still taken by surprise by sudden rises in water level of up to a meter.
In information processing Fourier analysis has other limits as well: it is poorly suited to very brief signals or to signals that change suddenly and unpredictably. A Fourier transform makes it easy to see how much of each frequency a signal contains but very much harder to figure out when the various frequencies were emitted or for how long. It pretends, so to speak, that any given instant of the signal is identical to any other, even if the signal is as complex as a Bach prelude or changes as dramatically as the electrocardiogram of a fatal heart attack. Mathematically this is correct. The same sines and cosines can represent very different moments in a signal because they are shifted in phase so that they amplify or cancel each other. A B-flat that appears, from the Fourier transform, to be omnipresent in a prelude may in fact appear only intermittently; the rest of the time it is part of an elaborate juggling act. But as physicist J. Ville wrote in 1948, "If there is in this concept a dexterity that does honor to mathematical analysis, one can't hide the fact that it also distorts reality."9 One moment of a prelude does not sound like another; the flat line of an electrocardiogram that announces death is not the same as the agitated lines produced by a beating heart. The time information is not destroyed by the Fourier transform, just well hidden, in the phases. But the fact that information about any point in a signal is spread out over the entire transform—contained in all the frequencies—is a serious drawback in analyzing signals or functions. Brief changes often carry the most interesting information in a signal; in medical tests, for example, being able to detect them could make the difference between a correct or an incorrect diagnosis. In theory, the phases containing this time information can be calculated from the Fourier coefficients; in practice, calculating them with adequate precision is virtually impossible. In addition, the lack of time information makes Fourier transforms vulnerable to errors. "If when you record an hour-long signal you have an error in the last five minutes, the error will corrupt the whole Fourier transform," Meyer points out. And phase errors are disastrous; if you make the least error in phase, you can end up with something that has nothing to do with your original signal. To get around this problem, the windowed Fourier transform was created in the 1940s. The idea is to analyze the frequencies of a signal segment by segment; that way, one can at least say that whatever is happening is happening somewhere in a given segment. So while the Fourier transform uses sines and cosines to analyze a signal, windowed Fourier uses a little piece of a curve. This curve serves as the "window," which remains fixed in size for a given analysis; inside it one puts oscillations of varying frequency. But these rigid windows force painful compromises. The smaller your window, the better you can locate sudden changes, such as a peak or discontinuities, but the blinder you become to the lower-frequency components of your signal. (These lower frequencies won't fit into the little window.) If you choose a bigger window, you can see more of the low frequencies but the worse you do at "localizing in time." 
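The window-size compromise shows up immediately if one computes a windowed (short-time) Fourier transform with two different window lengths. A sketch, assuming SciPy is available (the test signal, sampling rate, and window sizes are invented for illustration):

    import numpy as np
    from scipy.signal import stft

    fs = 2000
    t = np.arange(0, 2.0, 1 / fs)
    # Test signal: a steady 60-hertz tone plus one very brief click at t = 1.2 seconds.
    x = np.sin(2 * np.pi * 60 * t)
    x[int(1.2 * fs)] += 5.0

    for nperseg in (64, 1024):                  # a short window, then a long one
        f, times, Z = stft(x, fs=fs, nperseg=nperseg)
        df = f[1] - f[0]                        # spacing between frequency bins
        dt = times[1] - times[0]                # spacing between analysis times
        print(f"window of {nperseg:4d} samples: {df:5.2f} Hz per bin, {dt * 1000:5.0f} ms per step")

The short window pins the click down to within a few tens of milliseconds but smears the 60-hertz tone across wide frequency bins; the long window resolves the tone finely but can only say that the click happened somewhere inside a quarter-second segment.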
So Yves Meyer, who as a harmonic analyst was well aware of the power and limitations of Fourier analysis, was interested when he first heard of a new way to break down signals, or functions, into little waves—ondelettes in French—that made it possible to see fleeting changes in a signal without losing track of frequencies. TALKING TO HEATHENS "I got involved almost by accident," recalls Meyer. "I was a professor at the Ecole Polytechnique, where we shared the same photocopy machine with the department of theoretical physics. The department chairman liked to read everything, know everything; he was constantly making photocopies. Instead of being exasperated when I had to wait, I would chat with him while he made his copies. "One day in the spring of 1985 he showed me an article by a physicist colleague of his, Alex Grossmann in Marseille, and asked whether it interested me. It involved signal processing, using a mathematical technique I was familiar with. I took the train to Marseille and started working with Grossmann." Often pure math "trickles down" to applications, but this was not the case for wavelets, Meyer added. "This is not something imposed by the mathematicians; it came from engineering. I recognized familiar mathematics, but the scientific movement was from application to theory. The mathematicians did a little cleaning, gave it more structure, more order." Structure and order were needed; the predecessors of today's wavelets had grown in a topsy-turvy fashion, to the extent that in the early days wavelet researchers often found themselves unwittingly recreating work of the past. "I have found at least 15 distinct roots of the theory, some going back to the 1930s," Meyer said. "David Marr, who worked on artificial vision and robotics at MIT, had similar ideas. The physics community was intuitively aware of wavelets dating back to a paper on renormalization by Kenneth Wilson, the Nobel prize winner, in 1971." But all these people in mathematics, physics, vision, and information processing didn't realize they were speaking the same language, partly for the simple reason that they rarely spoke to one another but also partly because the early work existed in such disparate forms. (Grossmann had in fact spoken about wavelets to other people in Meyer's field, but they "didn't make the connection," he said. "With Yves it was immediate, he realized what was happening.") Wavelet researchers sometimes joke that the main benefit of wavelets is to allow them to have wavelet conferences. Behind that joke lies the reality that the modern coherent language of wavelets has provided an unusual opportunity for people from different fields to speak and work together, to everyone's benefit. "Under normal circumstances the fields are pretty much water-tight one to the other," Grossmann said. "So one of the main reasons that many people find this field very interesting is that they have found themselves outside of their usual universe, talking to heathens of various kinds. Anybody who is not in one's little village is a heathen by definition, and people are always surprised to see—'look, they have two ears and a single nose, just like us!' That has been a very pleasant experience for everyone." "THIS MUST BE WRONG.…" Tracing the history of wavelets is almost a job for an archeologist, but let's take as a starting point Jean Morlet, who developed them independently in prospecting for oil for the French oil company Elf-Aquitaine. 
(It was he who baptized the field, originally using the term "wavelets of constant shape" to distinguish them from other "wavelets" in geophysics.) A standard way to look for underground oil is to send vibrations into the earth and analyze the echoes that return. Doing this, Morlet became increasingly dissatisfied with the limits of Fourier analysis; he wanted to be able to analyze brief changes in signals. To do this he figured out both a way to decompose a signal into wavelets and a way to reconstruct the original signal. "The thing that is surprising is how very far Jean went all by himself, with no formal baggage. He had a lot of intuition to make it work without knowing why it worked," says Marie Farge of the Ecole Normale Supérieure in Paris, who uses wavelets to study turbulence. But when Morlet started showing his results to others in the field, he was told that "this must be wrong, because if it were right, it would be known." Convinced that his wavelets were important—and aware that he didn't understand why they worked—Morlet spoke to a physicist at the Ecole Polytechnique who sent him, in 1981, to see Alex Grossmann in Marseille. "Jean was sent to me because I work in phase space quantum mechanics," Grossmann said. "Both in quantum mechanics and in signal processing you use the Fourier transform all the time—but then somehow you have to keep in mind what happens on both sides of the transform. When Jean arrived, he had a recipe, and the recipe worked. But whether these numerical things were true in general, whether they were approximations, under what conditions they held, none of this was clear.'' The two spent a year answering those questions. Their approach was to show mathematically that when wavelets represent a signal, the amount of "energy" of the signal (a measure of its size) is unchanged. This means that one can transform a signal into wavelet form and then get exactly the same signal back again—a crucial condition. It also means that a small change in the wavelet representation produces a correspondingly small change in the signal; a little error or change will not be blown out of proportion. The work involved a lot of experimenting on personal computers. "One of the many reasons why the whole thing didn't come out earlier is that just about this time it became possible for people who didn't spend their lives in computing to get a little personal computer and play with it," Grossmann says. "Jean did most of his work on a personal computer. Of course, he could also handle huge computers, that's his profession, but it's a completely different way of working. And I don't think I could have done anything if I hadn't had a little computer and some graphics output." A MATHEMATICAL MICROSCOPE Wavelets can be seen as an extension of Fourier analysis. As with the Fourier transform, the point of wavelets is not the wavelets themselves; they are a means to an end. The goal is to turn the information of a signal into numbers—coefficients—that can be manipulated, stored, transmitted, analyzed, or used to reconstruct the original signal (see Box on p. 212; Figure 7.3). The basic approach is the same. The coefficients tell in what way the analyzing function (sines and cosines, a Fourier window or wavelets) needs to be modified in order to reconstruct the original signal. The idea underlying the calculation of coefficients is the same (although in practice the mathematical details vary). 
The Wavelet Transform

The wavelet transform decomposes a signal into wavelet coefficients. Each wavelet is multiplied by the corresponding section of the signal. Then one integrates (measures the area enclosed by the resulting curve). The result is the coefficient for that particular wavelet. Essentially, the coefficient measures the correlation, or agreement, between the wavelet (with its peaks and valleys) and the corresponding segment of the signal. "With wavelets, you play with the width of the wavelet in order to catch the rhythm of the signal," Meyer says. "Strong correlation means that there is a little piece of the signal that looks like the wavelet." Constant stretches give wavelet coefficients with the value zero. (By definition, a wavelet has an integral of zero—half the area enclosed by the curve of the wavelet is above zero and half is below. Multiplying a wavelet by a constant changes both the positive and the negative components equally, so the integral remains zero.) Wavelets can also be made that give coefficients of zero when they meet linear and quadratic stretches and even higher polynomials. The more zero coefficients, the greater the compression of the signal, which makes it cheaper to store or transmit, and can simplify calculations. A typical signal may have about 100,000 values, but only 10,000 wavelets are needed to express it; parts of the signal that give coefficients of zero are automatically disregarded. Using wavelets that are "blind" to linear and quadratic stretches, and higher polynomials, also makes it easier to detect very irregular changes in a signal. Such wavelets react violently to irregular changes, giving big coefficients that stand out against the background of very small coefficients and zero coefficients indicating regular changes.

For his wavelets Morlet even used the Gaussian, or bell-shaped, function often used in windowed Fourier analysis. But he used it in a fundamentally different way. Instead of filling a rigid window with oscillations of different frequencies, he did the reverse. He kept the number of oscillations in the window constant and varied the width of the window, stretching or compressing it like an accordion or a child's slinky (see Figure 7.4). When he stretched the wavelet, the oscillations inside were stretched, decreasing their frequency; when he squeezed the wavelet, the oscillations inside were squeezed, resulting in higher frequencies. As a result, wavelets adapt automatically to the different components of a signal, using a big "window" to look at long-lived components of low frequency and progressively smaller windows to look at short-lived components of high frequency. The procedure is called multiresolution; the signal is studied at a coarse resolution to get an overall picture and at higher and higher resolutions to see increasingly fine details. Wavelets have in fact been called a "mathematical microscope"; compressing wavelets increases the magnification of this microscope, enabling one to take a closer and closer look at small details in the signal. And unlike a Fourier transform, which treats all parts of a signal equally, wavelets only encode changes in a function. That is, unchanging stretches of a signal give coefficients with the value zero, which can be ignored. This makes them good for "seeing" changes—peaks in a signal, for example, or edges in a picture—and also means that they can be effective for compressing information.
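The multiply-and-integrate recipe in the wavelet-transform box above takes only a few lines to try. The following is a bare-bones sketch (Python with numpy; the "Mexican hat" wavelet and the test signal are arbitrary illustrative choices, not the functions Morlet used): a wavelet is stretched to a chosen width, slid to a chosen position, multiplied against the signal, and integrated.

    import numpy as np

    def mexican_hat(t, scale):
        # A simple wavelet (second derivative of a Gaussian), stretched by `scale`.
        # Its integral is zero, so constant stretches of signal give zero coefficients.
        u = t / scale
        return (1 - u**2) * np.exp(-u**2 / 2)

    fs = 200
    t = np.arange(0, 10, 1 / fs)
    # Test signal: a slow oscillation plus one sharp, brief bump at t = 6 seconds.
    signal = np.sin(2 * np.pi * 0.5 * t) + np.exp(-((t - 6) * 20) ** 2)

    def coefficient(position, scale):
        # Multiply the signal by the shifted, stretched wavelet and integrate.
        wavelet = mexican_hat(t - position, scale)
        return np.sum(signal * wavelet) / fs

    for scale in (0.02, 0.5):                   # a narrow wavelet, then a wide one
        print(f"scale {scale}: at the bump {coefficient(6.0, scale):+.4f}, "
              f"over the slow wave {coefficient(3.5, scale):+.4f}")

The narrow wavelet responds almost exclusively to the brief bump, while the wide one mainly picks up the slow oscillation.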
"Wavelet analysis is a way of saying that one is sensitive to changes," Meyer says. "It's like the way we respond to speed." We don't feel the speed of a train as long as the speed is constant, but we notice when it speeds up or slows down. MOTHER OR AMOEBA? In wavelet analysis a function is represented by a family of little waves: what the French call a "mother" wavelet, a "father," and a great many babies of various sizes. But the French terminology "shows a scandalous misunderstanding of human reproduction," objects Cornell mathematician Robert Strichartz. "In fact the generation of wavelets more closely resembles the reproductive life style of an amoeba.'' To make baby wavelets one clones la mère and then either stretches or compresses the new wavelets; in mathematical jargon they are "dilated." These new wavelets can then be shifted about or "translated." But the father function plays an important role. If you were looking at changes in temperature, you might be interested in broad changes over millions of years (the ice ages, for example), fluctuations over the past hundred years, or changes between day and night. If your real interest were the effect of climate on wheat production in the nineteenth century, you might look at temperatures on the scale of a year, a decade, or possibly a century; you wouldn't bother with changes on the scale of a thousand years, much less a million. The father function (now more often referred to as the "scaling function") gives the starting point (see Figure 7.5). To do this, you construct a very rough approximation of your signal, using only father functions. Imagine covering your signal with a row of identical father functions. You then multiply each one by the corresponding section of the signal and integrate (measure the space under the resulting curve). The resulting numbers, or coefficients, give the information you would need to reconstitute a very rough picture of your function. Wavelets are then used to express the details that you would have to add to that first rough picture in order to reconstitute the original signal. At the coarsest resolution, perhaps a hundred fat wavelets are lined up next to each other on top of the signal. The wavelet transform (see Box on p. 212) assigns to each one a coefficient that tells how much of the function has the same frequency as the fat wavelets. At the next resolution the next generation of wavelets—twice as many, half as wide, and with twice the frequency—is put on top of the signal, and the process is repeated. Each step gives more details; at each step the frequency doubles and the wavelets are half as wide. (Typically, up to five different resolutions are used.) If the procedure is compared to expressing the number 78/7 in decimal form (11.142857142 …), then the number 11 corresponds to the information given by the father functions, and the first set of wavelets would encode the details 0.1; the next, skinnier wavelets would encode 0.04, the next wavelets 0.002, the next wavelets 0.0008, and so on. The farther you go in layers of detail the more accurate the approximation will be, and each time you will be using a tool of the appropriate size. At the end, your signal has been neatly divided into different-frequency components—but unlike Fourier analysis, which gives you global amounts of different frequencies for the whole signal, wavelets give you a different frequency breakdown for different times on your signal. 
To reconstruct the signal, you add the original rough picture of the function and all the details, by multiplying each coefficient by its wavelet or father function and adding them all together, just as to reconstruct the number 78/7 you add the number 11 given by the father and all the details: 0.1, 0.04, 0.002, and so on. Of course, the number will still be in decimal form; in contrast, when you reconstruct a signal from a father function, wavelets, and wavelet coefficients, you switch back to the original form of representation, out of "wavelet space." AVOIDING REDUNDANCY When Meyer took the train to Marseille to see Grossmann in 1985 the idea of multiresolution existed—it originated with Jean Morlet—but wavelets were limited and sometimes difficult to use, compared with the choices available today (which are still changing rapidly). For one thing computing wavelet coefficients was rather slow. For another the wavelet transforms that existed then were all continuous. Imagine a wavelet slowly gliding along the signal, new wavelet coefficients being computed as it moves. (The process is repeated at all possible frequencies or scales; instead of brutally changing the size of the wavelets by a factor of 2, you stretch or compress it gently to get all the intermediate frequencies.) In such a continuous representation, there is a lot of repetition, or redundancy, in the way information is encoded in the coefficients. (The number of coefficients is in fact infinite, but in practice "infinite may mean 10,000, which is not so bad," Grossmann says.) This can make it easier to analyze data, or recognize patterns. A continuous representation is shift invariant: exactly where on the signal one starts the encoding doesn't matter; shifting over a little doesn't change the coefficients. Nor is it necessary to know the coefficients with precision. "It's like drawing a map," says Ingrid Daubechies, professor of mathematics at Princeton and a member of the technical staff at AT&T Bell Laboratories, who has worked with wavelets since 1985. "Many men draw these little lines and if you miss one detail you can't find your way. Most women tend to put lots of detail—a gas station here, a grocery store there, lots and lots of redundancy. Suppose you took a bad photocopy of that map, if you had all that redundancy you still could use it. You might not be able to read the brand of gasoline on the third corner but it would still have enough information. In that sense you can exploit redundancy: with less precision on everything you know, you still have exact, precise reconstruction." But if the goal is to compress information in order to store or transmit it more cheaply, redundancy can be a problem. For those purposes it is better to have a different kind of wavelet, in an orthogonal transform, in which each coefficient encodes only the information in its own particular part of the signal; no information is shared among coefficients (see Box on p. 218). At the time, though, Meyer wasn't thinking in terms of compressing information; he was immersed in the mathematics of wavelets. A few years before it had been proved that it is impossible to have an orthogonal representation with standard windowed Fourier analysis; Meyer was convinced that orthogonal wavelets did not exist either (more precisely, infinitely differentiable orthogonal wavelets that soon get close to zero on either side). 
He set out to prove it—and failed, in the summer of 1985, by constructing precisely the kind of wavelet he had thought didn't exist. UNIFICATION The following year, in the fall of 1986, while Meyer was giving a course on wavelets at the University of Illinois at Urbana, he received several telephone calls from a persistent 23-year-old graduate student in computer vision at the University of Pennsylvania in Philadelphia. Stéphane Mallat (now at the Courant Institute of Mathematical Sciences in New York) is French and had been a student at the Ecole Polytechnique, one of France's prestigious grandes écoles, when Meyer taught there, but the two hadn't met. "The system at Ecole Polytechnique is very rigid and very elitist, the students have a rank when they graduate," Meyer said. "According to their rank they are directed to this or that profession. Mallat had found this system absurd and had decided that he would do what he wanted." So after graduating he did something that, from a French perspective, was extraordinary: abandoning the social and professional advantages of being a polytechnician (the first 150 in each graduating class are even guaranteed a salary for life), he left France for the United States. "Working in the United States, for Stéphane Mallat, was starting over from zero," Meyer said. "No one there knows what Ecole Polytechnique is, they couldn't care less, he was in the same situation as a student from Iran, for example. … He's completely original, in his behavior, his way of thinking, his way of progressing in his career." Mallat had heard about Meyer's work on wavelets from a friend in the summer of 1986 while he was vacationing in St. Tropez; to him it sounded suspiciously familiar. So on returning to the United States, he called Meyer, who agreed to meet him at the University of Chicago. The two spent 3 days holed up in a borrowed office ("I kept telling Mallat that he absolutely had to go to the Art Institute in Chicago, but we never had time," Meyer says) while Mallat explained that the multiresolution Meyer and others were doing with wavelets was the same thing that electrical engineers and people in image processing were doing under other names. "This was a completely new idea," Meyer said. "The mathematicians were in their corner, the electrical engineers were in theirs, the people in vision research like David Marr were in another corner, and the fact that a young man who was then 23 years old was capable of saying, you are all doing the same thing, you have to look at that from a broader perspective—you expect that from someone older." In 3 days the two worked out the mathematical details; since Meyer was already a full professor, at his insistence the resulting paper, "Multiresolution Approximation and Wavelets," appeared under Mallat's name alone. The paper made it clear that work that existed in many different guises and under many different names—the pyramid algorithms used in image processing, the quadrature mirror filters of digital speech processing, zero-crossings, wavelets—were at heart all the same. For using wavelets to look at a signal at different resolutions can be seen as applying a succession of filters: first filtering out everything but low frequencies, then filtering out everything but frequencies twice as high, and so on. (And, in accordance with Shannon's sampling theorem, wavelets automatically "sample" high frequencies more often than low frequencies, since as the frequency doubles the number of wavelets doubles.) 
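The filter-bank reading of multiresolution that Mallat made explicit is what wavelet software actually implements. A sketch using the PyWavelets package (assumed installed; 'db2' is just one choice of Daubechies filter): each step splits the signal into a low-frequency approximation and a high-frequency detail, then repeats on the low-frequency half.

    import numpy as np
    import pywt

    rng = np.random.default_rng(5)
    signal = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.1 * rng.normal(size=512)

    low = signal
    for level in range(1, 4):
        # One filter-bank step: a lowpass branch (approximation) and a highpass
        # branch (detail), each roughly half the length of its input.
        low, detail = pywt.dwt(low, 'db2')
        print(f"level {level}: approximation {low.size} samples, detail {detail.size} samples")

Iterating the split only on the lowpass branch is precisely the succession of filters described above, and it is also why the high frequencies end up sampled more densely than the low ones.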
The realization benefited everyone. A whole mathematical literature on wavelets existed by 1986, some of it developed before the word wavelet was even coined; this mathematics could now be applied to other fields. "All those existing techniques were tricks that had been cobbled together; they had been made to work in particular cases," says Marie Farge. "Mallat helped people in the quadrature mirror filters community, for example, to realize that what they were doing was much more profound and much more general, that you had theorems, and could do a lot of sophisticated mathematics." Wavelets got a big boost because Mallat also showed how fast algorithms that had been developed for other fields could be applied to wavelet multiresolution, making the calculation of wavelet coefficients fast and automatic—essential if they were to become really useful. And he paved the way for the development by Daubechies of a new kind of regular orthogonal wavelet that was easier and faster to use: wavelets with "compact support." Such wavelets have the value zero everywhere outside a certain interval (between -2 and 2, for example). EVADING INFINITY Daubechies, who is Belgian, was trained as a mathematical physicist; she had worked with Grossmann in France on her Ph.D. research and then spent 2 years in the United States working on quantum mechanics. She is the recipient of a 5-year MacArthur fellowship. "Ingrid's role has been crucial," Grossmann said. "Not only has she made very important contributions, but she has made them in a form that was legible, and usable, to various communities. She is able to speak to engineers, to mathematicians; she is trained as a physicist and one sees her training in quantum mechanics." Daubechies had heard about Meyer and Mallat's multiresolution work very early on. "Yves Meyer had told me about it at a meeting. I had been thinking about some of these issues, and I got very interested," she said. The orthogonal wavelets Meyer had constructed trail off at the sides, never actually ending; this meant that calculating a single wavelet coefficient was a lot of work. "I said why can't we just start from the fact that we want those numbers and that we want a scheme that has these properties [and] proceed from there," Daubechies said. "That's what I did. … I was extremely excited; it was a very intense period. I didn't know Yves Meyer so very well at the time. When I had the first construction, he had gotten very excited, and somebody told me he had given a seminar on it. I knew he was a very strong mathematician and I thought, oh my God, he's probably figuring out things much faster than I can. … Now I know that even if that had been true, he would not have taken credit for it, but it put a very strong urgency on it; I was working very hard. By the end of March 1987 I had all the results." Together, multiresolution and wavelets with compact support formed the wavelet equivalent of the fast Fourier transform: again, not just doing calculations a little faster, but doing calculations that otherwise very likely wouldn't be done at all. HEISENBERG Multiresolution and Daubechies's wavelets also made it possible to analyze the behavior of a signal in both time and frequency with unprecedented ease and accuracy, in particular, to zoom in on very brief intervals of a signal without becoming blind to the larger picture. But although one mathematician hearing Daubechies lecture objected that she seemed "to be beating the uncertainty principle," the Heisenberg uncertainty principle still holds.
You cannot have perfect knowledge of both time and frequency. Just as you cannot know simultaneously both the position and momentum of an elementary particle (if you could hold an electron still long enough to figure out where it was, you would no longer know how fast it would have been going if you hadn't stopped it). The product of the two uncertainties (or spreads of possible values) is always at least a certain minimum number. One must always make a compromise; knowledge gained about time is paid for in frequency, and vice versa. "At low frequencies I have wide wavelets, and I localize very well in frequency but very badly in time. At very high frequency I have very narrow wavelets, and I localize very well in time but not so well in frequency," Daubechies said. This imprecision about frequency results from the increasing range of frequencies at high frequencies: as we have seen, frequency doubles each time one goes up an octave. This widening spread of frequencies can be seen as a barrier to precision, but it's also an opportunity that engineers have learned to exploit. It is the reason why the telephone company shifts voices up into higher frequencies, not down to lower ones, when it wants to fit a lot of voices on one line: there's a lot more room up there. It also explains the advantage of fiber optics, which carry high-frequency light signals, over conventional telephone wires. PUTTING WAVELETS TO WORK Wavelets appear unlikely to have the revolutionary impact on pure mathematics that Fourier analysis has had. "With wavelets it is possible to write much simpler proofs of some theorems," Daubechies said. "But I know of only a couple of theorems that have been proved with wavelets, that had not been proved before." But the characteristic features of wavelets make them suited for a wide range of applications. Because wavelets respond to change and can narrow in on particular parts of a signal, researchers at the Institute du Globe in Paris are using them to study the minuscule effect on the speed of the earth's rotation of the El Niño ocean current that flows along the coast of Peru. British scientists are using wavelets to study ocean currents around the Antarctic, and researchers and mechanics are exploring their use in detecting faults in gears by analyzing vibrations. Multiresolution lends itself to a variety of applications in image processing. One can imagine transmitting pictures electronically quickly and cheaply by sending only a coarse picture, calling up a more detailed picture only when needed. Mathematician Dennis Healy, Jr., and radiologist John Weaver of Dartmouth College are exploring the use of wavelets for "adaptive" magnetic resonance imaging, in which higher resolutions would be used selectively, depending on the results already found at coarser scales. (Since a half-hour magnetic resonance imaging exam costs $500 to $1000 or more, anything that reduces the time spent is of obvious interest.) Multiresolution is also useful in studying the large-scale distribution of matter in the universe, which for years was thought to be random but which is now seen to have a complicated structure, including "voids" and "bubbles."10 Wavelets have enabled astronomers at the Observatoire de la Côte d'Azur in Nice to identify a subcluster at the center of the Coma supercluster, a cluster of about 1400 galaxies. Subsequently, that subcluster was identified as an x-ray source. "Wavelets were like a telescope pointing to the right place," Meyer said. 
And at the Centre de Recherche Paul Pascal in Pessac (near Bordeaux), Alain Arnéodo and colleagues have exploited "the fascinating ability of the wavelet transform to reveal the construction rule of fractals"11. In addition, it can be instructive to compare wavelet coefficients at different resolutions. Zero coefficients, which indicate no change, can be ignored, but nonzero coefficients indicate that something is going on—whether an abrupt change in the signal, an error, or noise (an unwanted signal that obscures the real message). If coefficients appear only at fine scales, they generally indicate the slight but rapid variations characteristic of noise. "The very fine scale wavelets will try to follow the noise," Daubechies explains, while wavelets at coarser resolutions are too approximate to pick up such slight variations. But coefficients that appear at the same part of the signal at all scales indicate something real. If the coefficients at different scales are the same size, it indicates a jump in the signal; if they decrease, it indicates a singularity—an abrupt, fleeting change. It is even possible to use scaling to sharpen a blurred signal. If the coefficients at coarse and medium scales suggest there is a singularity, but at high frequencies noise overwhelms the signal, one can project the singularity into high frequencies by restoring the missing coefficients—and end up with something better than the original. CUT THE WEEDS AND SPARE THE DAISIES Wavelets also made possible a revolutionary method for extricating signals from pervasive white noise ("all-color," or all-frequency, noise), a method that Meyer calls a "spectacular application" with great potential in many fields, including medical scanning and molecular spectroscopy. An obvious problem in separating noise from a signal is knowing which is which. If you know that a signal is smooth—changing slowly—and that the noise is fluctuating rapidly, you can filter out noise by averaging adjacent data to kill fluctuations while preserving the trend. Noise can also be reduced by filtering out high frequencies. For smooth signals, which change relatively slowly and therefore are mostly lower frequency, this will not blur the signal too much. But many interesting signals (the results of medical tests, for example) are not smooth; they contain high-frequency peaks. Killing all high frequencies mutilates the message—"cutting the daisies along with the weeds," in the words of Victor Wickerhauser of Washington University in St. Louis. A simple way to avoid this blind slaughter has been found by a group of statisticians. David Donoho of Stanford University and the University of California at Berkeley and his colleague Iain Johnstone of Stanford had proved mathematically that if a certain kind of orthogonal basis existed, it would do the best possible job of extracting a signal from white noise. (A basis is something with which you can represent any possible function in a given space; each mother wavelet provides a different basis, for example, since any function can be represented by it and its translates and dilates.) This result was interesting but academic since Donoho and Johnstone did not know whether such a basis existed. But in the summer of 1990, when Donoho was in St. Flour, in France's Massif Central, to teach a course in probability, he heard Dominique Picard of the University of Paris-Jussieu give a talk on the possibility of using wavelets in statistics. 
After discussing it with her and with Gérard Kerkyacharian of the University of Picardy in Amiens, "I realized it was what we had been searching for a long time," Donoho recalled. "We knew that if we used wavelets right, they couldn't be beaten." The method is simplicity itself: you apply the wavelet transform to your signal, throw out all coefficients below a certain size, at all frequencies or resolutions, and then reconstruct the signal. It is fast (because the wavelet transform is so fast), and it works for a variety of kinds of signals. The astonishing thing is that it requires no assumptions about the signal. The traditional view is that one has to know, or assume, something about the signal one wants to extract from noise—that, as Grossmann put it, "if there is absolutely no a priori assumption you can make about your signal, you may as well go to sleep. On the other hand, you don't want to put your wishes into your algorithm and then be surprised that your wishes come out." The wavelet method stands this traditional wisdom on its head. Making no assumptions about the signal, Donoho says, "you do as well as someone who makes correct assumptions, and much better than someone who makes wrong assumptions." (Furthermore, if you do know something about the signal, you can adjust the coefficient threshold and get even better results.) The trick is that an orthogonal wavelet transform makes a signal look very different while leaving noise alone. "Noise in the signal becomes noise in the wavelet transform and it has about the same size at every resolution and location," Donoho says. (That all orthogonal representations leave noise unchanged has been known since the 1930s.) So while noise masks the signal in "physical space," the two become disentangled in "wavelet space." In fact, Donoho said, a number of researchers—at the Massachusetts Institute of Technology, Dartmouth, the University of South Carolina, and elsewhere—independently discovered that thresholding wavelet coefficients is a good way to kill noise. "We came to it by mathematical decision theory, others simply by working with wavelet transforms and noticing what happened," he said. Among those was Mallat, who uses a somewhat different approach. Donoho's method works for a whole range of functions but isn't necessarily optimal for each. When it is applied to blurred images, for example, it damages some of the edges; the elimination of small coefficients creates ripples that can be annoying. Mallat and graduate student Wen Liang Hwang developed a way to avoid this by computing the wavelet transform of the signal and selecting the points where the correlation between the curve and the wavelet is greatest, compared to nearby points. These maximum values, or wavelet maxima, are kept if the points are thought to belong to a real edge and discarded if they are thought to correspond to noise. (That decision is made automatically, but it requires more calculations than Donoho's method; it is based on the existence and size of maxima at different resolutions.)
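The thresholding recipe described above is short enough to write out. Here is a minimal sketch, assuming Python with NumPy and the PyWavelets package (assumed tools, not named in the text), using the common universal threshold sigma*sqrt(2 ln n); treat it as an illustration of the idea rather than Donoho and Johnstone's exact procedure.

```python
import numpy as np
import pywt  # PyWavelets, an assumed dependency

# Sketch of wavelet denoising by thresholding: transform the noisy signal,
# shrink every detail coefficient below a threshold at every scale, and
# transform back.  Because an orthogonal transform leaves white noise the
# same size at every scale, the noise level can be estimated from the
# finest-scale coefficients, which are almost pure noise.

rng = np.random.default_rng(1)
n = 2048
t = np.linspace(0, 1, n)
clean = np.sin(6 * np.pi * t) + (t > 0.6)          # smooth part plus a jump
noisy = clean + rng.normal(0, 0.25, n)

coeffs = pywt.wavedec(noisy, "db4")                 # multilevel wavelet transform
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate from finest details
thresh = sigma * np.sqrt(2 * np.log(n))             # "universal" threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[:n]

print("rms error, noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("rms error, denoised:", np.sqrt(np.mean((denoised - clean) ** 2)))
```

Swapping in a different wavelet family or threshold rule changes only a line or two, which is part of why the recipe travels so well across applications.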
WAVELETS DON'T EXIST …

Although Donoho and Johnstone's technique is simple and automatic, wavelets aren't always foolproof. With orthogonal wavelets it can matter where one starts encoding the signal: shifting over a little can change the coefficients completely, making pattern analysis hazardous. This danger does not exist with continuous wavelets, but they have their own pitfalls; what looks like a correlation of coefficients (different coefficients "seeing" the same part of the signal) may sometimes be an artifact introduced by the wavelets themselves. "It's the kind of thing where you can shoot yourself in the foot without half trying," Grossmann said. Generally, using wavelets takes practice. "With a Fourier transform, you know what you get," Meyer says. "With a wavelet transform you need some training in order to know what you get. I have a report from EDF [Electricité de France] giving conclusions of engineers about wavelets—they say they have trouble with interpretation." Part of that difficulty may be fear of trying something new; Gregory Beylkin of the University of Colorado at Boulder reports that one student, who learned to use wavelets before he knew Fourier analysis, experienced no difficulty. Farge has had similar experiences, but Meyer thinks the problem is real. Because Fourier analysis has existed for so long, and most physicists and engineers have had years of training with Fourier transforms, interpreting Fourier coefficients is second nature to them. In addition, Meyer points out, Fourier transforms aren't just a mathematical abstraction: they have a physical meaning. "These things aren't just concepts, they are as physical, as real, as this table. But wavelets don't exist in nature; that's why it is harder to interpret wavelet coefficients," he said. Curiously, though, both our ears and our eyes appear to use wavelet techniques in the first stages of processing information. The work on "wavelets" and hearing goes back to the 1930s, Daubechies said. "They didn't talk about wavelets—they talked about constant-Q filtering, in which the higher the frequency, the better resolution you have in time." This is unlikely to lead to new insights about hearing or vision, she said, but it could make wavelets effective in compressing information. "If our ear uses a certain technique to analyze a signal, then if you use that same mathematical technique, you will be doing something like our ear. You might miss important things, but you would miss things that our ear would miss too."

COMPRESSING INFORMATION

One way to cope with an ever-increasing volume of signals is to widen the electronic highways—for example, by moving to higher frequencies. Another, which also reduces storage and computational costs, is to compress the signal temporarily, restoring it to its original form when needed. In fact, only a small number of all possible "signals" are capable of being compressed, as the Russian mathematician Andrei Kolmogorov pointed out in the 1950s. A compressible signal can by definition be expressed by something shorter than itself: one sequence of digits (the signal) is encoded by a shorter sequence of digits (e.g., a computer program). It is easy to see that, using any given language (such as the computer language Pascal), the number of short sequences is much smaller than the number of long sequences: most long sequences cannot be encoded by anything shorter than themselves. (Even a highly efficient encoding scheme like a library card catalog cannot cope with an infinite number of books; eventually, the only way to distinguish one book from another would be to print the entire book in the card catalog.) Like Heisenberg with his uncertainty principle, Kolmogorov has set an absolute limit that mathematicians and scientists cannot overcome, however clever they are.
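Kolmogorov's point, that most long sequences admit no shorter description while structured ones compress well, can be felt with an ordinary compressor. Here is a small sketch, assuming Python and its standard zlib module (my choice of tool, not something the text specifies): random bytes barely shrink, while highly patterned text collapses.

```python
import os
import zlib

# Small illustration of the compressibility argument above: a structured
# sequence can be encoded by something much shorter than itself, while a
# random sequence essentially cannot.  zlib is just a convenient
# general-purpose compressor.

structured = b"the cat sat on the mat. " * 400     # highly repetitive "signal"
random_bytes = os.urandom(len(structured))         # incompressible "signal"

for name, data in [("structured", structured), ("random", random_bytes)]:
    packed = zlib.compress(data, 9)
    print(f"{name:>10}: {len(data)} bytes -> {len(packed)} bytes "
          f"(ratio {len(data) / len(packed):.1f}x)")
# The repetitive text shrinks by well over an order of magnitude; the random
# bytes come out at essentially the same size, plus a little overhead.
```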
Accepting some loss of information makes more compression possible, but a limit still remains. In practice, however, many signals people want to compress have a structure that lends itself to compression; they are not random. For instance, any point in such a signal might be likely to be similar to points near it. (In a picture of a white house with a blue door, a blue point is likely to be surrounded by other blue points, a white point by other white points.) Wavelets lend themselves to such compression; because wavelet coefficients indicate only changes, areas with no change (or very small change) are automatically ignored, reducing the number of figures that have to be kept to encode the information. So far, Daubechies says, image compression factors of about 35 or 40 have been achieved with wavelets with little loss. That is, the information content of the compressed image is about 1/35th or 1/40th the information content of the original. But wavelets alone cannot achieve those compression factors; an even more important role is played by clever quantization methods, mathematical ways of giving more weight in the encoding to information that is important for human perception (edges, for example) than to information that is less so. "If you just buy a commercially available image compressor you can get a factor of 10 to 12, so we're doing better than that," Daubechies said. "However, people in research groups who fine-tune the Fourier transform techniques in commercial image compressors claim they can also do something on the order of 35. So it's not really clear that we can beat the existing techniques. I do not think that image compression—for instance, television image compression—is really the place where wavelets will have the greatest impact." But the fact that wavelets concentrate the information of a signal in relatively few coefficients makes them good at detecting edges in images, which may result in improved medical tests. Healy and Weaver have found that with wavelets they can use magnetic resonance imaging to track the edge of the heart as it beats by sampling only a few coefficients. And wavelet compression is valuable in speeding some calculations. In the submarine detection work that Frazier and colleague Jay Epperson did for Daniel H. Wagner Associates, they were able to compress the original data by a factor of 16 with good results. Ways to compress huge matrices (square or rectangular arrays of numbers) have been developed by Beylkin, working with Ronald Coifman and Vladimir Rokhlin at Yale. The matrix is treated as a picture to be compressed; when it is translated into wavelets, "every part of the matrix that could be well represented by low-degree polynomials will have very small coefficients—it more or less disappears," Beylkin says. Normally, if a matrix has n² entries, then almost any computation requires at least n² calculations and sometimes as many as n³. With wavelets one can get by with n calculations—a very big difference when n is large. Talking about numbers "more or less" disappearing, or treating very small coefficients as zero, may sound sloppy but it is "very powerful, very important"—and must be done very carefully, Grossmann says. It works only for a particular large class of matrices: "If you have no a priori knowledge about your matrix, if you just blindly use one of those things, you can expect complete catastrophe." Just how important these techniques will prove to be is still up in the air.
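The "keep only the big coefficients" side of this story can be made concrete in a few lines. The sketch below, again assuming Python with NumPy and PyWavelets, rebuilds a piecewise-smooth signal from just a few percent of its wavelet coefficients; real coders add the quantization and perceptual weighting described above, so this is only the skeleton of the idea.

```python
import numpy as np
import pywt  # PyWavelets, an assumed dependency

# Toy wavelet compression: because wavelet coefficients register only changes,
# a piecewise-smooth signal is described almost entirely by a small fraction
# of large coefficients.  Keep the largest few percent, zero the rest, rebuild.

n = 4096
t = np.linspace(0, 1, n)
signal = np.piecewise(t, [t < 0.3, (t >= 0.3) & (t < 0.7), t >= 0.7],
                      [lambda x: x, lambda x: np.cos(4 * np.pi * x), 0.5])

coeffs = pywt.wavedec(signal, "db4")
flat, slices = pywt.coeffs_to_array(coeffs)        # flatten for easy thresholding
keep = 0.03                                        # keep the largest 3% of coefficients
cutoff = np.quantile(np.abs(flat), 1 - keep)
flat = np.where(np.abs(flat) >= cutoff, flat, 0.0)
rebuilt = pywt.waverec(pywt.array_to_coeffs(flat, slices, output_format="wavedec"),
                       "db4")[:n]

err = np.linalg.norm(rebuilt - signal) / np.linalg.norm(signal)
print(f"kept {keep:.0%} of coefficients, relative error {err:.2%}")
```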
Daubechies predicts that "5, certainly 10 years from now you'll be able to buy software packages that use wavelets for doing big computations, in simulations, in solving partial differential equations." Meyer is more guarded. "I'm not saying that algorithmic compression by wavelets is a dead end; on the contrary, I think it's a very important subject. But so far there is very little progress; it's just starting." Of the matrices used in turbulence, he said, only one in 10 belongs to the class for which Beylkin's algorithm works. "In fact, Rokhlin has abandoned wavelet techniques in favor of methods adapted to the particular problem; he thinks that every problem requires an ad hoc solution. If he is right, then Ingrid Daubechies is wrong, because there won't be 'prefabricated' software that can be applied to a whole range of problems, the way prefabricated doors or windows are used in housing construction."

TURBULENCE AND WAVELETS

Rokhlin works on turbulent flows in connection with aerodynamics; Marie Farge in Paris, who works in turbulence in connection with weather prediction, remains confident that wavelets will prove to be an effective tool. She was working on her doctoral thesis when she heard about wavelets from Alex Grossmann in 1984. "I was very much excited—in turbulence we have needed a tool like wavelets for a long time," she said. (Much later she learned that some turbulence researchers in the former Soviet Union, in Perm, had been working with similar techniques completely independently since 1976.) "When you look at turbulence in Fourier space," Farge explains, "you see cascades of energy, where energy is transferred from one wavenumber [frequency] to another. But you don't see how those cascades relate to what is happening in physical space; we had no tool that let us see both sides at once, so we could say, voilà, this cascade corresponds to this interaction. So when Alex showed me that wavelets were objects that allowed one to unfold the representation both in physical space and in scale, I said to myself, this is it, now we're going to get somewhere. "I invited him to speak in a seminar and told everyone in the turbulence community in Paris to come. I was shocked by their reaction to his talk. 'Don't waste your time on that,' they told me. Now some of the people who were the most skeptical are completely infatuated and insist that everyone should use wavelets. It's just as ridiculous. It's a new tool, and one cannot force it into problems as shapeless as turbulence if it isn't calibrated first on academic signals that we know very well. We have to do a lot of experiments, get a lot of practice, develop methods, develop representations." She uses orthogonal wavelets, or related wavelet packets, for compression but continuous wavelets for analysis: "I would never read the coefficients themselves in an orthogonal basis; they are too hard to read." With continuous wavelets she can take a one-dimensional signal that varies in space, put it into wavelet space, and get a two-dimensional function that varies with space and scale: she can actually see, on a computer screen or a printout, what is happening at different scales at any given point. Orthogonal wavelets also give time-scale information, of course (although in a rougher form, since one doubles the scale each time, ignoring intermediate scales). The difference is largely one of legibility.
Once, Meyer says, Jean Jacques Rousseau invented a musical notation based on numbers rather than notes on a staff, only to be told that it would never catch on, that musicians wanted to see the shape and movement of music on the page, to see the patterns formed by notes. The coefficients of orthogonal wavelets correspond to Rousseau's music by numbers; continuous wavelets to the musical notation we know. Farge compares the current state of turbulence research to "prescientific zoology." Many observations are needed, she says, to see what structures in turbulence are dynamically important and to try to recreate the theory in terms of their interactions. Possible candidates are ill-defined creatures called "coherent structures" (a tornado, for example, or the vortex that forms when you drain the bath). She uses wavelets to isolate them and to see how many exist at different scales or whether a single structure exists at a whole range of scales. Identifying the dynamically important structures would tell researchers "where we should invest lots of calculations and where we can skimp," Farge said. For studying turbulence requires calculations that defy the most powerful computers. The Reynolds number for interactions of the atmosphere—a measure of its turbulence—ranges from 10⁹ to 10¹²; direct computer simulations of turbulence can now handle Reynolds numbers on the order of 10² or 10³. But so far the results have been disappointing, Meyer says: "There should be something between turbulence and wavelets, everyone thinks so, but so far no one has a real scientific fact to offer." Certainly wavelets do not offer an easy trick for solving nonlinear equations (such as the Navier-Stokes equation used to describe turbulent flows), in the way that Fourier turned many linear equations into cookbook problems. "Wavelets are structurally a little better adapted to nonlinear situations," such as those found in turbulence, Meyer said. "But is something that is better in principle actually better in practice?" He is disturbed that when wavelets are used in nonlinear problems they "are used in a neutral way; they are always the same wavelets—they aren't adapted to the problem. … It is in this sense that there is perhaps a doubt about using wavelets to solve nonlinear problems. What can one hope for from methods that don't take the particular problem into account? At the same time, there are general methods in science. So one can give a different answer depending on one's personality."

FINGERPRINTS AND HUNGARIAN DANCES

One contribution of wavelets, Farge says, is that they have "forced people to think about what the Fourier transform is, forced them to think that when they choose a type of analysis they are in fact mixing the signal and the function used for the analysis. Often when people use the same technique for several scientific generations, they become blind to it." As work with wavelets progressed, it became clear that if Fourier analysis had limitations, wavelet analysis did also. As David Marr wrote in Vision, "Any particular representation makes certain information explicit at the expense of information that is pushed into the background and may be quite hard to recover." Very regular, periodic signals are more easily recognized, and more efficiently encoded, by a Fourier transform than by wavelets, for example. So Coifman, Meyer, and Wickerhauser developed an information-compression scheme to take advantage of the strengths of both Fourier and wavelet methods: the "Best Basis" algorithm.
In Best Basis a signal enters a computer like a train entering a switchyard in a train station. The computer analyzes the signal and decides what basis could encode it most efficiently, with the smallest possible amount of information. At one extreme it might send the signal to Fourier analysis (for signals that resemble music, with repeating patterns). At the other extreme it might send it to a wavelet transform (irregular signals, fractals, signals with small but important details). Signals that don't fall clearly into either group are represented by "wavelet packets" that combine features of both Fourier analysis and wavelets. Loosely speaking, a wavelet packet is the product of a wavelet by a wiggle, an oscillating function. The wavelet itself can then react to abrupt changes, while the wiggle inside can react to regular oscillations. "The idea is that you are introducing a new freedom," Meyer said. Since the choice of wiggles is infinite, "it gives a family that is very rich." Working with the FBI, Wickerhauser and Coifman applied Best Basis to the problem of fingerprint compression, and in a test conducted by the FBI's Systems Technology Unit it outperformed other methods. (Because Best Basis was being patented, Wickerhauser said, the FBI did not adopt it but instead custom-made a similar technique.) So far, the wavelet technique is intended only to compress fingerprints for storage or transmission, reconstructing them before identification by people or machines. But the FBI plans to hold a competition for automatic identification systems. "Those who understand how to use wavelet coefficients to identify will probably win, on speed alone if nothing else, because the amount of data is so much less," Wickerhauser said. In studies with military helicopters, Best Basis has been used to simplify the calculations needed to decide, from radar signals, whether a possible target is a tank or perhaps just a boulder. In trials the Best Basis algorithm could compress the 64 original numbers produced by the radar system to 16 and still give "identical or better results than the original 64, especially in the presence of noise," Wickerhauser said. But probably the most unusual use of Best Basis has been in removing noise from a battered recording of Brahms playing his own work, recorded in 1889 on Thomas Edison's original phonograph machine, which used tinfoil and wax cylinders to record sound. The Yale School of Music had entrusted it to Coifman after all else had failed. "Brahms was recorded playing his music for Edison," Wickerhauser said. "It was played on the radio sometime in the 1920s—possibly using a wooden needle, and was recorded off the radio. Then it was converted to a 78 record. That was the condition in which Yale had it—beaten to death." Coifman's approach was to say that noise can be defined as everything that is not well structured and that "well structured" means easily expressed, with very few terms, with something like the Best Basis algorithm. So the idea is to use Best Basis to decompose the signal and to remove anything that is left over. The result was not musical—no one hoped for that, from a recording that contained perhaps 30 times as much noise as signal—but they were able to identify the music as variations on Hungarian dances. The project, with Jonathan Berger of the Yale School of Music, is still going on. "We don't know yet how far we can restore it," Coifman said.
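The selection step at the heart of Best Basis can be caricatured in a few lines. The sketch below, in Python with NumPy and PyWavelets (assumed tools), compares just two fixed candidate representations, a Fourier transform and a Haar wavelet transform, by an entropy-style cost and keeps the cheaper one; the actual Coifman-Wickerhauser algorithm searches a whole tree of wavelet-packet bases, so treat this purely as a sketch of the selection principle.

```python
import numpy as np
import pywt  # PyWavelets, an assumed dependency

# Toy "best basis" selection: measure how cheaply each candidate basis encodes
# the signal (an entropy cost; lower means the energy sits in fewer
# coefficients) and keep the cheaper representation.

def entropy_cost(coeffs):
    p = np.abs(coeffs) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def best_of_two(signal):
    fourier = np.fft.rfft(signal) / np.sqrt(len(signal))
    haar = np.concatenate(pywt.wavedec(signal, "haar"))
    costs = {"fourier": entropy_cost(fourier), "haar wavelets": entropy_cost(haar)}
    return min(costs, key=costs.get), costs

n = 1024
t = np.linspace(0, 1, n, endpoint=False)
tone = np.sin(2 * np.pi * 50 * t)                       # regular, music-like signal
blocky = np.where((t > 0.2) & (t < 0.25), 1.0, 0.0)     # brief, irregular transient

for name, sig in [("tone", tone), ("blocky", blocky)]:
    choice, costs = best_of_two(sig)
    print(name, "->", choice, {k: round(v, 2) for k, v in costs.items()})
# The periodic tone is cheaper in the Fourier basis; the transient is cheaper
# in the wavelet basis, the same division of labor the text describes.
```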
BEYOND WAVELETS

For some purposes, however, Best Basis is not ideal. Because it treats the signal as a whole, it has trouble dealing with highly nonstationary signals—signals that change unpredictably. To deal with such signals Stéphane Mallat and Zhifeng Zhang have produced a more flexible system, called Matching Pursuits, which finds the best match for each section of the signal, out of "dictionaries" of waveforms. "Instead of trying to globally optimize the match for the signal, we're trying to find the right waveform for each feature," Mallat said. "It's as if we're trying to find the best match for each 'word' in the signal, while Best Basis is finding the best match for the whole sentence." Depending on the signal, Matching Pursuits uses one of two "dictionaries": one that contains wavelet packets and wavelets, another that contains wavelets and modified "windowed Fourier" waveforms. (While in standard windowed Fourier the size of the window is fixed and the number of oscillations within the window varies, in Matching Pursuits the size of the window is also allowed to vary.) First, the appropriate dictionary is scanned and the "word" chosen that best matches the signal; then that "word" is subtracted out, and the best match is chosen for the remaining part of the signal, and so on. (To some extent, the dictionaries can produce words on command, modifying waveforms to provide a better fit.) Because the waveforms used are not orthogonal to each other, the system is slower than Best Basis: n² calculations compared to n log n for Best Basis. On the other hand, the lack of orthogonality means that it doesn't matter where on the signal you start encoding. This makes it better suited for pattern recognition, for encoding contours and textures, for example. But the quest for new ways to encode information is far from over. "When you speak, you have a huge dictionary of words and you pick the right words so that you can express yourself in a few words. If your dictionary is too small, you'll need a lot of words to express one idea," Mallat said. "I think the challenge we are facing right now is, when you have a problem, how are you going to learn the right representation? The Fourier transform is a tool, and the wavelet transform is another tool, but very often when you have complex signals like speech, you want some kind of hybrid scheme. How can we mathematically formalize this problem?" In some cases the task goes beyond mathematics; the ultimate judge of effective compression of a picture, or of speech, is the human eye or ear, and developing the right mathematical representation is often intimately linked to human perception. Information is not all equal. Even a young child can draw a recognizable outline of a cat, for example, while the very notion of a drawing without edges is perplexing, like the Cheshire cat who vanished, leaving only his smile. Other differences are less well understood. People have no trouble differentiating textures, for example, while "after 20 years of research on texture, we still don't really know what it is mathematically," Mallat said. Wavelets may help with this, especially since some wavelet-like techniques are used in human vision and hearing, but any illusions researchers may have had that wavelets will solve all problems unsuited to Fourier analysis have long since vanished; the field has become wide open. "Wavelets have gone off in many directions; it becomes a little bit of a scholastic question, what you call wavelets," Grossmann says.
"Some of the most interesting very recent things would technically not be called wavelets, the scale is introduced in a somewhat different way—but who cares?" It may not be the least of the contributions made by wavelets that they have inspired both a closer and a broader look at mathematical languages for expressing information: a more judicious look at Fourier analysis, which was often used reflexively ("The first thing any engineer does when he gets hold of a function is to Fourier transform it—it's an automatic reaction," one mathematician said) and a more free-ranging look at what else might be possible. "When you have only one way of expressing yourself, you have limits that you don't appreciate," Donoho says. "When you get a new way to express yourself, it teaches you that there could be a third or a fourth way. It opens up your eyes to a much broader universe." NOTES REFERENCES Cousin, V. 1831. Notes biographiques pour faire suite à l'éloge de M. Fourier (Biographical notes to follow praise of Mr. Fourier), Fourier file AdS, archives, Académie des Sciences, Institut de France, Paris. Delambre, Jean-Baptiste. 1810. Rapport historique sur le progrès des sciences mathématiques depuis 1789 (Historical report on the progress of mathematical sciences since 1789), part of Rapport à l'Empereur sur le progrès des sciences, des lettres et des arts depuis 1789, published by Belin, Paris, 1989. Encyclopedia Britannica, 11th ed. 1972. Harmonic analysis. Vol. 11, pp. 106–107. Farge, M., J.C.R. Hunt and J.C. Vassilicos (eds). 1993. Wavelets, Fractals, and Fourier Transforms . Clarendon Press, Oxford. Fourier, J. 1822. Théorie analytique de la chaleur (published in Oeuvres de Fourier, Vol. 1, Gauthier-Villars et fils, Paris 1888). Healy, D., Jr., and J. B. Weaver. 1992. Two Applications of Wavelets Transforms in Magnetic Resonance Imaging. IEEE Transactions on Information Theory, Vol. 38, No. 2, March 1992. Herivel, J. 1975. Joseph Fourier — the Man and the Physicist. Clarendon Press, Oxford. Körner, T.W. 1988. Fourier Analysis. Cambridge University Press, Cambridge. Lagrange, J.-L. Oeuvres de Lagrange, Gauthier-Villars, Paris, 1867–1892. Marr, D. 1982. Vision. W. H. Freeman, New York. Meyer, Y. (ed). 1992. Wavelets and Applications: Proceedings of the International Conference, Marseille, France. Masson and Springer-Verlag. Meyer, Y. 1993. Wavelets: algorithms & applications. Society for Industrial and Applied Mathematics, Philadelphia. (This is an English translation of Les Ondelettes Algorithmes et Applications, Meyer, Y., Armond Colin, Paris, 1992.) Pierce, J. R., and A. M. Noll. 1990. Signals — The science of telecommunications. Scientific American Library, New York. Poincaré, H. 1956. Oeuvres de Henri Poincaré, vol. 11, Gauthier-Villars, Paris. Strichartz, R. 1993. How to make wavelets, American Mathematical Monthly, Vol. 100, no. 6, June-July 1993, pp. 539–556. Strömberg, J.-O. 1981. A modified Franklin system and higher-order spline systems on Rn as unconditional bases for Hardy spaces. Conference on Harmonic Analysis in Honor of Antoni Zygmund, vol. II, pp. 475–494; W. Beckner, A. Caldéron, R. Fefferman, and P. Jones, eds. University of Chicago, 1983. RECOMMENDED READING Books Accessible to the General Public Gratten-Guinness, I., and J.R. Ravetz. 1972. Joseph Fourier, 1768–1830. MIT Press, Cambridge, Mass. Herivel, John. 1975. Joseph Fourier—The Man and the Physicist. Clarendon Press, Oxford. Körner, T.W. 1988. Fourier Analysis. Cambridge University Press, Cambridge. 
(This is a book for mathematicians, but it includes delightful sections that require no mathematical background, such as the description of the laying of transatlantic cables, where "half a million pounds was being staked on the correctness of the solution of a partial differential equation.")
Pierce, John R., and A. Michael Noll. 1990. Signals—The Science of Telecommunications. Scientific American Library, New York.

A Sampling of Technical Books

Benedetto, J., and M. Frazier (eds.). 1993. Wavelets: Mathematics and Applications. CRC Press, Boca Raton.
Chui, C.K. (ed.). 1992. Wavelets: A Tutorial in Theory and Applications. Academic Press, Boston.
Daubechies, I. 1992. Ten Lectures on Wavelets. Society for Industrial and Applied Mathematics, Philadelphia.
Farge, M., J.C.R. Hunt, and J.C. Vassilicos (eds.). 1993. Wavelets, Fractals, and Fourier Transforms. Clarendon Press, Oxford.
Körner, T.W. 1988. Fourier Analysis. Cambridge University Press, Cambridge.
Meyer, Y. 1990. Ondelettes et Opérateurs. Three volumes. Hermann, Paris. (English translation published by Cambridge University Press in 1992.)
Meyer, Y. 1992. Wavelets and Operators. Cambridge University Press, Cambridge and New York. (This is a translation of Ondelettes et Opérateurs, Hermann, Paris, 1990.)
Meyer, Y. 1993. Wavelets: algorithms & applications. Society for Industrial and Applied Mathematics, Philadelphia.
Ruskai, B. (ed.). 1992. Wavelets and Their Applications. Jones and Bartlett, Boston.
https://nap.nationalacademies.org/read/2110/chapter/9?chapselect=yo#196
We have represented Clark County, Nevada—the owner of Las Vegas McCarran International Airport—for decades. Our work for this client illustrates the broad spectrum of our airports practice and demonstrates Kaplan Kirsch & Rockwell’s comprehensive expertise regarding the wide range of legal challenges facing airports today.
- Supervised the preparation and approval of environmental assessments associated with airfield and terminal improvements at McCarran.
- Provided strategic advice for reducing litigation risks and anticipating and minimizing project opposition to airport development projects.
- Assisted with litigation and litigation avoidance strategies related to inverse condemnation and defenses of zoning and land use controls.
- Prepared and secured FAA approval for an innovative incentive program to encourage carriers to upgauge aircraft.
- Advised on the legal, practical, and regulatory consequences of the FAA’s proposed Safety Management System (SMS) regulations.
- Advised on the management, use, and disposition of Bureau of Land Management (BLM) lands needed for airport development.
- Successfully challenged (both in administrative proceedings and ultimately in the D.C. Circuit Court of Appeals) the FAA’s Determination of No Hazard for a proposed wind farm on the grounds that the agency had not adequately examined potential adverse impacts to airport facilities.
- Assisted with the development of a voluntary property acquisition program for incompatible land uses surrounding McCarran.
- Participated extensively in procurement efforts for airport projects, including preparing the bid documents, participating in the selection panel, and ensuring compliance with federal contracting obligations.
- Provided continuing counsel on matters relating to noise, air quality compliance, coordination of development and potential height and obstruction impacts on airport operations, as well as general federal regulatory compliance issues.
- Advised Clark County’s lobbying team on the potential impacts of proposed state legislation prior to adoption.
- Provided comprehensive strategic counsel on the planning and design of a new greenfield commercial service airport and dedicated heliport.
https://www.kaplankirsch.com/Projects/Las-Vegas-McCarran-International-Airport?printver=true&printver=true
Record climate extremes are reducing urban liveability, compounding inequality, and threatening infrastructure. Adaptation measures that integrate technological, nature-based, and social solutions can provide multiple co-benefits to address complex socioecological issues in cities while increasing resilience to potential impacts. However, there remain many challenges to developing and implementing integrated solutions. In this Viewpoint, we consider the value of integrating across the three solution sets, the challenges and potential enablers for integrating solution sets, and present examples of challenges and adopted solutions in three cities with different urban contexts and climates (Freiburg, Germany; Durban, South Africa; and Singapore). We conclude with a discussion of research directions and provide a road map to identify the actions that enable successful implementation of integrated climate solutions. We highlight the need for more systematic research that targets enabling environments for integration; achieving integrated solutions in different contexts to avoid maladaptation; simultaneously improving liveability, sustainability, and equality; and replicating via transfer and scale-up of local solutions. Cities in systematically disadvantaged countries (sometimes referred to as the Global South) are central to future urban development and must be prioritised. Helping decision makers and communities understand the potential opportunities associated with integrated solutions for climate change will encourage urgent and deliberate strides towards adapting cities to the dynamic climate reality.
https://www.stockholmresilience.org/publications/publications/2021-11-12-integrating-solutions-to-adapt-cities-for-climate-change.html
Smee, Delbert L.; Ray, Brandon R.; Johnson, Matthew W.; Cammarata, Kirk
Abstract
The objective of this study was to measure the communities associated with different seagrass species to predict how shifts in seagrass species composition may affect associated fauna. In the northwestern Gulf of Mexico, coverage of the historically dominant shoal grass (Halodule wrightii) is decreasing, while coverage of manatee grass (Syringodium filiforme) and turtle grass (Thalassia testudinum) is increasing. We conducted a survey of fishes, crabs, and shrimp in monospecific beds of shoal, manatee, and turtle grass habitats of South Texas, USA to assess how changes in seagrass species composition would affect associated fauna. We measured seagrass parameters including shoot density, above ground biomass, epiphyte type, and epiphyte abundance to investigate relationships between faunal abundance and these seagrass parameters. We observed significant differences in communities among the three seagrass species, even though these organisms are highly motile and could easily travel among the different seagrasses. Results showed species-specific relationships among several different characteristics of the seagrass community and individual species abundance. More work is needed to discern the drivers of the complex relationships between individual seagrass species and their associated fauna.
Rights: Attribution 4.0 International, http://creativecommons.org/licenses/by/4.0/
Citation: Ray, B.R., Johnson, M.W., Cammarata, K. and Smee, D.L., 2014. Changes in seagrass species composition in northwestern Gulf of Mexico estuaries: effects on associated seagrass fauna. PloS one, 9(9), p.e107751.
https://tamucc-ir.tdl.org/handle/1969.6/90122
What is your current job and how did your planning degree prepare you for it?
I am a Senior Associate Planner at Houseal Lavigne Associates, which is a community planning and economic development firm working across the country with our office in the Chicago Loop. I work with cities and counties to lead a variety of planning projects, including comprehensive, downtown, corridor, subarea, and neighborhood plans, as well as regional strategic plans and economic development programming. This work involves elected and appointed officials, municipal management teams, business leaders, universities, nonprofits, community stakeholders, and residents in a wide variety of settings, including core city neighborhoods, isolated rural towns, and complex suburban landscapes in metropolitan regions. The MUP program provides a great deal of latitude and helps develop a broad skillset, while also reinforcing the interrelationships and connections between all aspects of planning. In my work, virtually every final plan speaks to land use and zoning, growth management, transportation, environmental, park and open space management, sustainability, and utility and infrastructure planning. Furthermore, the MUP program provides a solid foundation for understanding all of the different stakeholders, decision-makers, and actors that design and build our cities, and oftentimes I find myself sitting in places like a hospital boardroom with 40 different entities discussing how to respond to their region's economic challenges. The breadth and comprehensiveness of the program definitely helped prepare me for those opportunities.
Why did you choose to study urban planning?
I began my professional career working on political campaigns, hanging around Capitol Hill, and attending college in Washington, D.C. For about 10 years I worked in ICMA city management for suburban communities in Chicago and St. Louis, ultimately serving as a city administrator in Fairview Heights, Illinois during the Great Recession. I am still driven by public service and contributing to public decision-making processes, but what I discovered was that the connections between municipal operations, like the city budget and service delivery, were so closely rooted in community-based strategic planning and land use policy that I wanted to focus the entirety of my hours-in-the-week on urban planning matters. I feel like achieving an MUP degree and AICP certification enhanced my established municipal management skillsets, while bolstering my abilities as a planner and economic development specialist.
What advice would you share with someone who is considering a career in urban planning?
I think one of the most important considerations is fully digesting how broad the field is and trying to narrow in on an area that fuels your passion for the work. Urban planning covers a wide variety of topics, which can range from more legislative public policy spheres (like affordable housing) to more design-oriented functions (like physical planning and site capacity calculations). Further, the field addresses an incredibly diverse spectrum of geographies, ranging from global impacts and metropolitan scales, down to neighborhood meetings and even designing a 1/2 acre commercial lot. Comprehensive planning provides opportunities to consider and address a lot of these topics at many of these scales, but other career tracks are far more specialized and focused.
I think exploring these opportunities and having a good sense of what motivates you about urban planning will lead to a more productive, enjoyable academic experience as well as land an individual in a job best suited to their interests and strengths.
http://urban.illinois.edu/prospective-students/alumni-profiles/116-drew-awsumb
Amy Stewart, bestselling author of Girl Waits with Gun, shares the second exciting book based on the real-life Kopp sisters, Lady Cop Makes Trouble. After besting (and arresting) a ruthless silk factory owner and his gang of thugs in Girl Waits with Gun, Constance Kopp became one of the nation’s first deputy sheriffs. She's proven that she can't be deterred, evaded, or outrun. But when the wiles of a German-speaking con man threaten her position and her hopes for this new life, and endanger the honorable Sheriff Heath, Constance may not be able to make things right.
Event date: Thursday, September 29, 2016 - 7:00pm
Event address: Books Inc., 74 Town & Country Village, Palo Alto, CA
ISBN: 9780544409941 (Houghton Mifflin Harcourt, September 6th, 2016)
$14.95
ISBN: 9780544800830
https://www.booksinc.net/event/amy-stewart-books-inc-palo-alto
This thesis consists of four independent papers. They deal with some aspects of industrial policy, namely public support to firms that is intended to promote innovation and growth at the firm level, using Swedish data. Two papers study the efficiency of current Swedish policies by estimating the effects of subsidies and public loans to firms, respectively. The results on subsidized firms suggest that there are some positive effects on profits and productivity, but these diminish and disappear over time. The results for public loans are more positive, with long-lasting effects on productivity and sales, but only for smaller firms. Public loans do not lead to an increase in the number of employees in the firms that receive them. The third paper studies the selection of firms for subsidies and the incentives firms have to seek them. By modeling the decision to seek subsidies as a trade-off between producing in the market and seeking grants, the results suggest that firms with low market productivity might self-select into seeking grants. The empirical results are in line with the theoretical predictions. The final paper studies the incentives that politicians have to implement programs and policies that they know will be inefficient. Since a lack of political action can make the politicians look incompetent, incumbents have incentives to implement policies even though they know that these will be ineffective, to signal competence towards the voters.
This thesis consists of four independent essays. They study some aspects of active industrial policy, more specifically the effect of public loans and grants intended to increase growth and innovation capacity in firms, using Swedish data. These measures aim to resolve market failures in the capital market, which can otherwise leave firms without the financial resources they need to invest in physical or human capital. If the state can identify these firms and help them with financing, the firms can invest and grow, which in turn raises economic growth. Two essays study the efficiency of currently existing Swedish measures by estimating the effects of government grants and loans to firms. A combination of matching and difference-in-differences regressions is used to reduce problems caused by selection. The results show that firms that receive grants obtain higher profits and productivity, but only in the short run. The results for public loans are more positive, with long-lasting positive effects on productivity and sales, but only for the smaller firms. Public loans do not lead firms to hire more employees. The third essay studies the incentives of firms that seek support. By modeling the decision as a choice between producing for the market and seeking grants, the model shows that firms with low market productivity should devote more time to seeking grants, because they have a lower opportunity cost. The empirical results are in line with the model's predictions. The fourth and final essay studies the incentives politicians have to implement measures that they know in advance are ineffective. If a societal problem is hard to solve, it can still be rational to introduce ineffective measures, because a lack of activity can signal incompetence to voters. If voters have imperfect information about the effectiveness of different measures, the action itself can matter more than its effectiveness.
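The evaluation strategy mentioned in the summary, matching treated firms to comparable untreated firms and then comparing before/after changes, reduces to a simple difference-in-differences calculation. Below is a minimal sketch with invented numbers, assuming Python and pandas (neither is part of the thesis), to show the arithmetic; the matching step that precedes it in the papers is skipped.

```python
import pandas as pd

# Toy illustration (invented numbers, not the thesis data) of the
# difference-in-differences idea: the estimated effect of support is the
# before/after change for supported firms minus the before/after change
# for comparable unsupported firms.

df = pd.DataFrame({
    "firm":      [1, 1, 2, 2, 3, 3, 4, 4],
    "supported": [1, 1, 1, 1, 0, 0, 0, 0],
    "period":    ["pre", "post"] * 4,
    "log_sales": [4.0, 4.6, 3.8, 4.3, 4.1, 4.3, 3.9, 4.0],
})

change = (df.pivot_table(index=["firm", "supported"], columns="period",
                         values="log_sales")
            .assign(delta=lambda d: d["post"] - d["pre"])
            .reset_index())

did = (change.loc[change.supported == 1, "delta"].mean()
       - change.loc[change.supported == 0, "delta"].mean())
print(f"difference-in-differences estimate: {did:.2f} (change in log sales)")
# In the actual papers the comparison group is first chosen by matching on
# pre-treatment characteristics, which this sketch skips.
```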
The governments of most advanced countries offer some type of financial subsidy to encourage firm innovation and productivity. This paper analyzes the effects of innovation subsidies using a unique Swedish database that contains firm level data for the period 1997–2011, specifically information on firm subsidies over a broad range of programs. Applying causal treatment effect analysis based on matching and a diff-in-diff approach combined with a qualitative case study of Swedish innovation subsidy programs, we test whether such subsidies have positive effects on firm performance. Our results indicate a lack of positive performance effects in the long run for the majority of firms, albeit there are positive short-run effects on human capital investments and also positive short-term productivity effects for the smallest firms. These findings are interpreted from a robust political economy perspective that reveals that the problems of acquiring correct information and designing appropriate incentives are so complex that the absence of significant positive long-run effects on firm performance for the majority of firms is not surprising. Incomplete capital markets and credit constraints are often considered obstacles to economic growth, thus motivating government interventions in capital markets. One such intervention is governmental bank loans targeting credit-constrained small and medium-sized enterprises (SMEs). However, it is less clear to what extent these interventions result in firm growth and whether governmental loans should target firms that are not receiving private bank loans (the extensive margin) or work in conjunction with private bank loans (the intensive margin). Using a unique dataset with information on state bank loans targeting credit-constrained SMEs with and without complementary private bank loans, this paper contributes to the literature by studying how these loans affect the targeted firms. The results suggest that positive effects are found on firm productivity and sales for firms with 10 or fewer employees, while no evidence is found of employment effects. This lack of employment effect suggests that a lack of external credit is not the main obstacle to SME employment growth. In this paper, we study the selection, incentives, and characteristics of small and medium sized firms (SMEs) that apply for and eventually receive one or multiple governmental grants intended to stimulate innovation and growth. The analysis starts from a rent-seeking model in which firms are free to allocate their effort between production and rent-seeking. We show that highly productive firms choose not to seek grants, while moderately productive firms allocate a share of their effort to rent-seeking, and low-productivity firms are incentivized to allocate most, if not all, of their effort to seeking grants and can thus be called subsidy entrepreneurs. Due to their large efforts in seeking grants, these low-productivity firms also have a relatively high probability of receiving grants. Using detailed data on all grants administered by the three largest grant-distributing agencies in Sweden, the empirical analysis suggests that supported firms have relatively low productivity, high wages, and a larger share of workers with higher education than non-supported firms. These characteristics become further pronounced as we move from firms receiving a single grant to firms receiving multiple grants, thus providing support for the notion of subsidy entrepreneurs.
A substantial body of literature suggests that politicians are blocked from implementing efficient reforms that solve substantial problems because of special interest groups or budget constraints. Despite the existing mechanisms that block potentially efficient reforms, real-world data show that a large number of new programs and policies are implemented every year in developed countries. These policies are often selective and considered to be fairly inefficient by ex post evaluation, and they tend to be small in size and scope. With this background, this paper studies the reasons why a rational politician would implement an inefficient public policy that is intended to obfuscate the difficulties in achieving reforms. The paper uses a simple competence signaling model that suggests that if an effective reform is impossible, engaging in strategic obfuscation through an inefficient program increases the probability of winning re-election compared to doing nothing at all. This is because an inefficient reform does not lead voters to believe that the politician is incompetent, which a lack of action risks doing. Intentional inefficiency aiming to obfuscate the difficulty of efficient reforms can therefore complement the previous theories’ explanations of political failure.
http://hj.diva-portal.org/smash/record.jsf?pid=diva2%3A1238611&dswid=-1293
Michael Kosfeld holds the Chair of Organization and Management at Goethe University of Frankfurt. He graduated in Mathematics from the University of Bonn in 1995 and received his PhD in Economics at Tilburg University in 1999. Before joining Goethe University of Frankfurt, he was employed at the Institute for Empirical Research in Economics at the University of Zurich from 2000 to 2008. His primary area of research is behavioral and organizational economics, with particular interest in the theoretical and experimental analysis of social interaction, boundedly rational human decision-making, and the psychology of incentives. Michael Kosfeld is Director of the Frankfurt Laboratory for Experimental Economic Research (FLEX) and the Center for Leadership and Behavior in Organizations (CLBO).
https://www.wiwi.uni-frankfurt.de/en/departments/mm/professuren/professur-kosfeld/team/prof-dr-michael-kosfeld.html
The Bayer Young Environmental Envoy is a worldwide environmental educational program for young people. The program is organised by Bayer and the United Nations Environment Programme with the aim of encouraging young people to become environmental leaders and increasing their awareness of the environment.
- Be a Kenyan citizen, between the ages of 18 & 24.
- Be actively involved in environmental activities / community service.
- Be willing to travel overseas and work well in a group.
- Children & relatives of UNEP / Bayer employees are not eligible for entry.
- Come up with a visual project proposal aimed at addressing sustainable development and enhancing conservation prospects of the earth’s natural resources within a specific community (not necessarily your own).
Please note that the proposal should not be limited to scientific & technological advancements or solutions. All applications must reach UNEP / Bayer no later than 30 June.
https://www.advance-africa.com/Bayer-Young-Environmental-Envoy.html
Gun ownership risk factors – A mathematical model
Goal: To understand which risks associated with owning a gun are significant versus insignificant. This is written for first-time gun owners, families with young children, or those considering buying their first gun.
Methods included probabilistic risk assessment and event trees. To read the event tree, you start on the left, assuming that you own a gun. The factors analyzed include the chance of an accident; the chance of you being the victim of a violent crime; a factor measuring the percent chance that you have the gun accessible to you during the violent crime (do you conceal carry 24/7, or do you just keep it at home in a bedroom where it isn't accessible to you for most of the day); and the contingent chance of being injured/killed during a violent crime (given whether you have access to your defensive firearm or not). All of the positive outcomes are branches that are highlighted green (where you stop the threat). The negative outcomes are red (where you are injured or worse). Probabilities for each of these factors are then multiplied to get the likely outcomes. Data was gathered from diverse sources (discussed later), and a range of values for each factor was tested in the model for sensitivity. Putting it all together then, as an example, the probability of surviving a year of gun ownership without being the victim of a violent crime is given as: P(Safe due to no accident) * P(Survive due to not being the victim of a violent crime). Testing different ranges of values for factors in the model and then comparing the magnitude of the adverse outcomes to each other gives you a sense of which factors are most significant/sensitive in the universe of possible conclusions.
Results: Gun ownership does not significantly contribute to an increased risk of injury or death, for adults.
· To be conservative (mathematically, not politically), we combine the 430 unintentional firearm deaths per year with the 23,941 suicides by firearm annually to get a probability of accidental death of 0.01%.
· It is difficult to estimate the rate of violent crime, since it can be underreported. Annual rates in the literature vary from 0.37%, using emergency room visits for assault as a metric, to 2.3%, using violent crime rates in Justice statistics. To be conservative, we used the largest value and multiplied it by 3, to account for underreporting, with a resulting probability of encountering a violent crime of 6.9% annually.
· Estimating how likely it is for an individual to have access to a defensive gun, at the moment of encountering a violent crime, is based on a few factors. First, approximately 44% of US households own a firearm and 7.3% of US adults have concealed carry permits. Violent crimes can occur in the home (where the largest percentage of guns would be located and where people spend a significant portion of their time) but also can occur out in the world (where most people are not carrying a defensive firearm). So, for the purposes of this risk model, we used values for the gun utilization factor ranging from 10 to 40%.
· Finally, the rate of injury or death resulting from being the victim of a violent crime can be estimated at between 0.3% and 4%.
The conclusion, that the probability of dying due to a firearm in a year is 0.3% in our model, roughly matches data from the CDC's WISQARS Data Visualization tool, with a value of 0.8% for adults. Now, let's shift our focus onto the potential risk to children in a home where a gun is present.
This is an important question for first-time gun owners or current gun owners who are reconsidering their weapons as they have a child. The following graphs are sourced from the CDC and provide insights on the risks of firearms to children. To understand the impact of firearms on children, we first separate out the firearm components of the suicides, homicides, and accidents in the 1 to 12 year old range. We then take that total (247) and create a new bin for the sum of these, named "all firearm-related deaths." When the data is viewed this way, all firearm-related deaths becomes the 6th leading cause of death at 4.6% and is perceptibly higher than the equivalent adult risk (of 0.3% to 0.8%).
Key takeaways:
1. Owning a gun does increase the risk of injury or death to members of the household.
2. The amount of increased risk of injury or death, for adults, is fairly small (approximately 0.5% per year).
3. Children have a 9 times higher relative risk of injury or death (approximately 4.6% per year) as compared to adults.
4. Preventing children from accessing firearms would eliminate a large portion of the risk of all firearm-related deaths (suicide, homicide, or accidents).
5. Parents and caregivers need to carefully consider: Is it safe to have a gun in my home with children? How can I improve gun safety around toddlers? Etc.
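For readers who want to replay the arithmetic, here is a minimal sketch of the event tree in Python. The inputs are the illustrative figures quoted above (0.01% accidental death risk, 6.9% annual chance of violent crime, 10–40% gun accessibility, 0.3–4% injury risk); the assumption that access to a firearm halves the injury probability is mine, made only so the branch structure has something to work with.

```python
# Minimal sketch of the event-tree arithmetic described in the article.
# All probabilities below are the illustrative values quoted in the text;
# the "gun access halves injury risk" factor is an assumption for the demo.

p_accident = 0.0001            # annual probability of accidental firearm death (0.01%)
p_crime = 0.069                # annual probability of encountering a violent crime (6.9%)

def annual_adverse_risk(p_access, p_injury_no_gun, p_injury_with_gun):
    """Probability of an adverse outcome in one year, combining both branches."""
    p_injury_given_crime = (p_access * p_injury_with_gun
                            + (1 - p_access) * p_injury_no_gun)
    p_safe = (1 - p_accident) * (1 - p_crime * p_injury_given_crime)
    return 1 - p_safe

# Sensitivity sweep over the ranges mentioned in the text.
for p_access in (0.10, 0.40):
    for p_injury in (0.003, 0.04):
        risk = annual_adverse_risk(p_access,
                                   p_injury_no_gun=p_injury,
                                   p_injury_with_gun=p_injury / 2)  # assumed halving
        print(f"access={p_access:.0%}  injury={p_injury:.1%}  "
              f"annual adverse risk={risk:.3%}")
# At the pessimistic end (4% injury risk) the total lands near the roughly
# 0.3% annual figure quoted above; at the optimistic end it is far smaller.
```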
https://www.gunintellectual.com/gun-ownership-risk-factors-a-mathematical-model/
Who's it for? Anyone who needs help learning or mastering abnormal psychology material will benefit from taking this course. There is no faster or easier way to learn about abnormal psychology. Among those who would benefit are: - Students who have fallen behind in understanding the causes of acute stress and psychophysiological disorders - Students who struggle with learning disabilities or learning differences, including autism and ADHD - Students who prefer multiple ways of learning psychology (visual or auditory) - Students who have missed class time and need to catch up - Students who need an efficient way to learn about stress disorders - Students who struggle to understand their teachers - Students who attend schools without extra psychology learning resources How it works: - Find videos in our course that cover what you need to learn or review. - Press play and watch the video lesson. - Refer to the video transcripts to reinforce your learning. - Test your understanding of each lesson with short quizzes. - Verify you're ready by completing the Stress Disorders chapter exam. Why it works: - Study Efficiently: Skip what you know, review what you don't. - Retain What You Learn: Engaging animations and real-life examples make topics easy to grasp. - Be Ready on Test Day: Use the Stress Disorders chapter exam to be prepared. - Get Extra Support: Ask our subject-matter experts any stress disorders question. They're here to help! - Study With Flexibility: Watch videos on any web-ready device. Students will review: This chapter helps students review the concepts in a stress disorders unit of a standard abnormal psychology course. Topics covered include: - Theories of emotion - Categories of emotion - Fight or flight responses - Acute stress disorder - Psychophysiological disorders - Positive psychology 1. Stress Disorders: Definition and Perspectives Stress disorders are psychological issues related to how a person reacts to stress in their life. Explore the definition and perspectives of stress disorders and learn about stress, approaches to stress, and managing stress. 2. Emotions in Psychology: Definition, Biological Components & Survival Emotions are part of human affection and are controlled and regulated by specific areas of the brain. Learn the definition of emotions in psychology, study the biological components, and discover the role of emotions in survival and non-verbal communication. 3. Categories of Emotion: 6 Basic Emotions, Oppositional Pairs & Biology Psychologists who study emotions have different ways of categorizing them. Explore Ekman's theory of the six basic emotions, Plutchik's theory of oppositional pairs, and how to think about emotions in terms of processing speed. 4. Theories of Emotion: James-Lange, Cannon-Bard, Two-Factor & Facial Feedback Hypothesis The James-Lange theory asserts that emotions are reactions to physiological arousal rather than the other way around. Explore the theories that challenged the James-Lange theory, such as Cannon-Bard and the two-factor theories of emotion, and discover the relevance of the facial feedback hypothesis. 5. Fight or Flight Response: Definition, Physiology & Examples The body's physiological reaction to danger, known as the fight or flight response, is mediated by the sympathetic nervous system. Learn how to define the fight or flight response, explore its physiology, and review examples of how it works. 6. 
Acute Stress Disorder: Definition, Causes and Treatment When a person experiences intense stress and disassociation after a traumatic event, they may suffer from acute stress disorder. Learn the definition of acute stress disorder, then explore the diagnosis, causes, and treatments. 7. Psychophysiological Disorders: Definition, Types, Causes and Treatment Psychophysiological disorders involve an interaction between psychological and physical conditions. Learn the definition, causes and types of psychophysiological disorders, examine how stress affects physical diseases, and explore the treatment options. 8. Positive Psychology: Optimism, Self-Efficacy & Happiness Positive psychology involves certain concepts related to positive feelings that help people cope with situations in their life. Learn about optimism and its relationship with happiness and self-efficacy. 9. Exhaustion Stage of Stress: Psychology Overview The exhaustion stage of stress occurs when the body is under stress for a long time. Review exhaustion, discover how it takes part in Hans Selye's general adaption syndrome, and explore what the consequences of chronic stress on health are. 10. Resistance Stage of Stress: Overview The second stage of the stress response is the resistance stage. Examine resisting the breaking point, the stages of stress, what causes resistance, and the dangers of resistance stress. 11. Complex PTSD & Dissociation In this lesson, you will learn the definition of complex PTSD and how it compares to regular PTSD. You will learn the definition of dissociation, how it presents in complex PTSD, and how it can develop into Dissociative Identity Disorder. Following the lesson will be a brief quiz. 12. Stress Management: Techniques & Tips Do you have a lot of stress in your life? Would you like to learn how to eliminate, reduce, and manage the stress you experience? Here are some tips and techniques on how to do just that. 13. Can Stress Cause Pain? - Effects of Stress on the Body The human body and its reaction to stress is highly variable. The lesson goes over the definition of stress and the possible responses it can create in the body. 14. Caregiver Burden & Stress In this lesson, you will learn the definition of caregiver burden and stress and the prevalence of this in the sandwich generation. You will learn the reasons for caregiver burden and stress, symptoms, effects, and ways to cope with it. Following the lesson will be a brief quiz to test your knowledge. Earning College Credit Did you know… We have over 220 college courses that prepare you to earn credit by exam that is accepted by over 1,500 colleges and universities. You can test out of the first two years of college and save thousands off your degree. Anyone can earn credit-by-exam regardless of age or education level.
https://study.com/academy/topic/stress-disorders-help-and-review.html
Credentialed LVN offering 4 years of experience providing compassionate care in a patient-centered environment. Skilled LVN with a solid background working in rehabilitation and medical-surgical environments under the supervision of qualified RNs to deliver exceptional patient care and nursing support. Highly observant, detail-oriented and dedicated to high-quality, cost-effective healthcare. Certifications include IV certification and CPR. Cares for patients across various age groups with exceptional clinical, interpersonal and time management abilities. Licensed in CA and diligent about helping people of all backgrounds with diverse healthcare needs. Top skills in communication, medication administration and wound care. Enthusiastic individual with superior skills in working in both team-based and independent capacities. Bringing a strong work ethic and excellent organizational skills to any setting. Excited to begin a new challenge with a successful team.
https://www.livecareer.com/resume-search/r/licensed-vocational-nurse-5debf3c2b5ec45eeb92b0690fdee3b19
My STEP signature project was a creative endeavors project. It included buying a camera, taking a variety of photography classes, and capturing the special moments of my parents’ thirtieth wedding anniversary. I have always viewed myself as someone who catches on quickly and easily learns new things, but photography was a real challenge for me. Learning to use my camera was the easy part, but learning to have the eye of a photographer and capture true moments was a very challenging task for me to grasp. It is something I have grown a lot in with my mentor’s help, but will always continue to struggle and grow with. Throughout this project I have learned to rely on others. It sounds simple enough, but for a woman raised as independently as I was, it is no easy feat. I can almost always rely on myself to figure things out, but with photography I desperately needed help and sought advice from my instructors. This experience has taught me a lot about myself and how I interact with and view the world. The first true relationship formed during my project was with my mentor, Braddley Adams. Braddley is a professional photographer that teaches a series of classes to beginner and more advanced photographers to help others learn to use their cameras. He always says that the camera is just a machine; one is not really better than another, it all depends on how you use them. I never knew how involved photography was until I met Braddley and started taking his classes. Each photo I see has a whole new meaning to me now, as I truly understand what the photographer had to do to capture each shot. Braddley helped me to understand that photography is an art that anyone can learn, but few will learn to master it. Another significant figure in my project was Matt Cangelosi. He is world renowned photographer that teaches beginner and advanced photography courses on the side. The main lesson I learned from Matt’s lessons was how to change my perspective. As humans, we hurry through life and our eyes block out most of what we see. Our brains cannot possibly process all the information we would grasp if our eyes did not do this. But when we truly stop and look, we see amazing things. You can zoom in closely and see every detail on a flower petal, or look up towards the sky and see what the branches and leaves look like from below. Through Matt, I learned to look at the world differently and take in the scenery around me instead of passing by. As I prepared myself to take pictures of my parents’ anniversary event, I began to feel anxious about my ability to capture each precious moment at the party. Photography was a much more difficult art to master than I had anticipated. I knew I needed to practice, so I did. Braddley was generous enough to provide exercises to practice between classes and offered a free outing to take pictures alongside him as he guided us in the moments we were capturing and how we did it. Without the extra preparation he provided, I would not have been able to capture the moments I did at my parents’ vow renewal with the fast paced and frequently changing environment. Overall, I learned to truly dedicate myself to a new hobby in order to fully learn it. I learned that not every new task in life will be easy, but that does not make it impossible. I learned that if I want to succeed, I must put in the time and effort to master it. I learned that I need to rely on others who have more knowledge to be able to grow. In all, I learned what it takes to be a more efficacious person. 
This change is significant in every part of my life. I have always been an independent person and never asked others for help. Through this journey of learning photography, I now understand that I might not need others, but I can greatly benefit from their knowledge. This is really important as I will soon transition into professional life and will be working with several different people daily. One person alone cannot achieve much, but together everyone is more successful. I am very grateful for the opportunity I have been given through STEP to gain a skill I can carry with me throughout my life, and for all the valuable lessons I learned during this process.
https://u.osu.edu/stepcreative18/author/farwell-4/
When it comes to the Indian Ocean, maritime crime and security has long been a central subject of attention.
by Dr. Jessica Larsen, PhD[i]
While corona has closed down vast parts of social life, production and borders in most countries, ports and shipping are some of the only functions that have kept going. Even during lockdown, people need food and goods. With shipping unfortunately comes maritime crime, corona crisis or not. Inter alia, smuggling in the Indian Ocean[ii] and piracy incidents in the Gulf of Guinea[iii] are just some of the illegal activities that have been reported during lockdown. When it comes to the Indian Ocean, maritime crime and security has long been a central subject of attention. Be it counter-terrorism operations in the north, or counter-piracy in the east off the coast of Somalia, the international community and regional states have addressed various threats to shipping through elaborate law enforcement activities. While these efforts are important to suppress maritime crime and ensure the freedom of navigation, they are limited to the sea. This neglects how illicit activities at sea are realised: smuggling, piracy and other types of maritime crime are, of course, organised, launched and supported from land. If such criminal enterprises are dependent upon coastal support, ports can play a central role in their suppression and prevention. Port authorities, operators and the shipping industry are valuable in this regard and could be drawn more closely into the maritime security architecture in the Indian Ocean than is the case today.
The role of ports in maritime crime
With 90% of all trade moving by sea, ports are important logistical nodes in international trade and transportation.[iv] In the Indian Ocean, there is currently increasing attention to ports. China, the United Arab Emirates and other Gulf states are constructing and expanding deepwater ports to create new, bigger and better terminals that are able to cater to international trade.[v] This is the case in, for instance, Sri Lanka and Pakistan as well as Djibouti, Somalia, Somaliland, Puntland and Sudan. While ports can bring economic growth to the greater region, they can also serve the logistics of licit and illicit enterprises alike. On the one hand, it is known that if ports lack proper training, technology or inspection procedures, or suffer from corruption, this can be used to avoid import/export controls to bring illicit goods from the sea into port ‘unnoticed’, or ship them out to sea. On the other hand, ports have a role in preventing and combatting crime. Proper governance structures and implementing standards to carry out inspection can counter port-based criminal activities – before they reach the sea. And on-the-ground knowledge and awareness can help identify irregularities. From the perspective of maritime security, ports are therefore both part of a greater problem and part of the solution. And action is needed regarding the latter. Approximately 500 million containers are shipped globally every year, yet only two percent of them are inspected.[vi] This gap is likely to grow with port capacity increasing in the region, as new port facilities are created in the Indian Ocean, and existing ones become better equipped to handle shipping flows.
It leaves a potentially rewarding opportunity for criminal groups to conduct illicit business activities, such as smuggling contraband or weapons, the proceeds of which are used to strengthen transnational criminal networks, thus further destabilising the security situation at sea.
Recent maritime security efforts
But ports are hardly part of the larger maritime security infrastructure in the Indian Ocean. Devised in particular in the wake of Somali piracy, this infrastructure is extensive, yet focuses mainly on the maritime domain, i.e. 1) law enforcement at sea; 2) prosecution of crimes committed at sea; and 3) capacity-building of regional security sectors to strengthen local capabilities in carrying out the former two.[vii] Overarching these three components are important coordination forums, not least the Contact Group on Piracy off the Coast of Somalia and the Shared Awareness and Deconfliction mechanism, as well as the Indian Ocean Naval Symposium, each of which brings together states, organisations, navies and the shipping industry to discuss and align. There is a range of regional inter-governmental organisations that, by virtue of their member states’ policy interests, have developed strong concern for addressing maritime security, such as the Southern African Development Community, the Indian Ocean Commission and the Gulf Cooperation Council. And finally, vital coordination and information-sharing centres have been established, for instance the National Information Sharing and Coordination Centre and the Regional Centre for Operations Coordination in Seychelles and the Regional Maritime Information Fusion Center in Madagascar. But there is no overall body which can provide effective cooperation and coordination across maritime security issues – and, indeed, one which can bring ports into this effort. One which can build common narratives across the vast Indian Ocean Region and ensure comprehensive maritime domain awareness, sufficient legal frameworks and governance. Here, the 2009 Djibouti Code of Conduct stands out. Under the auspices of the International Maritime Organization, it originally committed 20 littoral and island states to cooperate around the suppression of piracy.[viii] In 2017, the Djibouti Code of Conduct was updated with the so-called Jeddah Amendment.[ix] It broadened the scope of maritime security issues to include not only piracy but also other forms of maritime crime, and it specifies authorities’ responsibilities and areas of collaboration. Importantly, the Amendment included a port dimension. However, not all stakeholders are signatories to the Jeddah Amendment and the port dimension is weak. It follows the International Ship and Port Facility Security Code about the prevention of threats against ships and ports, such as terrorist attacks.[x] But it does not detail broader measures to coordinate the prevention of, for instance, transnational organised criminal activities that use ports to move between land and sea. Since the two are closely connected, there is a need for better integration.
A way forward
Port authorities, customs, operators and the shipping industry are ideally placed to contribute to regional maritime domain awareness by participating in maritime security frameworks from a port-based perspective on activities such as coordination, information-sharing, risk assessment and incident reporting, registration and investigation.
The role of ports should, therefore, be acknowledged as an important frontier in law enforcement and be placed centrally in the maritime security infrastructure that already exists in the region. With the current expansion of deep-water ports in the Indian Ocean, now is an important time to integrate ports and the shipping industry into such efforts to suppress and prevent criminal activities spanning the land-sea divide. There are already some relevant initiatives. Apart from the Jeddah Amendment, the UN Office on Drugs and Crime and the World Customs Organization run a container control programme to increase security in the international supply chain of container traffic. They facilitate training and information exchange on container profiling to strengthen port authorities’ ability to intercept shipments carrying illicit goods.[xi] For the Indian Ocean region specifically, the European Union signed a new EUR 28 million programme in 2019 on port security and safe navigation in the Indian Ocean. It aims to improve information-sharing about sea freight and passengers; reinforce control operations and monitoring; support the countering of organised crime and terrorism; and foster cooperation between regional stakeholders.[xii] It will be interesting to follow how these initiatives unfold in the years to come. Careful consideration of results and action-based research on their effects are needed, because there is still a lack of knowledge about the dynamics of port management and operations specifically from the perspective of maritime security. What we do know is that security governance in ports shapes the security situation in the maritime domain. The ways in which ports are governed can, therefore, affect regional security and, ultimately, the conditions for growth and development. Building on the existing maritime security infrastructure in the region, the inclusion of ports in law enforcement and coordination in the Indian Ocean would make overall efforts more effective.
https://news.slpa.lk/index.php/2020/05/22/maritime-security-is-about-ports-too/
Word Count - 22,600. Warnings - Some bad language. Violence, but not graphic. Summary - Blair is in the Avatar program and is paired with a human soldier, Jim Ellison. They find that he is Cha’la’lei (a Sentinel) and endeavor to find out what that means for them and for the Omaticaya. Writer's Notes: This is somewhere between a Sentinel/Avatar Crossover, a Sentinel/Avatar Fusion, and a just plain AU with bits of both. Some characters are more or less the same, some are a little different and some are simply made up. Any differences in this story from either Sentinel or Avatar were intentional because it worked better with the story I wanted to tell, and because my muse can be dang bossy when she wants to be and this is the way she insisted it happened! How can you argue with a muse? Writer's Notes 2: Huge thanks to my beta and cheerleader, nightwing. And to my fantastic artist, Patt. She managed to come up with many unbelievably fantastic pics. Artist notes: I want to thank Brynn for being so easy to work with. She was an angel and gave me tons of ideas and things to work on. Her story is wonderful and I know everyone will love it too. Thank you to Morgan for hosting The Sentinel Big Bang. It's been a lot of fun.
http://sentinelbigbang.livejournal.com/8928.html
Importance of location when placing someone in residential care
Location is often one of the first considerations when placing someone in residential care – so that they can be close to friends and family – but it’s not necessarily the most important. It’s usually a combination of factors that contribute to the quality of the care provided that takes precedence over the location. This is especially true of specialist residential care and rehabilitation for adults with acquired brain injuries, learning disabilities, complex needs and behaviour that challenges – there simply aren’t the facilities available across the country to meet local needs. The Richardson Partnership for Care is located in Northampton – we’re in the centre of the country and have good road and rail links, so easily accessible for families to visit. We welcome visits to our care homes but these are not always practical, especially if family members work full-time, have children to look after or are elderly. Or they may have a long way to travel – our service users come from all over the UK as well as Ireland and Eastern Europe.
Supported home visits
We believe that family contact is very important for our service users’ well-being so we include regular supported home visits when devising each individual’s care plan. Our care support staff arrange their transport and accompany them on their journey (overseas if necessary) and often continue to support them in their family home during their stay. If it’s not practical for individuals to stay with their relatives, then we arrange accommodation for them. This provides valuable assistance to the families too, helping them to enjoy the time spent with their loved one.
Video calls
As well as phone calls, we also use online applications and video calls to help service users and their families keep in touch – this can enhance communication for people with speech and language difficulties, making them easier to understand. It also means that their families get to see them and become more involved and reassured about their care. We also use video calls to enable family members to participate in the review process. Our service users have an external review every 12 months where their care team and case workers review their care plan and discuss their progress. The service user can choose whether or not they take part in the review, but under The Care Act 2014, reviews must be attended by a family member or advocate. A video call enables family members to take part in a review when they may otherwise have been unable to, perhaps due to other family or work commitments. They can contribute fully to all areas discussed, see and hear the review team and ask questions, as well as providing their thoughts and feelings on the care package. If the service user declines to take part in the review, they can still have a video call with their family afterwards and speak with their care manager and review coordinator about what happened in the review.
Local environment
The immediate local environment can have a greater impact on someone’s day-to-day wellbeing than where they are located in the country. For example, all of our homes are situated in areas within easy reach of the town centre, but with their own communities. This means that we can visit local shops, pubs, cafes and leisure facilities and benefit from the friendly and personal service that they provide.
We have found that some service users with acquired brain injuries and/or complex needs, on arrival at The Richardson Partnership for Care, have not accessed local communities for years. We facilitate and actively encourage service users to access local facilities as it is an important part of their well-being, rehabilitation and progress towards independence. Centre of excellence Due to historical factors, Northampton has evolved to become a centre of excellence in brain injury rehabilitation. This draws neurological experts to Northampton, which means that we have a larger pool of talented and experienced professionals in the area enabling us to deliver high quality rehabilitation care and support. We work in partnership with other support services if crisis care is required, providing continuity and orientation for service users and improving outcomes. So, although location may be a starting point when placing someone in residential care or for residential rehabilitation, geographical distances can be overcome. It’s the quality of care, well-being and outcomes for service users that should take priority. We also find that in some cases, after a period of specialist rehabilitation, service users require less intensive support and are therefore able to go and live closer to their families.
https://www.richardsoncares.co.uk/importance-location-placing-someone-residential-care/
Sanchez declared negotiations dead on Monday, raising the prospect of Spain facing its fourth election in as many years as the fragmented political system struggles to resolve big problems ranging from Catalan separatism to budget reform. Three months after the Socialists won the biggest share of votes but fell short of a majority in a parliamentary election, talks remain stalled as Sanchez ruled out Iglesias’ demand of a full power-sharing coalition and accused Podemos of acting in bad faith in negotiations. “We haven’t stopped making concessions and we are prepared to enter into coalition negotiations and give more ground still, but it’s key that the Socialists understand that the voters did not vote for a one-party government,” Iglesias told laSexta television on Tuesday. Despite the compromising tone, Iglesias also used the interview to counterattack Sanchez, saying that his calling the negotiations dead was an error. “Anyone with a mandate to form a government can never assume talks as broken off,” he said. Last week Podemos asked its members to vote on whether the party should continue pushing for a power-sharing agreement with the Socialists or instead support a minority Sanchez government. Iglesias said the party will respect the outcome of the vote, which ends on Thursday. A Podemos official said on Monday he was confident Sanchez would come around and ultimately agree to a coalition. Senior party officials across the political spectrum say that a compromise, seemingly unlikely, could yet be found as public pressure builds on parties to avoid an election re-run. Sanchez, whose Socialists lack an outright majority in parliament and have no other obvious allies among the major parties, has until July 25 to win support for a swearing-in vote in the lower house. If he fails, a two-month countdown begins until repeat elections are triggered. Both parties have described as historic the opportunity to form a progressive government months after the far-right won multiple seats in parliament for the first time in decades. (Reporting by Emma Pinedo and Sam Edwards, editing by Andrei Khalip and Angus MacSwan)
Throughout the pandemic, many have observed a significant reduction in the number of individuals traveling. Whether it be opting to drive instead of flying or choosing to eliminate travel from one’s annual routine, many community members have drastically shifted their moral policy regarding vacationing. Nick Brady ’24 commented on his family’s personal cutback, “I feel as if it’s a question of both health and morality—do I even want to risk putting others in harm’s way?” Nick’s comment depicts a clear shift in travel psyche when compared to only a year ago. With so many fewer people traveling than usual, in addition to the new societal norms regarding travel, the question arises: Will COVID-19 result in any long-term changes in emissions? The answer is, unfortunately, likely not. “Projections of global economic activity with and without the pandemic show only a small impact of COVID-19 on emissions,” co-director of MIT’s Program of Global Change John Reilly commented. According to IEA.org, 2020 saw the most significant dip in carbon emissions and the largest decrease in demand for fossil fuel, with a 5.8% reduction in each when compared to previous years. Despite this achievement, global greenhouse gas emissions are still projected to rebound to pre-pandemic levels, with the global climate projected to reach 3.1–3.7 degrees Celsius above average by 2100. Despite this dismaying data, Reilly mentioned a glimmer of possible hope, “The effect on the level of investment that nations are willing to commit to meet or beat their Paris [Climate Agreement] emissions targets has shifted quite significantly.” The pandemic has surprisingly made these goals cheaper and more politically palatable. Financial incentives resulting from the pandemic have also motivated many companies to take a greener approach. According to Shawmut Communications, green programs increased 54% in just the past year. “I just feel better if I’m sourcing the stuff I buy from somewhere I know cares about pressing issues like climate change,” Nick continued. This widespread mindset has allowed companies that implement such policies to grow at twenty-eight times the rate of a standard business, with 46% of businesses reporting partnerships built with like-minded individuals. However, evidence of a rebound back to normalcy makes this achievement possibly temporary. Though travel numbers reported by the Transportation Security Administration have been down by almost nine hundred thousand in recent weeks compared to 2019, the total number of those passing through airports is clearly rising. “It’s strange to see people getting back to a kind of societal normalcy—it feels like it’s been eons since that was last true,” Nick remarked. With Governor Baker ending major COVID-19 restrictions on May 29, the future continues to progress into perplexity and the question of how we can achieve carbon neutrality as a nation descends further into uncertainty.
https://thecentipede.org/2021/06/09/the-repercussions-of-covid-19-on-climate-change/
The Salvation Army is an international Christian church. Its message is based on the Bible; its ministry is motivated by love for God and the needs of humanity. Mission Statement The Salvation Army exists to share the love of Jesus Christ, meet human needs and be a transforming influence in the communities of our world. Core Values The Salvation Army Canada and Bermuda has four core values: Hope: We give hope through the power of the gospel of Jesus Christ. Service: We reach out to support others without discrimination. Dignity: We respect and value each other, recognizing everyone’s worth. Stewardship: We responsibly manage the resources entrusted to us. Position Purpose Summary: Broadview Village has 40+ years of providing outstanding support to adults with developmental disabilities and mental health challenges. Broadview Village residences provide around-the-clock support to individuals with developmental disabilities and/or mental health challenges and we are currently hiring a Full-Time Sleep Over + Breakfast Residential Counsellor. This position reports to the Residential Manager and is currently assigned to work in the Scarborough (Birchmount/Finch) area. This position is full-time, 35 hours per week and currently works Sunday – Thursday 11:00 PM – 10:00 AM on a schedule noted below (some flexibility will be occasionally required). Location and scheduling assignments may change based on organizational and programmatic needs. This position requires someone who has a passion for and takes initiative in the creation and implementation of programming in areas such as: personal care & hygiene, advocacy, and community integration. The capacity to work collaboratively within a team is required, as well as with a large number of stakeholders including: healthcare professionals, consultants, colleagues, community members, family members and friends. This position also requires a resilient direct support professional who is confident in navigating conflict, overcoming obstacles, and who is creative in providing support to individuals in crisis. The Residential Counsellor must ensure that their activities are in compliance with established legislation, and policies and procedures including: Employment Standards; Occupational Health & Safety Standards; Collective Agreements; Payroll Procedures; Salvation Army Accreditation; and Services and Supports to Promote the Social Inclusion of Persons with Developmental Disabilities Act, 2008 (SIPDDA). Responsibilities - Providing support for residents’ personal development as decided through individual support planning (ISP) as well as advocating on their behalf/facilitating self-advocacy to community supports in collaboration with colleagues and other healthcare professionals - Sleep over at residential site as a backup support to ensure the well-being of residents throughout the night - Supporting residents with morning routines including waking up, personal hygiene, breakfast, etc. and helping them transition to daytime activities such as day programs, community outings, etc. 
- Completing documentation and maintaining records as required - Medication administration - Ensuring learning opportunities are provided utilizing agency and community resources - Fostering independence in residents including supervising and assisting residents with meal preparation and clean-up - Maintaining cleanliness of program/site, ensuring health and safety protocols are closely followed - Providing orientation and/or support to co-workers, students, volunteers and/or family members - Following and promoting the agency’s and House program goal and philosophy - Other duties as assigned Qualifications - Completed Developmental Services Worker Diploma or other degree/diploma related to human services is preferred - Minimum 6-12 months’ experience working with people with developmental disabilities and preferably working with individuals who have a dual diagnosis of mental health challenges and developmental disabilities - Experience with implementing Individual Support Plans - Strong counselling/de-escalation support required - Medication administration experience preferred - Valid Standard First Aid Certificate (including CPR) - Ability to achieve Nonviolent Crisis Intervention certificate upon hire, the employer will provide training Successful candidate will be required to provide upon hiring: - A clear vulnerable sector screening - Valid driver’s license an asset, and clean driver’s abstract required - Participate in our online Armatus Abuse Training and Health and Safety training required upon hiring, as well as updated annually LOCATION: Scarborough (Birchmount/Finch) HOURS: 35 hours per week Sunday – Thursday 11:00 PM - 10:00 AM (Sleep Over shift hours 11:00 PM - 7:00 AM and Breakfast shift hours 7:00 AM - 10:00 AM). Flexibility will occasionally be required. The Salvation Army will accommodate candidates as required under applicable human rights legislation. If you require a disability-related accommodation during this process, please inform us of your requirements. We thank all applicants, however, only those candidates to be interviewed will be contacted. This is a Bargaining Unit position represented by OPSEU Local 550 If there is a competition number associated with this posting, please include within the subject line of your email, fax or regular mail correspondence. The Salvation Army will accommodate candidates as required under applicable Human Rights Legislation. If you require a disability related accommodation during this process, please inform us of your requirements. In accordance with The Salvation Army policy and legislated requirements, employment is conditional upon the verification of credentials and completion of a background check. Internal Applicants: Please advise Department Heads of your intentions prior to submitting your application.
https://salvationarmy.ca/blog/jobs/full-time-sleep-over-breakfast-residential-counsellor-union-position/
Dual Beam Multi-system FIB (JEOL Model JIB-4501)
Introduction: The JIB-4501 is a multi-beam processing system that incorporates a thermionic SEM and a high-performance Ga ion column. The instrument can be used as an SEM system to observe specimen surfaces, or section milling of a region can be performed using the FIB. The JIB-4501 column arrangement has been designed so that a cross section milled using the FIB can be observed with the SEM without changing the stage tilting angle. The FIB with Pt deposition cartridge can be used for fine milling and TEM thin-film sample preparation. A post pick-up system with an optical microscope is provided for sample transfer. Third-party software, the Nanometer Pattern Generation System (NPGS) for electron beam lithography (EBL), is also installed.
Features / Applications:
- Ga liquid metal ion source
- 1 to 30 kV (in 5 kV steps)
- Up to 60 nA (at 30 kV)
- 12 steps (motor drive)
- Pt deposition cartridge
- Rectangle, line, and spot milling
- Bulk-specimen 5-axis goniometer stage
Notes to user: Specimen surface must be even and without volatile matter.
Supplier information: https://www.jeol.co.jp/en/products/detail/JIB-4501.html https://www.jcnabity.com/usernote.htm
https://www.polyu.edu.hk/umf/facility/cem/109-dual-beam-multi-system-fib-jeol-model-jib-4501/
United States Department of Agriculture
Physical and mechanical properties were obtained for approximately 600 2 by 4’s of Ramon (Brosimum alicastrum) and 600 2 by 4’s of Danto (Vatairea lundellii), from Guatemala. The lumber was visually graded according to U.S. grading rules. Full-sized tests were conducted in bending and in tension and compression parallel to the grain. Clearwood tests were conducted in... This study evaluated fire and bending properties of blockboards with various fire retardant treated veneers. Blockboards were manufactured using untreated fir strips and sandwiched between treated ekaba veneers at final assembly. The veneers were treated with either boric acid (BA), disodium octoborate tetrahydrate (DOT), alumina trihydrate (ATH), or a BA/DOT mixture.... The effect of moisture on longitudinal stress-wave velocity (SWV), bending stiffness, and bending strength of commercial oriented strandboard, plywood, particleboard, and southern pine lumber was evaluated. It was shown that the stress-wave velocity decreased in general with increases in panel moisture content (MC). At a given MC level, SWV varied with panel type and... Current procedures used to sort round timber beams into structural grades rely on visual grading methods and property assignments based on modification of clear wood properties. This study provides the technical basis for mechanical grading of 228 mm (9 in.) diameter round timbers. Test results on 225 round Engelmann spruce–alpine fir–lodgepole pine beams demonstrate... Laminated hollow wood composite poles represent an efficient utilization of the timber resource and a promising alternative for solid poles that are commonly used in power transmission and telecommunication lines. The objective of this study was to improve the performance of composite poles by introducing the bio-mimicry concept into the design of hollow wood... The migration of alluvial channels through the geologic landform is an outcome of the natural erosive processes. Mankind continually attempts to stabilize channel meandering processes, both vertically and horizontally, to reduce sediment discharge, provide boundary definition, and enable economic development along the river's edge. A critical component in the... This paper describes an effort to refine a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the frequency response of several simple-span, sawn timber beam (with plank deck) bridges located in St. Louis County, Minnesota. Static load deflections were also measured to... Many factors result in trees with non-straight stems. An important prerequisite to investigating the causes of stem deformity is an ability to assess stem displacement. An ideal system would be easy to implement, be objective, and result in an index that incorporates the essential characteristics of the stem deformity into a dimensionless number. We tested a number of... It is generally accepted that there should be an upward repetitive member allowable property adjustment. ASTM D245 (2011c) and ASTM D1990 (2011b) specify a 1.15 factor for allowable bending stress. This factor is also listed in ASTM D6555 (2011a, Table 1). In this technical note, sources of confusion regarding appropriate repetitive member factors are identified. This... Fluvial systems respond to changes in boundary conditions in order to sustain the flow and sediment supplied to the system.
Local channel responses are typically difficult to predict due to possible effects from upstream, downstream, or local boundary conditions that cause changes in channel or planform geometry. Changes to the system can threaten riverside... In a unique way, IRENI (Infrared Environmental Imaging), operated at the Synchrotron Radiation Center in Madison, combines IR spectroscopy and IR imaging, revealing the chemical morphology of a sample. Most storage ring based IR confocal microscopes have to overcome a trade-off between spatial resolution versus... In order to better utilize agricultural fibers as an alternative resource for composite panels, several variables were investigated to improve mechanical and physical properties of agro-based fiberboard. This study focused on the effect of fiber morphology, slenderness ratios (L/D), and fiber mixing combinations on panel properties. The panel construction types were also... Whereas many research activities focus on developing value-added processes that use forest residues, scientists must also investigate the mechanical properties of products made from recycled fiber resources. This study compared the tensile and bending properties of binderless panels made from recycled corrugated containers with properties of panels made from lodgepole... In order to utilize agricultural waste fibers as an alternative resource for composites, a number of variables were investigated to determine whether the mechanical and physical properties of agro-based fiberboard could be improved. Fibers were classified into four different mesh sizes and used to evaluate the effect of fiber size on the mechanical and physical... When wood fiber is exposed to significant heat, its strength decreases. It has long been known that prolonged heating at temperatures over 66°C (150°F) can cause a permanent loss in strength. The National Design Specification (NDS) provides factors (Ct) for adjusting allowable properties when structural wood members are exposed to temperatures between 38°C (100°F) and... Mule deer populations in central Oregon are in decline, largely because of habitat loss. Several factors are likely contributors. Encroaching juniper and invasive cheatgrass are replacing deer forage with high nutritional value, such as bitterbrush and sagebrush. Fire suppression and reduced timber harvests mean fewer acres of early successional forest, which also... Physical, mechanical, and fire properties of injection-molded wood flour/polypropylene composites incorporating different contents of boron compounds (borax/boric acid and zinc borate) and phosphate compounds (mono- and diammonium phosphates) were investigated. The effect of the coupling agent content, maleic anhydride-grafted polypropylene, on the properties of... Oriented strand board (OSB) is a commodity product subject to market fluctuation. Development of a specialty OSB could lead to a better, and more stable, market segment for OSB. It was demonstrated in a previous study (Barbuta et al. in Eur. J. Wood Prod. 2010), that OSB may be designed to obtain a high bending modulus of elasticity in the parallel direction, close to... Two important wood properties are stiffness (modulus of elasticity or MOE) and bending strength (modulus of rupture or MOR). In the past, MOE has often been modeled as a Gaussian and MOR as a lognormal or a two or three parameter Weibull. It is well known that MOE and MOR are positively correlated. To model the simultaneous behavior of MOE and MOR for the purposes of...
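The MOE/MOR relationship described in the last abstract above can be illustrated with a short simulation. The sketch below is not the fitting procedure used in that study; it simply draws positively correlated values through a Gaussian copula, with a Gaussian marginal for MOE and a two-parameter Weibull marginal for MOR, and with the correlation and marginal parameters chosen arbitrarily for illustration.

# Sketch (Python): correlated MOE (Gaussian) and MOR (Weibull) via a Gaussian copula.
# The correlation and marginal parameters are illustrative assumptions, not values
# estimated in the study summarized above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.6                                      # assumed MOE-MOR correlation
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=10_000)
u = stats.norm.cdf(z)                          # map to uniform scores (the copula step)

moe = stats.norm.ppf(u[:, 0], loc=11.0, scale=1.8)        # GPa, assumed Gaussian marginal
mor = stats.weibull_min.ppf(u[:, 1], c=4.5, scale=55.0)   # MPa, assumed Weibull marginal

print("sample MOE-MOR correlation:", round(float(np.corrcoef(moe, mor)[0, 1]), 2))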
Biological durability is an important feature for wood-plastic composites (WPC) intended for outdoor applications. One route to achieving WPC products with increased biological durability is to use wood preservative agents in the formulation of the WPC. Another option could be to use a chemically modified wood component that already exhibits increased resistance to...
https://www.fs.usda.gov/treesearch/search?keywords=%22bending%22&f%5B0%5D=year%3A%222012%22&f%5B1%5D=year%3A%222006%22
WRKY transcription factors (TFs) participate in various physiological processes of plants. Although WRKY genes have been well studied in model plants, knowledge of the functional roles of these genes is still extremely limited in cotton. In this study, a group IId WRKY gene from cotton, GhWRKY42, was isolated and characterized. Our data showed that GhWRKY42 localized to the nucleus. A transactivation assay in yeast demonstrated that GhWRKY42 was not a transcriptional activator. A β-glucuronidase (GUS) activity assay revealed that the promoter of GhWRKY42 showed fragment deletion activity in Nicotiana tabacum and was mainly expressed in the roots, stems and leaves of ProGhWRKY42::GUS transgenic Arabidopsis plants. Quantitative real-time PCR (qRT-PCR) analysis indicated that GhWRKY42 was up-regulated during leaf senescence and was induced after exposure to abiotic stresses. Constitutive expression of GhWRKY42 in Arabidopsis led to a premature aging phenotype, which was correlated with an increased number of senescent leaves, reduced chlorophyll content and elevated expression of senescence-associated genes (SAGs). In addition, virus-induced gene silencing (VIGS) was used to silence the endogenous GhWRKY42 gene in cotton, and this silencing reduced plant height. Our findings indicate that GhWRKY42 is involved in abiotic stress responses, premature leaf senescence and stem development. This work establishes a solid foundation for further functional analysis of the GhWRKY42 gene in cotton. Plants are constantly challenged by various factors that affect plant growth and development throughout their life cycle. To combat these challenges, some responsive genes, including WRKY transcription factors (TFs), are induced to help plants adapt through physiological and morphological changes . WRKY TFs are plant-specific proteins and constitute one of the largest TF families in plants . WRKY TFs share the common feature of a highly conserved WRKY domain that consists of the peptide sequence motif WRKYGQK at the N-terminus and a zinc-finger-like motif at the C-terminus . WRKY TFs have one or two conserved WRKY domains, and these domains contain a Cx4-5Cx22-23HxH or Cx7Cx23HxC zinc-finger-like motif. Based on the number of conserved WRKY domains and the structural characteristics of the zinc-finger-like motifs, WRKY TFs can be categorized into group I, group II or group III. Group II can be further divided into subgroups IIa, IIb, IIc, IId and IIe [3–7]. WRKY TFs can recognize and bind to the W-box sequences [TTGAC(C/T)] in the promoter region of target genes to participate in regulatory networks . In plants, WRKY TFs are mainly involved in defense responses, trichome development, plant growth and development and leaf senescence . Various TFs are involved in modulating leaf senescence, and 1533 TFs have been identified via leaf senescence transcriptome analyses in Arabidopsis [10, 11]. WRKY TFs are quantitatively important members of those TFs involved in leaf senescence . In Arabidopsis, AtWRKY6 is associated with the senescence process by targeting the promoter of the SIRK gene, which likely encodes a signaling component related to leaf senescence . AtWRKY45 was recently reported to interact with the DELLA protein RGA-LIKE1 (RGL1) and to directly target the SAG12, SAG13, SAG113 and SEN4 genes, to positively modulate leaf senescence via the gibberellic acid-mediated signaling network . 
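Because the passage above gives the W-box consensus as TTGAC(C/T), a simple motif scan makes the binding-site idea concrete. The sketch below is only an illustration of locating W-box matches in a promoter sequence with a regular expression; it is not the cis-element analysis pipeline used in this study, and the example sequence is invented.

# Illustrative W-box [TTGAC(C/T)] scan of a promoter sequence (Python).
# The sequence below is made up; a real analysis would use the cloned promoter fragment,
# and the reverse strand is omitted here for brevity.
import re

W_BOX = re.compile(r"TTGAC[CT]")

def find_w_boxes(seq):
    # Return (start_position, matched_motif) pairs on the given strand.
    return [(m.start(), m.group()) for m in W_BOX.finditer(seq.upper())]

promoter = "ATGCTTGACCTTAGGAACTTGACTCCGTTTGACAAT"  # invented example
print(find_w_boxes(promoter))  # [(4, 'TTGACC'), (18, 'TTGACT')]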
In rice, OsWRKY42 promotes senescence in transgenic rice plants by binding to the promoter of OsMT1d to repress ROS scavenging . OsWRKY23 is markedly increased during dark-induced leaf senescence, and OsWRKY23-overexpressing lines can accelerate leaf senescence under dark conditions . Furthermore, TaWRKY7 from wheat can significantly promote senescence in transgenic Arabidopsis under dark conditions . According to previous reports, WRKY TFs are thought to be involved in the regulation of plant tissue growth and development. For example, VvWRKY2 is specifically expressed in the lignified cells of young grapevine stems, and overexpression of VvWRKY2 in N. tabacum affects the lignin biosynthesis pathway, thus influencing xylem development . Li et al. reported that Atwrky13 mutants exhibit weaker stems due to altered development of parenchyma cells . Another WRKY TF, WRKY71/EXB1, positively regulates plant branching by controlling axillary meristem initiation and bud activities . In addition, the pollen-specific WRKY TF AtWRKY34 is phosphorylated by two mitogen-activated protein kinases, MPK3 and MPK6, in the regulation of male gametogenesis . Furthermore, emerging evidence has demonstrated that WRKY TFs are widely involved in stress responses. For example, GhWRKY40 is involved in pathogen responses , and GhWRKY68 is involved in salt and drought stress responses . These reports further emphasize the importance of studying WRKY TFs. Cotton (Gossypium hirsutum) is an important economic crop that is widely cultivated around the world. As a significant source of fiber, oil and biofuel products, cotton has become an important industrial raw material. In field production, the growth and yield of cotton are severely restricted by both external environmental factors and internal factors. A growing number of studies have shown that WRKY TFs play important roles in the responses to these factors. Therefore, it is particularly important to study the functional roles of WRKY genes in cotton. In the present study, a group IId WRKY gene, GhWRKY42, was isolated and characterized. We performed a preliminary analysis of the gene structure, evolutionary relationships and expression patterns of GhWRKY42. Overexpression of GhWRKY42 in Arabidopsis accelerated leaf senescence. In addition, silencing GhWRKY42 in VIGS plants significantly reduced plant height. We previously identified several WRKY genes in cotton that were up regulated by abiotic stresses, during leaf senescence and in vegetative organs using cDNA microarray and RNA-Seq data . Among them, we selected GhWRKY42 for further study. The sequence analysis results showed that GhWRKY42 contained a 1038-bp ORF, encoding 345 amino acids. The predicted protein isoelectric point was 9.38, and the molecular weight was 37.88 kDa. The results of comparative analysis of the GhWRKY42 coding and genomic sequences indicated that GhWRKY42 harbored three exons and two introns (Fig. 1a). The multiple sequence alignment results revealed that the GhWRKY42 protein contained one WRKY domain, consisting of a conserved WRKYGQK core sequence and a C2H2 (C-X5-C-X23-H-X1-H) zinc-finger-like motif. Therefore, GhWRKY42 belongs to the group II WRKY subfamily according to Eulgem et al. . Furthermore, a putative nuclear localization signal (NLS) sequence (KKRK) and a conserved HARF structural motif were found within the GhWRKY42 amino acid sequence, which are shared among group IId WRKY proteins (Fig. 1b). 
A phylogenetic tree was built to evaluate the evolutionary relationship between GhWRKY42 and other group II WRKY members from different species (Fig. 2). As shown in Fig. 2, GhWRKY42 was closely associated with group IId members, which was consistent with the results of the amino acid alignment analysis. Consistent with the identified NLS sequence, the subcellular location prediction software Plant-mPloc (http://www.csbio.sjtu.edu.cn/bioinf/plant-multi/) predicted that the GhWRKY42 protein localizes to the nucleus. To confirm our prediction, the 35S-GhWRKY42::GFP vector was constructed and transferred into onion epidermal cells. The 35S::GFP construct served as a control. The onion epidermal cells harboring the 35S-GhWRKY42::GFP construct emitted green fluorescence predominantly in nuclei (Fig. 3a), whereas 35S::GFP fluorescence occurred widely throughout the cell . The transcriptional activation of GhWRKY42 was examined with a GAL4 yeast system. The plasmids pGADT7-largeT+pGBKT7-GhWRKY42 (experimental group), pGADT7-largeT+pGBKT7-p53 (positive control) and pGADT7-largeT+pGBKT7-laminC (negative control) were transformed into Y2HGold yeast cells. All transformants grew well on SD/−Trp/−Leu medium. The transformants of the positive control grew well on SD/−Trp/−Leu/-His/−Ade medium, but similar to the negative control, the experimental group did not grow on this medium (Fig. 3b). A 1943-bp GhWRKY42 promoter fragment was obtained, and putative cis-elements were analyzed using the PlantCARE database. A group of putative cis-elements were identified in the promoter region, which were mainly involved in defense, stress, light and metabolic responses (Additional file 1: Table S1). The results of GUS staining for the promoter deletion constructs showed that pBI121 (positive control) as well as ProGhWRKY42::GUS (− 1943 bp to − 1 bp), ProGhWRKY42–1::GUS (− 1407 bp to − 1 bp) and ProGhWRKY42–2::GUS (− 778 bp to − 1 bp) produced blue dots, whereas ProGhWRKY42–3::GUS (− 391 bp to − 1 bp) did not (Fig. 4a). The GUS staining results for different tissues showed that GUS was mainly active in the roots, stems and leaves of ProGhWRKY42::GUS transgenic Arabidopsis plants and was also detectable in the stamens but not in the pistils, petals or pods (Fig. 4b). To evaluate the expression patterns of GhWRKY42 following various stresses, ten-day-old cotton seedlings were exposed to MeJA, ABA, drought and salt treatments. As shown in Fig. 5, GhWRKY42 was found to be differentially up-regulated under MeJA, ABA, drought and salt treatments. GhWRKY42 expression was rapidly induced at 2 h after MeJA treatment, reaching its maximum accumulation at 4 h (4.6-fold induction) and then gradually decreasing (Fig. 5a). Similarly, GhWRKY42 expression was induced at 2 h after ABA treatment but exhibited maximum transcript levels at 6 h with 2.7-fold induction (Fig. 5b). Under drought treatment, the GhWRKY42 transcript was differently elevated at different time points and peaked at 12 h (6.5-fold induction) (Fig. 5c). However, under salt treatment, the expression of GhWRKY42 was dramatically increased at 2 h, and a high expression level was maintained in the subsequent 4–12 h (Fig. 5d). qRT-PCR was performed to detect the transcript levels of GhWRKY42 in the roots, stems, leaves, petals, pistils, stamens, fiber and ovules. GhWRKY42 was found to be differentially expressed in different tissues. 
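The fold-induction values reported above (for example, the 4.6-fold peak under MeJA) are the kind of numbers produced by relative qRT-PCR quantification. The study does not state which quantification method it used, so the sketch below simply shows the widely used 2^-ΔΔCt (Livak) calculation as one plausible way such fold changes are derived; the Ct values in the example are invented.

# Hedged sketch (Python) of the 2^-ΔΔCt (Livak) relative-quantification calculation.
# The actual method and reference gene used in the study above are not specified;
# the Ct values here are invented for illustration only.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize target to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                  # compare treated vs. control
    return 2 ** (-dd_ct)

# Example: a ΔΔCt of about -2.2 corresponds to a ~4.6-fold induction.
print(round(fold_change(22.8, 18.0, 25.0, 18.0), 2))     # 4.59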
GhWRKY42 was strongly expressed in vegetative organs, including the stems, roots and leaves but was weakly expressed in the petals, pistils, stamens, fiber and ovules (Fig. 6a). To evaluate the expression pattern of GhWRKY42 during leaf senescence, qRT-PCR was performed using cotton leaves at different senescence stages. The transcriptome data analysis [23, 25] showed that the expression level of GhWRKY42 gradually increased with the senescence of leaves (Fig. 6b). We further examined the expression level of GhWRKY42 in true leaves of CCRI74 plants at different senescence stages ; the results revealed that the transcript levels of GhWRKY42 gradually increased as the leaves aged (Fig. 6c). In addition, the expression level of GhWRKY42 was detected in cotyledon samples from the early-aging cotton variety CCRI10 and the non-early-aging variety Liao4086. The qRT-PCR results showed that the transcript levels of GhWRKY42 increased gradually during cotyledon senescence and were significantly higher in CCRI10 than in Liao4086 (Fig. 6d). The transcript of GhWRKY42 was highly accumulated in the senescent leaves of cotton. To further clarify the functional role of GhWRKY42 in response to leaf senescence, GhWRKY42 was transformed into Arabidopsis plants. The transgenic lines were confirmed by qRT-PCR (Fig. 7a). As shown in Table 1, the GhWRKY42 transgenic plants flowered earlier and had fewer rosette leaves than the WT plants. In addition, the senescence phenotypes of the transgenic and WT plants were observed at different developmental stages, and the ratio of senescent leaves was counted. Compared with the WT, the transgenic lines exhibited severe aging phenotypes at four, five and seven weeks (Fig. 7b), which were reflected by a significantly higher ratio of senescent cotyledons at four weeks (Fig. 7c), a higher ratio of senescent true leaves (rosette leaves) at five weeks (Fig. 7d) and a lower chlorophyll content at seven weeks (Fig. 7e). To elucidate the possible mechanisms of GhWRKY42-mediated precocious senescence, we examined the effects of GhWRKY42 on the transcript levels of senescence-associated marker genes during natural leaf senescence. The genes included AtNAP (NAC domain TF) (At1g69490), AtSAG12 (At5g45890), AtSAG13 (At2g29350), AtWRKY6 (WRKY DNA-binding protein 6) (At1g62300) and AtORE1/AtNAC6 (NAC domain TF) (At5g39610), which are all factors that are up-regulated during aging in Arabidopsis [26–30]. As shown in Fig. 8a-e, the expression of all senescence-associated marker genes in the transgenic plants was significantly up-regulated compared with that in the WT plants. In addition, we identified the expression levels of two ABA-responsive genes, AtABF2 (ABA-responsive element binding factor 2) (AT1G45249) and AtHAB1 (hypersensitive to ABA1) (AT1G71770) , in Arabidopsis. The expression levels of both genes were significantly elevated in the transgenic plants compared with the WT plants (Fig. 8f, g). To further identify the functional role of GhWRKY42, VIGS of GhWRKY42 was performed using the cotton variety CCRI10. Two weeks later, the cotton plants harboring pCLCrVA-PDS showed an albino phenotype, suggesting that the VIGS assay was successful (Fig. 9a). qRT-PCR was performed to evaluate the effect of gene silencing. Expression level of GhWRKY42 was significantly lower in the silenced plants (pCLCrVA-GhWRKY42) than in the control plants (pCLCrVA) (Fig. 9b). The expression of the senescence-associated marker gene GhNAP was also markedly reduced in the silenced plants (Fig. 9c). 
As shown in Fig. 9a, the silenced plants exhibited a lower plant height than the control plants, and this reduced-height phenotype was statistically analyzed (Fig. 9d).

The WRKY TF family is one of the largest superfamilies of regulatory proteins in plants. In the past several years, growing evidence has shown that members of the WRKY gene family mainly participate in stress responses, plant growth and development, and leaf senescence. However, studies on WRKY TFs have mainly focused on model plant species, and only a few of these genes have been evaluated in cotton. In this study, we isolated a group IId GhWRKY42 gene from upland cotton and characterized its functional roles.

The results of multiple sequence alignment and phylogenetic tree analyses revealed that the GhWRKY42 gene is a member of the group IId WRKY family. Subcellular localization analysis revealed that the GhWRKY42 protein is located in the nucleus. These findings are consistent with the predicted nuclear-targeting signal sequence and with the results of studies on another group IId TF, GhWRKY11, in cotton. The results of transcriptional activation analysis in yeast showed that the GhWRKY42 protein has no transcriptional activation activity; these findings are similar to those reported for PtrWRKY40 from Populus trichocarpa. These results suggest that GhWRKY42 may be a nuclear protein that functions in the cell nucleus but may not be a transcriptional activator.

The expression patterns of genes are often used as an indicator of their functional roles. For example, GhWRKY17 has been shown to be induced by salt and drought treatments, and overexpression of GhWRKY17 in N. tabacum results in a phenotype that is more sensitive to drought and salt stresses. Previous studies have shown that a large number of genes can be induced by various abiotic stresses. In our study, GhWRKY42 was demonstrated to be differentially induced under MeJA, ABA, drought and salt treatments in cotton. In addition, many stress response cis-elements were found in the promoter region of GhWRKY42. These findings suggest that GhWRKY42 might be involved in the regulation of abiotic stress networks.

The 5′ promoter deletion assay is often used to investigate promoter expression characteristics and the functional roles of regulatory elements in promoter regions. The structure and function of promoter deletion fragments can be inferred by evaluating promoter deletion construct-driven reporter genes in transgenic plants. In our study, a promoter deletion assay showed that ProGhWRKY42–3::GUS (− 391 bp to − 1 bp) was unable to activate expression of the GUS gene and that ProGhWRKY42–2::GUS (− 778 bp to − 1 bp) contained the shortest sequence exhibiting promoter activity. Therefore, it is speculated that critical cis-elements may exist within the − 778 bp to − 391 bp upstream region of the GhWRKY42 promoter. When the cis-elements in this region were predicted, the region was found to contain not only TATA-box and light response elements but also stress response elements such as ABRE (ABA response element), the CGTCA motif (MeJA response element), HSE (heat response element) and the TGACG motif (MeJA response element). These elements may play an important role in ensuring that the promoter drives the expression of downstream genes.

WRKYs can directly bind to the W-box [TTGAC(C/T)] in the promoter of target genes to modulate stress responses, plant development and leaf senescence [40, 41]. Liu et al.
reported that W-box and G-box cis-elements are important positive regulators during leaf senescence in rice, and that both elements are significantly enriched in the promoter regions of up-regulated TFs (including WRKYs) that regulate leaf senescence. W-box and G-box cis-elements were identified in the promoter region of GhWRKY42, suggesting that GhWRKY42 may be involved in leaf senescence and may be regulated by other GhWRKYs or by the gene itself during this process. These findings lay the foundation for further analysis of the upstream regulatory mechanism of GhWRKY42.

Senescence is a natural phenomenon that prevails among all living organisms, including plants. During leaf senescence, genetic and environmental factors affect mature leaves, leading to the initiation of leaf senescence; this senescence is accompanied by chlorophyll, membrane, protein and nucleic acid degradation as well as nutrient relocation from senescing leaves to growing organs or storage tissues [42–45]. Crop productivity is mainly determined by the yield per area, and leaf senescence severely affects crop yield. Thus, studying the mechanism of leaf senescence is particularly important. In the present study, the expression level of GhWRKY42 was found to be up-regulated during natural senescence and was significantly higher in the early-aging cotton variety CCRI10 than in the non-early-aging variety Liao4086. It has been reported that the GhNAC12 gene, which is more highly expressed in CCRI10 than in Liao4086 during leaf senescence, causes an early-aging phenotype in Arabidopsis. Therefore, GhWRKY42 may be involved in the aging process and may play a positive regulatory role during leaf senescence. Consistent with our prediction, overexpression of GhWRKY42 did accelerate leaf senescence in transgenic Arabidopsis. In a previous study, overexpression of AtWRKY45 in Arabidopsis was observed to up-regulate expression of representative SAGs during age-triggered leaf senescence. Phenotypic observations of overexpressing Arabidopsis lines and RNAi cotton lines show that GhNAP positively regulates leaf senescence through ABA-mediated pathways.

Phytohormones, such as ABA, ethylene, MeJA and salicylic acid, have been demonstrated to promote leaf senescence. The ABA content increases in aged leaves, and exogenously applied ABA promotes expression of some SAGs. ABA-responsive genes, which are involved in the ABA signaling pathway, are induced in senescing Arabidopsis. In our study, SAGs and ABA-responsive genes accumulated to significantly higher levels in the GhWRKY42 transgenic lines, suggesting that GhWRKY42 may be associated with leaf senescence via ABA-mediated pathways.

Previous studies have shown that some genes are closely associated with plant height. Wei et al. identified the QTL DTH8 in rice, which includes the HAP3 gene and regulates yield, plant height and flowering time. WRKY TFs such as LP1 in foxtail millet and OsWRKY78 in rice have been shown to play an important role in stem elongation and plant height [51, 52]. In our study, we detected high expression levels and strong GUS activity of GhWRKY42 in the stem and reduced height in VIGS plants. Therefore, we hypothesized that GhWRKY42 might be related to stem development. Plant height is an important plant architecture trait, and decreased height is beneficial for mechanical harvesting and lodging resistance. Our findings provide a basis for breeding new cotton varieties with an ideal plant type.
However, further studies are needed to elucidate the pathways involved in the GhWRKY42-mediated mechanism.

GhWRKY42, a group IId WRKY member, is closely associated with leaf senescence and plant development. GhWRKY42 is located in the nucleus and exhibits no transcriptional activation activity. GhWRKY42 is induced by leaf senescence and various stresses. Ectopic expression of GhWRKY42 in Arabidopsis promotes leaf senescence, and VIGS cotton plants exhibit a decreased plant height phenotype. Our work could lead to a better understanding of the functional roles of WRKY genes in cotton. However, how the GhWRKY42 gene regulates leaf senescence and plant height development requires further study and clarification.

Two early-aging cotton varieties, CCRI10 and CCRI74, and a non-early-aging variety, Liao4086, were used in our experiments. The cotton varieties were cultivated in the field of the Cotton Research Institute of the Chinese Academy of Agricultural Sciences (Anyang, Henan, China). Different tissues were collected from CCRI10 plants. Roots and stems were collected from two-week-old seedlings. Leaves were collected at the newly flattened stage. Petals, pistils and stamens were sampled at anthesis, and fiber and ovules were harvested at 10 days post anthesis. To evaluate the expression pattern of GhWRKY42 during leaf senescence, cotyledons were collected from two cotton varieties, CCRI10 and Liao4086, which exhibit different aging characteristics. We collected cotyledon samples weekly at eight different developmental stages, ranging from the flattened cotyledon stage to the completely aged stage. The expression patterns of GhWRKY42 were further evaluated in true leaves of the early-aging cotton variety CCRI74 at five aging stages, as described previously. Each sample included material from eight different individual plants, and we performed three repetitions for each sample.

To evaluate the stress response of GhWRKY42 in cotton, 10-day-old CCRI10 cotton seedlings were planted in pots for the subsequent stress treatments. The seedlings were grown in a growth chamber at 25 °C with a 16 h light/8 h dark cycle. For the abiotic stress treatments, the seedlings were irrigated with 15% polyethylene glycol 6000 (PEG6000) or 200 mM sodium chloride (NaCl); for the signaling molecule treatments, the seedlings were sprayed with 100 μM methyl jasmonate (MeJA) or 200 μM abscisic acid (ABA). Each sample included material collected from eight uniform plants, and each treatment was repeated three times. The samples were harvested at 0 h, 2 h, 4 h, 6 h, 8 h and 12 h and quickly frozen in liquid nitrogen for subsequent RNA extraction.

To amplify the full-length cDNA and genomic sequences of GhWRKY42, primers were designed based on the coding sequence of GhWRKY42 (accession KF669797) submitted to NCBI by Dou et al. The primers used for this purpose are listed in Additional file 2: Table S2. The full-length cDNA and genomic fragments of GhWRKY42 were amplified from cDNA and DNA, respectively, obtained from CCRI10 leaves at the five-leaf stage. The fragments were subsequently inserted into the pMD18-T vector (TaKaRa, China) and transformed into Escherichia coli DH5α competent cells for sequencing. The genomic and coding sequences of GhWRKY42 were submitted to the Gene Structure Display Server online software (GSDS2.0) (http://gsds.cbi.pku.edu.cn/) to predict gene structures.
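The phylogenetic analysis shown in Fig. 2 was generated with the desktop tools named in the next paragraph (DNAMAN for alignment and MEGA 7 for tree building). For readers who prefer a scriptable route, a roughly equivalent neighbor-joining tree can be produced with Biopython, as in the minimal sketch below; the input file name, the FASTA alignment format and the identity distance model are illustrative assumptions, not the settings used in this study.

# A neighbor-joining tree from an existing protein alignment (illustrative only).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical input: WRKY protein sequences that are already aligned, in FASTA format.
alignment = AlignIO.read("wrky_proteins_aligned.fasta", "fasta")

# Pairwise distances from the fraction of non-identical positions, then an NJ tree.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)

# Show the topology in the console and save it in Newick format.
Phylo.draw_ascii(tree)
Phylo.write(tree, "wrky_nj_tree.nwk", "newick")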
Multiple sequence alignment was conducted using DNAMAN software, and a phylogenetic tree was built using MEGA 7 software. The GhWRKY42 promoter fragment was amplified from genomic DNA, and the online software PlantCARE (http://bioinformatics.psb.ugent.be/webtools/plantcare/html/) was employed to predict cis-acting elements.

Total RNA was isolated using the RNAprep Pure Plant Kit (Polysaccharides & Polyphenolics-rich) (Tiangen, China). One microgram of total RNA was used for cDNA synthesis in a 20 μl reaction system using a PrimeScript™ RT reagent kit with gDNA Eraser. The cDNA was diluted 5-fold for qRT-PCR. Transcript levels were detected using a 7500 Real-Time PCR system (Applied Biosystems) and SYBR® Premix Ex Taq™ II (Tli RNaseH Plus) (TaKaRa). The 20 μl reaction volume contained the following components: 10 μl of SYBR Premix Ex Taq II (Tli RNaseH Plus) (2×), 0.8 μl of the PCR forward primer (10 μM), 0.8 μl of the PCR reverse primer (10 μM), 0.4 μl of ROX Reference Dye II (50×), 2 μl of cDNA and 6 μl of ddH2O. The PCR amplification procedure was as follows: a pre-denaturation step at 95 °C for 30 s; 40 cycles of 95 °C for 5 s and 60 °C for 34 s; and a melting curve step at 95 °C for 15 s, 60 °C for 1 min and 95 °C for 15 s. GhActin and AtActin2 were used as reference genes. The 2−ΔΔCT method was applied to calculate relative expression levels. Three independent experiments were performed, and all reactions were run with three technical replicates.

The open reading frame (ORF) of GhWRKY42 without the termination codon was cloned into the pBI121-GFP vector to generate the 35S-GhWRKY42::GFP construct, driven by the cauliflower mosaic virus 35S promoter. The 35S-GhWRKY42::GFP plasmid was extracted to obtain a plasmid concentration of at least 1 μg/μl. The inner epidermis of a fresh onion was cut into approximately 1.5 × 1.5 cm pieces with a scalpel on a clean bench. The epidermal pieces were then transferred to solid Murashige and Skoog (MS) medium and cultivated at 28 °C for 3–6 h in darkness. The gene gun device was sterilized and placed on a clean bench, and the bombardment chamber and accessories were cleaned with 75% alcohol. After the particle carrier membrane was washed with 70% and 100% alcohol, plasmids coated onto gold particles were added to the middle of the particle carrier membrane. After the membrane had dried slightly, the onion epidermis was bombarded using the gene gun with the following parameters: particle bombardment running distance, 9 cm; rupture disk pressure, 1300 psi; and vacuum degree, 28 mmHg. After bombardment, the epidermis was transferred to fresh MS agar medium at 25 °C for 12 h in darkness. Green fluorescence was detected using a confocal laser scanning microscope (Zeiss LSM 700) with excitation at 488 nm.

The ORF of GhWRKY42 was cloned into the pGBKT7 vector to construct pGBKT7-GhWRKY42. The pGADT7-largeT+pGBKT7-GhWRKY42 (experimental group), pGADT7-largeT+pGBKT7-p53 (positive control) and pGADT7-largeT+pGBKT7-laminC (negative control) plasmids were transformed into Y2HGold competent yeast cells. The transformed yeast cells were spread on dropout selective medium plates lacking tryptophan and leucine (SD/−Trp/−Leu) and incubated for 3–5 days at 30 °C. Positive clones were identified and streaked on SD/−Trp/−Leu medium plates and on plates lacking tryptophan, leucine, histidine and adenine (SD/−Trp/−Leu/−His/−Ade).
The plates were inverted and incubated at 30 °C for 3–5 days to assess transcriptional activation activity.

The ORF of GhWRKY42 was inserted into the binary expression vector pBI121 under the control of the 35S promoter to generate the 35S::GhWRKY42 construct. The GhWRKY42 promoter fragment was also inserted into the pBI121 vector, replacing the 35S promoter, to generate the ProGhWRKY42::GUS construct. The 35S::GhWRKY42 and ProGhWRKY42::GUS constructs were individually introduced into Agrobacterium tumefaciens strain LBA4404 and transformed into Arabidopsis ecotype Columbia using the floral-dip method. For the screening of positive plants, seeds of the T0 generation (harvested from the wild-type (WT)) were sterilized and selected on 1/2 MS solid medium plates (0.22% MS modified basal salt mixture, 3% sucrose and 0.8% agar powder) containing kanamycin (50 mg/L). The plates containing the seeds were chilled at 4 °C for 3 days in darkness, after which they were transferred to an incubator at 22 °C under a 16 h light/8 h dark cycle with a light intensity of 100 μmol m−2 s−1. Two weeks later, the green seedlings on the plates were selected and transplanted into nutrient soil in a growth chamber. The positive plants were further verified by PCR, and selfed seeds harvested from the positive plants were used as the T1 generation. Using the same method, the seeds were screened until the T3 homozygous generation was obtained. The phenotypic characteristics of the transgenic and WT plants were observed at different developmental stages.

Based on the positions of stress response cis-elements in the GhWRKY42 promoter, four promoter deletion fragments were defined. The four fragments were amplified from the pMD18-T vector containing the GhWRKY42 promoter and inserted into the pBI121 vector by replacing the 35S promoter. As a result, four promoter deletion plasmids, ProGhWRKY42::GUS (− 1943 bp to − 1 bp), ProGhWRKY42–1::GUS (− 1407 bp to − 1 bp), ProGhWRKY42–2::GUS (− 778 bp to − 1 bp) and ProGhWRKY42–3::GUS (− 391 bp to − 1 bp), were constructed and transformed into LBA4404. Transient expression in N. tabacum was performed in accordance with previously described methods. Transgenic Arabidopsis plants harboring the ProGhWRKY42::GUS construct were used to analyze tissue-specific expression characteristics. GUS staining was performed as follows: the prepared materials were soaked in the GUS staining solution and placed in darkness at 25–37 °C overnight; the materials were then decolorized approximately 2–3 times with 70% alcohol until the negative control materials turned white, and the blue dots against the white background observed under a microscope were identified as GUS expression sites.

For the VIGS assay, an approximately 300-bp fragment amplified from the pMD18-T vector containing the GhWRKY42 gene was integrated into the pCLCrVA vector to construct pCLCrVA-GhWRKY42, which was then transformed into LBA4404. The LBA4404 strains carrying pCLCrVA-GhWRKY42, pCLCrVA (negative control) or pCLCrVA-PDS (positive control) were mixed with the strain harboring pCLCrVB (helper vector) (1:1 ratio, OD600 = 1.5) and co-injected into two fully expanded cotyledons of CCRI10 plants. In the VIGS assay, at least 20 seedlings were used per group. For qRT-PCR detection, samples from at least six uniform, injected plants were used. The cotton plants were then cultivated at 22 °C with a 16 h light/8 h dark cycle in a greenhouse. The experiment was repeated three times.
The detailed VIGS procedure was performed as previously described [57, 58]. Determination of the chlorophyll content was performed as described by Shah et al.

We thank all authors for their contributions to the article. We also appreciate the reviewers and editors for their patience regarding this work. This work was supported by the National Key Research and Development Program of China (grant number 2016YED0101006). The funders had no role in the design of the study; the collection, analysis and interpretation of data; or the writing of the manuscript.

SY designed the research program. HW, HW and JS analyzed the data and revised the manuscript. LG performed the experiments and wrote the paper. All authors have read and approved the final manuscript.

The experimental research on plants (either cultivated or wild), including the collection of plant material, complied with institutional, national, or international guidelines. Field studies were conducted in accordance with local legislation. The plant materials used in this study were previously preserved in our laboratory.
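As a supplement to the qRT-PCR analysis described in the Methods above, the short sketch below spells out the 2−ΔΔCT calculation used for relative expression. The Ct values and sample labels are invented for illustration only; they are not measurements from this study, which used GhActin and AtActin2 as reference genes and three technical replicates per reaction.

from statistics import mean

def ddct_fold_change(target_ct_treated, ref_ct_treated, target_ct_control, ref_ct_control):
    """Relative expression by the 2^-ΔΔCT method.

    Each argument is a list of technical-replicate Ct values.
    ΔCT = Ct(target) - Ct(reference); ΔΔCT = ΔCT(treated) - ΔCT(control).
    """
    d_ct_treated = mean(target_ct_treated) - mean(ref_ct_treated)
    d_ct_control = mean(target_ct_control) - mean(ref_ct_control)
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: target gene vs. actin reference, treated vs. untreated sample.
fold = ddct_fold_change(
    target_ct_treated=[24.1, 24.3, 24.2],
    ref_ct_treated=[18.0, 18.1, 17.9],
    target_ct_control=[26.6, 26.4, 26.5],
    ref_ct_control=[18.2, 18.0, 18.1],
)
print(f"Relative expression (fold change): {fold:.2f}")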
https://bmcgenet.biomedcentral.com/articles/10.1186/s12863-018-0653-4
Heads Up from the FDA

Pharmaceutical companies must address risks that emerge even after a drug (e.g., Vioxx) has been approved by the United States Food and Drug Administration (FDA). Among the requirements: pharmaceutical companies must make drug study results publicly available, and they must complete follow-up studies even after a drug is approved. One likely result is an increase in the amount of risk-related information physicians need to know about an ever-expanding list of medications. Not being fully informed about what you are prescribing puts you at risk for an allegation of negligence. Medication-related errors have been in the forefront of the patient safety movement, especially the process-related problems (order entry, preparation, and administration). On the other hand, medication errors related to physician knowledge or judgment have had less attention paid to them. Patients expect that the ordering physician will be aware of any risks related to prescribed drugs. Consequently, when adverse drug events occur, the prescribing physicians may well be deemed at fault for what they should have known. Each time a new drug is FDA approved, physicians need to assess its benefits and risks and how they jibe with existing medications. By adding to the mix ongoing testing and public access to those test results—and, thus the potential for even more information to be absorbed—the FDA has increased the burden of staying fully informed. Fortunately, the FDA is also offering a way to ease that burden through the MedWatch website and a free safety alert e-mail subscription service to deliver the latest drug risk news to physicians directly and quickly. The alerts (about two per week, says the FDA) announce changes in prescribing information, drug recalls, emerging risks, and strategies for avoiding known usage errors. In addition to information from pharmaceutical companies and the FDA, the MedWatch alert system also draws on issues raised by physicians who report problems directly to the FDA.
https://rmf.harvard.edu/clinician-resources/article/2007/sps-heads-up-from-the-fda
In a rather interesting news report in the Cape Cod Times, a young man is accused of breaking into a home located in Martha’s Vineyard where he allegedly painted the resident’s dog with purple paint and stole some items from the home. The accused man stands charged with more than a half dozen crimes, including breaking and entering with the intent to commit a felony, cruelty to animals, and possession of several controlled substances.

Articles Posted in Breaking and Entering

Boston Man Faces Charges Of Breaking And Entering In Framingham District Court

Paul Lentini, a 30-year-old Boston man, was arrested in Framingham on April 24 after an alleged breaking and entering. Police claim that Lentini forced his way into a back door of a home and later jumped out of a second-floor window to escape. The defendant allegedly knocked on the front door before entering the home through the back. A 16-year-old girl was inside and called the police and her mother. The defendant was allegedly trying to take jewelry from a second-floor bedroom when police and the girl’s mother’s boyfriend arrived on the scene. He allegedly jumped out of the bedroom window into bushes, at which point the mother’s boyfriend tackled him. Lentini was arraigned Thursday, April 25, in Framingham District Court. He is charged with breaking and entering during the daytime, receiving stolen property under $250 and possession of burglarious instruments. His next court date is May 24.

Breaking and entering in the daytime is a statutory modification to the common law of burglary. Before the statutory modifications, an element was that the breaking and entering of a dwelling house take place in the nighttime. Even under the current expanded law, an entering in the daytime without a breaking is only a trespass. However, opening an unlocked door or window still counts as a “breaking.” Other statutory modifications expanded the common law of burglary to punish: breaking and entering into any building or vehicle at night to commit a felony; breaking and entering into any building or vehicle at any time to commit a misdemeanor; and entering without breaking any building at night with the intent to commit a felony.

Here, it is unclear how anyone came to know that the defendant was trying to take jewelry from the bedroom. A breaking and entering conviction requires proof that a defendant had the intent to commit a felony. While movement of jewelry may be suggestive of an intent to steal, there is no indication in these news reports that anyone saw the defendant moving jewelry or that the defendant was found in possession of jewelry or any other item that could be the target of theft. When a breaking and entering takes place in the nighttime, the intent to steal may be presumed. That is not so in cases involving breaking and entering in the day. The basis for charging this defendant with receiving stolen property and possession of burglarious tools is also unclear from these facts. Sometimes, the government claims that innocent items are “burglarious instruments.” Where a tool has an innocent purpose, it can be difficult for the government to prove burglarious intent or knowledge that the tool was designed for a burglarious purpose.

Grinches Break And Enter A Lawrence Non-Profit Food Bank

Apparently, everyone did not get the memo that said that this is the season for giving, not stealing.
The Lawrence Eagle Tribune reported that hundreds of toys that were earmarked for the “needy”, food and gift cards, and computers were among items stolen from an Essex Street building in Lawrence, Massachusetts last week. The building was the office location for a computer company, a recording studio and a school for chaplains. The stolen items included eighteen gold-plated badges that were to be awarded to students at an upcoming chaplaincy graduation. If stealing were not enough to dampen the holiday season, the culprits left water faucets running, causing overflow and additional damage to the building. Tenants who went to the recording studio during the early morning hours noticed dripping water from the ceiling. Investigation revealed that the building had been broken into and ransacked.

When the perpetrators are caught, they may face a number of charges, including breaking and entering a building in the nighttime with intent to commit a felony, malicious destruction of property over $250.00 and larceny over $250.00. If a person is convicted of breaking and entering in the nighttime with intent to commit a felony, he or she faces the possibility of serving twenty years in state prison or two and one half years in jail. In order to prove breaking and entering, the Commonwealth must prove beyond a reasonable doubt that the defendant broke and entered into a building in the nighttime with intent to commit a felony. In Massachusetts, the breaking and the entering are considered two distinct acts. Areas that are often litigated in these types of cases are whether the defendant actually broke into the building and/or whether he or she actually entered the premises. For example, opening a window or door that was already partly open further than it was before, in the manner in which it was intended to be used, is not considered a breaking. However, going through an open window that is not intended for use as an entrance is considered a breaking.

Although the facts of this case are not all known, in the event that anyone is arrested, a viable defense may be that the individual was misidentified. As in many cases where a defendant is not arrested at the scene, an experienced Massachusetts defense lawyer must examine the circumstances under which a witness identified the defendant. The lighting, the opportunity for the witness to observe the defendant and whether the identifying witness was familiar with the defendant are a few areas that must be explored.

Lawrence Massachusetts Man Arrested And Charged With Breaking And Entering A Vehicle, Malicious Destruction Of Property, Possession Of Burglarious Tools And Malicious Destruction Of Property

An undercover sting operation to catch a Lawrence man allegedly breaking into neighborhood cars paid off. According to The Lawrence Eagle Tribune, police officers went undercover to catch the person breaking into the vehicles. A citizen reported that a man was loitering in the area of Methuen Street in Lawrence. The police responded and saw twenty-five-year-old Jose Rivera walking away from a parked car with its alarm sounding. The officer, who was in an unmarked cruiser, saw that a nearby car had its window broken and that Rivera had screwdrivers in his pocket and a GPS base and charge cord in his pants. Rivera was arrested and charged with breaking and entering a vehicle in the daytime with intent to commit a felony, larceny over $250.00, possession of burglarious tools and malicious destruction of property over $250.00.
In Massachusetts, if you are charged with any type of theft crime, it is important to have an experienced defense attorney on your side. In order for the prosecution to secure a conviction, they must prove all of the elements of a crime beyond a reasonable doubt. It is important that your Massachusetts trial attorney knows the law and all of the elements of the crime with which you are charged. For example, in order for the prosecution to secure a conviction for possession of burglarious instruments in violation of G.L.c. 266, § 49, the Commonwealth must prove that the defendant possessed “an engine, machine, tool or implement adapted and designed for cutting through, forcing or breaking open a building, room, vault, safe or other depository, in order to steal there from money or other property, or to commit any other crime, knowing the same to be adapted and designed for the purpose aforesaid, with intent to use or employ or allow the same to be used or employed for such purpose . . .” In Massachusetts, many defendants are charged with possession of burglarious tools simply because they are found with pliers, wrenches and other tools in their possession. However, mere possession of these objects, even at what is believed to have been a crime scene, is not sufficient to prove that the items were “burglarious.” The trial judge must instruct, and the Commonwealth must prove, that the defendant possessed the item or tool with the intent to use it to break into a vehicle or residence. Mere possession of a tool is not enough for a conviction. The Commonwealth must also prove that the defendant had the specific intent to use the tool to enter the car or residence.

Norfolk County Jury Convicts Ryan Bois Of First Degree Murder And Related Charges

There was not a dry eye in the audience when a Norfolk County jury convicted Ryan Bois for the death of a six-year-old Weymouth girl. According to the Boston Globe, in a courtroom filled with emotion, Judge Janet Sanders told a packed courtroom that this was the “worst she has seen in her fourteen years as a judge” before she imposed four life-term sentences. Bois was convicted of the rape, murder and kidnapping of his six-year-old cousin, Joanna Mullin. According to news reports, the trial lasted six days, and the jury deliberated for 8 hours before convicting Bois of first-degree murder, two counts of rape, home invasion, kidnapping, larceny of a motor vehicle, larceny under $250, malicious destruction of property under $250, failure to stop for a police officer and negligent operation of a motor vehicle. During the trial, the defense maintained that Bois, 22 years old, was not guilty by reason of insanity. According to the Boston Globe, the Norfolk County prosecutor countered, claiming that Bois’s actions were calculated when he raped his young cousin, wrapped her body in bed sheets and a quilt, stole keys to his grandmother’s sport utility vehicle, and put the body in the back seat. The prosecutor presented evidence indicating that after committing this horrific crime, Bois called an acquaintance to get some drugs and during this conversation asked the acquaintance how to dispose of a body. Understandably unable to listen to the details that led up to their daughter’s death, Joanna’s parents stayed away during the trial. However, many relatives and friends attended the trial at the Norfolk Superior Court located in Dedham, Massachusetts. After the jury returned the guilty verdict, the prosecutor read the victim impact statement that Joanna’s parents had prepared.
https://www.massachusettscriminaldefenseattorneyblog.com/category/breaking-and-entering/
BACKGROUND OF THE INVENTION

Field of the Invention

The present invention relates to a polypeptide (also called a variant) comprising a mutated Fc region and having increased affinity for the FcRn receptor, as well as increased affinity for at least one Fc receptor (FcR), relative to a parent polypeptide.

Description of the Related Art

An antibody consists of a tetramer of heavy and light chains. The two light chains are identical to each other, while the two heavy chains are identical and connected by disulfide bridges. There are five types of heavy chains (alpha, gamma, delta, epsilon, mu), which determine the immunoglobulin classes (IgA, IgG, IgD, IgE, IgM). The light chains comprise two subtypes, lambda and kappa. IgGs are soluble antibodies that may be found in blood and other body fluids. IgG is a Y-shaped glycoprotein with an approximate molecular weight of 150 kDa, consisting of two heavy and two light chains. Each chain comprises a constant region and a variable region. The two carboxy-terminal domains of the heavy chains form the Fc fragment, while the amino-terminal domains of the heavy and light chains recognize the antigen and form the Fab fragment. Fc fusion proteins are created by combining an antibody Fc fragment with a protein domain that provides specificity for a given therapeutic target. Examples are combinations of the Fc fragment with any type of therapeutic protein or fragment thereof. Fc polypeptides, in particular Fc fragments, therapeutic antibodies and Fc fusion proteins, are used today to treat various diseases, such as rheumatoid arthritis, psoriasis, multiple sclerosis and many forms of cancer. Therapeutic antibodies may be monoclonal or polyclonal antibodies. Monoclonal antibodies are obtained from a single antibody-producing cell line and show identical specificity for a single antigen. Therapeutic Fc fusion proteins are used or developed as drugs against autoimmune diseases and/or diseases with an inflammatory component, such as etanercept (Amgen's Enbrel, which is an Fc-bound TNF receptor) or alefacept (Biogen Idec's Amevive, which is LFA-3 bound to the Fc portion of human IgG1). Fc polypeptides, such as Fc fragments, therapeutic antibodies and Fc fusion proteins, have, in particular, an activity dependent on the binding of their Fc part to its receptors, i.e. FcRn and the Fc fragment receptors (FcR), such as the FcγRI (CD64), FcγRIIIa (CD16a) and FcγRIIa (CD32a) receptors. One of the desired effects in therapies involving Fc polypeptide interactions with Fc fragment receptors (FcR) is inhibition of immune system activation by binding to Fc receptors on the surface of effector cells. Particularly in the context of the treatment of inflammatory and/or autoimmune diseases involving autoantibodies and/or cytokines, Fc-based therapies can act by blocking Fc receptors and thus competing with autoantibodies for access to these receptors. This results in inhibition of the activities normally mediated directly by autoantibodies (e.g. antibody-dependent cellular cytotoxicity, complement-dependent cytotoxicity, or antibody-dependent cellular phagocytosis), and decreased activation of the immune system, including cytokine release. In addition, since the FcRn receptor is involved in the recycling of antibodies, blocking this receptor with Fc polypeptides allows faster elimination of autoantibodies, thus reducing their half-life.
This is why treatments based on Fc fragments are particularly suitable for autoimmune and/or inflammatory diseases triggered by uncontrolled stimulation of the cells of the immune system, in particular by autoantibodies and/or cytokines. The basic therapy proposed for the treatment of these diseases is intravenous immunoglobulin (IVIG or IVIg) therapy, which consists of intravenously administering to patients immunoglobulins (most often IgG) from pools of human plasma donations. It is generally accepted that these IgGs act, in particular, by blocking the Fc receptors and thus competing with the autoantibodies for access to these receptors. More recently, Fc fragments have been developed for the purpose of modifying their Fc receptor binding properties. Nevertheless, their effectiveness remains to be demonstrated. There is still a need to optimize these Fc fragments, in particular to increase their half-life and/or their therapeutic efficacy. The Applicant has now developed particular Fc fragments exhibiting improved activity, in particular improved FcRn binding affinity. These Fc fragments may be used in therapy, and are particularly suitable for the treatment of inflammatory and/or autoimmune diseases, in order to bring greater effectiveness to the product that contains them. In particular, these fragments may exhibit a more efficient blockade of the Fc receptors present on the cells of the immune system, which are then less, or no longer, accessible for the binding of autoantibodies, whose activity is then inhibited. In addition, these Fc fragments make it possible to block the FcRn receptor more efficiently and thus eliminate autoantibodies more quickly. Furthermore, some of these particular Fc fragments show, as demonstrated in the examples, better inhibition of complement-dependent cytotoxicity (CDC) than IVIG. They therefore make it possible to reduce the toxicity of pathogenic autoantibodies, such as those involved in inflammatory and/or autoimmune diseases.

SUMMARY OF THE INVENTION

The present invention thus provides a variant of a parent polypeptide having optimized properties relating to functional activity mediated by the Fc region. More specifically, the present invention relates to a variant of a parent polypeptide comprising an Fc fragment, said variant having an increased affinity for the FcRn receptor, and an increased affinity for at least one Fc receptor (FcR) selected from the FcγRI (CD64), FcγRIIIa (CD16a) and FcγRIIa (CD32a) receptors, relative to that of the parent polypeptide, characterized in that it comprises:

(i) the four mutations 334N, 352S, 378V and 397M; and

(ii) at least one mutation selected from 434Y, 434S, 226G, P228L, P228R, 230S, 230T, 230L, 241L, 264E, 307P, 315D, 330V, 362R, 389T and 389K;

wherein the numbering is that of the EU index or equivalent in Kabat.
According to one embodiment, the variant according to the invention further comprises at least one mutation (iii) in the Fc fragment chosen from among Y296W, K290G, V240H, V240I, V240M, V240N, V240S, F241H, F241Y, L242A, L242F, L242G, L242H, L242I, L242K, L242P, L242S, L242T, L242V, F243L, F243S, E258G, E258I, E258R, E258M, E258Q, E258Y, V259C, V259I, V259L, T260A, T260H, T260I, T260M, T260N, T260R, T260S, T260W, V262S, V263T, V264L, V264S, V264T, V266L, S267A, S267Q, S267V, K290D, K290E, K290H, K290L, K290N, K290Q, K290R, K290S, K290Y, P291G, P291Q, P291R, R292I, R292L, E293A, E293D, E293G, E293M, E293Q, E293S, E293T, E294A, E294G, E294P, E294Q, E294R, E294T, E294V, Q295I, Q295M, Y296H, S298A, S298R, Y300I, Y300V, Y300W, R301A, R301M, R301P, R301S, V302F, V302L, V302M, V302R, V302S, V303S, V303Y, S304T, V305A, V305F, V305I, V305L, V305R and V305S, wherein the numbering is that of the EU index or equivalent in Kabat. Such a variant is called “variant according to the invention”, “mutant according to the invention” or “polypeptide according to the invention”.

Preferably, the variant according to the invention has both an increased affinity for the FcRn receptor and an increased affinity for all of the FcγRI (CD64), FcγRIIIa (CD16a) and FcγRIIa (CD32a) receptors. Preferably, in addition, the variant according to the invention is capable of inhibiting complement-dependent cytotoxicity (CDC), attributed to a modification of binding to complement proteins, in particular C1q. This inhibition is significantly improved compared to that conferred by IVIG.

Preferably, the variant according to the invention is different from the variant consisting of an Fc fragment, in particular of IgG1, having the five mutations N434Y, K334N, P352S, V397M and A378V, and produced in HEK293 cells, wherein the numbering is that of the EU index or equivalent in Kabat. Thus, preferably, the variant according to the invention is different from the Fc fragment, in particular IgG1, N434Y/K334N/P352S/V397M/A378V produced in HEK293 cells, wherein the numbering is that of the EU index or equivalent in Kabat.

Throughout this application, the numbering of residues in the Fc region is that of the immunoglobulin heavy chain according to the EU index or equivalent in Kabat et al. (Sequences of Proteins of Immunological Interest, 5th ed., Public Health Service, National Institutes of Health, Bethesda, Md., 1991). The term “EU index or equivalent in Kabat” refers to the EU numbering of the residues of the human IgG1, IgG2, IgG3 or IgG4 antibody. This is illustrated on the IMGT website (http://www.imgt.org/IMGTScientificChart/Numbering/Hu_IGHGnber.html).

By “polypeptide” or “protein” is meant a sequence comprising at least 100 covalently attached amino acids. By “amino acid” is meant one of the 20 naturally occurring amino acids or a non-natural analogue. The term “position” means a position in the sequence of a polypeptide. For the Fc region, the positions are numbered according to the EU index or equivalent in Kabat.

The term “antibodies” is used in the everyday sense. It corresponds to a tetramer that comprises at least one Fc region and two variable regions. Antibodies comprise, but are not limited to, full-length immunoglobulins, monoclonal antibodies, multi-specific antibodies, chimeric antibodies, humanized antibodies, and fully human antibodies. The amino-terminal portion of each heavy chain comprises a variable region of about 100 to 110 amino acids responsible for antigen recognition.
In each variable region, three loops are brought together to form an antigen binding site. Each of these loops is called a complementarity determining region (hereinafter referred to as a “CDR”). The carboxy-terminal portion of each heavy chain defines a constant region that is primarily responsible for effector function.

IgGs have several subclasses, in particular IgG1, IgG2, IgG3 and IgG4. The subclasses of IgM are, in particular, IgM1 and IgM2. Thus, by “isotype” is meant one of the subclasses of immunoglobulins defined by the chemical and antigenic characteristics of their constant regions. The known isotypes of human immunoglobulins are IgG1, IgG2, IgG3, IgG4, IgA1, IgA2, IgM1, IgM2, IgD and IgE. Full-length IgGs are tetramers and consist of two identical pairs of immunoglobulin chains, each pair having a light chain and a heavy chain, wherein each light chain comprises the VL and CL domains, and each heavy chain comprises the VH, Cγ1 (also called CH1), Cγ2 (also called CH2) and Cγ3 (also called CH3) domains. In the context of a human IgG1, “CH1” refers to positions 118 to 215, “CH2” refers to positions 231 to 340, and “CH3” refers to positions 341 to 447 according to the EU index or equivalent in Kabat. The IgG heavy chain also includes a flexible hinge domain, N-terminal to the Cγ2 domain, which corresponds to positions 216-230 in the case of IgG1. The lower hinge region refers to positions 226 to 230 according to the EU index or equivalent in Kabat.

By “variable region” is meant the region of an immunoglobulin which comprises one or more Ig domains substantially encoded by any of the Vκ, Vλ and/or VH genes that make up the kappa, lambda and heavy chain immunoglobulin loci, respectively. Variable regions include complementarity determining regions (CDRs) and framework regions (FRs).

The term “Fc” or “Fc region” refers to the constant region of an antibody excluding the first domain of the immunoglobulin constant region (CH1). Thus Fc refers to the last two domains (CH2 and CH3) of the IgG1 constant region, and to the flexible hinge N-terminal to these domains. For a human IgG1, the Fc region extends from residue C226 to the carboxy-terminal end, i.e. residues 226 to 447, where the numbering is according to the EU index or equivalent in Kabat. The Fc region used may further comprise a portion of the upper hinge region located between positions 216-226 according to the EU index or equivalent in Kabat; in this case, the Fc region used corresponds to the residues of positions 216 to 447, 217 to 447, 218 to 447, 219 to 447, 220 to 447, 221 to 447, 222 to 447, 223 to 447, 224 to 447 or 225 to 447, wherein the numbering is according to the EU index or equivalent in Kabat. Preferably in this case, the Fc region used corresponds to the residues of positions 216 to 447, wherein the numbering is according to the EU index or equivalent in Kabat. Preferably, the Fc region used is chosen from the sequences SEQ ID NO: 1 to 10 and 14.

By “parent polypeptide” is meant a reference polypeptide. The said parent polypeptide may be of natural or synthetic origin. In the context of the present invention, the parent polypeptide comprises an Fc region, referred to as the “parent Fc region”. This Fc region may be selected from the group of wild-type Fc regions, their fragments and mutants. Preferably, the parent polypeptide comprises a human Fc fragment, preferably an Fc fragment of a human IgG1 or a human IgG2. The parent polypeptide may include preexisting amino acid modifications in the Fc region (e.g.
Fc mutant) relative to wild-type Fc regions. Advantageously, the parent polypeptide may be an isolated Fc region (i.e. an Fc fragment as such), a sequence derived from an isolated Fc region, an antibody, an antibody fragment comprising an Fc region, a fusion protein comprising an Fc region or an Fc conjugate, wherein this list is not limiting.

By “sequence derived from an isolated Fc region” is meant a sequence comprising at least two isolated Fc regions linked together, such as an scFc (single-chain Fc) or an Fc multimer. By “fusion protein comprising an Fc region” is meant a polypeptide sequence fused to an Fc region, said polypeptide sequence being preferably selected from variable regions of any antibody, sequences binding a receptor to its ligand, adhesion molecules, ligands, enzymes, cytokines and chemokines. By “Fc conjugate” is meant a compound that is the result of the chemical coupling of an Fc region with a conjugation partner. The conjugation partner may be protein or non-protein. The coupling reaction generally utilizes functional groups on the Fc region and the conjugation partner. Various linking groups are known in the prior art as being suitable for the synthesis of a conjugate; for example, homo- or heterobifunctional linkers are well known (see the Pierce Chemical Company catalog, 2005-2006, technical section on crosslinking agents, pages 321-350). Suitable conjugation partners include therapeutic proteins, labels, and cytotoxic agents such as chemotherapeutic agents, toxins and their active fragments. Suitable toxins and fragments thereof include diphtheria toxin, exotoxin A, ricin, abrin, saporin, gelonin, calicheamicin, auristatins E and F, and mertansine.

Advantageously, the parent polypeptide—and therefore the polypeptide according to the invention—consists of an Fc region. Advantageously, the parent polypeptide—and therefore the polypeptide according to the invention—is an antibody.

By “mutation” is meant a change of at least one amino acid in the sequence of a polypeptide, including a change of at least one amino acid in the Fc region of the parent polypeptide. The mutated polypeptide thus obtained is a variant polypeptide; it is a polypeptide according to the invention. Such a polypeptide comprises a mutated Fc region relative to the parent polypeptide. Preferably, the mutation is a substitution, an insertion or a deletion of at least one amino acid. By “substitution” is meant the replacement of an amino acid at a particular position in a parent polypeptide sequence by another amino acid. For example, the N434S substitution refers to a variant polypeptide, in this case a variant in which the asparagine at position 434 is replaced by serine. By “amino acid insertion” or “insertion” is meant the addition of an amino acid at a particular position in a parent polypeptide sequence. For example, insertion G>235-236 refers to a glycine insertion between positions 235 and 236. By “amino acid deletion” or “deletion” is meant the deletion of an amino acid at a particular position in a parent polypeptide sequence. For example, E294del refers to the removal of the glutamic acid at position 294. Preferably, the following mutation notation is used: “434S” or “N434S” means that the parent polypeptide comprises asparagine at position 434, which is replaced by serine in the variant. In the case of a combination of substitutions, the preferred format is “259I/315D/434Y” or “V259I/N315D/N434Y”.
This means that there are three substitutions in the variant, at positions 259, 315 and 434, and that the amino acid at position 259 of the parent polypeptide, i.e. valine, is replaced by isoleucine, the amino acid at position 315 of the parent polypeptide, asparagine, is replaced by aspartic acid, and the amino acid at position 434 of the parent polypeptide, asparagine, is replaced by tyrosine.

By “FcRn” or “neonatal Fc receptor” as used herein is meant a protein that binds to the Fc region of IgG and is encoded at least in part by an FcRn gene. As is known in the prior art, the functional FcRn protein comprises two polypeptides, often referred to as the heavy chain and the light chain. The light chain is beta-2-microglobulin, while the heavy chain is encoded by the FcRn gene. Unless otherwise noted herein, FcRn or FcRn protein refers to the complex of the α-chain with beta-2-microglobulin. In humans, the gene encoding FcRn is called FCGRT.

Preferably, the variant according to the invention has an affinity for the FcRn receptor that is increased, relative to that of the parent polypeptide, by a ratio at least equal to 2, preferably greater than 5, more preferably greater than 10, even more preferably greater than 15, particularly preferably greater than 20, even more particularly preferably greater than 25, most preferably greater than 30. Preferably, the variant according to the invention has an increased half-life compared to that of the parent polypeptide. Preferably, the variant according to the invention has a half-life that is increased with respect to that of the parent polypeptide by a ratio at least equal to 2, preferably greater than 5, more preferably greater than 10, even more preferably greater than 15, particularly preferably greater than 20, even more particularly preferably greater than 25, most preferably greater than 30.

One of the major functions of FcRn is known as IgG recycling. It consists of extracting IgG from the endothelial catabolism pathway of plasma proteins in order to return it intact to the circulation. This recycling explains the long half-life of IgG under normal physiological conditions (three weeks), while maintaining high plasma concentrations. The transcytosis of IgG from one pole to the other of epithelia or endothelium is the second major function of FcRn, ensuring the biodistribution of IgG in the body.

Preferably, the variant according to the invention has an affinity for at least one receptor of the Fc fragment (FcR) chosen from the FcγRI (CD64), FcγRIIIa (CD16a) and FcγRIIa (CD32a) receptors that is increased, with respect to that of the parent polypeptide, by a ratio at least equal to 2, preferably greater than 5, more preferably greater than 10, even more preferably greater than 15, particularly preferably greater than 20, even more particularly preferably greater than 25, most preferably greater than 30. The FcγRI receptor (CD64) is involved in phagocytosis and cell activation. The FcγRIIIa receptor (CD16a) is also involved in Fc-dependent activity, including ADCC and phagocytosis; it has a V/F polymorphism at position 158. The FcγRIIa receptor (CD32a) is, in turn, involved in platelet activation and phagocytosis; it has an H/R polymorphism at position 131. Preferably, the variant according to the invention has both an increased affinity for the FcRn receptor and an increased affinity for all of the FcγRI (CD64), FcγRIIIa (CD16a) and FcγRIIa (CD32a) receptors.

The affinity of a polypeptide comprising an Fc region for an FcR may be evaluated by methods well known in the prior art.
For example, those skilled in the art may determine the affinity (Kd) using surface plasmon resonance (SPR). Alternatively, those skilled in the art may perform an appropriate ELISA test. An appropriate ELISA assay compares the binding strengths of the parent Fc and the mutated Fc. The specific signals detected for the mutated Fc and the parent Fc are compared. Binding affinity may be determined indifferently by evaluating whole polypeptides or by evaluating their isolated Fc regions. Alternatively, those skilled in the art may perform an appropriate competitive assay. An appropriate competitive assay determines the ability of the mutated Fc to inhibit the binding of a labeled FcR ligand when the two are incubated simultaneously with cells expressing these receptors. The binding of the labeled ligand to the FcR may be evaluated, for example, by flow cytometry. The binding affinity of the mutated Fc for the FcR is then determined by evaluating the variation in the mean fluorescence intensity emitted by the labeled ligand bound to the FcR.

Preferably, the mutated Fc region of the polypeptide according to the invention comprises from 3 to 20 mutations relative to the parent polypeptide, preferably from 4 to 20 mutations. By “from 3 to 20 amino acid modifications” is meant 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 and 20 amino acid mutations. Preferably, it comprises from 4 to 15 mutations, more preferably from 4 to 10 mutations, relative to the parent polypeptide.

Even more preferably, the mutated Fc region of the polypeptide according to the invention may comprise at least one combination of 5 mutations, said combination comprising the four mutations (i) as described above and at least one mutation (ii) as described above, wherein the numbering is that of the EU index or equivalent in Kabat. Even more preferably, the mutated Fc region of the polypeptide according to the invention comprises a combination of 6 mutations, said combination comprising the four mutations (i) as described above, at least one mutation (ii) as described above, and at least one mutation (iii) as described above, wherein the numbering is that of the EU index or equivalent in Kabat.

Preferably, the mutated Fc region of the polypeptide according to the invention comprises the following mutations:

(i) the four mutations 334N, 352S, 378V and 397M;

(ii) at least one mutation selected from 434Y, 434S, 226G, P228L, P228R, 230S, 230T, 230L, 241L, 264E, 307P, 315D, 330V, 362R, 389T and 389K; and

when a mutation (iii) is present, it is selected from K290G and Y296W,

wherein the numbering is that of the EU index or equivalent in Kabat.

Preferably, the mutated Fc region of the polypeptide according to the invention comprises the following mutations:

(i) the four mutations 334N, 352S, 378V and 397M;

(ii) at least one mutation selected from 434Y, 434S, 226G, P228L, P228R, 230S, 230T, 230L, 241L, 264E, 307P, 315D, 330V, 362R, 389T and 389K; and

(iii) at least one mutation selected from K290G and Y296W,

wherein the numbering is that of the EU index or equivalent in Kabat.

Preferably, the mutated Fc region of the polypeptide according to the invention comprises a combination of mutations chosen from the combinations N434Y/K334N/P352S/V397M/A378V and N434Y/K334N/P352S/V397M/A378V/Y296W.
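To make the mutation nomenclature and the (i)/(ii)/(iii) claim structure above concrete, the following is a minimal sketch (in Python) that parses combination strings such as N434Y/K334N/P352S/V397M/A378V and checks whether they contain all four group (i) mutations plus at least one group (ii) mutation. It is only an illustrative reading of the text, not a tool disclosed in the application; the group sets are transcribed from the lists quoted above, and the parser deliberately handles substitutions only (not insertions or deletions).

import re

# Mutation groups transcribed from the lists quoted above (EU index positions).
# Tokens are stored as position + new residue, so "P228L" and "228L" compare equal.
GROUP_I = {"334N", "352S", "378V", "397M"}             # all four are required
GROUP_II = {"434Y", "434S", "226G", "228L", "228R", "230S", "230T", "230L",
            "241L", "264E", "307P", "315D", "330V", "362R", "389T", "389K"}
GROUP_III = {"290G", "296W"}                           # optional, e.g. K290G, Y296W

MUTATION = re.compile(r"^([A-Z]?)(\d+)([A-Z])$")       # substitutions only, e.g. N434Y or 434Y

def parse_combination(combo):
    """Split 'N434Y/K334N/...' into tokens such as '434Y', ignoring the parent residue."""
    tokens = set()
    for part in combo.split("/"):
        match = MUTATION.match(part.strip())
        if not match:
            raise ValueError(f"Unrecognized mutation label: {part}")
        _parent, position, new_residue = match.groups()
        tokens.add(position + new_residue)
    return tokens

def describe(combo):
    """Return (core groups (i)+(ii) satisfied, optional group (iii) present)."""
    mutations = parse_combination(combo)
    core_ok = GROUP_I <= mutations and bool(mutations & GROUP_II)
    has_iii = bool(mutations & GROUP_III)
    return core_ok, has_iii

for combo in ("N434Y/K334N/P352S/V397M/A378V",
              "N434Y/K334N/P352S/V397M/A378V/Y296W",
              "N434Y/K334N/P352S"):
    core_ok, has_iii = describe(combo)
    print(f"{combo}: groups (i)+(ii) satisfied = {core_ok}, group (iii) present = {has_iii}")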
Preferably, the polypeptide according to the invention is produced in mammary epithelial cells of transgenic non-human mammals. Preferably, the polypeptide according to the invention is produced in non-human transgenic animals, preferably in transgenic non-human mammals, more preferably in their mammary epithelial cells. By “transgenic non-human mammal” is meant a mammal chosen, in particular, from among cattle, pigs, goats, sheep and rodents, preferably from among the goat, the mouse, the sow, the rabbit, the ewe and the cow. Preferably, the transgenic non-human animal or the transgenic non-human mammal is a transgenic goat.

Preferably, the variant according to the invention comprises at least the five mutations N434Y, K334N, P352S, V397M and A378V in its Fc fragment, and is produced in mammary epithelial cells of transgenic non-human mammals, or in transgenic non-human animals, preferably in transgenic non-human mammals, such as a goat. Such a variant has both increased affinity for the FcRn receptor and increased affinity for all of the FcγRI (CD64), FcγRIIIa (CD16a) and FcγRIIa (CD32a) receptors. Thus, preferably, the variant according to the invention is the Fc N434Y/K334N/P352S/V397M/A378V variant produced in mammary epithelial cells of transgenic non-human mammals. Alternatively, preferably, the variant according to the invention is the Fc N434Y/K334N/P352S/V397M/A378V variant produced in transgenic non-human animals, preferably in transgenic non-human mammals, such as a goat. Such a variant has both increased affinity for the FcRn receptor and increased affinity for all of the FcγRI (CD64), FcγRIIIa (CD16a) and FcγRIIa (CD32a) receptors. Preferably, the variant according to the invention comprises the sequence SEQ ID NO: 11 or the sequence SEQ ID NO: 15.

Alternatively, preferably, the variant according to the invention is the Fc N434Y/K334N/P352S/V397M/A378V/Y296W variant produced in mammary epithelial cells of transgenic non-human mammals. Alternatively, preferably, the variant according to the invention is the Fc N434Y/K334N/P352S/V397M/A378V/Y296W variant produced in transgenic non-human animals, preferably in transgenic non-human mammals, such as a goat. Such a variant has both increased affinity for the FcRn receptor and increased affinity for all of the FcγRI (CD64), FcγRIIIa (CD16a) and FcγRIIa (CD32a) receptors.

Preferably, the method for producing a variant according to the invention comprises the expression of said variant in mammary epithelial cells of transgenic non-human mammals.
Thus, the present invention also relates to a method for producing a variant of a parent polypeptide comprising an Fc fragment, said variant having an increased affinity for the FcRn receptor, and an increased affinity for at least one Fc receptor (FcR) selected from FcγRI (CD64), FcγRIIIa (CD16a) and FcγRIIa (CD32a) receptors, relative to that of the parent polypeptide, said variant comprising: Preferably, said variant further comprises at least one mutation (iii) in the Fc fragment chosen from among Y296W, K290G, V240H, V240I, V240M, V240N, V240S, F241H, F241Y, L242A, L242F, L242G, L242H, L242I, L242K, L242P, L242S, L242T, L242V, F243L, F243S, E258G, E258I, E258R, E258M, E258Q, E258Y, V259C, V259I, V259L, T260A, T260H, T260I, T260M, T260N, T260R, T260S, T260W, V262S, V263T, V264L, V264S, V264T, V266L, S267A, S267Q, S267V, K290D, K290E, K290H, K290L, K290N, K290Q, K290R, K290S, K290Y, P291G, P291Q, P291R, R292I, R292L, E293A, E293D, E293G, E293M, E293A, E293S, E293T, E294A, E294G, E294P, E294Q, E294R, E294T, E294 Q295I, Q295M, Y296H, S298A, S298R, Y300I, Y300V, Y300W, R301A, R301M, R301P, R301S, V302F, V302L, V302M, V302R, V302S, V303S, V303Y, 5304T, V305A, V305F, V3051, V305L, V305R and V305S, wherein the numbering is that of the EU index or equivalent in Kabat. a) preparing a DNA sequence comprising a sequence encoding the variant, a sequence encoding a mammalian casein promoter or a mammalian whey promoter, and a sequence encoding a signal peptide permitting the secretion of said variant; b) introducing the DNA sequence obtained in a), into a non-human mammalian embryo, to obtain a transgenic non-human mammal expressing the variant encoded by said DNA sequence obtained in a) in the mammary gland; and c) recovery of the variant in the milk produced by the transgenic non-human mammal obtained in b). In particular, such a method comprises the following steps: FIG. 1 Step a) thus comprises the preparation of a DNA sequence comprising a sequence coding for the variant, a sequence coding for a mammalian casein promoter or a mammalian whey promoter, and a sequence coding for a signal peptide. allowing the secretion of said variant. Such a step is illustrated in . The sequence coding for the variant is a DNA sequence coding for the variant according to the invention. For example, this variant has the sequence SEQ ID NO: 11. With the signal peptide, the corresponding sequence is the sequence SEQ ID NO: 13. In another example, this variant has the sequence SEQ ID NO: 15. With the signal peptide, the corresponding sequence is the sequence SEQ ID NO: 16. The coding sequence for a mammalian casein promoter or a mammalian whey promoter makes it possible to express the variant in the milk. Those skilled in the art know how to choose such a promoter. In the context of the present application, a signal peptide is an amino acid sequence, preferably from 2 to 30 amino acids, located at the N-terminus of the Fc polypeptide variant, serving to address it in the mammalian milk. Preferably, the coding sequence for a signal peptide is interposed between the sequence coding for the variant and the promoter. Without such a sequence, the variant would remain in the mammary tissue, wherein purification would be difficult and would require the sacrifice of the host animal. The signal peptide may be cleaved upon secretion. The coding sequence for the peptide signal may be one that is naturally associated with a parent polypeptide according to the invention. 
Alternatively, the coding sequence for the signal peptide may be that of the milk protein from which the promoter is derived, i.e. when the milk protein gene is digested in order to isolate the promoter, a DNA fragment is selected comprising both the promoter and the coding sequence of the signal peptide located directly downstream of the promoter. Another alternative is to use a signal sequence derived from another secreted protein, which is neither the milk protein normally expressed from the promoter nor a polypeptide according to the invention. Preferably, the signal peptide has the sequence SEQ ID NO: 12. The DNA sequence used may comprise optimized codons. Codon optimization aims to replace natural codons with codons whose transfer RNAs (tRNAs), carrying the corresponding amino acids, are the most common in the cell type in question. The mobilization of frequently encountered tRNAs has the major advantage of increasing the translation speed of the messenger RNAs (mRNAs) and therefore of increasing the final titer (Carton, J. M. et al., Protein Expr Purif, 2007). Sequence optimization also takes into account the predicted mRNA secondary structures that could slow down reading by the ribosomal complex. Sequence optimization also has an impact on the G/C percentage, which is directly related to the half-life of the mRNAs and therefore to their potential to be translated (Chechetkin, J. of Theoretical Biology 242, 2006, 922-934). Codon optimization may be effected by substituting the natural codons using codon frequency tables (Codon Usage Table) for mammals, and more specifically for Homo sapiens. Algorithms available on the internet and provided by suppliers of synthetic genes (DNA2.0, GeneArt, MWG, Genscript) make this sequence optimization possible; a simplified illustration of the substitution principle is given in the code sketch at the end of this passage. Preferably, step a) comprises the following steps: (a1) preparing a DNA sequence comprising a sequence coding for the variant according to the invention, directly fused at its N-terminus to a sequence coding for a signal peptide allowing the secretion of said variant; (a2) introducing the DNA sequence obtained in (a1) into a vector comprising a sequence coding for a mammalian casein promoter or a mammalian whey promoter; (a3) digesting said vector obtained in (a2), in order to obtain a DNA sequence comprising the sequence coding for a mammalian casein promoter or a mammalian whey promoter, and the DNA sequence comprising the sequence encoding the variant of the invention directly fused at its N-terminus to a coding sequence for a signal peptide. In other words, preferably, at the end of step a), a DNA sequence is obtained comprising, in the 5′ to 3′ direction, the coding sequence for a mammalian casein promoter or a mammalian whey promoter, fused to the coding sequence for a signal peptide, itself fused to the coding sequence for the variant according to the invention. Next, the process according to the invention comprises a step b) of introducing the DNA sequence obtained in a) into a non-human mammalian embryo, in order to obtain a transgenic non-human mammal expressing, in the mammary gland, the variant encoded by said DNA sequence obtained in a). Finally, the process according to the invention comprises a step c) of recovering the variant from the milk produced by the transgenic non-human mammal obtained in b). Steps b) and c) are known from the prior art, in particular from patent EP0264166. Preferably, such a process comprises, after step c), a purification step d) of the recovered milk.
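Returning to the codon-optimization point above, the following minimal Python sketch illustrates the substitution of codons from a usage table. The table shown is a small, illustrative subset of human-preferred codons and is not the table used by the gene-synthesis providers cited above; the short test peptide is simply the first residues of SEQ ID NO: 14 and is used here only as an example.

# Minimal illustration of codon substitution from a usage table (Python).
# The table below is a small, illustrative subset of human-preferred codons;
# a real optimization would use a complete species-specific frequency table
# and would also consider mRNA secondary structure and G/C content, as noted above.
PREFERRED_CODON = {
    "A": "GCC", "D": "GAC", "E": "GAG", "G": "GGC", "H": "CAC",
    "K": "AAG", "L": "CTG", "S": "AGC", "T": "ACC", "V": "GTG",
}

def naive_codon_optimize(protein: str) -> str:
    """Back-translate a protein sequence using the preferred codon for each residue."""
    try:
        return "".join(PREFERRED_CODON[aa] for aa in protein)
    except KeyError as missing:
        raise ValueError(f"no preferred codon listed for residue {missing}") from None

if __name__ == "__main__":
    # "DKTHT" is the start of the Fc region of SEQ ID NO: 14, used here for illustration.
    print(naive_codon_optimize("DKTHT"))  # -> GACAAGACCCACACC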
The purification step d) may be carried out by any known process of the prior art, in particular by purification on protein A. Once again, such a step is described, in particular, in patent EP0264166. (i) the four mutations 334N, 352S, 378V and 397M; and (ii) at least one mutation selected from 434Y, 434S, 226G, P228L, P228R, 230S, 230T, 230L, 241L, 264E, 307P, 315D, 330V, 362R, 389T and 389K; wherein the numbering is that of the EU index or equivalent in Kabat, said gene being under the control of a transcriptional promoter of mammalian casein or whey which does not naturally control the transcription of said gene, said DNA sequence further comprising a sequence coding for a signal peptide allowing secretion of said variant interposed between the sequence encoding the variant and the promoter. The present invention also relates to a DNA sequence comprising a gene encoding a variant of a parent polypeptide comprising an Fc fragment, said variant having an increased affinity for the FcRn receptor, and an increased affinity for at least one fragment receptor Fc (FcR) chosen from among FcγRI (CD64), FcγRIIIa (CD16a) and FcγRIIa (CD32a) receptors, relative to that of the parent polypeptide, said variant comprising: In a particular embodiment, said variant further comprises at least one mutation (iii) in the Fc fragment chosen from among Y296W, K290G, V240H, V240I, V240M, V240N, V240S, F241H, F241Y, L242A, L242F, L242G, L242H, L242I, L242K, L242P, L242S, L242T, L242V, F243L, F243S, E258G, E258I, E258R, E258M, E258Q, E258Y, V259C, V259I, V259L, T260A, T260H, T260I, T260M, T260N, T260R, T260S, T260W, V262S, V263T, V264L, V264S, V264T, V266L, S267A, S267Q, S267V, K290D, K290E, K290H, K290L, K290N, K290Q, K290R, K290S, K290Y, P291G, P291Q, P291R, R292I, R292L, E293A, E293D, E293G, E293M, E293Q, E293S, E293T, E294A, E294G, E294P, E294Q, E294R, E294T, E294V, 02951, Q295M, Y296H, S298A, S298R, Y300I, Y300V, Y300W, R301A, R301M, R301P, R301S, V302F, V302L, V302M, V302R, V302S, V303S, V303Y, S3041, V305A, V305F, V3051, V305L, V305R and V305S, wherein the numbering is that of the EU index or equivalent in Kabat. (i) the four mutations 334N, 352S, 378V and 397M; and (ii) at least one mutation selected from among 434Y, 434S, 226G, P228L, P228R, 230S, 230T, 230L, 241L, 264E, 307P, 315D, 330V, 362R, 389T and 389K; wherein the numbering is that of the EU index or equivalent in Kabat, said DNA sequence optionally comprising a sequence encoding a signal peptide permitting the secretion of said variant. 
The present invention also relates to a DNA sequence comprising a gene encoding a variant of a parent polypeptide comprising an Fc fragment, said variant having an increased affinity for the FcRn receptor, and an increased affinity for at least one fragment receptor Fc(FcR) selected from among FcγRI (CD64), FcγRIIIa (CD16a) and FcγRIIa (CD32a) receptors, relative to that of the parent polypeptide, said variant comprising: In a particular embodiment, said variant further comprising at least one mutation (iii) in the Fc fragment chosen from among Y296W, K290G, V240H, V240I, V240M, V240N, V240S, F241H, F241Y, L242A, L242F, L242G, L242H, L242I, L242K, L242P, L242S, L242T, L242V, F243L, F243S, E258G, E258I, E258R, E258M, E258Q, E258Y, V259C, V259I, V259L, T260A, T260H, T260I, T260M, T260N, T260R, T260S, T260W, V262S, V263T, V264L, V264S, V264T, V266L, S267A, S267Q, S267V, K290D, K290E, K290H, K290L, K290N, K290Q, K290R, K290S, K290Y, P291G, P291Q, P291R, R292I, R292L, E293A, E293D, E293G, E293M, E293Q, E293S, E293T, E294A, E294G, E294P, E294Q, E294R, E294T, E294V, 02951, Q295M, Y296H, S298A, S298R, Y300I, Y300V, Y300W, R301A, R301M, R301P, R301S, V302F, V302L, V302M, V302R, V302S, V303S, V303Y, S304T, V305A, V305F, V3051, V305L, V305R and V305S, wherein the numbering is that of the EU index or equivalent in Kabat. Alternatively, the polypeptide according to the invention may be produced in cultured mammalian cells. The preferred cells are the YB2/0 rat line, the CHO hamster line, in particular the CHO dhfr- and CHO Lec13 lines, the PER C6™ cells (Crucell), NSO, SP2/0, HeLa, BHK or COS cells, HEK293 cells. Preferably, the CHO hamster line is used. (i) the four mutations 334N, 352S, 378V and 397M; and (ii) at least one mutation selected from 434Y, 434S, 226G, P228L, P228R, 230S, 230T, 230L, 241L, 264E, 307P, 315D, 330V, 362R, 389T and 389K; and wherein the numbering is that of the EU index or equivalent in Kabat, said process comprising expressing said variant in mammalian cells in culture. Thus, the present invention also relates to a process for producing a variant of a parent polypeptide comprising an Fc fragment, said variant having an increased affinity for the FcRn receptor, and an increased affinity for at least one Fc receptor (FcR) selected from FcγRI (CD64), FcγRIIIa (CD16a) and FcγRIIa (CD32a) receptors, relative to that of the parent polypeptide, said variant comprising: In a particular embodiment, said variant further comprises at least one mutation (iii) in the Fc fragment chosen from among Y296W, K290G, V240H, V240I, V240M, V240N, V240S, F241H, F241Y, L242A, L242F, L242G, L242H, L242I, L242K, L242P, L242S, L242T, L242V, F243L, F243S, E258G, E258I, E258R, E258M, E258Q, E258Y, V259C, V259I, V259L, T260A, T260H, T260I, T260M, T260N, T260R, T260S, T260W, V262S, V263T, V264L, V264S, V264T, V266L, S267A, S267Q, S267V, K290D, K290E, K290H, K290L, K290N, K290Q, K290R, K290S, K290Y, P291G, P291Q, P291R, R292I, R292L, E293A, E293D, E293G, E293M, E293Q, E293S, E293T, E294A, E294G, E294P, E294Q, E294R, E294T, E294V, 02951, Q295M, Y296H, S298A, S298R, Y300I, Y300V, Y300W, R301A, R301M, R301P, R301S, V302F, V302L, V302M, V302R, V302S, V303S, V303Y, S304T, V305A, V305F, V3051, V305L, V305R and V305S, wherein the numbering is that of the EU index or equivalent in Kabat, a) preparing a DNA sequence encoding the variant; b) introducing the DNA sequence obtained in a) into mammalian cells in culture. The introduction may be carried out transiently or stably (i.e. 
integration of the DNA sequence obtained in a) into the genome of the cells); and c) expression of the variant from the cells obtained in b), then d) optionally, recovery of the variant in the culture medium. In particular, such a process comprises the steps a) to d) above. The present invention also relates to a pharmaceutical composition comprising (i) a polypeptide according to the invention, and (ii) at least one pharmaceutically acceptable excipient. The object of the present invention is also a pharmaceutical composition comprising (i) the variant consisting of an Fc fragment, in particular of IgG1, exhibiting the five mutations N434Y, K334N, P352S, V397M and A378V, wherein the numbering is that of the EU index or equivalent in Kabat, and (ii) at least one pharmaceutically acceptable excipient. Preferably, the composition of the present invention comprises (i) the variant consisting of an Fc fragment, in particular of IgG1, having the six mutations N434Y, K334N, P352S, V397M, A378V and Y296W, the numbering being that of the EU index or equivalent in Kabat, and (ii) at least one pharmaceutically acceptable excipient. The object of the present invention is also the polypeptide according to the invention, or the composition as described above, for use as a drug. The object of the present invention is also the use, as a drug, of the variant consisting of an Fc fragment, in particular of IgG1, exhibiting the five mutations N434Y, K334N, P352S, V397M and A378V, wherein the numbering is that of the EU index or equivalent in Kabat (i.e. the variant N434Y/K334N/P352S/V397M/A378V). In a particular embodiment, the object of the present invention is also the use, as a drug, of the variant consisting of an Fc fragment, in particular of IgG1, presenting the six mutations N434Y, K334N, P352S, V397M, A378V and Y296W, wherein the numbering is that of the EU index or equivalent in Kabat (i.e. the variant N434Y/K334N/P352S/V397M/A378V/Y296W). As indicated above, advantageously, the parent polypeptide, and therefore the polypeptide according to the invention, is an antibody. In this case, the antibody may be directed against an antigen selected from a tumor antigen, a viral antigen, a bacterial antigen, a fungal antigen, a toxin, a membrane or circulating cytokine, and a membrane receptor. When the antibody is directed against a tumor antigen, its use is particularly suitable in the treatment of cancers. By “cancer” is meant any physiological condition characterized by an abnormal proliferation of cells. Examples of cancers include carcinomas, lymphomas, blastomas, sarcomas (including liposarcomas), neuroendocrine tumors, mesotheliomas, meningiomas, adenocarcinomas, melanomas, leukemias and lymphoid malignancies, wherein this list is not exhaustive. When the antibody is directed against a viral antigen, its use is particularly useful in the treatment of viral infections. Viral infections include infections caused by HIV, a retrovirus, a Coxsackie virus, smallpox virus, influenza, yellow fever, West Nile virus, a cytomegalovirus, a rotavirus or hepatitis B or C virus, wherein this list is not exhaustive. When the antibody is directed against a toxin, its use is particularly useful in the treatment of bacterial infections, for example infections with tetanus toxin, diphtheria toxin or anthrax toxins (Bacillus anthracis), or in the treatment of infections by botulinum toxins, ricin toxins or shigatoxins, wherein this list is not exhaustive.
When the antibody is directed against a cytokine, its use is particularly suitable in the treatment of inflammatory and/or autoimmune diseases. Inflammatory and/or autoimmune diseases include immune thrombocytopenic purpura (ITP), transplant and organ rejection, graft-versus-host disease, rheumatoid arthritis, systemic lupus erythematosus, various types of sclerosis, primary Sjögren's syndrome (or Gougerot-Sjögren's syndrome), autoimmune polyneuropathies such as multiple sclerosis, type I diabetes, autoimmune hepatitis, ankylosing spondylitis, Reiter's syndrome, gouty arthritis, celiac disease, Crohn's disease, Hashimoto's chronic thyroiditis (hypothyroidism), Addison's disease, autoimmune hepatitis, Basedow's disease (hyperthyroidism), ulcerative colitis, vasculitis and systemic vasculitis associated with ANCA (anti-neutrophil cytoplasmic antibodies), autoimmune cytopenias and other hematologic complications in adults and children, such as acute or chronic autoimmune thrombocytopenia, autoimmune haemolytic anemias, haemolytic disease of the newborn (HDN), cold agglutinin disease, autoimmune haemophilia, Goodpasture syndrome, membranous nephropathies, autoimmune bullous skin disorders, refractory myasthenia gravis, mixed cryoglobulinemia, psoriasis, juvenile chronic arthritis, inflammatory myositis, dermatomyositis and systemic autoimmune disorders of the child including antiphospholipid syndrome, connective tissue disease, pulmonary autoimmune inflammation, Guillain-Barré syndrome, chronic inflammatory demyelinating polyradiculoneuropathy (CIDP), autoimmune thyroiditis, diabetes mellitus, myasthenia gravis, inflammatory autoimmune disease of the eye, neuromyelitis optica (Devic's disease), scleroderma, pemphigus, insulin-resistant diabetes, polymyositis, Biermer's anemia, glomerulonephritis, Wegener's disease, Horton's disease, periarteritis nodosa and Churg-Strauss syndrome, Still's disease, relapsing polychondritis, Behçet's disease, monoclonal gammopathy, Wegener's granulomatosis, lupus, ulcerative colitis, psoriatic arthritis, sarcoidosis, collagenous colitis, dermatitis herpetiformis, familial Mediterranean fever, IgA glomerulonephritis, Lambert-Eaton myasthenic syndrome, sympathetic ophthalmia, Fiessinger-Leroy-Reiter syndrome, and uveo-meningoencephalitic syndrome. Other inflammatory diseases are also included, such as acute respiratory distress syndrome (ARDS), acute septic arthritis, adjuvant arthritis, allergic encephalomyelitis, allergic rhinitis, allergic vasculitis, allergy, asthma, atherosclerosis, chronic inflammation due to chronic bacterial or viral infections, chronic obstructive pulmonary disease (COPD), coronary heart disease, encephalitis, inflammatory bowel disease, inflammatory osteolysis, inflammation associated with acute and delayed hypersensitivity reactions, inflammation associated with tumors, peripheral nerve injury or demyelinating diseases, inflammation associated with tissue trauma such as burns and ischemia, inflammation due to meningitis, multiple organ dysfunction syndrome (MODS), pulmonary fibrosis, sepsis and septic shock, Stevens-Johnson syndrome, undifferentiated arthritis, and undifferentiated spondyloarthropathies. In a particular embodiment of the invention, the autoimmune disease is chosen from idiopathic thrombocytopenic purpura (ITP) and chronic inflammatory demyelinating polyradiculoneuropathy (CIDP).
Preferably, the autoimmune or inflammatory pathology is selected from immunologic thrombocytopenic purpura (also called idiopathic thrombocytopenic purpura, or ITP), optic neuromyelitis or deviant disease (NMO) and multiple sclerosis. Multiple sclerosis and, in particular, experimental autoimmune encephalomyelitis (EAE) is studied thanks to a model. The sequences described in this application may be summarized as follows: SEQ ID NO: Protein Sequence 1 Fc region of human IgG1 CPPCPAPELLGGPSVFLFPPKPKDTLMISRTPEVTCV G1m1,17 (residus 226-447 VVDVSHEDPEVKFNWYVDGVEVHNAKTKPREEQYNST according to EU index YRVVSVLTVLHQDWLNGKEYKCKVSNKALPAPIEKTI or equivalent in Kabat) SKAKGQPREPQVYTLPPSRDELTKNQVSLTCLVKGFY without upper hinge PSDIAVEWESNGQPENNYKTTPPVLDSDGSFFLYSKL N-terminus region TVDKSRWQQGNVFSCSVMHEALHNHYTQKSLSLSPGK 2 Fc region of human IgG2 CPPCPAPPVAGPSVFLFPPKPKDTLMISRTPEVTCVV without upper hinge N- VDVSHEDPEVQFNWYVDGVEVHNAKTKPREEQFNSTF terminus region RVVSVLTVVHQDWLNGKEYKCKVSNKGLPAPIEKTIS KTKGQPREPQVYTLPPSREEMTKNQVSLTCLVKGFYP SDIAVEWESNGQPENNYKTTPPMLDSDGSFFLYSKLT VDKSRWQQGNVFSCSVMHEALHNHYTQKSLSLSPGK 3 Fc region of human IgG3 CPRCPAPELLGGPSVFLFPPKPKDTLMISRTPEVTCV without upper hinge N- VVDVSHEDPEVQFKWYVDGVEVHNAKTKPREEQYNST terminus region FRVVSVLTVLHQDWLNGKEYKCKVSNKALPAPIEKTI SKTKGQPREPQVYTLPPSREEMTKNQVSLTCLVKGFY PSDIAVEWESSGQPENNYNTTPPMLDSDGSFFLYSKL TVDKSRWQQGNIFSCSVMHEALHNRFTQKSLSLSPGK 4 Fc region of human IgG4 CPSCPAPEFLGGPSVFLFPPKPKDTLMISRTPEVTCV without upper hinge N- VVDVSQEDPEVQFNWYVDGVEVHNAKTKPREEQFNST terminus region YRVVSVLTVLHQDWLNGKEYKCKVSNKGLPSSIEKTI SKAKGQPREPQVYTLPPSQEEMTKNQVSLTCLVKGFY PSDIAVEWESNGQPENNYKTTPPVLDSDGSFFLYSRL TVDKSRWQEGNVFSCSVMHEALHNHYTQKSLSLSLGK 5 Fc region of human IgG1 CPPCPAPELLGGPSVFLFPPKPKDTLMISRTPEVTCV G1m3 without upper VVDVSHEDPEVKFNWYVDGVEVHNAKTKPREEQYNST hinge N-terminus region YRVVSVLTVLHQDWLNGKEYKCKVSNKALPAPIEKTI SKAKGQPREPQVYTLPPSREEMTKNQVSLTCLVKGFY PSDIAVEWESNGQPENNYKTTPPVLDSDGSFFLYSKL TVDKSRWQQGNVFSCSVMHEALHNHYTQKSLSLSPGK 6 Fc region of human EPKSCDKTHTCPPCPAPELLGGPSVFLFPPKPKDTLM d′IgG1 G1m1,17 with ISRTPEVTCVVVDVSHEDPEVKFNWYVDGVEVHNAKT upper hinge N-terminus KPREEQYNSTYRVVSVLTVLHQDWLNGKEYKCKVSNK region (residus 216-447 ALPAPIEKTISKAKGQPREPQVYTLPPSRDELTKNQV according to EU index or SLTCLVKGFYPSDIAVEWESNGQPENNYKTTPPVLDS equivalent in Kabat) DGSFFLYSKLTVDKSRWQQGNVFSCSVMHEALHNHYT QKSLSLSPGK 7 Fc region of human IgG2 ERKCCVECPPCPAPPVAGPSVFLFPPKPKDTLMISRT with upper hinge N- PEVTCVVVDVSHEDPEVQFNWYVDGVEVHNAKTKPRE terminus region EQFNSTFRVVSVLTVVHQDWLNGKEYKCKVSNKGLPA PIEKTISKTKGQPREPQVYTLPPSREEMTKNQVSLTC LVKGFYPSDIAVEWESNGQPENNYKTTPPMLDSDGSF FLYSKLTVDKSRWQQGNVFSCSVMHEALHNHYTQKSL SLSPGK 8 Fc region of human IgG3 ELKTPLGDTTHTCPRCPEPKSCDTPPPCPRCPEPKSC with upper hinge N- DTPPPCPRCPEPKSCDTPPPCPRCPAPELLGGPSVFL terminus region FPPKPKDTLMISRTPEVTCVVVDVSHEDPEVQFKWYV DGVEVHNAKTKPREEQYNSTFRVVSVLTVLHQDWLNG KEYKCKVSNKALPAPIEKTISKTKGQPREPQVYTLPP SREEMTKNQVSLTCLVKGFYPSDIAVEWESSGQPENN YNTIPPMLDSDGSFFLYSKLTVDKSRWQQGNIFSCSV MHEALHNRFTQKSLSLSPGK 9 Fc region of human IgG4 ESKYGPPCPSCPAPEFLGGPSVFLFPPKPKDTLMISR with upper hinge N- TPEVTCVVVDVSQEDPEVQFNWYVDGVEVHNAKTKPR terminus region EEQFNSTYRVVSVLTVLHQDWLNGKEYKCKVSNKGLP SSIEKTISKAKGQPREPQVYTLPPSQEEMTKNQVSLT CLVKGFYPSDIAVEWESNGQPENNYKTTPPVLDSDGS FFLYSRLTVDKSRWQEGNVFSCSVMHEALHNHYTQKS LSLSLGK 10 Fc region of human IgG1 EPKSCDKTHTCPPCPAPELLGGPSVFLFPPKPKDTLM G1m3 with upper hinge ISRTPEVTCVVVDVSHEDPEVKFNWYVDGVEVHNAKT N-terminus region KPREEQYNSTYRVVSVLTVLHQDWLNGKEYKCKVSNK ALPAPIEKTISKAKGQPREPQVYTLPPSREEMTKNQV 
SLTCLVKGFYPSDIAVEWESNGQPENNYKTTPPVLDS DGSFFLYSKLTVDKSRWQQGNVFSCSVMHEALHNHYT QKSLSLSPGK 11 Variant Fc A3A-184AY DKTHTCPPCPAPELLGGPSVFLFPPKPKDTLMISRTP EVTCVVVDVSHEDPEVKFNWYVDGVEVHNAKTKPREE QYNSTYRVVSVLTVLHQDWLNGKEYKCKVSNKALPAP IE<b>N</b>TISKAKGQPREPQVYTLSPSRDELTKNQVSLTCL VKGFYPSDIVVEWESNGQPENNYKTTPP<b>M</b>LDSDGSFF LYSKLTVDKSRWQQGNVFSCSVMHEALHYHYTQKSLS LSPGK 12 Signal peptide MRWSWIFLLLLSITSANA 13 Variant Fc A3A-184AY MRWSWIFLLLLSITSANADKTHTCPPCPAPELLGGPS with signal peptide VFLFPPKPKDTLMISRTPEVTCVVVDVSHEDPEVKFN (i.e. fusion of SEQ ID WYVDGVEVHNAKTKPREEQYNSTYRVVSVLTVLHQDW NO: 12 with SEQ ID LNGKEYKCKVSNKALPAPIE<b>N</b>TISKAKGQPREPQVYT NO: 11) L<b>S</b>PSRDELTKNQVSLTCLVKGFYPSDIVVEWESNGQP ENNYKTTPP<b>M</b>LDSDGSFFLYSKLTVDKSRWQQGNVFS CSVMHEALH<b>Y</b>HYTQKSLSLSPGK 14 Fc region of human IgG1 DKTHTCPPCPAPELLGGPSVFLFPPKPKDTLMISRTP G1m1,17 with residues EVTCVVVDVSHEDPEVKFNWYVDGVEVHNAKTKPREE 221-447 according to EU QYNSTYRVVSVLTVLHQDWLNGKEYKCKVSNKALPAP index or equivalent in IEKTISKAKGQPREPQVYTLPPSRDELTKNQVSLTCL Kabat VKGFYPSDIAVEWESNGQPENNYKTTPPVLDSDGSFF LYSKLTVDKSRWQQGNVFSCSVMHEALHNHYTQKSLS LSPGK 15 Variant Fc A3A-184EY DKTHTCPPCPAPELLGGPSVFLFPPKPKDTLMISRTP VETCVVVDVSHEDPEVKFNWYVDGVEVHNAKTKPREE Q<b>W</b>NSTYRVVSVLTVLHQDWLNGKEYKCKVSNKALPAP IE<b>N</b>TISKAKGQPREPQVYTL<b>S</b>PSRDELTKNQVSLTCL VKGFYPSDIVVEWESNGQPENNYKTTPP<b>M</b>LDSDGSFF LYSKLTVDKSRWQQGNVFSCSVMHEALHYHYTQKSLS LSPGK 16 Variant Fc A3A-184EY MRWSWIFLLLLSITSANADKTHTCPPCPAPELLGGPS with signal peptide VFLFPPKPKDTLMISRTPEVTCVVVDVSHEDPEVKFN (i.e. fusion of SEQ ID WYVDGVEVHNAKTKPREEQ<b>W</b>NSTYRVVSVLTVLHQDW NO: 12 with SEQ ID LNGKEYKCKVSNKALPAPIE<b>N</b>TISKAKGQPREPQVYT NO: 15) L<b>S</b>PSRDELTKNQVSLTCLVKGFYPSDIVVEWESNGQP ENNYKTTPP<b>M</b>LDSDGSFFLYSKLTVDKSRWQQGNVFS CSVMHEALH<b>Y</b>HYTQKSLSLSPGK The present invention will be better understood upon reading the following examples. BRIEF DESCRIPTION OF THE DRAWINGS B) Inhibition of ADCC: C) Inhibition Activity of the CDC: FIG. 1 : Production of variant A3A-184AY in goat milk and mouse using the vector Bc451 A) The beta casein vector, Bc451, was digested with XhoI. In the vector Bc451, the NotI-NotI fragment is the prokaryotic fragment. The NotI fragment (15730)-XhoI is the 3′ genomic sequence that contains the polyA signal. The BamHI-XhoI fragment is the promoter region of beta casein. B) The Sall fragment containing the Fc A3A-184AY variant coding region (i.e. FC3179 A3A-184AY 884 bp) was inserted into the vector, to generate the BC3180 FC A3A-184AY (C) gene construct. D) The DNA fragment for microinjection was then isolated from the prokaryotic vector. To do this, BC3180 was digested with NotI and NruI. The 16.4 kb fragment, containing the Fc gene (encoding the A3A-184AY variant) under the control of the beta casein promoter, was then purified by gel elution. FIG. 2 : Results of Tests in an Orentive Model of Arthritis Induced by K/B×N Mouse Serum Transfer The disease was induced by transferring 10 ml of K/B×N mouse serum intravenously on D0 to C57/BI/6J mice. The test molecules were administered once intraperitoneally at D0, 2h before injection of the K/B×N mouse serum. The clinical score is obtained by summing the four-leg index: 0=normal, 1=swelling of a joint, 2=swelling of more than one joint, and 3=severe swelling of the entire joint (arbitrary units). FIG. 3 : Results of tests in a therapeutic model of arthritis induced by the transfer of K/B×N mouse serum The disease was induced by transferring 10 ml of K/B×N mouse serum intravenously on D0 to C57/BI/6J mice. 
The test molecules were administered once intraperitoneally at D0, 72 hours after injection of K/B×N mouse serum (indicated by dotted lines). The clinical score is obtained by summing the four-leg index: 0=normal, 1=swelling of a joint, 2=swelling of more than one joint, and 3=severe swelling of the entire joint (arbitrary units). FIG. 4 : Test results of binding Fc and IqIV to sanitary cells IgIV or Fc variants according to the invention labeled with Alexa were incubated at 65 nM (10 μg/ml for Fc in 2% CSF PBS) with target cells for 20 minutes on ice. After 2 washes in 2% CSF, the cells were suspended in 500 ml Isoflow prior to flow cytometric analysis. The results are as follows: A) B cells labeled with anti-CD19 (“% positive B cells”); B) NK cells labeled with anti-CD56 (“% positive NK cells”); C) monocytes labeled with anti-CD14, in the presence of IgIV (“% positive cells+IgIV”); D) CD16+monocytes labeled with anti-CD14 and with the anti-CD16 3G8 antibody, in the presence of IgIV (“% positive cells+IgIV”); E) Neutrophils labeled with anti-CD15, in the presence of IgIV (“% positive cells+IgIV”); F) NK cells labeled with anti-CD56, in the presence of IgG or Fc WT (“% cell positive”). FIG. 5 : Results of ADCC tests, activation of Jurkat CD64 and CDC cells 6 6 A) Inhibition of activation of Jurkat CD64 cells: Raji cells (50 ml at 5×10cells/nil) were mixed with Rituxan (50 ml to 2m9/ml), Jurkat cells expressing human CD64 (Jurkat-H-CD64) (25 ml at 5×10cells/ml), PMA (50 ml to 40 ng/ml), then incubated with IgIV or the variant according to the invention (RFC A3A-184AY) at 1950 nM. After a night of incubation, the plates were centrifuged (125 g for 1 minute), and IL2 contained in the supernatant was evaluated by ELISA. The results were expressed as a percentage with respect to IgIV, according to the following formula: (IL-2 IgIV/IL-2 of the sample)×100. 7 7 Effector cells (mononuclear cells) (25 ml at 8×10cells/nil) and Rh-positive RBCs (25 ml at 4×10cells/ml final) were incubated with different concentrations (0 to 75 ng/ml) of anti-Rh-antibody D, with an Effector/Target ratio of 2/1. After 16 hours of incubation, lysis was estimated by quantifying the hemoglobin released into the supernatant using a specific substrate (DAF). The results are expressed as a percentage of specific lysis as a function of the amount of antibody. Inhibition of ADCC was induced by IgG or Fc variant according to the invention (RFC A3A-184AY) added at 33 nM. The results are expressed in percent, wherein 100% and 0% are the values obtained with IgIV at 650 nM and 0 nM respectively according to the following formula: [(ADCC with 33 nM sample−ADCC without IVIg)/(ADCC with IgIV at 33 nM—ADCC without IVIg)×100]. Raji cells were incubated for 30 minutes with a final concentration of 50 ng/ml of rituximab. A solution of young rabbit serum diluted 1/10 and previously incubated with the variant Fc according to the invention (rFc A3A-184AY) or IgIV (vol/vol) for 1 h at 37° C. was added. After 1 hour of incubation at 37° C., the plates were centrifuged (125 g for 1 minute) and the CDC was estimated by measuring the intracellular LDH released in the culture medium. The results were expressed as percent inhibition and compared to IgG and negative control (Fc without Fc function, i.e. rFc neg), 100% corresponding to a complete inhibition of lytic activity and 0% to the control value obtained without Fc or IgIV. FIG. 
6 : Results of the Cell Binding Tests Natural Killer (NK) cells labeled with anti-CD56; Monocytes labeled with anti-CD14; CD16+monocytes labeled with anti-CD14 and anti-CD16 3G8 antibody; Neutrophils labeled with anti-CD15. IgIV, Fc-Rec (wild-type Fc), Fc MST-HN or Fc variants according to the invention (A3A-184AY CHO, A3A-184EY CHO) labeled with Alexa-Fluor® were incubated at 65 nM (10 μg/ml) for Fc in 2% CSF (Colony Stimulating Factor) PBS with target cells for 20 minutes on ice. After 2 washes in 2% CSF PBS, the cells were suspended in 500 μl of Isoflow before flow cytometric analysis The tests are performed on the following target cells: FIG. 7 : Results of tests in an in vivo model of idiopathic thrombocytopenic purpura (ITP) The disease was induced in mice expressing humanized FcRn by injecting an anti-platelet antibody 6A6-hlgG1 (0.3 pg/g body weight) intravenously to deplete platelets, also called thrombocytes, from mice. Negative Control (“CTL PBS”), IgIV (1000 mg/kg), Fc-Rec (Fc-wild) fragment (380 and 750 mg/kg), Fc MST-HN fragment (190 mg/kg) and the variant of the invention Fc A3A-184AY CHO (190 mg/kg and 380 mg/kg), were administered intraperitoneally 2 hours before platelet depletion. Platelet count was determined with an Advia Hematology system (Bayer). The number of platelets before the injection of antibodies was set at 100%. DESCRIPTION OF THE PREFERRED EMBODIMENTS Example 1: Preparation of Variants (Mutated Fc Fragments) According to the Invention Produced in the Milk of Transgenic Animals and Characterization of Said Variants Optimization of the Nucleotide Sequence: Expression Vector: Production in the Mouse: Expression in Goats: Example 2: Preparation of Variants (Mutated Fc Fragments) According to the Invention, Produced in HEK Cells and Characterization of Said Variants Example 3: Preparation of Variants (Mutated Fc Fragments) According to the Invention, Produced in CHO Cells Example 4: Binding Tests of FcRn, CD16aH, CD16aV, CD64 and CD32a Variants Produced in CHO Cells and in Transgenic Goat Milk Example 5: ADCC Inhibition and Jurkat Cell Activation Tests of Variants Produced in CHO Cells and in Transgenic Goat Milk Example 6: Tests of Binding Fc Variant to Blood Cells Example 7: In Vivo Model Tests of Idiopathic Thrombocytopenic Purpura (ITP) I. Materials and Methods Principle: An Fc fragment according to the invention may be produced in the milk of transgenic animals, by placing the coding sequence of the Fc fragment in a milk-specific expression vector. The vector may be introduced into the genome of a transgenic mouse or goat by microinjection. Following the screening and identification of an animal with the transgene, the females are reproduced. Following the parturition, milking the females allows recovery of their milk, in which the Fc could be secreted following the expression of the specific promoter of the milk. Protein Sequence of Fc Variant A3A-184AY (K334N/P352S/A378V/V397M/N434Y): (SEQ ID NO: 11) DKTHTCPPCPAPELLGGPSVFLFPPKPKDTLMISRTPEVTCVVVDVSHED PEVKFNWYVDGVEVHNAKTKPREEQYNSTYRVVSVLTVLHQDWLNGKEYK CKVSNKALPAPIE<b>N</b>TISKAKGQPREPQVYTLSP<b>S</b>RDELTKNQVSLTCLVK GFYPSDIVVEWESNGQPENNYKTTPP<b>M</b>LDSDGSFFLYSKLTVDKSRWQQG NVFSCSVMHEALH<b>Y</b>HYTQKSLSLSPGK A signal peptide (MRWSWIFLLLLSITSANA, SEQ ID NO: 12) is bound to the N-terminus of the protein sequence, so as to obtain the sequence SEQ ID NO: 13. It allows the secretion of the protein in milk, once expressed. 
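To make the relationship between these sequences concrete, the following minimal Python sketch simply concatenates the signal peptide (SEQ ID NO: 12) with the mature Fc variant sequence (SEQ ID NO: 11) to reproduce the precursor form (SEQ ID NO: 13); the Fc sequence is truncated here for brevity and should be read as the full SEQ ID NO: 11 listed above.

# Fusion of the signal peptide (SEQ ID NO: 12) with the Fc variant A3A-184AY (SEQ ID NO: 11)
# to give the secreted precursor (SEQ ID NO: 13), as described in the text.
SIGNAL_PEPTIDE = "MRWSWIFLLLLSITSANA"  # SEQ ID NO: 12 (18 residues)
FC_A3A_184AY = "DKTHTCPPCPAPELLGGPSVFLFPPKPKDTLMISRTP"  # first residues of SEQ ID NO: 11 (truncated)

precursor = SIGNAL_PEPTIDE + FC_A3A_184AY  # corresponds to SEQ ID NO: 13

# The signal peptide is cleaved upon secretion, leaving the mature Fc variant in the milk.
assert precursor.startswith(SIGNAL_PEPTIDE)
assert precursor[len(SIGNAL_PEPTIDE):] == FC_A3A_184AY
print(f"precursor length: {len(precursor)} residues")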
The nucleotide sequence has been optimized for expression in the goat mammary gland. For this, the sequence was optimized for the species (Bos taurus) by the algorithm of a synthetic gene provider (such as GeneArt). The goat beta casein expression vector (Bc451) was used for the production of the A3A-184AY variant in mouse and goat milk (see FIG. 1). The beta casein vector, Bc451, was digested with XhoI (FIG. 1A). The SalI fragment containing the Fc A3A-184AY variant coding region was inserted to generate the BC3180 FC A3A-184AY gene construct (FIGS. 1B and 1C). The DNA fragment for microinjection was then isolated from the prokaryotic vector. BC3180 was digested with NotI and NruI (FIG. 1D). The released 16.4 kb fragment containing the Fc gene under the control of the beta casein promoter was then purified by gel elution. This DNA was then used in the microinjection stage. The DNA fragment was inserted by microinjection into preimplantation mouse embryos. The embryos were then implanted in pseudopregnant females. The offspring that were born were screened for the presence of the transgene by PCR analysis. The DNA fragment prepared for microinjection may also be used for the production of the Fc variant A3A-184AY in goat's milk. I. Materials and Methods for Production Each mutation of interest in the Fc fragment of sequence SEQ ID NO: 14 was inserted by overlap PCR using two sets of primers designed to integrate the targeted mutation(s) with the codon(s) encoding the desired amino acid. Advantageously, when the mutations to be inserted are close together in the Fc sequence, they are added via the same oligonucleotide. The fragments thus obtained by PCR were combined and the resulting fragment was amplified by PCR using standard protocols. The PCR product was purified on 1% (w/v) agarose gel, digested with the appropriate restriction enzymes and cloned. The recombinant Fc fragment was produced by transient transfection (by lipofection) in HEK293 cells (293-F cells, Invitrogen FreeStyle) in F17 medium supplemented with L-glutamine, using the pCEP4 vector. After 8 days of culture, the supernatant is clarified by centrifugation and filtered through a 0.2 μm filter. The Fc fragment is then purified on HiTrap protein A, and elution is effected with 25 mM citrate buffer pH 3.0, neutralized and dialyzed in PBS prior to sterilization by filtration (0.2 μm). II. Octet® Binding Tests (BLI Technology “Bio-Layer Interferometry”, Device: Octet RED96, ForteBio, Pall) Protocols: Human FcRn Binding (hFcRn): The biotinylated hFcRn receptor is immobilized on Streptavidin Biosensors, diluted to 0.7 μg/ml in run buffer (0.1 M phosphate buffer, 150 mM NaCl, 0.05% Tween 20, pH 6). The variants according to the invention, WT and IgIV, were tested at 200, 100, 50, 25, 12.5, 6.25, 3.125 and 0 nM in run buffer (200 nM=10 μg/ml for Fc). Design of the test: Baseline 1: 120 s in run buffer; Loading: 300 s (the receptor is loaded onto the biosensors); Baseline 2: 60 s in run buffer; Association: 60 s (the samples, Fc or IVIg, are added to the biosensors loaded with hFcRn); Dissociation: 30 s in run buffer; Regeneration: 120 s in regeneration buffer (0.1 M phosphate buffer, 150 mM NaCl, 0.05% Tween 20, pH 7.8). Results Interpretation: The association and dissociation curves (first 10 s) are used to calculate the kinetic constants of association (kon) and dissociation (koff) using a 1/1 association model. KD (nM) is then calculated as koff/kon.
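For clarity, the final step above is simply the ratio of the fitted rate constants; the numerical values in the short sketch below are hypothetical and serve only to illustrate the calculation and the conversion to nM.

# Minimal sketch of the KD calculation from fitted rate constants (1:1 binding model).
# The kon and koff values below are hypothetical, for illustration only.
kon = 2.0e5    # association rate constant, M^-1 s^-1 (fitted on the association phase)
koff = 1.5e-3  # dissociation rate constant, s^-1 (fitted on the dissociation phase)

kd_molar = koff / kon          # equilibrium dissociation constant, in M
kd_nanomolar = kd_molar * 1e9  # conversion from M to nM

print(f"KD = {kd_nanomolar:.1f} nM")  # prints: KD = 7.5 nM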
Binding to the hCD16aV and hCD32aH Receptors: The hCD16aV (R&D Systems) or hCD32aH (PX Therapeutics) HisTag receptor is immobilized on anti-Penta-HIS Biosensors (HIS 1K), diluted to 1 μg/ml in kinetic buffer (Pall). The Fc variants according to the invention, WT and IgIV, were tested at 1000, 500, 250, 125, 62.5, 31.25, 15 and 0 nM in kinetic buffer. Design of the test (loading before each sample; all stages are carried out in kinetic buffer (Pall)): Baseline 1: 60 s; Loading: 400 s; Baseline 2: 60 s; Association: 60 s; Dissociation: 30 s; Regeneration: 5 s in regeneration buffer (Glycine 10 mM pH 1.5; neutralization: PBS). Results Interpretation: The association and dissociation curves (first 5 s) are used to calculate the kinetic constants of association (kon) and dissociation (koff) using a 1/1 association model. KD (nM) is then calculated as koff/kon. Results: The results are shown in Table 1 below:
TABLE 1
Molecule | KD hCD16aV (nM) | SD | KD hFcRn (nM) | SD | KD hCD32aH (nM) | SD
IVIg | 653.8 | 4.0 | 34.4 | 1.94 | 438.2 | 114.3
Fc-WT (HEK) | 504.3 | 75.0 | 36.5 | 8.2 | 659.3 | 203.1
A3A-184AY (HEK) | 132.0 | 14.1 | 7.8 | 0 | 313.0 | 29.7
SD = standard deviation
The results show that the variant Fc A3A-184AY (HEK) according to the invention exhibits both an increased affinity for the hFcRn receptor and an increased affinity for the FcγRIIIa (CD16a) and FcγRIIa (CD32a) receptors, compared to the non-mutated parent Fc (Fc-WT) and also compared to IVIg. III. Assays in an Arthritis Model Induced by K/BxN Mouse Serum Transfer Protocol: The K/BxN model was generated by crossing mice transgenic for the KRN T-cell receptor with the NOD mouse strain. K/BxN F1 mice spontaneously develop a disease at 3 to 5 weeks of age that shares many clinical features with human rheumatoid arthritis. The disease was induced by transferring 10 ml of K/BxN mouse serum intravenously on D0 to C57BL/6J mice. The molecules tested were administered once intraperitoneally at D0, 2 h before or 72 hours after the injection of K/BxN mouse serum. Mice were monitored daily for signs and symptoms of arthritis to assess incidence and severity by summing the four-leg index: 0=normal, 1=swelling of a joint, 2=swelling of more than one joint, and 3=severe swelling of the entire joint. Results: Mice given K/BxN serum developed arthritis in the joints. The disease was characterized by an increase in ankle size, leading to an increase in the clinical score. These mice showed a significant increase in clinical score and ankle thickness compared to control mice treated with saline. 1. Preventive Model: Administered 2 h before the K/BxN mouse serum injection, treatment with 750 mg/kg of the wild-type Fc (Fc-WT) fragment significantly reduced the clinical score compared to the group receiving K/BxN mouse serum alone. Treatment with the Fc variant A3A-184AY (HEK) according to the invention significantly reduced the clinical score in a manner similar to the Fc-WT fragment, but at a dose 15 times lower (50 mg/kg) (FIG. 2). 2. Therapeutic Model: 72 hours after the injection of K/BxN mouse serum, IVIg administered at 2 g/kg did not significantly reduce the clinical score compared to the group treated with K/BxN mouse serum. However, treatment with the Fc-WT fragment at 750 mg/kg (molecular dose equivalent to 2 g/kg IVIg) significantly reduced the clinical score compared to the group treated with K/BxN mouse serum. In addition, treatment with the Fc variant A3A-184AY (HEK) according to the invention significantly reduced the clinical score similarly to the Fc-WT fragment, but at a dose 4-fold lower (190 mg/kg) (FIG. 3). IV.
In Vitro Cell Tests Protocols: Evaluation of the Binding of Fc and Ig IV Fragments to Blood Cells: IgIV or Fc variants according to the invention labeled with Alexa were incubated at 65 nM (10 μg/ml for Fc in 2% CSF PBS) with target cells for 20 minutes on ice. After 2 washes in 2% CSF, the cells were suspended in 500 ml Isoflow prior to flow cytometric analysis. B cells, NK cells, monocytes and neutrophils were specifically labeled with anti-CD19, anti-CD56, anti-CD14 and anti-CD15 respectively. The FcγRIII receptor (CD16) was demonstrated using the anti-CD16 3G8 antibody. Inhibition of ADCC: To mimic the lysis of red blood cells observed in idiopathic thrombocytopenic purpura (ITP), involving the autoantibodies of the patient with ITP, an effector cell-mediated red cell lysis in the presence of an anti-Rhesus D (RhD) monoclonal anti-body was conducted, and the ability of different amounts of polyvalent immunoglobulins (IVIg) or mutated or non-mutated recombinant Fc fragments, to inhibit this lysis, for example by competition with anti-RhD for fixation of Fc receptors on the surface of the effector cell, were evaluated. 7 7 The cytotoxicity of anti-RhD antibodies has been studied by the technique of ADCC. Briefly, effector cells (mononuclear cells) (25 to 8×10cells/nil) and Rh-positive red cells (25 to 4×10cells/ml final) were incubated with different concentrations (0 to 75 ng/ml) of anti-RhD antibodies, with an Effector/Target ratio of 2/1. After 16 hours of incubation, lysis was estimated by quantifying the hemoglobin released into the supernatant using a specific substrate (DAF). The results are expressed as a percentage of specific lysis as a function of the amount of antibody. The inhibition of ADCC induced by IgIV or the Fc variant according to the invention (RFC A3A-184AY) added to 33 nM was evaluated. The results are expressed in percent, wherein 100% and 0% are the values obtained with IgIV at 650 nM and 0 nM respectively, according to the following formula: [(ADCC with 33 nM sample−ADCC without IVIg)/(ADCC with IgIV at 33 nM−ADCC without IVIg)×100]. Inhibition of Activation of Jurkat CD64 Cells: This test estimates the ability of the Fc variants according to the invention or IVIG (total IgG), to inhibit the secretion of IL2 by Jurkat cells expressing human CD64 (Jurkat-H-CD64) induced by the Raji cell line with Rituxan. 6 6 Briefly, Raji cells (50 ml at 5×10cells/nil) were mixed with Rituxan (50 ml at 2 mg/ml), Jurkat H-CD64 cells (25 ml at 5×10cells/ml, a phorbol ester (PMA, 50 ml at 40 ng/ml), then incubated with the IgIV or the Fc variant according to the invention at 1950 nM. After a night of incubation, the plates were centrifuged (125 g for 1 minute) and NL2 contained in the supernatant was evaluated by ELISA. The results were expressed as a percentage with respect to IgIV, according to the following formula: (IL-2 IgIV/IL-2 of the sample)×100. Inhibitory Activity of the CDC: This assay estimates the ability of the Fc variant according to the invention or IVIG to inhibit rituximab-mediated CDC activity on the Raji cell line in the presence of rabbit serum as a source of complement. Briefly, Raji cells were incubated for 30 minutes with a final concentration of 50 ng/ml of rituximab. A solution of young rabbit serum diluted 1/10 and previously incubated with the variant according to the invention or IgIV (vol/vol) for 1 h at 37° C., was added. 
After 1 hour of incubation at 37° C., the plates were centrifuged (125 g for 1 minute) and the CDC was estimated by measuring the intracellular LDH released in the culture medium. The results were expressed as percentage inhibition and compared to IVIG and negative control (Fc without Fc function), 100% corresponding to a complete inhibition of lytic activity and 0% to the control value obtained without Fc or IVIG. Results: FIGS. 4 and 5 The results are shown in . FIG. 5 FIG. 4 As shown in , the Fc variant according to the invention (A3A-184AY (HEK)) has a better inhibition of the activity of the Jurkat cells expressing CD64, of the ADCC and of the CDC, in comparison with the IVIg. These results show that a variant according to the invention such as A3A-184AY may be effective for the treatment of pathologies involving patient autoantibodies, in particular by blocking Fc receptors on the effector cells of the patient (see ). 2 5 The recombinant Fc fragment may be obtained from SEQ ID NO: 14 in the same manner as that described in Example 2. This mutated Fc fragment may be produced by transfection into CHO—S cells with the aid of lipofection such as Freestyle Max Reagent (Thermofisher) using a vector optimized for expression in this cell line. The CHO—S cells are cultured in CD FortiCHO medium+8 mM Glutamine, under conditions agitated at 135 rpm in a controlled atmosphere (8% CO) at 37° C. On the day before the day of transfection, the cells are seeded at a density of 6.10cells/ml. 6 5 On the day of transfection, the linearized DNA (50 μg) and 50 μl of transfection agent (TA) are pre-incubated separately in Opti-Pro SFM medium and then mixed and incubated for 20 minutes to allow the formation of the DNA/AT complex. The whole is then added to a cell preparation of 1.10cells/ml in a volume of 30 ml. After 48 hours of incubation, transfection agents are added (Neomycin 1 g/L and Methotrexate 200 nM) to the cells. The cell density and viability are determined every 3-4 days and the culture volumes adapted to maintain a cell density greater than 6.10cells/ml. When the viability is greater than 90%, the stable pools obtained are saved by cryostatic congelation and productions in agitated conditions are carried out in “Fed-batch” mode for 10 days with an addition of 4 g/l or 6 g/l of glucose during production. At the end of production, the cells and the supernatant are separated by centrifugation. The cells are removed and the supernatant is harvested, concentrated and filtered at 0.22 μm. The Fc fragment is then purified by affinity chromatography on a protein A resin (HiTrap protein A, GE Healthcare). After capture on the balanced resin PBS buffer, the Fc fragment is eluted with 25 mM citrate buffer pH=3.0, followed by rapid pH neutralization with 1M Tris and then dialysed in PBS buffer before sterilization by filtration (0.2 pm). Variants of the invention A3A-184AY CHO (K334N/P352S/A378V/V397M/N434Y), A3A-184EY_CHO (Y296W/K334N/P352S/A378V/V397M/N434Y) produced in CHO cells according to the process given in example 3, A3A-184AY_TGg produced in the transgenic goat according to the process described in Example 1; The Fc MST-HN fragment containing the mutations M252Y/S254T/T256E/H433K/N434F, described in the literature as having an optimized binding only to the FcRn receptor (Ulrichts et al, JCI, 2018) was produced in HEK-293 cells. 
(293-F cells, InvitroGen freestyle); A wild-type Fc Fc-WT or Fc-Rec fragment obtained by digesting with papain an IgG1 produced in transgenic goat milk; IVIG Fc receptor binding assays are performed with the following molecules: Human FcRn Binding (hFcRn): FcRn binding is studied by competitive assay using A488 labeled Rituxan (Rituxan-A488) and Jurkat cells expressing the FcRn receptor (Jurkat-FcRn). 5 The Jurkat-FcRn cells are seeded in a 96-well plate (V bottom) at a concentration of 2.10cells per well. The cells are then incubated for 20 minutes at 4° C. with the test molecules diluted in buffer at the following final concentrations: 167 μg/ml; 83 μg/ml; 42 μg/ml; 21 μg/ml; 10 μg/ml; 5 μg/ml; 3 μg/ml; 1 μg/ml; 0 μg/ml, and simultaneously with 25 μg/ml Rituxan-A488. The cells are then washed by adding 100 μl of PBS at pH 6 and centrifuged at 1700 rpm for 3 minutes at 4° C. The supernatant is then removed and 300 μl of cold PBS is added at pH 6. The binding of Rituxan-A488 to FcRn expressed by Jurkat-FcRn cells is evaluated by flow cytometry. The mean fluorescence intensity (MFI) observed are expressed as a percentage, wherein 100% is the value obtained with Rituxan-A488 alone, and 0% the value in the absence of Rituxan-A488. The molecular concentrations required to induce 50% inhibition of Rituxan-A488 binding to FcRn of Jurkat-FcRn cells are calculated using “Prism Software”. The results are shown in Table 2 below. TABLE 2 A3A- A3A- A3A- MST- 184AY_CHO 184EY_CHO 184AY_TGg HN Fc-WT IVIG Inhibition of 13 15 12 14 476 1356 binding to FcRn (IC 50%, nM) The results show that the Fc A3A-184AY CHO, Fc A3A-184EY CHO and A3A-184AY-TGg variants show increased Rituxan-A488 binding inhibition (×100 compared to IVIG). The variants of the invention show an FcRn binding affinity equivalent to that observed with the Fc MST-HN fragment described in the literature as optimized only for FcRn (Ulrichts et al, JCI, 2018). Binding to hCD64 and hCD16aH, hCD16aV, hCD32aH, hCD32aR Receptors: Binding to Human CD64 (hCD64) Human CD64 binding is studied by competitive assay using Rituxan-A488 and Jurkat cells expressing the CD64 receptor (Jurkat-CD64). 5 Jurkat-CD64 cells are seeded in a 96-well plate (V-bottom) at a concentration of 2.10cells per well. The cells are then incubated for 20 minutes at 4° C. with the test molecules diluted in the buffer with the final concentrations: 167 μg/ml; 83 μg/ml; 42 μg/ml; 21 μg/ml; 10 μg/ml; 5 μg/ml; 3 μg/ml; 1 μg/ml; 0 μg/ml, and simultaneously with 25 μg/ml Rituxan-A488. The cells are then washed by adding 1 μl of PBS at pH 6 and centrifuged at 1700 rpm for 3 minutes at 4° C. The supernatant is then removed and 300 μl of cold PBS is added at pH 6. The binding of Rituxan-A488 to CD64 expressed by Jurkat-CD64 cells is evaluated by flow cytometry. The mean fluorescence intensities (MFI) observed are expressed as a percentage, wherein 100% is the value obtained with Rituxan-A488 alone, and 0% is the value in the absence of rituxan-A488. The molecular concentrations required to induce 50% inhibition of Rituxan-A488 binding to CD64 of Jurkat-CD64 cells are calculated using “Prism Software”. Binding to CD32aH and CD32aR Human CD32 receptor binding is studied by competitive assay using Rituxan-A488 and HEK cells transfected with CD32aH and CD32aR (HEK-CD32) receptors. 5 The HEK-CD32 cells are seeded in a 96-well plate (V bottom) at a concentration of 2.10cells per well. The cells are then incubated for 20 minutes at 4° C. 
with the test molecules diluted in buffer at the following final concentrations: 333 μg/ml; 167 μg/ml, 83 μg/ml; 42 μg/ml; 21 μg/ml; 10 μg/ml; 5 μg/ml; 3 μg/ml; 1 μg/ml; 0 μg/ml, and simultaneously with 30 μg/ml Rituxan-A488. The cells are then washed by adding 100 μl of PBS at pH 6 and centrifuged at 1700 rpm for 3 minutes at 4° C. The supernatant is then removed and 300 μl of cold PBS is added at pH 6. The binding of Rituxan-A488 to CD32aH and CD32aR expressed by HEK-CD32 cells is evaluated by flow cytometry. The mean fluorescence intensities (MFI) observed are expressed as a percentage, wherein 100% is the value obtained with the Rituxan-A488 alone, and 0% is the value in the absence of Rituxan-A488. The molecular concentrations required to induce 50% inhibition of Rituxan-A488 binding to CD32aH and CD32aR of HEK-CD32 cells are calculated using “Prism Software”. Binding to hCD16aH The binding to human CD16aH is studied by competitive assay using a murine anti-CD16 3G8 antibody labeled with phycoerythrin (3G8-PE) and Jurkat cells transfected with the human CD16aH receptor (Jurkat-CD16aH). 5 The Jurkat-CD16aH cells are seeded in a 96-well plate (V bottom) at a concentration of 2.10cells per well. The cells are then incubated for 20 minutes at 4° C. with the test molecules diluted in buffer at the following final concentrations: 83 μg/ml; 42 μg/ml; 21 μg/ml; 10 μg/ml; 5 μg/ml; 3 μg/ml; 1 μg/ml; 0 μg/ml, and simultaneously with 0.5 μg/ml mAb 3G8-PE. The cells are then washed by adding 1 μl of PBS at pH 6 and centrifuged at 1700 rpm for 3 minutes at 4° C. The supernatant is then removed and 300 μl of cold PBS is added at pH 6. The binding of mAb 3G8-PE to CD16aH expressed by Jurkat-CD16aH cells is evaluated by flow cytometry. The average fluorescence intensities (MFI) observed are expressed as a percentage, wherein 100% is the value obtained with the mAb 3G8-PE alone, and 0% is the value in the absence of mAb 3G8-PE. The molecular concentrations required to induce 50% inhibition of mAb 3G8-PE binding to CD16aH of Jurkat-CD16aH cells, are calculated using “Prism Software”. The results are shown in Table 3 below. TABLE 3 A3A- A3A- A3A- MST- 184AY_CHO 184EY_CHO 184AY_TGg HN Fc-WT IVIG Inhibition of 262 123 105 >2170 282 1684 binding to the CD16a-F (IC 50%, nM) Inhibition of 135 147 170 >2170 >2170 671 binding to the CD32a-H (IC 50%, nM) Inhibition of 176 132 Not >2170 >2170 1308 binding to the determined CD32a-R (IC 50%, nM) Inhibition of 57 55 59 >2170 >2170 761 binding to the CD32b (IC 50%, nM) Inhibition of 84 70 87 494 176 880 binding to the CD64 (IC 50%, nM) The results show that the A3A-184AY CHO Fc, A3A-184EY CHO Fc and A3A-184AY_TGg variants have an increased affinity for the FcγRIIIa (CD16a), FcγRI (CD64) and FcγRIIa (CD32a) receptors, compared to the Fc non mutated (Fc-WT) but also compared to IVIG. The mutants of the invention show a very increased affinity for FcγRIIIa (CD16a), FcγRI (CD64) and FcγRIIa (CD32a) receptors compared to MST-HN. Binding to Human CD16aV: HisTag hCD16aV (R&D System) receptor is immobilized on anti-Penta-HIS Biosensors (HIS 1K), diluted to 1 μg/ml in kinetic buffer (PaII). The molecules were tested at concentrations of 1000, 500, 250, 125, 62.5, 31, 25, 15 and 0 nM in kinetic buffer. Loading Before Each Sample Design of the Test: All the Steps are Realized in Kinetic Buffer (PaII) Baseline 1×60 s Loading 400 s Baseline 2×60 s Association 60 s Dissociation 30 s Regeneration 5 s in regeneration buffer (Glycine 10 mM pH 1.5/Neutralization: PBS). 
Results Interpretation: The association and dissociation curves (first 5 s) are used to calculate the kinetic constants of association (kon) and dissociation (koff) using a 1/1 association model. KD (nM) is then calculated as koff/kon. The results are shown in Table 4 below.
TABLE 4
Molecule | KD hCD16aV (nM) | SD
A3A-184AY_CHO | 80.3 | 18.1
A3A-184EY_CHO | 59.3 | 7.7
A3A-184AY_TGg | 51.2 | 10.7
MST-HN | 268.2 | 83.6
Fc-WT | 314.1 | 72.7
IVIG | 339.0 | 103.9
SD: standard deviation
The results show that the Fc A3A-184AY CHO, Fc A3A-184EY CHO and A3A-184AY_TGg variants show increased binding to the human FcγRIIIa-V receptor (CD16a-V), compared to the non-mutated Fc (Fc-WT) but also compared to IVIg and to the Fc fragment MST-HN containing the M252Y/S254T/T256E/H433K/N434F mutations. The ADCC inhibition and Jurkat cell activation tests are performed with the following molecules: the variants of the invention A3A-184AY_CHO (K334N/P352S/A378V/V397M/N434Y) and A3A-184EY_CHO (Y296W/K334N/P352S/A378V/V397M/N434Y), produced in CHO cells according to the process given in Example 3; the Fc MST-HN fragment containing the M252Y/S254T/T256E/H433K/N434F mutations, described in the literature as having a binding optimized only to the FcRn receptor (Ulrichts et al, JCI, 2018), produced in HEK-293 cells (293-F cells, Invitrogen FreeStyle); a wild-type Fc “Fc-Rec” or “Fc-WT” fragment, obtained by digesting with papain an IgG1 produced in transgenic goat's milk; and IgIV. ADCC Inhibition Test: To mimic the lysis of red blood cells observed in idiopathic thrombocytopenic purpura (ITP), involving the autoantibodies of the patient with ITP, an effector cell-mediated red cell lysis in the presence of an anti-human Rhesus D (RhD) monoclonal antibody was conducted, and the ability of different amounts of polyvalent immunoglobulins (IVIg) or of mutated or non-mutated recombinant Fc fragments to inhibit this lysis, for example by competition with the anti-RhD for binding to Fc receptors on the surface of the effector cells, was evaluated. The cytotoxicity of anti-RhD antibodies has been studied by the technique of ADCC. Briefly, effector cells (mononuclear cells) (25 μl at 8×10⁷ cells/ml) and Rh-positive red cells (25 μl at 4×10⁷ cells/ml final) were incubated with different concentrations (0 to 75 ng/ml) of anti-RhD antibodies, with an Effector/Target ratio of 2/1. After 16 hours of incubation, lysis was estimated by quantifying the hemoglobin released into the supernatant using a specific substrate (DAF). The results are expressed as a percentage of specific lysis as a function of the amount of antibody. The inhibition of ADCC induced by the molecules tested (IVIg, MST-HN, Fc-WT, A3A-184AY_CHO, A3A-184EY_CHO) was evaluated at concentrations of 500, 50, 5 and 0.5 μg/ml for MST-HN, Fc-WT, A3A-184AY_CHO and A3A-184EY_CHO, and of 1500, 150, 15 and 1.5 μg/ml for IgIV. The molecule concentrations required to induce 25% or 50% inhibition were calculated with “Prism Software”. The results are shown in Table 5 below.
TABLE 5
Parameter | A3A-184AY_CHO | A3A-184EY_CHO | MST-HN | Fc-WT | IVIg
Inhibition of the lysis of the red blood cells mediated by the anti-D AD1 (IC 25%, nM) | 13.5 | 7.6 | 190.2 | 82 | 59.6
Inhibition of the lysis of the red blood cells mediated by the anti-D AD1 (IC 50%, nM) | 97 | 56 | 441 | 1500 | 351
The results show that the Fc variants A3A-184AY CHO and A3A-184EY CHO show an increased inhibition of the lysis of red blood cells by the anti-Rhesus D antibody, compared to the non-mutated Fc (Fc-WT) but also compared with IVIg.
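The IC 25% and IC 50% values above were calculated with Prism. As a general, non-authoritative illustration of how such values can be estimated from a normalized dose-response curve, the following sketch fits a four-parameter logistic model with SciPy; the concentrations and lysis percentages are hypothetical and do not correspond to the data in Table 5.

# Minimal sketch (not the GraphPad Prism procedure used in the text):
# fit a four-parameter logistic curve to hypothetical dose-response data
# and read off the concentration giving 50% inhibition (IC50).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: response as a function of inhibitor concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical data: concentrations in nM and percent specific lysis (100% = no inhibitor).
conc = np.array([0.5, 5.0, 50.0, 500.0, 1950.0])
lysis_percent = np.array([98.0, 85.0, 48.0, 12.0, 5.0])

params, _ = curve_fit(four_pl, conc, lysis_percent, p0=[0.0, 100.0, 50.0, 1.0])
bottom, top, ic50, hill = params
print(f"Estimated IC50 ≈ {ic50:.1f} nM (Hill slope {hill:.2f})")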
In addition, the inhibition achieved by A3A-184AY_CHO or A3A-184EY_CHO is greatly increased compared to the Fc fragment MST-HN containing the M252Y/S254T/T256E/H433K/N434F mutations.

Inhibition of Activation of Jurkat CD64 Cells: This test estimates the ability of the Fc variants according to the invention, or of IVIG (total IgG), to inhibit the secretion of IL-2 by Jurkat cells expressing human CD64 (Jurkat-H-CD64) induced by the Raji cell line in the presence of Rituxan. Briefly, Raji cells (50 μl at 5×10⁶ cells/ml) were mixed with Rituxan (50 μl at 2 mg/ml), Jurkat H-CD64 cells (25 μl at 5×10⁶ cells/ml) and a phorbol ester (PMA, 50 μl at 40 ng/ml), then incubated with IVIG or an Fc variant according to the invention at 1950 nM. After overnight incubation, the plates were centrifuged (125 g for 1 minute) and the IL-2 contained in the supernatant was quantified by ELISA. Inhibition of IL-2 secretion was induced by IVIG, Fc-WT, MST-HN or the Fc variants according to the invention (A3A-184AY_CHO or A3A-184EY_CHO), added at 50 and 100 μg/ml for the Fc-WT and MST-HN fragments and the Fc variants of the invention, and at 150 and 300 μg/ml for IVIG. The molecule concentrations required to induce 25% or 50% inhibition were calculated with Prism software. The results are shown in Table 6 below.

TABLE 6
                                                                                  A3A-184AY_CHO   A3A-184EY_CHO   MST-HN   Fc-WT    IVIG
Inhibition of IL-2 secretion by Jurkat cells transfected with CD64 (IC 25%, nM)        448             442          1455      926    1106
Inhibition of IL-2 secretion by Jurkat cells transfected with CD64 (IC 50%, nM)        600             600         <1950    <1950   <1950

The results show that the A3A-184AY_CHO and A3A-184EY_CHO Fc variants show an increased inhibition of IL-2 secretion compared to the non-mutated Fc (Fc-WT), but also compared to IVIG. In addition, the inhibition achieved by the Fc variants A3A-184AY_CHO or A3A-184EY_CHO is greatly increased compared to the MST-HN Fc fragment containing the M252Y/S254T/T256E/H433K/N434F mutations.

The blood cell binding tests are performed with the following molecules:
- Variants of the invention A3A-184AY_CHO (K334N/P352S/A378V/V397M/N434Y) and A3A-184EY_CHO (Y296W/K334N/P352S/A378V/V397M/N434Y), produced in CHO cells according to the process given in Example 3;
- A3A-184AY_TGg, produced in the transgenic goat according to the process described in Example 1;
- The Fc MST-HN fragment containing the M252Y/S254T/T256E/H433K/N434F mutations, described in the literature as having binding optimized only towards the FcRn receptor (Ulrichts et al., JCI, 2018), produced in HEK-293 cells (293-F cells, FreeStyle, Invitrogen);
- A wild-type Fc fragment ("Fc-Rec" or "Fc-WT"), obtained by papain digestion of an IgG1 produced in transgenic goat's milk;
- IVIG.

The molecules, labeled with an Alexa Fluor® marker (a highly fluorescent label), were incubated at 65 nM (10 μg/ml for Fc, in PBS with 2% FCS) with the target cells for 20 minutes on ice. The tests are performed on the following cells: Natural Killer (NK) cells labeled with anti-CD56 ("% positive NK cells"); monocytes labeled with anti-CD14 ("% positive cells"); CD16+ monocytes labeled with anti-CD14 and with the anti-CD16 3G8 antibody ("% positive cells"); and neutrophils labeled with anti-CD15 ("% positive cells"). After two washes in 2% FCS, the cells were resuspended in 500 μl Isoflow prior to flow cytometric analysis. The FcγRIII receptor (CD16) was detected using the anti-CD16 3G8 antibody. The results are shown in FIG. 6.
The results show that the Fc variants A3A-184AY_CHO, A3A-184EY_CHO and A3A-184AY_TGg, whatever the mode of production, offer increased binding compared to the non-mutated Fc (Fc-Rec), but also compared to IVIG. In addition, the binding of A3A-184AY or A3A-184EY is greatly increased compared to the MST-HN fragment for NK cells, CD16+ monocytes and neutrophils (see ).

The disease was induced in mice expressing a humanized FcRn (mFcRn−/− hFcRn Tg276 heterozygous, B6 genetic background; The Jackson Laboratory) by injecting an anti-platelet antibody, 6A6-hIgG1 (0.3 μg/g body weight), intravenously to deplete the platelets of the mice. A blood count (number of thrombocytes) is performed 24 hours before the injection of 6A6-hIgG1 and 4 h after induction of the disease. IVIG (1000 mg/kg), Fc-Rec (380 and 750 mg/kg), Fc MST-HN (190 mg/kg) and Fc A3A-184AY_CHO (190 mg/kg and 380 mg/kg) were administered intraperitoneally 2 hours before platelet depletion. The platelet count was determined with an Advia Hematology system (Bayer). The number of platelets before the injection of antibodies was set at 100%. The anti-platelet antibody 6A6-hIgG1 (0.3 μg/g) makes it possible to deplete 90% of the platelets. As shown in FIG. 7, the administration of the drug candidates 2 hours before platelet depletion can restore: 100% of the platelets for A3A-184AY_CHO at a dose of 380 mg/kg; 106% for A3A-184AY_CHO at a dose of 190 mg/kg; 90% for IVIG at a dose of 1000 mg/kg; 64% for Fc-WT at a dose of 750 mg/kg; 75% for Fc-WT at a dose of 380 mg/kg; and 61% for the MST-HN variant at a dose of 190 mg/kg.
Foreign Language Studies is an international journal published by the College of Foreign Languages and Literature at National Chengchi University (NCCU), which regularly publishes original studies in the fields of literature, linguistics, language teaching and cultural studies. We publish biannually, in June and December. We welcome submissions in traditional Chinese and any other languages; authors writing in simplified Chinese should convert their submissions to traditional Chinese. Authors are advised to consult the style sheet of this journal when preparing their manuscripts for submission. This journal also welcomes book reviews and research notes. Book reviews comment on academic books published either in Taiwan or abroad. Research notes discuss specific topics or research methods. Book reviews and research notes should not exceed 3,000 words. This journal also accepts invited submissions; each issue can accept at most one invited book review or research note. Submissions are welcome at all times and will be peer-reviewed. Authors must sign the 'Foreign Language Studies Contributor’s Declaration Form' and confirm that the manuscript has not been published elsewhere. The manuscript should not contain any self-identifying references. Submitted papers are reviewed by at least two independent reviewers. Both reviewers and author(s) remain anonymous throughout the double-blind review process. Please note that authors will receive the review result within six months and should return the revised version of an accepted manuscript after proofreading. All accepted submissions should be formatted by the author(s) according to our style sheet. Foreign Language Studies reserves the right to decide in which issue an accepted manuscript will be published; if the review process is delayed, a submitted paper may be published in the following issue. Authors should sign the 'Contributor’s Declaration Form' to grant a non-exclusive free license to Foreign Language Studies, and the soft copy of their published works will also be included in NCL Taiwan Periodical Literature, Airiti Library, HyRead Journal, Taiwan Academic Citation Index (TACI), UDP Taiwan Journals Search, and the YueDan Knowledge Base. Authors will receive both a hard and a soft copy of the issue containing their work. We do not offer any monetary rewards to authors. Please follow the instructions below for submission: 1. Submit the Word file of your manuscript to iPress. 2. Sign the 'Contributor’s Declaration Form' and upload a scan of the document to iPress as well. 3. Send NT$1,000 to the address below by registered mail (authors submitting book reviews or research notes, as well as professors of the College of Foreign Languages & Literature of NCCU, are exempt). Recipient: Foreign Language Studies, College of Foreign Languages & Literature, National Chengchi University (NCCU) Address: No. 64, Sec. 2, Zhinan Rd., Wenshan Dist., Taipei City, Taiwan (R.O.C.) These submission guidelines take effect once approved at an editorial board meeting, and the same applies to any amendment thereto.
https://flstudies.org/zh/submission-guidelines.html
Since 1981, the biennial International Symposium on Aviation Psychology (ISAP) has been convened for the purposes of (a) presenting the latest research on human performance problems and opportunities within aviation systems, (b) envisioning design solutions that best utilize human capabilities for creating safe and efficient aviation systems, and (c) bringing together scientists, research sponsors, and operators in an effort to bridge the gap between research and applications. Though rooted in the presentations of the 18th ISAP, held in 2015 in Dayton, Ohio, Advances in Aviation Psychology is not simply a collection of selected proceedings papers. Based upon the potential impact of emerging trends, current debates or enduring issues present in their work, select authors were invited to expand upon their work following the benefit of interactions at the symposium. Consequently the volume includes discussion of the most pressing research priorities and the latest scientific and technical priorities for addressing them. This book is the second in a series of volumes. The aim of each volume is not only to report the latest findings in aviation psychology but also to suggest new directions for advancing the field.
https://www.taylorfrancis.com/books/e/9781315565712
Sustainability is a key word in the environmental vocabulary informing how research projects in the social sciences are framed. This book provides a systematic and critical review of the key research methods used when studying sustainable strategies and outcomes. It is organized into the following parts:
Part I: Measuring the Immeasurable? The Challenges and Opportunities of Sustainability Research in the Social Sciences
- Chapter 1: Sustainability Research in the Social Sciences – Concepts, Methodologies and the Challenge of Interdisciplinarity
Part II: Researching Local Lives: Experiences of (Un)sustainability among Individuals, Households and Communities
- Chapter 2: Household Analysis: Researching ‘Green’ Lifestyles, a Survey Approach
- Chapter 3: Social Groups and Collective Decision-making: Focus Group Approaches
- Chapter 4: Local Lives and Conflict: Towards a Methodology of Dialogic Research
Part III: Comparative Research on the Sustainability Performance of Cities, Regions and Nation-states
- Chapter 5: Sustainable Development of What? Contesting Global Development Concepts and Measures
- Chapter 6: Biophysical Indicators of Society–Nature Interaction: Material and Energy Flow Analysis, Human Appropriation of Net Primary Production and the Ecological Footprint
- Chapter 7: Mapping for Sustainability: Environmental Noise and the City
Part IV: Time in Focus
- Chapter 8: Everyday Life in Transition: Biographical Research and Sustainability
- Chapter 9: Time and Sustainability
Part V: Current Developments and Future Trends
https://sk.sagepub.com/books/methods-of-sustainability-research-in-the-social-sciences
In July 2019, the Department of Health and Social Care (DHSC) published their prevention green paper, building on the Secretary of State’s prevention vision that was announced in November 2018. The paper contains a number of proposals to tackle the causes of preventable ill health in the England. He... Document Breastfeeding in the UK - position statement The UK has one of the lowest rates of breastfeeding in Europe. We strongly support national policies, practices and legislation that are conducive to breastfeeding, as well as promotion, advice and support to new mothers. Our messages and recommendations in this statement are specific to the UK. Resource Child Protection Evidence - Early years neglect Child Protection Evidence is a resource available for clinicians across the UK and internationally to inform clinical practice, child protection procedures and professional and expert opinion in the legal system. This systematic review evaluates the literature on early years neglect. Resource Establishing a correct diagnosis of Ehlers Danlos Syndrome hypermobility type (hEDS) in children and adolescents – position statement The purpose of this position statement is to clarify the current criteria in making a diagnosis of hEDS in children and adolescents and to provide advice to paediatric health professionals in relation to provision of appropriate rehabilitation. Document NHS Long Term Plan Implementation Framework - Priority areas and funding for children and young people On 27 June 2019, NHS England published the Long Term Plan Implementation Framework (the Framework). We summarise the key parts relevant for our members and for child health, including an outline of who should be involved in the creation of the five-year local-level plans as well as the broad funding... Resource Protection of time for paediatricians to take part in Research Ethics Committees Clinical research involving children is essential if we are to increase our understanding of childhood conditions and improve healthcare for children. Yet our 2015 survey of the paediatric workforce demonstrated that very few paediatricians have time allocated for research-related activities related... Resource Rights of migrant, refugee, stateless and undocumented children - our position (2019) As outlined in the United Nations’ Convention on the Rights of the Child, every child has the right to voice, protection, health and education. We urge all states to recognise and realise their obligation to children's access to healthcare. Resource Supporting professionals to have healthier weight conversations - consensus statement We agree to work collaboratively with various organisations to use collective resources and influence in supporting the public health workforce to have healthier weight conversations. Document UK General Election 2019 – what are the main parties promising for the NHS and child health? The RCPCH is politically neutral and therefore not partial to any political party, during an election campaign or any other time. We do, however, pay close attention to the policy commitments made in each manifesto. These give us an indication of what we can expect to see from the next Government. ... Resource Vitamin D for infants, children and young people - guidance Vitamin D helps the development of healthy, strong bones and to prevent rickets. RCPCH reiterates the importance of babies and children taking supplementation.
https://www.rcpch.ac.uk/resources/all-resources?amp%3Bf%5B1%5D=topic%3AChild%20protection&f%5B0%5D=topic%3AEthics&f%5B1%5D=topic%3AInfant&f%5B2%5D=topic%3AClinical%20guidelines%20and%20standards&f%5B3%5D=topic%3ASmoking%20and%20tobacco&f%5B4%5D=topic%3A7%20day%20services&f%5B5%5D=topic%3AAlcohol&f%5B6%5D=topic%3APoverty&f%5B7%5D=topic%3AVitamin%20D&f%5B8%5D=topic%3AHealth%20policy&f%5B9%5D=resource_type%3ASummary&f%5B10%5D=resource_type%3ASystematic%20review&f%5B11%5D=resource_type%3APosition%20statement
My Son Sanctuary used to be a place of worship and sacrifice for the Champa dynasty, as well as the site of the tombs of Champa kings and princes. My Son Sanctuary is considered one of the main Hindu temple centers in Southeast Asia and is the only heritage site of its kind in Vietnam, holding many fascinating historical legends and facts that might intrigue you. Let's find out more about this popular spot today! Where is My Son Sanctuary located? My Son Sanctuary is located in Duy Phu commune, Duy Xuyen district, Quang Nam province, about 69 kilometers from Da Nang and near the ancient town of Tra Kieu. It comprises many Champa temples set in a valley about 2 kilometers in diameter, surrounded by majestic hills. What should you know about its history? Why was My Son Sanctuary built? Influenced by Indian philosophical thought and by devotion to indigenous beliefs, and in order to pray for the kingdom's prosperity and consolidate royal power, the Champa dynasty built, from the very beginning of the nation, a sanctuary dedicated to its guardian deity. For this reason, when Indian culture was introduced into Champa, it became an important basis for the birth of a religious center at My Son, an architectural complex of temples in which the god Siva is revered and exalted. But why did the first Champa king, Bhadravarman, choose the My Son valley, and not another place, as the holy land on which to build a place of worship for the god-king Bhadresvara? It is believed that My Son was chosen because it lies in a closed, craggy valley, terrain that fits well with the requirements of Hinduism. Hinduism regards the divine space as a sacred space, which people can reach only through devotion to the gods. Influenced by Hindu thinking, the Cham also held the idea that becoming a monk is a pilgrimage into the mountains and forest, and the My Son valley met that demand of Brahmanical believers. The first construction and its evolution. Based on inscriptions, it is known that this site once had a first temple made of wood in the fourth century. More than two centuries later, the temple was destroyed in a major fire. In the early seventh century, King Sambhuvarman (reigned from 577 to 629) used bricks to rebuild the temple that still exists today. Later kings continued to remodel the old temples and build new ones to worship the gods. Brick is a good material for preserving the memories of a mysterious people, and the Chams' technique for building these towers remains a mystery to this day: no satisfactory answer has yet been found regarding the binding material, the brick-firing method or the construction technique. Although the towers and tombs date from the seventh to the fourteenth centuries, excavation results show that Cham kings were buried here from the fourth century. The total number of buildings is over 70. After the fall of the kingdom of Champa, My Son Sanctuary was lost in oblivion for centuries, until it was rediscovered in 1885. Ten years later, researchers began studying the site. My Son was also the cultural and religious center of the Champa dynasties and the burial place of powerful kings and monks. In addition, it was a place where ceremonies were performed to help the dynasties reach the gods.
Composition: the overall sanctuary. My Son Sanctuary consists of two hills facing each other in an east–west direction, at the crossing of a stream whose branches have become the natural boundaries dividing the site into four areas: A, B, C and D. This division not only suited spiritual considerations but also prevented the overall architecture from being fragmented. At the center of the holy land is a main tower (Kalan) surrounded by many smaller auxiliary towers. On each arch is a miniature tower; according to the documents, the main tower was the tallest of the holy towers in My Son, with a height of 24 meters. Inside the tower is a large Linga–Yoni set (now only the Yoni stone pedestal remains). The top of the tower has three tiers, each smaller than the one below, and the topmost is a sandstone tower. There are false doors on every tier, with figures standing under arches decorated with very sophisticated patterns. A thousand-year-old road in the My Son holy land. Besides the well-known old towers of My Son Sanctuary, there lies an ancient road that is worth our attention. The road was discovered by Indian experts during the excavation and restoration of the K tower in the core of the My Son World Cultural Heritage site. Up to 8 meters wide, the road is lined by two parallel walls and buried nearly 1 meter deep in the ground; according to recorded documents, this was the gateway through which the kings entered the central temples to make sacrifices to the gods and celebrate ceremonies. After excavation, the experts were astonished at the grandeur of this road, with its very skillfully built guide walls made of specific materials such as earthenware and special adhesive additives. This interesting discovery has enriched the long-standing historical, architectural, cultural and artistic values created by the ancients across the whole complex. The old route just revealed at the My Son World Cultural Heritage site starts at the foot of the K tower, also known as the Gate tower, and its end point cannot yet be determined. Outstanding architectural characteristics of the Sanctuary. This is a complex of more than 70 temple towers in many architectural styles, each typical of the sculpture of a historical period of the Kingdom of Champa. The architectural styles here are divided into six types: ancient style, Hoa Lai, Dong Duong, My Son, PoNagar and the style of the people of Binh Dinh. Most of the architectural works and sculptures in My Son are influenced by Hinduism. The Cham technique of carving directly onto brick rarely appears in the art of other regions. The towers are pyramidal, symbolizing the holy Meru peak, the abode of the Hindu gods. The gate of each tower usually faces east to receive the sunlight. The outer walls of the towers are often decorated with continuous S-shaped leaf patterns. The decorations are sandstone sculptures of Makara, Apsara dancers, lions, elephants, Garuda birds and praying figures. The highlight of Cham sculpture is that it conveys human vitality together with an inner life of both contemplation and anxiety. It is quite surprising that, to this day, no research has identified the binder that holds all the materials so tightly together or fixes the figures and shapes onto the towers. The temples in My Son were built with very durable materials, which still survive today. The effect of time on My Son Sanctuary. Through the devastation of war and time, what was once a magnificent temple complex is now partly in ruins. Many archaeologists have tried to protect the area.
For example, in 1937, French scientists restored almost all the temples here. But after the bombardment of 1969, the area of Tower A was almost completely destroyed. However, most of the small temples in areas B, C and D still exist, although many antiques, large statues and altars were taken away by the French during the war. They have since donated a large number of these antiques to the Vietnam Museum of History and the Museum of Cham Sculpture. One more thing: don't be surprised if you see Cham artifacts at the famous Louvre museum! Cultural heritage. My Son Sanctuary not only bears a distinctive architectural style, it is also imbued with Cham culture, expressed in gentle and impressive Cham dances. Visitors will have the chance to see Cham dancers carrying candles, water, flowers and betel nut on their heads as part of celebratory ceremonies. In addition, this place also hosts many unique art activities such as folk performing arts, fire-biting dances and water dances. It will not disappoint visitors. It is also a destination for photographers who love mystery, especially in such a holy place. Kate festival. If you come during the festival held at My Son Sanctuary, the trip will become much more enjoyable and complete. You will witness the opening of the festival, as Cham religious dignitaries perform rituals of prayer at the tower according to traditions handed down to the present day. Many other traditional rituals take place, such as ceremonies, the water procession and the Kate rituals. The festival is an opportunity for local people and tourists to learn more about the place, and it contributes to maintaining and protecting the pure artistic values of ancient Cham culture. What to do? You can take a walk around the sanctuary, soak up its beauty mixed with the grandeur of nature, and bring along a camera to capture the impressive architectural imprints of the ancient people. Tourists should not only visit the My Son relics but also a few other well-known places in the surrounding area, such as Tra Kieu Church, Our Lady of Tra Kieu and the Sa Huynh – Champa Cultural Museum. Why not enjoy the delicious and inexpensive specialties here, such as Tam Ky chicken rice, Quang Nam green eel porridge, nest cake and Quang noodles, with your family and friends? Trust me, you might want to come back a second time just for the food. Ticket prices: Foreigners: 150,000 VND (including entrance fee and service). Vietnamese: 100,000 VND (including entrance fee and service). Children under 15 years old are not charged. How to get there? To start the journey to explore one of the most mysterious places in Vietnam, you can easily get here by motorbike or car. From Hoi An, if traveling by motorbike, it only takes nearly 2 hours to arrive. The rental price for a motorbike is about 150,000 – 200,000 VND per day. A small note: you should fill the fuel tank before you go and remember to check the weather forecast. Time and war have devastated the monument, but what remains is still mysteriously beautiful, with unique architecture bearing the distinctive characteristics of the Champa people. It is this that attracts curious domestic and foreign tourists to visit and explore. And I think that is reason enough for you to spend some of your precious time paying a visit and discovering what the ancestors have left here, isn't it?
Alida A wanderlust who is into photography and has special interest in chasing clouds and admiring the sky. Hoping that someday I will be able to see the sky from every part of the world.
https://www.travelsense.asia/son-sanctuary-legends-found/
Opinion Writing: Exercise 13 (p. 194). Advise Tactile Ltd whether it may be liable for the full extent of the damage. Conclusion: I think Tactile Ltd has a good chance of limiting its liability for any damage to $1,000, subject to the validity of the exclusion clause. It seems that the company can rely on the exclusion clause to limit damages, as on its own wording there is a clear restriction of liability for damage caused by the negligence of Tactile Ltd, its servants or agents, or otherwise. However, Lord Blunder will first have to establish that there has been a breach of contract due to the company's bad workmanship. It is also crucial to consider whether, and to what extent, the damage was primarily caused by the improperly fixed canopy on the roof. Is Tactile Ltd liable for the full extent of the damage? Once it is established that the contract has been breached through the company's negligence, only then will the court consider whether the clause provides a defence to the breach of the obligation. It is clear that the clause is incorporated into the contract and is acknowledged by both parties. The court would likely be satisfied that the particular clause limiting the amount of damages is in truth an integral part of the contract and that Lord Blunder intended to be bound by it; to treat what reasonable persons would regard as a contractual term as no more than a receipt would be an affront to common sense. Blunder might raise the possible argument that the improper fixing of the canopy to the roof is a fundamental breach, so that the clause has no effect in the contract and the court should award the full damages caused by Tactile's negligence. Secondly, it can be argued that the clause is unreasonable and that the company therefore cannot rely on it to deny the damages. Finally, the loss suffered through the company's negligence was reasonably foreseeable. Exercise 4. Conclusion: I think there is a good chance that the court would impose a duty on the occupiers, as the ramps were not properly painted and Westbury District Council failed to put a proper sign at the entrance. The damage is therefore foreseeable, which gives rise to a duty of care, because the relationship between the occupiers and Mr Nutt is such that it is obvious that a lack of care creates a risk of harm (described by the law as one of 'proximity' or 'neighbourhood'). Clearly, the situation is one in which the court would consider it fair, just and reasonable for the law to impose a duty, and that duty of care has been broken. The liability of Westbury District Council and the Duke of Westland: The question of who will be liable depends on the degree of control over Sandy Road. As a general rule, it would probably not be the Duke of Westland who would be responsible for maintenance activities on Sandy Road, such as painting the ramps yellow, whereas if a visitor were injured because of a structural defect, it would seem right that the Duke of Westland be responsible. Therefore, the court would be likely to impose a duty on Westbury District Council for Mr Nutt's injuries, for breach of the occupier's duty under the Occupiers' Liability Act 1957 or for negligence at common law. Further, it seems that Mr Nutt is a visitor to the beach, with an implied invitation to be there, and the WDC therefore owes him the common duty of care (set out in s 2(1) OLA 1957).
WDC, on the other hand, might argue that it does not have to insure Mr Nutt's safety, but only to make him reasonably safe. It had posted a notice (an exclusion clause) at the entrance, such as 'Max speed 20 mph'; he was therefore aware of the danger in the vicinity, and the council did not extend its invitation to driving at 55 mph, so at that speed he was a trespasser. He may therefore have been contributorily negligent, reducing any damages he may be awarded.
https://www.lawyersnjurists.com/course/opinion-writing-exercise/
- There are possible problems with selective breeding programmes. They may lead to inbreeding, where two closely related individuals mate, and this can cause health problems within the species.
- Inbreeding can reduce the variety of alleles in the population (the gene pool). This can lead to:
- an increased risk of harmful recessive characteristics showing up in the offspring.
- a reduction in variation, so that populations cannot adapt or change so easily.
1 of 3 Genetic engineering
- Genetic engineering has advantages and risks:
- One advantage - organisms with desired features can be produced very quickly.
- One disadvantage - there may be negative, harmful side effects.
- To carry out genetic engineering, four steps are taken:
- The desired characteristics are selected.
- The genes responsible are identified and removed (isolation).
- The genes are inserted into other organisms.
- The organisms are allowed to reproduce (replication).
2 of 3 Gene therapy
- The process of using genetic engineering to cure certain diseases and/or change a person's genes is called gene therapy.
- Gene therapy could involve body cells or gametes. Changing the genes in gametes is much more controversial, because it is sometimes difficult to decide which genes parents should be allowed to change. It could lead to 'designer babies'.
https://getrevising.co.uk/revision-cards/b3_new_genes_for_old
A lot of writers are in Yakima today for the 11th Annual Travel and Words: Northwest Travel & Lifestyle Writers Conference, being held at the Yakima Convention Center. The conference welcomes nearly 125 regional travel and lifestyle writers, bloggers and industry representatives for educational workshops and networking. As part of the event, Yakima Valley Tourism will host a number of the writers on familiarization tours, showing them various attractions and features in the Yakima Valley. In addition, the tourism agency will host a reception at the 4th Street Theatre in Downtown Yakima, exposing attendees to local craft beverage and food vendors plus information about the Yakima area arts scene.
https://newstalkkit.com/northwest-travel-and-lifestyle-writers-visit-yakima/
130 results found
Computational and Systems Biology | Medicine: Identification of human glucocorticoid response markers using integrated multi-omic analysis from a randomized crossover trial. Dimitrios Chantzichristos et al. A human experimental model for physiological glucocorticoid exposure and glucocorticoid withdrawal identifies a multi-omic cluster, including microRNA miR-122-5p and metabolites, associated with glucocorticoid-responsive genes.
Cancer Biology: Re-expression of SMARCA4/BRG1 in small cell carcinoma of ovary, hypercalcemic type (SCCOHT) promotes an epithelial-like gene signature through an AP-1-dependent mechanism. Krystal Ann Orlando et al. BRG1 loss drives initiation and progression in human cancers through changes in specific differentiation programs in an AP-1-dependent manner.
Cell Biology: ALKBH7 mediates necrosis via rewiring of glyoxal metabolism. Chaitanya A Kulkarni et al. Multi-omics reveals that Alkb homolog 7 (ALKBH7), a mitochondrial alpha-ketoglutarate dioxygenase of unclear function, regulates glyoxal metabolism, which may explain its role in necrosis and heart attack.
Evolutionary Biology | Plant Biology: Evolutionary routes to biochemical innovation revealed by integrative analysis of a plant-defense related specialized metabolic pathway. Gaurav D Moghe et al. Integrative analysis of a specialized metabolic pathway across multiple non-model species illustrates mechanisms of emergence of chemical novelty in plant metabolism.
Computational and Systems Biology: CDK9-dependent RNA polymerase II pausing controls transcription initiation. Saskia Gressel et al. CDK9 inhibition in human cells uncovers that Pol II pause duration regulates the frequency of productive transcription initiation.
Genetics and Genomics: Science Forum: The single-cell eQTLGen consortium. MGP van der Wijst et al. The single-cell eQTLGen consortium aims to pinpoint the cellular contexts in which disease-causing genetic variants affect gene expression and its regulation.
Microbiology and Infectious Disease: Post-acute COVID-19 associated with evidence of bystander T-cell activation and a recurring antibiotic-resistant bacterial pneumonia. Michaela Gregorova et al. Post-acute or long-COVID is associated with bystander T-cell activation and a recurring antimicrobial-resistant, bacterial ventilator-associated pneumonia.
Cell Biology | Genetics and Genomics: An integrative study of five biological clocks in somatic and mental health. Rick Jansen et al. An integrative study of five biological clocks in somatic and mental health indicates that one's biological age is best reflected by combining aging measures from multiple cellular levels.
Cell Biology | Developmental Biology: CCN1 interlinks integrin and hippo pathway to autoregulate tip cell activity. Myo-Hyeon Park et al. Cyr61 autoregulates tip cell activity by interlinking the integrin and Hippo pathways, and targeting Cyr61 provides a promising therapeutic approach for the treatment of pathological angiogenesis, including tumour angiogenesis.
Computational and Systems Biology | Microbiology and Infectious Disease: Extensive transmission of microbes along the gastrointestinal tract. Thomas SB Schmidt et al. Microbial populations are continuous along the gastrointestinal tract, with increased transmission in colorectal cancer and rheumatoid arthritis patients.
https://elifesciences.org/search?for=multi-omics
The name Amrita means nectar of immortality. When you learn about yoga from the guru (teacher) you are drinking the nectar of immortality or shining a light into the darkness. Amrita views this name as her spiritual destiny and as a spiritual goal. Practicing yoga gives your life richness of which everybody can benefit. Yoga is a life science to improve your physical, mental and spiritual body. Amrita expresses her love towards her students by encouraging them to practice confidently and comfortably. Her classes are accessible but challenging to all levels. She makes practicing yoga exhilarating by telling silly jokes and sharing personal stories. No matter what level or style of yoga you want to practice Amrita has a class for you. Amrita was introduced to yoga by chance but she continued to practice because she felt it fulfilled a spiritual longing. Yoga permeates all of her daily activities. When you take her classes she helps you to connect with your own inner peace. Although many teachers influence her practice Amrita follows these words from Swami Vishnudevananda, “health is wealth, peace of mind is happiness and yoga shows the way”.
https://risinglotusyoga.ca/instructors-2/
The great pianist Arthur Rubinstein is said to have learned Franck’s Symphonic Variations by engaging in mental practice on a long train trip, playing it on a piano for the first time at the first rehearsal. Is this just the stuff of legend? Or are feats of learning like this possible for us “normal” folks? To what degree can we learn, memorize, and play pieces that are at our ability level without the benefit of an instrument to practice on? Mental practice vs. physical practice A team of Italian and German researchers conducted a study with 16 pianists (ranging in age from 18 to 36, each with at least 15 years of piano study). The pianists were given two comparable 19-measure excerpts to learn – from two different Scarlatti sonatas (K72 and K113). Why these two in particular? The researchers wanted to make sure that technical issues wouldn’t be a factor, and both of these excerpts were easy enough to be sight-readable by every participant. To compare the effectiveness of mental practice vs. physical practice, the two excerpts were learned on two different days. On one day, the pianists engaged in 30 minutes of mental practice on one excerpt, and then gave a performance of it from memory. On another day, the pianists engaged in 30 minutes of physical practice on the other excerpt, and gave a performance from memory. The results were nuanced, but interesting. Two measures of effectiveness The effectiveness of the pianists’ practice was evaluated in several different ways. Number of notes One measure was simply the number of notes they were able to recall from memory. When relying on mental practice, the pianists were able to get about 63% of the way through the excerpt before stopping. That’s not bad, but physical practice was more effective – getting the pianists about 84% of the way through the excerpt. Ratio of wrong notes to total notes The researchers also computed a ratio score – of wrong notes to total notes. Because it’s one thing to play 300 notes, but if half of them are the wrong ones, that’s much less meaningful than playing only 200 notes, but nailing every single one. Here too, physical practice was more effective, with a ratio of .08 to .17 for mental practice (lower is better). So. Thus far, the results suggest that mental practice is better than nothing, but not quite as good as physical practice. Which makes total sense, of course. But wait – there’s a twist (because there’s always a twist, right?). 10 more minutes! After the first performance test, all of the pianists were given 10 more minutes to practice. So for those doing physical practice, this meant a total of 40 minutes of piano time before a second and final run-through. Those who did mental practice also got 10 minutes to practice – but this time, on a real, physical piano. Could they get to the same level of playing in 10 minutes on a piano that the other group achieved in 40? As it turns out, yes, they could. Ten minutes of physical practice was enough to get the mental practice group’s performance up to the level of the physical practice-only group. With regards to the number of notes played, the mental practice folks got 83% of the way through the piece. The physical practice group got 90% of the way through the piece – but this difference was not statistically significant. Same thing with the ratio of correct notes to total notes. 30 minutes of mental practice plus 10 minutes of physical practice led to a ratio score of .07, while 40 minutes of physical practice resulted in a score of .04. 
Once again, this small difference was not statistically significant. What exactly does mental practice entail? There was a bit of variation from pianist to pianist, so a bit of caution is probably not a bad idea. But it does seem that mental practice plus physical practice could be an effective combo. Useful if you’re stuck on a train or airplane and don’t have access to instruments. Or if you’re trying to save your chops after a double rehearsal, and avoid injury. But hold on a sec. What exactly does mental practice mean? What were these pianists doing in their heads during their 30 minutes of no-touching-the-piano-allowed time? The researchers dug a little deeper, and found that strategies ranged from mentally hearing the sounds of the notes to imagining the feel of the correct hand/finger movements to singing out loud. Some of these strategies seemed to lead to higher ratings (by a panel of musicians) of their performances, while others led to no change or even lower ratings. Based on the results, here are the researchers’ recommendations: - Auditory imagery – or hearing the notes on the page “should be a default operation, a foundation on which other operations rest.” - Analysis – figuring out the harmonic, melodic, and rhythmic structure of the piece is another important aspect of mental practice. - Listening to recordings can help – once again, to support the development of the inner soundtrack of the music in your head. A couple additional notes of interest… A plug for aural skills training The pianists all took an aural skills test, and those who scored higher on aural skills ability were able to play a greater percentage of the excerpts. They also received a higher overall performance score from the judges. So if you’ve been looking for a reason to pay more attention in ear training class, I guess now you’ve got one! Mental practice – not a common practice ? These were all experienced pianists, who were well aware of a range of mental practice techniques. However, none of the 16 reported utilizing mental practice regularly during their normal practice routines. Which made me wonder…is that a bit of an anomaly, or is mental practice really not a regular part of most musicians’ day-to-day practice habits? By Noa Kageyama, Ph.D.
https://www.hbschoolofmusic.com/blog/effective-mental-practice-really/
Our recent Perspectives focused on 2015 property/casualty insurers’ investment highlights and led us to reach the following conclusion: it is to an insurer’s advantage to adopt an enterprise capital management approach to optimizing asset allocation, one that encompasses a more complete integration with enterprise risk appetite and tolerances, a comprehensive vetting of investment guidelines and consideration of capital structure and management. Two points within that summary serve as background for this issue of Perspectives:
- Nominal underwriting margins and continued low prospective investment returns, combined with very low leverage, limit the industry’s return on equity to mid-single digit levels.
- Tax-preferenced municipal bonds (and structured securities) appear to be underutilized assets.
In the first of five sections, we present the framework and historical context of the drivers that generate returns for insurers. Next, we describe our approach to estimating prospective investment returns and provide forward-looking underwriting margins for clients, and then present several enterprise-based asset allocation options to compare and contrast results. The fourth section shows the impact of actively managing the leverage drivers within the enterprise return and risk framework. We close with a summary of results and additional considerations. Enterprise Return Framework and Historic Perspectives. The basic framework of this Perspectives review begins with DuPont’s decomposition of an insurer’s enterprise return on equity into its four principal components: premium leverage (ratio of premium to capital, or P/C), underwriting margin (100 minus the combined ratio), investment leverage (ratio of invested assets to capital, or IA/C) and investment return (percentage return on invested assets). Underwriting and investments are the sources of enterprise return and risk, and their impact upon capital is amplified by their respective leverage.1 Other revenue and expense streams can be appended to this basic formula (such as premium finance income and debt-servicing costs). However, these are the four essential ingredients. And it is understood that they are inter-related: an erosion of either underwriting margins or investment returns changes both premium leverage and investment leverage, not just the related component. Some might contend that return on equity today is low because of diminutive earnings. Others might argue that there is simply too much capital. Chart I presents industry premium-to-capital and invested asset-to-capital leverage ratios for a 64-year period (1951 - 2015). Prior to the era of oil embargos and the 1973 stock market crash, the P/C and IA/C hovered in the ranges of 1.25 - 2.15 and 2.1 - 3.3, respectively. Both ratios peaked in 1973 at levels of 2.7 and 4.4, respectively. Since then, the ratios have been trending downhill, reaching all-time lows. The environment of the last 20 years is the one most familiar to today’s insurance leadership. Chart I U.S. Property/Casualty Industry Premium and Invested Assets Leverage Ratios 1951 - 2015. Chart II shows premium and investment leverage ratios for the 480 largest insurers as of year-end 2015. The “red” dot represents a target company, while the “gold” dots represent the nine largest companies. Although there is wide variation among insurers, there are few whose premium or investment leverage is even close to the levels of the late 1960s through the 1980s (1.7 and 3.0+, respectively).
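To make the decomposition concrete before looking at individual companies, the sketch below shows the simplified pre-tax arithmetic implied by the framework: return on equity is approximately premium leverage times underwriting margin plus investment leverage times investment return. This is an illustrative reading of the DuPont-style formula described above, not NEAM's actual model, and the input values are hypothetical.

```python
def enterprise_roe(premium_leverage, underwriting_margin,
                   investment_leverage, investment_return):
    """Simplified pre-tax DuPont-style decomposition:
    ROE ~ (P/C) * underwriting margin + (IA/C) * investment return.
    Taxes, debt service and other income/expense items, which the article
    notes can be appended to the basic formula, are ignored here."""
    return (premium_leverage * underwriting_margin
            + investment_leverage * investment_return)

# Hypothetical inputs: a 97 combined ratio (3% margin), 0.8x premium leverage,
# 2.3x investment leverage and a 3% portfolio return -- not the article's figures.
roe = enterprise_roe(0.8, 0.03, 2.3, 0.03)
print(f"Indicated pre-tax ROE: {roe:.1%}")  # roughly 9% before taxes and frictions
```

After taxes and the other frictions the article lists, a result of this size lands in the mid-single-digit range discussed throughout the piece, which is why both higher leverage and higher-yielding or tax-preferenced assets move the needle.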
That said, there are companies operating with multiples of other firms’ leverage. Those companies will have an advantage in pursuing higher returns on capital, though misses in underwriting margins or asset returns will prove more damaging to their capital positions. Chart II Premium and Investment Leverage of 480 Largest Property/Casualty Companies. Chart III displays the industry’s combined ratios and capital market fixed-income yields. The combined ratios are reported calendar year values, both with and without estimated natural catastrophes. The capital market’s fixed-income yields are a blend of short- and long-term investment grade indices. Chart III U.S. Property/Casualty Industry Combined Ratios and Fixed-Income Yields 1951 - 2015. In Chart III, the escalation of combined ratios and yields in the 1970s reflects social and rampant economic inflation and judicial contract reform. The 1992 and 2001 calendar years reflect both catastrophes (Andrew and 9/11) and the associated recording of prior year development. Hurricane Sandy had an impact in 2012. The combined ratio and yield data highlight the inherent volatility in both insurance and capital markets and the near secular behavior of yields. This begs the question: What’s next? Prospective Investment Returns and Underwriting Outcomes. Unfortunately, continued uncertainty is next, and levels are unknown. The enterprise framework harnesses investment return expectations and underwriting margins, combining them with volatility estimates, to produce a range of leveraged return/risk profiles which are then stress tested. What is the process? NEAM derives prospective investment returns using methods similar to Economic Scenario Generators (ESGs), which combine investment return mean reversion assumptions and in-depth analysis of economic and market technical indicators with “expert” qualitative judgments to derive multi-year rate, spread and equity valuation estimates. The estimation process is scalable, transparent and repeatable. Figure I depicts hypothetical distributions of historic and one-year-forward total returns. The prospective distribution is characteristic of most fixed income assets in an environment of low rates, expectations of slight rate increases and nominal spread movements. Similar distributions of forward returns are compiled for equity investments, enabling estimates of combined fixed income and equity portfolios’ return distributions. The table adjacent to the chart in Figure I displays total return, volatility and value-at-risk (VaR) estimates for representative fixed income and equity asset classes on both a historic and prospective basis.2 Although not shown for reasons of space, total returns reflect estimates for both income and price change. For fixed-income assets, there is a need to account for duration and book income migration of existing asset holdings in addition to purchases to replace assets maturing, paying down or being called. Figure I Sample of Historic and Prospective Returns. The methods allow for multi-period estimates. However, the longer the time horizon, the greater our skepticism. The estimates used in the enterprise framework are the outputs of NEAM’s investment policy process, which places a premium upon real world investment decisions. This is not a “modeling” exercise, but rather a component of an actual investment process. That noted, the method per se is not as important as the eventual scrutiny of the enterprise applications’ outcomes by applying multiple stress-test methods.
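NEAM's scenario-generation process is described only qualitatively above, so the snippet below is not a reconstruction of it; it merely illustrates how summary statistics of the kind reported in Figure I (expected one-year total return, volatility, VaR, and the tail measure T-VaR used later in the piece) are commonly computed once a distribution of forward returns is in hand. The normal distribution and its parameters are stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an ESG output: 100,000 simulated one-year total returns for a
# fixed-income portfolio, drawn from an assumed mean and volatility.
returns = rng.normal(loc=0.025, scale=0.035, size=100_000)

expected_return = returns.mean()
volatility = returns.std()
var_95 = -np.percentile(returns, 5)            # loss not exceeded in 95% of scenarios
tvar_95 = -returns[returns <= -var_95].mean()  # average loss in the worst 5% of scenarios

print(f"Expected total return: {expected_return:6.2%}")
print(f"Volatility:            {volatility:6.2%}")
print(f"95% VaR:               {var_95:6.2%}")
print(f"95% T-VaR:             {tvar_95:6.2%}")
```

In practice the same calculation would be run on the combined underwriting-plus-investment result, so that the leverage effects described earlier flow through to the enterprise-level T-VaR figures quoted in the sections that follow.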
Chart IV shows the prospective return and risk estimates of the 480 largest property/casualty companies’ 2015 holdings. The red intersecting lines reflect industry median values. The chart suggests ample opportunity for many individual insurers to improve their investment portfolio’s return/risk profile. Chart IV One-year Forward Expected Investment Total Return and Volatility for Individual P/C Companies. In the absence of re-examining enterprise capital return and risk management opportunities, the outlook for investment earnings is very challenging. This is illustrated in Figure II, which displays historic and prospective industry book yields, investment earnings and return on equity in the context of rather benign underwriting results of 4.5% premium growth and a 97 combined ratio.3 Earned investment income returns to the 2007 peak level in 2018, but only because the asset base is more than 45% larger. Neither gross income yield nor after-tax book yield is projected to achieve anything close to historic pre-financial-crisis levels. And return on equity remains at low-to-mid single digit levels during the 2016 – 2020 period. Figure II P/C Industry Earned Invested Assets, Earned Income, After-tax Book Yield and ROE 2006 - 2020. Asset Allocation Options. Underwriting margins and their respective estimates of volatility and downside risk are essential to developing an enterprise-based asset allocation. Most often these are provided by client companies as part of their business planning process. They are frequently sourced from their internal economic capital or pricing and reserving models, sometimes with the assistance of third-party consultants or reinsurers (intermediaries). Combining estimates of underwriting margins with those for invested assets, and accounting for leverage, better enables companies to identify, measure and manage the return/risk trade-offs (net of taxes) across the enterprise. This ensures that insurance operations and investment activities are consistent with stakeholder return expectations and risk tolerances. Figure III depicts return/risk trade-offs based upon 2015 year-end property/casualty industry holdings, product mix, leverage and a 3% underwriting margin (97 combined ratio), and assumes a full-rate taxpayer, as above. Figure III 2015 P/C Industry Enterprise – Potential Asset Allocation Options. The enhanced asset allocation above emphasizes tax-preferenced securities such as tax-exempt municipals, preferreds and high-dividend equities. The allocation to these assets increases significantly from current levels at similar earnings and enterprise T-VaR levels, with return on equity rising nearly 100 basis points. Whereas suppressed interest rates have pressured insurers’ earnings, eligible assets remain available to improve operating results. These opportunities have persisted due to favorable tax treatment and ensuing relative value in most yield and spread environments. Accordingly, companies blessed with well-managed underwriting operations might be able to “double dip” to improve their overall financial performance. Capital Management. Capital management (embedded within NEAM’s Enterprise Capital Return and Risk Management® framework) is a very powerful tool, especially for organizations placing a greater emphasis upon return on equity than upon absolute dollar returns. Figure IV contrasts enterprise and other metrics at recent leverage levels with those at pre-millennial levels.
The notable differences are the trade-offs between rates of return and dollars of return: the former are higher and the latter are lower as leverage is increased. In the higher leverage scenario, even at a T-VaR similar to that of the lower leverage scenario, the rate of return on equity (10.53) is notably higher than in the lower leverage case (8.66). At the end of the day, it is all about trade-offs. Figure IV 2015 P/C Industry Enterprise – Potential Capital Management Opportunities. There are several takeaways from this Perspectives review: - Operating leverage (premium to capital) and investment leverage (invested assets to capital) continue their downward spiral from peak levels of the early 1970s. Capital withdrawals remain low except for recent years. - Underwriting margins are only episodically favorable, with volatility accentuated by catastrophe losses and occasional serious missteps in pricing and reserving estimates. - With prolonged low market yields or even modest multi-year interest rate increases, insurers’ book yields will continue to decline and investment income will only increase due to premium-driven invested asset accumulations. - Nominal underwriting margins and continued low prospective investment returns, combined with low leverage, limit the industry’s prospective return on equity to mid-single digit levels. - Against this backdrop, there are causes for optimism for some (not all) insurers, as noted by the wide dispersion in underwriting margins, investment returns and leverage among companies…there are firms achieving exceptional risk-adjusted returns, consistently. - Companies with superior underwriting results providing capacity for tax-preferenced income may potentially be able to further improve their risk-adjusted after-tax total return by increasing their asset allocation to tax-exempt municipals, tax-advantaged preferred stocks and high-dividend equities. - In the absence of underutilized tax capacity, companies with access to investment expertise in structured securities might wish to explore that as an asset allocation opportunity to improve prospective returns. - Lastly, there are capital management opportunities, not favorably viewed by all, but available to firms seeking to increase the efficiency of their capital utilization. Share buybacks, debt and M&A are matters of consideration for some companies; we reviewed the possible consequences of one such approach. We welcome your feedback and comments. Please contact us if there are investment themes you would like us to review or if you would like to receive a comparative HealthCheck of your investment portfolio. 1 It need not be “all about return.” Rather, as in the case of some mutuals and captives, the emphasis upon preservation of capital is a frequent and consistent application of this framework. 2 In practice, forward-return estimates and associated statistics are derived from actual lot level holdings. In the following section, forward estimates for investment alternatives are based upon industry proxies. 3 Please see General ReView Issue #74 from January 2016: “Investment Highlights: Rehab–The Long Road to Recovery,” for a more in-depth review, as combined ratio assumptions due to attritional losses and catastrophes vary.
https://www.neamgroup.com/insights/considering-opportunities-in-low-return-uncertain-environment-an-enterprise-view
Establishing a healthy romantic relationship is not always easy, but dating a former drug addict or alcoholic can present its own unique challenges. If you have met someone and you feel a connection you would like to explore, but have just found out he is in recovery, you may be wondering if you should go forward. If you do continue the relationship, you may wonder how it will work and what you may be in for. Finding out that someone you like is a recovering addict does not need to be a roadblock, but you should be prepared to meet the challenge.
6 Heartbreaking Things That Happen When You Love An Addict
Dating a Recovering Addict: Match-Maker or Deal-Breaker? | Psychology Today
This study examined the associations between dating partners' misuse of prescription medications and the implications of misuse for intimate relationship quality. A sample of young adult dating pairs completed ratings of prescription drug use and misuse, alcohol use, and relationship quality. Results indicated positive associations between male and female dating partners' prescription drug misuse, which were more consistent for past-year rather than lifetime misuse. Dyadic associations obtained via actor-partner interdependence modeling further revealed that individuals' prescription drug misuse holds problematic implications for their own but not their partners' intimate relationship quality.
The Good, The Bad And The Ugly Of Dating A Drug Addict
Substance addiction in the United States is widespread, and, in recent years, has become a greatly discussed public health concern. But the devastating effects of drug and alcohol misuse aren't only limited to those who ingest the substances — abuse can deeply impact loved ones, colleagues, and many others. For that reason, we investigated how substance abuse affects those who are dating addicts. We surveyed people aged 18 to 72 who were currently in relationships with people suffering from addiction. We asked them when they first noticed their partners' addictions, whether they regretted their relationships, and how addiction impacted the most intimate aspects of their lives. Prescription drugs are any types of medications which must be authorized by a medical professional. Medical professionals that may distribute prescription drugs include physicians, dentists, veterinarians, optometrists, nurse practitioners, and physician assistants. All prescription drugs are regulated by the Federal Government in the United States, which means that each drug must meet specific criteria before being administered or distributed. Prescription drugs are typically allotted from pharmacies and pharmacy chains, under the discretion of pharmacists who dispense the medication.
https://expatvalue-leblog.com/spain/dating-a-prescription-drug-addict-22574.php
Saturday, October 13, 2012 7:00 p.m. – 8:00 p.m. West Courtyard Check-in: Begins at 5 p.m. (There will be an express check-in line for Museum members.) Join the San Antonio Museum of Art for a celebration of author Rick Riordan! Riordan's presentation and Q&A about his new young adult book The Heroes of Olympus, Book 3: The Mark of Athena will take place in SAMA's outdoor West Courtyard. The first 750 attendees will receive a free Camp Half-Blood or Camp Jupiter t-shirt. To allow Rick time to speak and answer as many questions as possible, he will have pre-signed The Mark of Athena for this event. Pre-signed books will be distributed by The Twig bookstore at SAMA. There will not be a traditional signing line. Purchase tickets and a signed copy of The Mark of Athena below (available until noon, October 12), or you may purchase a pre-signed book on the day of the event in the SAMA Shop provided we haven't sold out in advance. FAQs 1) How early should I get there? Check-in begins at 5 p.m. Seating is open. Bring blankets to sit on. Lawn chairs and picnics are welcome, and food trucks will be on site. Before or after the event, visit SAMA's Roman Art gallery to see a sculpture of Athena or visit SAMA's special exhibition Aphrodite and the Gods of Love. 2) How can I ask Rick a question during the Q&A? Questions for the Q&A can be submitted at check-in. A selection of questions will be read during the Q&A portion of the program. 3) Can I get my book personalized or a photo taken with Rick? Due to a very busy schedule, and as we're anticipating quite a large attendance, Rick will not be able to personalize books or pose individually for pictures (though you are free to take as many pictures of him as you'd like!). 4) Will Rick be able to sign any of my books? Rick will only have enough time to pre-sign his new book The Mark of Athena, so lighten your load and leave your old books at home! 5) Do we have to purchase a ticket and a book to attend the event? Tickets are required to attend this event and are available for purchase through SAMA's website ONLY. Tickets are not available for purchase in-store, over the phone, or at the door.
https://www.samuseum.org/calendar/event-detail?eid=2791
OSCE Special Representative meets Armenian officials to discuss priorities in fight against human trafficking YEREVAN, 8 May 2009 - The OSCE Special Representative for Combating Trafficking in Human Beings, Eva Biaudet, welcomed today the Armenian authorities' efforts to combat trafficking in human beings and underscored the need to place human trafficking higher on the political agenda. In Yerevan on a three-day visit that ends today, Biaudet participated in the opening ceremony of the Anti-Trafficking Support and Resource Unit, which was established by the OSCE Office in Yerevan under the Ministry of Labour and Social Issues with the support of the Governments of France, Germany, Sweden and the United States. She expressed the hope that the Unit would help improve protection of victims and serve as a forum for discussion, training, gathering and analyzing information, and for developing feasible goals for combating human trafficking. In particular, Biaudet stressed the need for better victim identification: "Enhanced co-operation between NGOs providing assistance to victims and law enforcement agencies is a pre-requisite for better victim identification and I urge the Armenian authorities to put more effort into also investigating possible cases of internal trafficking among vulnerable populations." During her visit, Biaudet met Vice Prime Minister and Minister of Territorial Administration Armen Gevorgyan, who chairs the Inter-Ministerial Council on Trafficking Issues, as well as representatives of the National Assembly, the Ministries of Foreign Affairs, Justice, Labour and Social Issues and Education, the police, the Office of the Prosecutor General and the Ombudsman. She also discussed co-operation and co-ordination in the field with representatives of civil society and international organizations. "I welcome the increased efforts of the Armenian authorities to bring traffickers to justice. However, I am concerned that the vast majority of convicted persons in recent years have been women. Some of these women are former victims, and I encourage efforts to be directed towards effective prosecution of all responsible persons, including the main profiteers," said Biaudet. The Head of the OSCE Office in Yerevan, Ambassador Sergey Kapinos, said his Office stands ready to continue providing support to all relevant Armenian actors: "The OSCE has always been an active international actor in anti-trafficking activities in Armenia. The Anti-Trafficking Support and Resource Unit will promote further effective co-operation between key national and international anti-trafficking actors in Armenia to provide for improved prevention, protection and referral mechanisms in the country."
https://www.osce.org/cthb/50894
Priorities include: (1) Document changes in morphological variation that may occur from nonassortative mating of progeny of the mixed population of 7 different subspecies released in e. North America during reintroduction. (2) Continue to monitor Peregrine distribution and abundance as reintroduced or recovering populations increase, to help determine carrying capacity now, as environments have changed in several decades since the population decline (see also Abbitt and Scott 2001). (3) Study frequency of breeder dispersal in different regional populations and its influence on estimates of adult survival and on population dynamics. Further study of natal dispersal would be useful; are some populations sources and others sinks? As telemetry techniques are refined, use of these to provide more extensive data on wintering locations of given breeding populations is important. (4) Perform more removal experiments to determine whether male and female replacements differ between poor- and high-quality territories (see Johnstone 1998). (5) Assess changes in reproduction, age of first breeding, and survival of prebreeders and breeders in relation to increased population density and saturation of nesting habitat. See Conservation and management, above, for future concerns, and Cade et al. 1996b and Pagel et al. 1996 for other recommendations and concerns.
https://birdsna.org/Species-Account/bna/species/perfal/priorities
Policymakers and people throughout the world have long been concerned about inflation. It refers to the rate at which the average price level of a basket of goods and services in an economy rises over time. Several variables, including supply and demand, currency exchange rates, and economic policies, can affect this price level (IMF, 2021). Low inflation might signal sluggish economic progress, while high inflation can undermine purchasing power and cause economic instability (World Bank, 2021). Egypt has experienced serious problems with inflation in recent years. Numerous economic reforms and structural changes have been made in the nation, including the establishment of a value-added tax and the liberalization of the exchange rate (IMF, 2020). These actions have increased the cost of numerous goods and services while also aiding in the stabilization of the economy and attracting foreign investment (Government of Egypt, 2021). Egypt’s inflation peaked in 2017 at above 30%, fueled in part by a shortage of foreign currency and a struggling agricultural sector (Central Bank of Egypt, 2017). This resulted in considerable public discontent and several protests as many Egyptians struggled to make ends meet in the face of rising prices (BBC, 2018). As a result, the government put in place a range of measures to reduce inflation, including reducing subsidies and enacting price caps on specific items (Government of Egypt, 2018). Despite these efforts, inflation in Egypt has remained high in recent years. According to data from the Central Bank of Egypt (2020), inflation stood at around 9% in December 2020, down from a peak of over 30% in 2017 but still well above the central bank’s target of 4-6%. This has led some experts to question whether the government’s efforts to address inflation have been effective (Al-Monitor, 2021). The government’s need to strike a balance between conflicting agendas has been one of the biggest obstacles in its efforts to reduce inflation. On the one hand, actions like reducing subsidies and raising taxes can help in the near term to lower demand and stabilize prices (IMF, 2020). However, these policies may also be unpopular with the general populace and may have detrimental effects on social welfare and economic development (World Bank, 2021). Despite these obstacles, there are indications that the government’s initiatives to control inflation may be beginning to pay off. Inflation has been on the decline recently (Central Bank of Egypt, 2020), and the central bank has indicated that it expects inflation to continue easing in the months to come (Al-Monitor, 2021). For many Egyptians who have faced price increases in recent years, this may come as a welcome respite. Although it is challenging to say for sure whether Egypt’s inflation has peaked, there are indications that the government’s efforts to stabilize prices may be beginning to bear fruit. But a lot will depend on a number of variables, such as how well the government’s economic policies are working and how the global economy is doing. Finding a balance between the need to fight inflation and the need to promote economic expansion and social welfare will be crucial, as it always has been.
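As a minimal illustration of how such headline figures are computed, the short sketch below derives year-over-year inflation as the percentage change in a consumer price index; the CPI values are hypothetical placeholders chosen only to echo the narrative above, not official Egyptian statistics.

def yoy_inflation_pct(cpi, year):
    # Year-over-year inflation (%) for `year`, relative to the prior year's index.
    return (cpi[year] / cpi[year - 1] - 1.0) * 100.0

cpi = {2016: 100.0, 2017: 130.0, 2018: 148.0, 2019: 161.0, 2020: 175.5}  # hypothetical index values
for year in sorted(cpi)[1:]:
    print("%d: %.1f%%" % (year, yoy_inflation_pct(cpi, year)))

With these placeholder values the script prints roughly 30% for 2017 and about 9% for 2020, mirroring the shape of the decline described above.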
https://thndrclaps.thndr.app/2023/01/08/the-ongoing-battle-against-inflation-in-egypt-a-closer-look/
Try these handy hints to speed up your labour. 1. Pace yourself “If us doctors step in while you're in labour too early, you’re more likely to end up with intervention, so it’s worth waiting it out,” advises Virginia Beckett, a consultant obstetrician at Bradford Teaching Hospitals NHS Trust in the UK. “Go to the hospital when your contractions are three minutes apart.” 2. Maintain energy Eat and drink little and often to maintain your energy level. Go for dried fruit, such as mango slices. 3. Pain relief “It’s safe to take Panadol during labour, as it won’t affect your baby,” Beckett notes. Long labours can be difficult, so an epidural could also help conserve your energy for when you start pushing. 4. Splash it out Get your husband to get you a water spray or nozzle to cool you down as you'll get warmer with the progression of labour. 5. Sleep it off Sleep or rest whenever you can to save your energy for when you need to push.
https://www.smartparents.sg/giving-birth/5-quick-delivery-tips
Contents: Field of the Invention; Background of the Invention; Summary of the Invention; Brief Description of the Drawings; Description of the Preferred Embodiments (A. Particle Production; B. Particle Properties; C. Phosphors and Displays). The invention relates to phosphor particles that emit light at desired wavelengths following stimulation and devices made with these particles. The invention further relates to methods of producing phosphor particles. Electronic displays often use a phosphor material, which emits visible light in response to interaction with electrons. Phosphor materials can be applied to substrates to produce cathode ray tubes and flat panel displays. Improvements in display devices place stringent demands on the phosphor materials, for example, due to decreases in electron velocity and increases in display resolution. Electron velocity is reduced in order to reduce power demands. In particular, flat panel displays generally require phosphors responsive to low velocity electrons. In addition, a desire for color display requires the use of materials or combinations of materials that emit light at different wavelengths at positions in the display that can be selectively excited. A variety of materials have been used as phosphors. In order to obtain materials that emit at desired wavelengths of light, activators have been doped into phosphor material. Alternatively, multiple phosphors can be mixed to obtain the desired emission. Furthermore, the phosphor materials must show sufficient luminescence. Small, nanoscale particles provide improved performance as phosphors. For example, particles with average diameters less than about 100 nm have altered band gaps with emission frequencies that are functions of the particle diameters. Therefore, collections of these particles with a narrow distribution of diameters can be used to provide selected emission frequencies without necessarily altering the particle composition. The small size of the particles also results in high luminescence, responsiveness to low velocity electrons, as well as processing advantages. Laser pyrolysis provides an efficient method for the production of highly pure nanoscale particles with a narrow distribution of particle sizes. In a first aspect, the invention features a display device comprising phosphor particles having an average diameter less than about 100 nm and wherein the phosphor particles comprise a collection of particles having a diameter distribution such that at least about 95% of the particles have a diameter greater than about 60% of the average diameter and less than about 140% of the average diameter and the phosphor particles comprising a metal oxide. The phosphor particles can comprise a metal compound such as ZnO, TiO2 and Y2O3. The phosphor particles preferably have an average diameter from about 5 nm to about 50 nm and a diameter distribution such that at least about 95 percent of the particles have a diameter greater than about 60 percent of the average diameter and less than about 140 percent of the average diameter. In certain embodiments, the excitation of the phosphors is accomplished with low velocity electrons. In a second aspect, the invention features a display device according to claim 19.
In another aspect, the invention features a composition for application by photolithography comprising phosphor particles and a curable polymer, the phosphor particles having an average diameter and a distribution of diameters selected to yield light emissions in a selected portion of the electromagnetic spectrum following excitation and the phosphor particles having an average diameter less than about 100 nm. The curable polymer can be curable by UV radiation or by electron beam radiation. The phosphor particles preferably have an average diameter from about 5 nm to about 50 nm. In another aspect, the invention features a method for producing zinc oxide particles comprising the step of pyrolyzing a molecular stream comprising a zinc precursor, an oxidizing agent and a radiation absorbing gas in a reaction chamber, where the pyrolysis is driven by heat absorbed from a laser beam. The zinc oxide particles preferably have an average diameter less than about 150 nm and more preferably an average diameter from about 5 nm to about 50 nm. In practicing the method, the laser beam preferably is produced by a CO2 laser and the molecular stream preferably is elongated in one dimension. Suitable zinc precursors include ZnCl2. In another aspect, the invention features a method for producing zinc sulfide particles comprising the step of pyrolyzing a molecular stream comprising a zinc precursor, a sulfur source and a radiation absorbing gas in a reaction chamber, where the pyrolysis is driven by heat absorbed from a laser beam. Other features and advantages of the invention are apparent from the following description of the preferred embodiments, and from the claims. Fig. 1 is a schematic, sectional view of an embodiment of a laser pyrolysis apparatus taken through the middle of the laser radiation path. The upper insert is a bottom view of the injection nozzle, and the lower insert is a top view of the collection nozzle. Fig. 2 is a schematic, perspective view of a reaction chamber of an alternative embodiment of the laser pyrolysis apparatus, where the materials of the chamber are depicted as transparent to reveal the interior of the apparatus. Fig. 3 is a sectional view of the reaction chamber of Fig. 2 taken along line 3-3. Fig. 4 is a schematic, sectional view of an oven for heating particles, in which the section is taken through the center of the quartz tube. Fig. 5 is a sectional view of an embodiment of display device incorporating a phosphor layer. Fig. 6 is a sectional view of an embodiment of a liquid crystal display incorporating a phosphor for illumination. Fig. 7 is a sectional view of an electroluminescent display. Fig. 8 is a sectional view of an embodiment of a flat panel display incorporating field emission display devices. Small scale particles can be used as improved phosphor particles. In particular, particles on the order of 100 nm or less have superior processing properties to produce displays, and they have good luminescence. Significantly, the band gap of these materials is size dependent at diameters on the order of 100 nm or less. Therefore, particles with a selected, narrow distribution of diameters can serve as a phosphor at one color (wavelength) while particles of the same or different material with similarly selected average diameter and narrow distribution of sizes can serve as a phosphor at a different color. In addition, the small size of the particles can be advantageous for the production of higher resolution displays.
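To make the size dependence just described concrete, the following sketch estimates how the band gap and band-edge emission wavelength shift with particle diameter using a simple effective-mass (Brus-type) confinement term. The bulk gap and effective masses below are commonly quoted literature values for ZnO used here as assumptions; they are not taken from the patent, and the model ignores the Coulomb term, defects and activators.

H = 6.626e-34         # Planck constant, J*s
C = 3.0e8             # speed of light, m/s
E_CHARGE = 1.602e-19  # J per eV
M_E = 9.109e-31       # electron rest mass, kg

def confined_gap_ev(bulk_gap_ev, diameter_nm, m_eff_e=0.24, m_eff_h=0.45):
    # Bulk band gap plus a 1/d^2 quantum-confinement shift (effective-mass model).
    d = diameter_nm * 1e-9
    shift_j = (H ** 2 / (8 * d ** 2)) * (1 / (m_eff_e * M_E) + 1 / (m_eff_h * M_E))
    return bulk_gap_ev + shift_j / E_CHARGE

def emission_wavelength_nm(gap_ev):
    # Band-edge emission wavelength from E = h*c / lambda.
    return H * C / (gap_ev * E_CHARGE) * 1e9

for d in (5, 10, 20, 50):  # diameters in nm, spanning the preferred 5-50 nm range
    gap = confined_gap_ev(3.37, d)  # 3.37 eV: commonly quoted bulk gap for ZnO (assumption)
    print("d = %2d nm -> gap ~ %.2f eV, emission ~ %.0f nm" % (d, gap, emission_wavelength_nm(gap)))

Because the confinement shift scales as 1/d^2, smaller particles of the same composition emit at shorter wavelengths, which is the mechanism relied on here for selecting emission color by diameter rather than by composition.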
Appropriate particles generally are chalcogenides, especially ZnO, ZnS, TiO2, and Y2O3. Preferred particles have a desired emission frequency and are highly luminescent. In addition, preferred particles have persistent emission, i.e., there is a significant time for the emission to decay following stimulation of the material. Specifically, there should be sufficient persistence of the emission to allow for human perception. Suitable particles generally are semiconductors, and their emission frequency is determined by the band gap. Preferably, the luminescing state has an energy reasonably close to the excitation energy such that little energy is wasted as heat. Laser pyrolysis, as described below, is an excellent way of efficiently producing ZnO, ZnS, TiO2 and Y2O3 particles with narrow distributions of average particle diameters. A basic feature of successful application of laser pyrolysis for the production of appropriate small scale particles is production of a molecular stream containing a metal precursor compound, a radiation absorber and a reactant serving as an oxygen or sulfur source, as appropriate. The molecular stream is pyrolyzed by an intense laser beam. The intense heat resulting from the absorption of the laser radiation induces the reaction of the metal compound precursor in the oxygen or sulfur environment. As the molecular stream leaves the laser beam, the particles are rapidly quenched. Laser pyrolysis has been discovered to be a valuable tool for the production of nanoscale metal oxide and sulfide particles of interest. In addition, the metal oxide and sulfide particles produced by laser pyrolysis are a convenient material for further processing to expand the pathways for the production of desirable metal compound particles. Thus, using laser pyrolysis alone or in combination with additional processes, a wide variety of metal oxide and sulfide particles can be produced. In some cases, alternative production pathways can be followed to produce comparable particles. The reaction conditions determine the qualities of the particles produced by laser pyrolysis. The reaction conditions for laser pyrolysis can be controlled relatively precisely in order to produce particles with desired properties. The appropriate reaction conditions to produce a certain type of particles generally depend on the design of the particular apparatus. Nevertheless, some general observations on the relationship between reaction conditions and the resulting particles can be made. Reactant gas flow rate and velocity of the reactant gas stream are inversely related to particle size so that increasing the reactant gas flow rate or velocity tends to result in smaller particle size. Also, the growth dynamics of the particles have a significant influence on the size of the resulting particles. In other words, different crystal forms of a metal compound have a tendency to form different size particles from other crystal forms under relatively similar conditions. Laser power also influences particle size with increased laser power favoring larger particle formation for lower melting materials and smaller particle formation for higher melting materials. Appropriate metal precursor compounds generally include metal compounds with suitable vapor pressures, i.e., vapor pressures sufficient to get desired amounts of precursor vapor in the reactant stream.
The vessel holding the precursor compounds can be heated to increase the vapor pressure of the metal compound precursor, if desired. Preferred titanium precursors include, for example, TiCl4 and Ti[OCH(CH3)2]4 (titanium tetra-i-propoxide). Preferred yttrium precursors include Y5O(OC3H7)13 (yttrium oxide isopropoxide). Preferred zinc precursors include, for example, ZnCl2. ZnCl2 vapor can be generated by heating and, optionally, melting ZnCl2 solids. For example, ZnCl2 has a vapor pressure of about 5 mm Hg at a temperature of about 500°C. When using ZnCl2 precursor, the chamber and nozzle preferably are heated to avoid condensation of the precursor. Preferred reactants suitable as oxygen sources include, for example, O2, CO, CO2, O3 and mixtures thereof. Preferred reactants suitable as sulfur sources include, for example, H2S. The reactant compound serving as the oxygen or sulfur source should not react significantly with the metal precursor compound prior to entering the reaction zone since this generally would result in the formation of large particles. Laser pyrolysis can be performed with a variety of optical laser frequencies. Preferred lasers operate in the infrared portion of the electromagnetic spectrum. CO2 lasers are particularly preferred sources of laser light. Infrared absorbers for inclusion in the molecular stream include, for example, C2H4, NH3, SF6, SiH4 and O3. O3 can act as both an infrared absorber and as an oxygen source. The radiation absorber, such as the infrared absorber, absorbs energy from the radiation beam and distributes the energy as heat to the other reactants to drive the pyrolysis. Preferably, the energy absorbed from the radiation beam increases the temperature at a tremendous rate, many times the rate that energy generally would be produced even by strongly exothermic reactions under controlled conditions. While the process generally involves nonequilibrium conditions, the temperature can be described approximately based on the energy in the absorbing region. The laser pyrolysis process is qualitatively different from the process in a combustion reactor where an energy source initiates a reaction, but the reaction is driven by energy given off by an exothermic reaction. An inert shielding gas can be used to reduce the amount of reactant and product molecules contacting the reactant chamber components. Appropriate shielding gases include, for example, Ar, He and N2. An appropriate laser pyrolysis apparatus generally includes a reaction chamber isolated from the ambient environment. A reactant inlet connected to a reactant supply system produces a molecular stream through the reaction chamber. A laser beam path intersects the molecular stream at a reaction zone. The molecular stream continues after the reaction zone to an outlet, where the molecular stream exits the reaction chamber and passes into a collection system. Generally, the laser is located external to the reaction chamber, and the laser beam enters the reaction chamber through an appropriate window. Referring to Fig. 1, a particular embodiment 100 of a pyrolysis apparatus involves a reactant supply system 102, reaction chamber 104, collection system 106 and laser 108. Reactant supply system 102 includes a source 120 of metal compound precursor. For liquid precursors, a carrier gas from carrier gas source 122 can be introduced into precursor source 120, containing liquid precursor to facilitate delivery of the precursor.
The carrier gas from source 122 preferably is either an infrared absorber or an inert gas and is preferably bubbled through the liquid metal compound precursor. The quantity of precursor vapor in the reaction zone is roughly proportional to the flow rate of the carrier gas. Alternatively, carrier gas can be supplied directly from infrared absorber source 124 or inert gas source 126, as appropriate. The reactant serving as the oxygen or sulfur source is supplied from reactant source 128, which can be a gas cylinder or other appropriate container. The gases from the metal compound precursor source 120 are mixed with gases from reactant source 128, infrared absorber source 124 and inert gas source 126 by combining the gases in a single portion of tubing 130. The gases are combined a sufficient distance from reaction chamber 104 such that the gases become well mixed prior to their entrance into reaction chamber 104. The combined gas in tube 130 passes through a duct 132 into rectangular channel 134, which forms part of an injection nozzle for directing reactants into the reaction chamber. Flows from sources 122, 124, 126 and 128 are preferably independently controlled by mass flow controllers 136. Mass flow controllers 136 preferably provide a controlled flow rate from each respective source. Suitable mass flow controllers include, for example, Edwards Mass Flow Controller, Model 825 series, from Edwards High Vacuum International, Wilmington, MA. Inert gas source 138 is connected to an inert gas duct 140, which flows into annular channel 142. A mass flow controller 144 regulates the flow of inert gas into inert gas duct 140. Inert gas source 126 can also function as the inert gas source for duct 140, if desired. The reaction chamber 104 includes a main chamber 200. Reactant supply system 102 connects to the main chamber 200 at injection nozzle 202. The end of injection nozzle 202 has an annular opening 204 for the passage of inert shielding gas, and a rectangular slit 206 for the passage of reactant gases to form a molecular stream in the reaction chamber. Annular opening 204 has, for example, a diameter of about 3.81 cm (1.5 inches) and a width along the radial direction of about 0.1588 cm (1/16 in). The flow of shielding gas through annular opening 204 helps to prevent the spread of the reactant gases and product particles throughout reaction chamber 104. Tubular sections 208, 210 are located on either side of injection nozzle 202. Tubular sections 208, 210 include ZnSe windows 212, 214, respectively. Windows 212, 214 are about 2.54 cm (1 inch) in diameter. Windows 212, 214 are preferably plane-focusing lenses with a focal length equal to the distance from the center of the chamber to the surface of the lens to focus the beam to a point just below the center of the nozzle opening. Windows 212, 214 preferably have an antireflective coating. Appropriate ZnSe lenses are available from Janos Technology, Townshend, Vermont. Tubular sections 208, 210 provide for the displacement of windows 212, 214 away from main chamber 200 such that windows 212, 214 are less likely to be contaminated by reactants or products. Windows 212, 214 are displaced, for example, about 3 cm from the edge of the main chamber 200. Windows 212, 214 are sealed with a rubber o-ring to tubular sections 208, 210 to prevent the flow of ambient air into reaction chamber 104. Tubular inlets 216, 218 provide for the flow of shielding gas into tubular sections 208, 210 to reduce the contamination of windows 212, 214.
Tubular inlets 216, 218 are connected to inert gas source 138 or to a separate inert gas source. In either case, flow to inlets 216, 218 preferably is controlled by a mass flow controller 220. Laser 108 is aligned to generate a laser beam 222 that enters window 212 and exits window 214. Windows 212, 214 define a laser light path through main chamber 200 intersecting the flow of reactants at reaction zone 224. After exiting window 214, laser beam 222 strikes power meter 226, which also acts as a beam dump. An appropriate power meter is available from Coherent Inc., Santa Clara, CA. Laser 108 can be replaced with an intense conventional light source such as an arc lamp. Preferably, laser 108 is an infrared laser, especially a CW CO2 laser such as an 1800 watt maximum power output laser available from PRC Corp., Landing, NJ. Reactants passing through slit 206 in injection nozzle 202 initiate a molecular stream. The molecular stream passes through reaction zone 224, where reaction involving the metal precursor compound takes place. Heating of the gases in reaction zone 224 is extremely rapid, roughly on the order of 10^5 °C/sec depending on the specific conditions. The reaction is rapidly quenched upon leaving reaction zone 224, and particles 228 are formed in the molecular stream. The nonequilibrium nature of the process allows for the production of particles with a highly uniform size distribution and structural homogeneity. The path of the molecular stream continues to collection nozzle 230. Collection nozzle 230 is spaced about 2 cm from injection nozzle 202. The small spacing between injection nozzle 202 and collection nozzle 230 helps reduce the contamination of reaction chamber 104 with reactants and products. Collection nozzle 230 has a circular opening 232. Circular opening 232 feeds into collection system 106. The chamber pressure is monitored with a pressure gauge attached to the main chamber. The chamber pressure generally ranges from about 666.61 Pa (5 Torr) to about 133,322 Pa (1000 Torr). Reaction chamber 104 has two additional tubular sections not shown. One of the additional tubular sections projects into the plane of the sectional view in Fig. 1, and the second additional tubular section projects out of the plane of the sectional view in Fig. 1. When viewed from above, the four tubular sections are distributed roughly symmetrically around the center of the chamber. These additional tubular sections have windows for observing the inside of the chamber. In this configuration of the apparatus, the two additional tubular sections are not used to facilitate production of particles. Collection system 106 can include a curved channel 250 leading from collection nozzle 230. Because of the small size of the particles, the product particles follow the flow of the gas around curves. Collection system 106 includes a filter 252 within the gas flow to collect the product particles. A variety of materials such as teflon, glass fibers and the like can be used for the filter as long as the material is inert and has a fine enough mesh to trap the particles. Preferred materials for the filter include, for example, a glass fiber filter from ACE Glass Inc., Vineland, NJ. Pump 254 is used to maintain collection system 106 at a selected pressure. A variety of different pumps can be used.
Appropriate pumps for use as pump 254 include, for example, Busch Model B0024 pump from Busch, Inc., Virginia Beach, VA, with a pumping capacity of about 0.707921 cubic meters per minute (cmm) (25 cubic feet per minute (cfm)) and Leybold Model SV300 pump from Leybold Vacuum Products, Export, PA, with a pumping capacity of about 5.521785 cmm (195 cfm). It may be desirable to flow the exhaust of the pump through a scrubber 256 to remove any remaining reactive chemicals before venting into the atmosphere. The entire apparatus 100 can be placed in a fume hood for ventilation purposes and for safety considerations. Generally, the laser remains outside of the fume hood because of its large size. The apparatus is controlled by a computer. Generally, the computer controls the laser and monitors the pressure in the reaction chamber. The computer can be used to control the flow of reactants and/or the shielding gas. The pumping rate is controlled by either a manual needle valve or an automatic throttle valve inserted between pump 254 and filter 252. As the chamber pressure increases due to the accumulation of particles on filter 252, the manual valve or the throttle valve can be adjusted to maintain the pumping rate and the corresponding chamber pressure. The reaction can be continued until sufficient particles are collected on filter 252 such that the pump can no longer maintain the desired pressure in the reaction chamber 104 against the resistance through filter 252. When the pressure in reaction chamber 104 can no longer be maintained at the desired value, the reaction is stopped, and the filter 252 is removed. With this embodiment, about 3-75 grams of particles can be collected in a single run before the chamber pressure can no longer be maintained. A single run generally can last from about 10 minutes to about 3 hours depending on the type of particle being produced and the particular filter. Therefore, it is straightforward to produce a macroscopic quantity of particles, i.e., a quantity visible with the naked eye. The reaction conditions can be controlled relatively precisely. The mass flow controllers are quite accurate. The laser generally has about 0.5 percent power stability. With either a manual control or a throttle valve, the chamber pressure can be controlled to within about 1 percent. The configuration of the reactant supply system 102 and the collection system 106 can be reversed. In this alternative configuration, the reactants are supplied from the bottom of the reaction chamber, and the product particles are collected from the top of the chamber. This alternative configuration tends to result in a slightly higher collection of product for particles that tend to be buoyant in the surrounding gases. In this configuration, it is preferable to include a curved section in the collection system so that the collection filter is not mounted directly above the reaction chamber. An alternative design of a laser pyrolysis apparatus has been described. See commonly assigned U.S. Patent Application No. 08/808,850, entitled "Efficient Production of Particles by Chemical Reaction." This alternative design is intended to facilitate production of commercial quantities of particles by laser pyrolysis. A variety of configurations are described for injecting the reactant materials into the reaction chamber.
The alternative apparatus includes a reaction chamber designed to minimize contamination of the walls of the chamber with particles, to increase the production capacity and to make efficient use of resources. To accomplish these objectives, the reaction chamber conforms generally to the shape of an elongated reactant inlet, decreasing the dead volume outside of the molecular stream. Gases can accumulate in the dead volume, increasing the amount of wasted radiation through scattering or absorption by nonreacting molecules. Also, due to reduced gas flow in the dead volume, particles can accumulate in the dead volume causing chamber contamination. The design of the improved reaction chamber 300 is schematically shown in Figs. 2 and 3. A reactant gas channel 302 is located within block 304. Facets 306 of block 304 form a portion of conduits 308. Another portion of conduits 308 join at edge 310 with an inner surface of main chamber 312. Conduits 308 terminate at shielding gas inlets 314. Block 304 can be repositioned or replaced, depending on the reaction and desired conditions, to vary the relationship between the elongated reactant inlet 316 and shielding gas inlets 314. The shielding gases from shielding gas inlets 314 form blankets around the molecular stream originating from reactant inlet 316. The dimensions of elongated reactant inlet 316 preferably are designed for high efficiency particle production. Reasonable dimensions for the reactant inlet for the production of metal oxide or metal sulfide particles, when used with an 1800 watt CO2 laser, are from about 5 mm to about 1 meter. Main chamber 312 conforms generally to the shape of elongated reactant inlet 316. Main chamber 312 includes an outlet 318 along the molecular stream for removal of particulate products, any unreacted gases and inert gases. Tubular sections 320, 322 extend from the main chamber 312. Tubular sections 320, 322 hold windows 324, 326 to define a laser beam path 328 through the reaction chamber 300. Tubular sections 320, 322 can include shielding gas inlets 330, 332 for the introduction of shielding gas into tubular sections 320, 322. The improved apparatus includes a collection system to remove the particles from the molecular stream. The collection system can be designed to collect a large quantity of particles without terminating production or, preferably, to run in continuous production by switching between different particle collectors within the collection system. The collection system can include curved components within the flow path similar to the curved portion of the collection system shown in Fig. 1. The configuration of the reactant injection components and the collection system can be reversed such that the particles are collected at the top of the apparatus. As noted above, properties of the metal compound particles can be modified by further processing. For example, oxide nanoscale particles can be heated in an oven in an oxidizing environment or an inert environment to alter the oxygen content and/or crystal structure of the metal oxide. The processing of nanoscale metal oxides in an oven is further discussed in commonly assigned and copending U.S. Patent Application Ser. No. 08/897,903, entitled "Processing of Vanadium Oxide Particles With Heat." In addition, the heating process can be used possibly to remove adsorbed compounds on the particles to increase the quality of the particles.
It has been discovered that use of mild conditions, i.e., temperatures well below the melting point of the particles, can result in modification of the stoichiometry or crystal structure of metal oxides without significantly sintering the particles into larger particles. A variety of apparatuses can be used to perform the heat processing. An example of an apparatus 400 to perform this heat processing is displayed in Fig. 4. Apparatus 400 includes a tube 402 into which the particles are placed. Tube 402 is connected to a reactive gas source 404 and inert gas source 406. Reactant gas, inert gas or a combination thereof to produce the desired atmosphere is placed within tube 402. Preferably, the desired gases are flowed through tube 402. Appropriate reactant gases to produce an oxidizing environment include, for example, O2, O3, CO, CO2, and combinations thereof. The reactant gases can be diluted with inert gases such as Ar, He and N2. The gases in tube 402 can be exclusively inert gases, if desired. The reactant gases may not result in changes to the stoichiometry of the particles being heated. Tube 402 is located within oven or furnace 408. Oven 408 maintains the relevant portions of the tube at a relatively constant temperature, although the temperature can be varied systematically through the processing step, if desired. Temperature in oven 408 generally is measured with a thermocouple 410. The particles can be placed in tube 402 within a vial 412. Vial 412 prevents loss of the particles due to gas flow. Vial 412 generally is oriented with the open end directed toward the direction of the source of the gas flow. The precise conditions including type of active gas (if any), concentration of active gas, pressure or flow rate of gas, temperature and processing time can be selected to produce the desired type of product material. The temperatures generally are mild, i.e., significantly below the melting point of the material. The use of mild conditions avoids interparticle sintering resulting in larger particle sizes. Some controlled sintering of the metal oxide particles can be performed in oven 408 at somewhat higher temperatures to produce slightly larger average particle diameters. For the processing of titanium oxides and zinc oxides, the temperatures preferably range from about 50°C to about 1000°C and more preferably from about 80°C to about 500°C. The particles preferably are heated for about 1 hour to about 100 hours. Some empirical adjustment may be required to produce the conditions appropriate for yielding a desired material. A collection of preferred particles has an average diameter of less than a micron, preferably from about 5 nm to about 500 nm and more preferably from about 5 nm to about 100 nm, and even more preferably from about 5 nm to about 50 nm. The particles generally have a roughly spherical gross appearance. Upon closer examination, the particles generally have facets corresponding to the underlying crystal lattice. Nevertheless, the particles tend to exhibit growth that is roughly equal in the three physical dimensions to give a gross spherical appearance. Diameter measurements on particles with asymmetries are based on an average of length measurements along the principal axes of the particle. The measurements along the principal axes preferably are each less than about 1 micron for at least about 95 percent of the particles, and more preferably for at least about 98 percent of the particles.
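As an illustration of the uniformity criterion recited in the claims (at least about 95 percent of the particles within about 60 to 140 percent of the average diameter), the short sketch below checks a set of diameters against that band. The sample values are hypothetical TEM-style measurements, not data from the patent.

def meets_uniformity(diameters_nm, low=0.60, high=1.40, fraction=0.95):
    # True if at least `fraction` of the diameters lie between low*mean and high*mean.
    mean = sum(diameters_nm) / len(diameters_nm)
    in_band = sum(1 for d in diameters_nm if low * mean < d < high * mean)
    return in_band / len(diameters_nm) >= fraction, mean

sample = [22, 25, 27, 24, 26, 23, 28, 25, 24, 26, 31, 19, 25, 27, 24]  # nm, hypothetical
ok, mean = meets_uniformity(sample)
print("average diameter ~ %.1f nm, 60-140%% criterion met: %s" % (mean, ok))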
Because of their small size, the particles tend to form loose agglomerates due to van der Waals forces between nearby particles. Nevertheless, the nanometer scale of the particles (i.e., primary particles) is clearly observable in transmission electron micrographs of the particles. For crystalline particles, the particle size generally corresponds to the crystal size. The particles generally have a surface area corresponding to particles on a nanometer scale as observed in the micrographs. Furthermore, the particles manifest unique properties due to their small size and large surface area per weight of material. Of particular relevance, the particles have an altered band structure, as described further below. The high surface area of the particles generally leads to high luminosity of the particles. As produced, the particles preferably have a high degree of uniformity in size. As determined from examination of transmission electron micrographs, the particles generally have a distribution in sizes such that at least about 95 percent of the particles have a diameter greater than about 40 percent of the average diameter and less than about 160 percent of the average diameter. Preferably, the particles have a distribution of diameters such that at least about 95 percent of the particles have a diameter greater than about 60 percent of the average diameter and less than about 140 percent of the average diameter. The narrow size distributions can be exploited in a variety of applications, as described below. For some of the applications, it may be desirable to mix several collections of particles, each having a narrow diameter distribution, to produce a desired distribution of particle diameters and compositions. At small crystalline diameters the band properties of the particles are altered. The increase in band gap is approximately in proportion to 1/(particle size)^2. For especially small particle sizes, the density of states may become low enough that the band description may become incomplete as individual molecular orbitals play a more prominent role. The qualitative trends should hold regardless of the need to account for a molecular orbital description of the electronic properties. In addition, with a uniform distribution of small particles, the emission spectrum narrows because of the reduction of inhomogeneous broadening. The result is a sharper emission spectrum with an emission maximum that depends on the average particle diameter. Thus, the use of very small particle diameters may allow for adjustment of emission characteristics without the need to activate the particles with a second metal. Furthermore, the small size of the particles allows for the formation of very thin layers. This is advantageous for use with low velocity electrons since the electrons may not penetrate deeply within a layer. The small size of the particles is also conducive to the formation of small patterns, for example using photolithography, with sharp edges between the elements of the pattern. The production of small, sharply separated elements is important for the formation of high resolution displays. In addition, the particles produced as described above generally have a very high purity level. Metal oxide and sulfide particles produced by the above methods are expected to have a purity greater than the reactant gases because the crystal formation process tends to exclude contaminants from the lattice.
Furthermore, metal oxide and sulfide particles produced by laser pyrolysis generally have a high degree of crystallinity and few surface distortions. Although under certain conditions mixed phase material may be formed, laser pyrolysis generally can be effectively used to produce single phase crystalline particles. Primary particles generally consist of single crystals of the material. The single phase, single crystal properties of the particles can be used advantageously along with the uniformity and narrow size distribution. Under certain conditions, amorphous particles may be formed by laser pyrolysis. Some amorphous particles can be heated under mild conditions to form crystalline particles. Zinc oxides can have a stoichiometry of, at least, ZnO (hexagonal crystal, wurtzite structure) or ZnO2. The production parameters can be varied to select for a particular stoichiometry of zinc oxide. Zinc sulfide has a cubic crystal lattice generally with a zincblende structure. Y2O3 has a cubic crystal lattice. Titanium dioxide is known to exist in three crystalline phases, anatase, rutile and brookite, as well as an amorphous phase. The anatase and rutile phases have a tetragonal crystal lattice, and the brookite phase has an orthorhombic crystal structure. The conditions of the laser pyrolysis can be varied to favor the formation of a single, selected phase of TiO2. In addition, heating of small metal oxide particles under mild conditions may be useful to alter the phase or composition of the materials. The particles described in this application can be used as phosphors. The phosphors emit light, preferably visible light, following excitation. A variety of ways can be used to excite the phosphors, and particular phosphors may be responsive to one or more of the excitation approaches. Particular types of luminescence include cathodoluminescence, photoluminescence and electroluminescence which, respectively, involve excitation by electrons, light and electric fields. Many materials that are suitable as cathodoluminescence phosphors are also suitable as electroluminescence phosphors. In particular, the particles preferably are suitable for low-velocity electron excitation, with electrons accelerated with potentials below 1 kV, and more preferably below 100 V. The small size of the particles makes them suitable for low velocity electron excitation. Furthermore, the particles produce high luminescence with low electron velocity excitation. The phosphor particles can be used to produce any of a variety of display devices based on low velocity electrons, high velocity electrons, or electric fields. Referring to Fig. 5, a display device 500 includes an anode 502 with a phosphor layer 504 on one side. The phosphor layer faces an appropriately shaped cathode 506, which is the source of electrons used to excite the phosphor. A grid cathode 508 can be placed between the anode 502 and the cathode 506 to control the flow of electrons from the cathode 506 to the anode 502. Cathode ray tubes (CRTs) have been used for a long time for producing images. CRTs generally use relatively higher electron velocities. Phosphor particles, as described above, can still be used advantageously as a convenient way of supplying particles of different colors, reducing the phosphor layer thickness and decreasing the quantity of phosphor for a given luminosity. CRTs have the general structure as shown in Fig.
5, except that the anode and cathode are separated by a relatively larger distance and steering electrodes rather than a grid electrode generally are used to guide the electrons from the cathode to the anode. Other preferred applications include the production of flat panel displays. Flat panel displays can be based on, for example, liquid crystals or field emission devices. Liquid crystal displays can be based on any of a variety of light sources. Phosphors can be useful in the production of lighting for liquid crystal displays. Referring to Fig. 6, a liquid crystal element 530 includes at least partially light transparent substrates 532, 534 surrounding a liquid crystal layer 536. Lighting is provided by a phosphor layer 538 on an anode 540. Cathode 542 provides a source of electrons to excite the phosphor layer 538. Alternative embodiments are described, for example, in U.S. Patent No. 5,504,599. Liquid crystal displays can also be illuminated with backlighting from an electroluminescent display. Referring to Fig. 7, electroluminescent display 550 has a conductive substrate 552 that functions as a first electrode. Conductive substrate 552 can be made from, for example, aluminum or graphite. A second electrode 554 is transparent and can be formed from, for example, indium tin oxide. A dielectric layer 556 may be located between electrodes 552, 554, adjacent to first electrode 552. Dielectric layer 556 includes a dielectric binder 558 such as cyanoethyl cellulose or cyanoethyl starch. Dielectric layer 556 can also include ferroelectric material 560 such as barium titanate. Dielectric layer 556 may not be needed for dc-driven (in contrast with ac-driven) electroluminescent devices. A phosphor layer 562 is located between transparent electrode 554 and dielectric layer 556. Phosphor layer 562 includes electroluminescent particles 564 in a dielectric binder 566. Electroluminescent display 550 also can be used for other display applications such as automotive dashboard and control switch illumination. In addition, a combined liquid crystal/electroluminescent display has been designed. See Fuh et al., Japan J. Applied Phys. 33:L870-L872 (1994). Referring to Fig. 8, a display 580 based on field emission devices involves anodes 582 and cathodes 584 spaced a relatively small distance apart. Each electrode pair forms an individually addressable pixel. A phosphor layer 586 is located between each anode 582 and cathode 584. The phosphor layer 586 includes phosphorescent nanoparticles as described above. Phosphorescent particles with a selected emission frequency can be located at a particular addressable location. The phosphor layer 586 is excited by low velocity electrons travelling from the cathode 584 to the anode 582. Grid electrodes 588 can be used to accelerate and focus the electron beam as well as act as an on/off switch for electrons directed at the phosphor layer 586. An electrically insulating layer is located between anodes 582 and grid electrodes 588. The elements are generally produced by photolithography or comparable techniques such as sputtering and chemical vapor deposition for the production of integrated circuits. As shown in Fig. 8, the anode should be at least partially transparent to permit transmission of light emitted by phosphor 586. Alternatively, U.S. Patent 5,651,712, discloses a display incorporating field emission devices having a phosphor layer oriented with an edge (rather than a face) along the desired direction for light propagation.
The construction displayed in this patent incorporates color filters to produce a desired color emission rather than using phosphors that emit at desired frequencies. Based on the particles described above, selected phosphor particles preferably would be used to produce the different colors of light, thereby eliminating the need for color filters. The phosphor particles can be adapted for use in a variety of other devices beyond the representative embodiments specifically described. The nanoparticles can be directly applied to a substrate to produce the above structures. Alternatively, the nanoparticles can be mixed with a binder such as a curable polymer for application to a substrate. The composition involving the curable binder and the phosphor nanoparticles can be applied to a substrate by photolithography or other suitable technique for patterning a substrate, such as is used in the formation of integrated circuit boards. Once the composition is deposited at suitable positions on the substrate, the material can be exposed to suitable conditions to cure the polymer. The polymer can be curable by electron beam radiation, UV radiation or other suitable techniques.
Tony Robinson presents aerial footage of areas of Britain that are off limits to the public, including the underground tube station at Bank. Hidden Britain by Drone. Tony Robinson returns with a second series of the show using drones to take us into usually inaccessible places. Tonight, they dive into tunnels at London’s Bank station, where construction work is going on, soar over oil rigs in a Scottish firth and, at a location worthy of Scooby-Doo, explore an abandoned amusement park. ‘I’ve always thought of a drone as being essentially an exterior device,’ explains Tony, ‘but we’ve been able to use them in interiors, too. ‘The first sequence in episode one is at a big stately home, where we were able to do amazing tracking shots through rooms where the doors you’re going through are very small. You couldn’t have done that before. ‘We also go underground into the tunnels at Bank Tube station in London, into luxury car vaults, and into the biggest wine store in Europe – it’s brilliant stuff!’
https://www.whatsontv.co.uk/events/hidden-britain-drone-c4-5-aug/
The internet has already changed the lives of billions of people all over the planet and continues to do so. But in order to fully benefit from what the internet can offer, a broadband connection is essential. In the Arctic this is not yet the case. A large portion of the Arctic region suffers from poor connectivity. There exists a significant digital gap between the northern and southern regions of the Arctic countries. For the majority of the inhabitants of the Arctic regions, internet is not only very expensive; it also offers low bandwidth and low data caps. This is particularly the case in Nunavut where Inuit rely on only one way to connect: via satellite. Other regions can be connected via microwave or terrestrial fiber optic cables, but not all of them. Even if satellite and microwave connect the northerners to the rest of the world, these technologies are vulnerable to the harsh environment (ice, snow storms, electromagnetic storms), which can disrupt, and even cut off completely, the only way some of the Indigenous communities have to communicate. Submarine cables for now seem to be the most reliable, fastest and cheapest option in the long term to connect most of the communities to broadband internet, even in the Arctic. Since most of the Arctic communities are settled on shores in the North American Arctic, especially in Canada, laying submarine fibre optic cables to connect them to broadband internet might be a solution. But why is broadband internet via submarine fibre optic cables vital for the Arctic populations? How has the internet changed their lives and will it continue to do so? Internet, an Everyday Necessity Internet is central to everyday life in the North, especially for Indigenous peoples, and is now considered a basic need and even a human right. It helps the Inuit to protect their culture and rights by raising awareness via social media. Unfortunately, most of the time they must deal with bad connections and signal problems. However, change could be around the corner with the completion of several submarine cable projects coming to the Arctic, bringing broadband to the top of the world. The fact that Connectivity was chosen as one of the four priorities of the Finnish chairmanship of the Arctic Council (AC) from 2017 to 2019,1 appears to be logical when we combine all the studies published on this subject over the past few years. It reveals the enormous need for a better connection in the Arctic regions. It has been a Northern concern for many years, while reports point towards the need for faster, more reliable and affordable broadband connections for all Arctic inhabitants and especially Indigenous peoples.2 The will of the AC to take this matter into consideration is highlighted by the creation of the Task Force on Telecommunications Infrastructure in the Arctic (TFTIA),3 and the release of two reports on Arctic telecommunications: the Arctic Economic Council’s (AEC) January 2017 report Arctic Broadband, Recommendations for an Interconnected Arctic,4 and the AC’s May 2017 report on Telecommunications Infrastructure in the Arctic: A Circumpolar Assessment.5 A basic need and a human right As with running water, electricity or food, broadband internet access has become a necessity for everyday life.
In 2016, the Canadian Radio-television and Telecommunications Commission (CRTC) declared broadband internet a basic need.6 Perhaps even more than a need, internet has been a legal right for every citizen of Finland since 2010, an Arctic country where internet access has been a universal service obligation (USO) at a minimum rate of 2 Mbps since 2015, with a target of 10 Mbps by 2021. In 2016, the United Nations went further, passing a non-binding resolution defining internet access as a basic human right. Amazon Prime in the Arctic, a double-edged sword? Thanks to the internet, the everyday lives of some northerners have changed in the past few years through the use of Amazon. The Amazon Prime membership was, and still is, an essential tool in everyday life for some North American communities. This status allows customers to ship their purchases from the website for free to almost anywhere in North America7 within a few days, sometimes longer for the Arctic regions, for only 79 CDN$ a year in Canada and 99 US$ in the United States. Like everything in the North, shipping costs are much higher than in the south of the US and Canada. Shipping goods by plane is expensive, and by sealift it is only possible during summer, and even then only for non-perishable foods and supplies. That is why the free-delivery Prime status has become such a boon for northerners. In Arctic Alaska, for instance, Prime allows Alaska inhabitants to purchase everyday items for the same price they would pay in the contiguous United States. For example, the owner of one of the few hotels in Utqiagvik can serve fresh bread for breakfast every day because he can order his flour on Amazon for a much lower price thanks to Prime. Furthermore, he can fill his vending machine with candies that, even with shipping fees, are cheaper than those at the local store, making local kids happy to be able to buy affordable sweets. But the benefits don’t just apply to food. They’re seen in schools as well. In Eagle, Alaska, a remote town located near the Canadian border, the town’s school principal, Kristy Robbins, uses Amazon Prime to provide her school with gym and art supplies, allowing them to last until the end of the year even when the road is closed during winter.8 Before the Prime status was created, people living in remote Alaskan villages who wanted to save money on shopping had to fly to urban areas such as Anchorage or Fairbanks, buy their goods, and then mail them through the United States Postal Service (USPS) or ship them by plane back to their home in the Arctic. A double-edged sword for Canadian communities. Canadian northerners also used to benefit from Prime’s free shipping, until April 2015, when Amazon decided to ship for free only to the capitals of the Arctic territories (i.e., Iqaluit, Nunavut; Yellowknife, Northwest Territories; and Whitehorse, Yukon).9 Since then, northerners must pay $29 CDN plus $9.99 CDN per pound of weight, making it impossible to order vital goods at affordable prices.10 The end of Amazon’s free shipping for Canadian Arctic communities had dramatic economic impacts, especially in Nunavut where severe food insecurity continues.
Residents can now only rely on local stores where food prices are high, as shown by the Feeding My Family Facebook page,11 where Inuit try to raise awareness by posting pictures of food prices in local stores12 and of the increased cost of Amazon deliveries.13 Free shipping was a real life-changer for Inuit communities, allowing them to buy both food and essentials for everyday life at much lower prices than they were used to in local stores. However, Amazon Prime should not be considered the solution to food insecurity. On the one hand, it is only accessible to people who can afford a credit card and subscribe to the Prime membership. On the other, it creates a dependency that leaves no backup options if free shipping is cancelled. The internet is not the answer to every problem in the Arctic, but it can sometimes be a very useful tool to help diminish the drawbacks of living in the very remote North, and also to raise awareness about Indigenous life and living conditions, such as through social media. A tool to gain political weight. Despite low speeds, high prices and data caps14 for internet connections, Inuit are social media savvy. Internet is used, primarily but not exclusively, to gain visibility in the media and develop political weight. This massive use of social media helps Indigenous peoples become more visible to society through campaigns in the cyber and the real world, making their voices heard. Whether in Greenland, Canada or Alaska, Inuit use Facebook and Twitter when their culture or way of life is attacked. A significant example of the importance of social media came when the American television celebrity Ellen DeGeneres tweeted a selfie taken during the 2014 Oscars ceremony to raise money against seal hunting. It was, until recently, the most shared tweet in the history of the platform. Following that tweet, Inuit mobilized on Twitter and, in opposition, created the hashtag #Sealfie, posting selfies of Inuit wearing seal skin in a bid to defend their traditional way of life and to oppose the seal hunt ban campaign, not only in big communities of the Canadian Arctic but also in remote villages. Thanks to the internet, this action had an international echo. The Idle No More (INM) movement gained visibility not only because of physical protests around the world, but also because of activism in the cyber world. It began in late 2012 after four women15 in Saskatchewan, Canada, exchanged e-mails worrying about the effects of the federal government’s omnibus budget Bill C-45, which threatened the environmental protection of almost all Canadian waterways. The movement first gathered First Nations, Métis and Inuit, and then spread all over North America and even around the world with rallies, protests, flash mobs and marches organized in urban centers. In parallel, it took over the cyber world via a very popular hashtag on Twitter, #IdleNoMore, and through a Facebook page. The popularity of INM was further amplified through the power of social media, which helped give the operation international echo and visibility, bypassing traditional media.16 It is social media that helped reinforce this movement and provide it with political legitimacy.
Hence, it helped Indigenous peoples reach Canadian public opinion; a poll showed that two thirds of Canadians had heard about the INM movement.17 Twitter also helped to create bonds and unity between Canadian Indigenous peoples who might previously have been divided.18 Without an internet connection, Inuit and other Indigenous populations of the North American Arctic would not have been able to join the movement in the cyber world. In fact, it allowed them to become an important part of it, despite living far away from the rallies and marches that were taking place further south. Even if internet in the Arctic is slow, expensive and capped, the examples above show how it has already changed the lives of many northerners, especially the Inuit, and many other examples exist. The completion of several submarine cables bringing a cheaper and more reliable broadband connection could initiate more changes in northerners’ lives and help them to fully benefit from what the internet can offer. Arctic Submarine Cable Projects to Come. Five submarine fibre optic cable projects in the Arctic have been announced so far. Each one has a different goal: either to connect the Arctic regions and/or to connect Asia, Europe and North America (mostly for data centers and stock exchange markets). However, plans to lay fibre optic cables through the Arctic have previously been scrapped, which draws skepticism toward the new projects today.19 The idea is not new, though no one has yet managed to lay a cable beneath the Arctic Ocean in either the Northwest or the Northeast Passage. The completion of these projects is a real challenge, with not only technological but also financial risks. These cables require large investments with no guarantee of successful results. Northwest Passage (Quintillion, Nuvitik, Kativik). The most advanced of all the projects is the Quintillion Network submarine cable, which began as the Arctic Fibre project before Quintillion purchased it in 2016,20 and now carries a slightly different design. The first part of the cable, phase one, was laid during the summer of 2016 close to the coast of Alaska, connecting five villages,21 and, at the end of the summer of 2017, Prudhoe Bay in Arctic Alaska. Phase one was announced ‘Ready For Service’ (RFS) by December 1st, 2017.22 In phases two and three, the cable will connect Japan to Great Britain through the Northwest Passage (NWP), thereby connecting the major stock exchanges of the northern hemisphere while connecting some of the Indigenous communities along the way in the NWP for a much cheaper price than satellite and microwave.23 The main investor in Quintillion is Len Blavatnik,24 originally from Ukraine and also the owner of Warner Music. A Québec-based company, Nuvitik, wants to give all the Inuit communities of Nunavut the possibility of access to broadband internet via its Ivaluk Network, for a much lower price than satellite. Driven by social concerns, this non-profit project is awaiting funding from the Canadian federal government before it can go ahead. To date, the company has not received any money from the federal or the territorial government to kick-start its project. The other Canadian project, the Eastern Arctic Undersea Fibre Optic Network (EAUFON), is led by the Kativik Regional Government (KRG) in northern Québec (Nunavik).
Quite similar to the previous project, EAUFON seeks to connect 24 communities of Nunavik, Nunavut and Nunatsiavut to broadband internet via a submarine cable. In October 2016, the KRG awarded a contract to WFN Strategies to lead a feasibility study and risk assessment.25 Greenland. The west coast of Greenland will soon have a second submarine cable, Greenland Connect North, aiming to connect Nuuk, Maniitsoq, Sisimiut and Aasiaat to broadband internet.26 It should be RFS by December 2017. It complements the first submarine cable, which has connected Greenland to North America (via Newfoundland) and Europe (via Iceland) since 2009. TeleGreenland continues to invest in its infrastructure in order to bring broadband internet to more Greenlandic communities. Northeast Passage. In the Russian Arctic, a submarine cable project called Arctic Connect aims to connect Asia, Russia and Europe via the Northeast Passage (NEP) by 2022. This project, estimated at 700 million USD, is being developed by Cinia Group, a company 77% owned by the Finnish state, which is also backing the project.27 With this cable, Finland hopes to further improve its internet network and consequently become a major data hub.28 Arctic Connect will include a partnership between Finland, Norway and Russia, all three of which are extremely interested in having this cable in their Arctic regions. The project has had political support from Russia since it was discussed at a meeting between the Prime Ministers of Finland and Russia in December 2016.29 Since then, the Russian Ministry of Communications and Mass Media has released a statement declaring that it will support the project, while Polarnet and Cinia will create a joint venture to lay the cable in the NEP.30 It seems that the Russian company Polarnet Project, created in 1999 to lay a cable in the Russian Arctic, is still in the race after a few years of intermission. This cable could help the Russian authorities further develop the NEP, a highly strategic area for the Russian government. Recently, China also showed interest in the cable: at a meeting between the Russian Minister of Communications and Mass Media, Nikolai Nikiforov, and the Chinese Minister of Industry and Information Technology, Miao Wei, in July 2017, China offered to cooperate on the project.31 Conclusion. Indigenous peoples, including the Inuit, have already adopted the internet and social media because they understand their virtues in terms of political influence and their socio-economic advantages for northern rural towns and villages. Without internet access, Arctic issues might have remained isolated from the South, with limited exposure and little political weight to influence public opinion. The five submarine fibre optic cables seeking to bring broadband internet to the top of the world may continue to change the lives of northerners by allowing them to benefit from all that the internet can offer, such as tele-health, tele-education, e-government and e-business, and may even attract new investors, such as data-center companies, to the Arctic. Ultimately, the cyber world will help Indigenous peoples defend their cultures and share information about their traditional way of life with a larger audience via social media.
Due to the ongoing thawing of sea ice, the Arctic Ocean is predicted to be increasingly open to the impacts of globalization: not only tourism, shipping and oil and gas extraction, but also new internet highways, which will hopefully connect Arctic inhabitants, allowing them to protect their culture while becoming closer to the connected world.
https://arcticyearbook.com/arctic-yearbook/2017/2017-briefing-notes/250-submarine-cables-bringing-broadband-internet-to-the-arctic-a-life-changer-for-northerners
We publish the best essays of the Free TON Positioning Essay Contest. This time with a technological bias. About Internet protocols using the OSI model. From the early 1990s, when it was a network of several thousand sites built on simple HTML, the Internet has come a long way to become a global system that is an integral part of society. In the seven-layer OSI (Open Systems Interconnection) network model, the physical-layer and data-link-layer protocols have undergone incredible metamorphoses with the development of technologies, communications and equipment: from the simplest network equipment and 300-baud modems to optical fiber lines of virtually any capacity entering apartments, and in the near future perhaps even quantum repeaters! At the same time, the logical structure of the entire global network (the network-layer protocols) is a legacy of the 20th century. Beyond the fact that the entire array of possible IPv4 addresses is no longer enough, the protocols themselves were developed for ideal conditions and therefore proved defenseless against various network attacks. The lack of encryption at the basic level invites attacks such as MITM, ARP poisoning and DDoS, which appeared after the protocols were standardized. The security “crutches” that appeared in response to these threats did not solve the problem: the IP protocol and the entire IP routing system are outdated. Leaving aside the transport layer and moving on to the protocols of the session, presentation and especially the application layers, it should be noted that they are excessive and overcomplicated, since they are partly aimed at solving the problems of the lower-layer protocols. Tasks such as traffic encryption and reliable addressing, which ought to be solved by the network- and transport-layer protocols, are left to the mercy of the upper-layer protocols, which, instead of their main tasks, additionally try to patch architectural vulnerabilities. This seriously affects overall security, which in effect has become hostage to the quality of the networking code in specific programs. About services and applications of the Internet. What is the Internet for the user? Services and applications! A small number of services (storage and transmission of data, communication, media and games, financial and information services) corresponds to a huge number of applications implementing them. In effect, these are applications for the sake of applications. At the same time, the successful Chinese WeChat system has demonstrated how one can do without this diversity. A huge number of applications creates a corresponding number of problems stemming from the quality of application development. The scourge of quality is the “release race”. This problem was partially addressed by global corporate app distribution centers such as Google Play and the App Store, which took responsibility for the quality of programs, but it did not solve the problem globally. Paraphrasing Murphy’s law: if errors in software development are fundamentally possible, they will definitely be there. About the advantages and disadvantages of centralized management. History shows that the concentration of power in one pair of hands allows tactical tasks to be solved quickly. But in the long run, this form of power tends to be abused and starts to hinder progress. The situation is similar with the Internet.
Large corporations, which control the threads of the global network, are primarily concerned with their own well-being. Occasionally the interests of companies coincide with the interests of most Internet users, but not always. Beyond the hackneyed topic of corporations using people’s personal and financial data at their own discretion, they are also able to influence every Internet user, manipulating information and shaping social development. Think about it: with such a system, the future is not in our hands but in the hands of a “board of directors” pursuing its own goals. About Web 3.0 as a utopia. In 2007, Jason Calacanis posted a vision for the Internet on his blog. Calacanis noted that on the modern Internet, because of the huge number of near-identical resources, many of them have been devalued. He suggested that a new platform, not so much technological as socio-cultural, should emerge, allowing professionals to create interesting, useful and high-quality content. Web 3.0 was presented as a continuation of the Web 2.0 concept. As a solution, he suggested introducing a metalanguage describing the content of sites in order to organize automatic exchange between servers. However, no implementation happened. The reason is not so much the cost of creating a semantic version of sites as the human factor: there is no guarantee that publishers will describe their own resources adequately, the descriptive mechanisms offer a wide field for manipulation, and a unified description format cannot be adopted in a competitive environment because of corporate advertising policies. About technological solutions and Free TON. From the above, we can conclude that implementing the concept of Web 3.0 requires a new technology platform that would remove personal and corporate interests from software development, network management and decision-making. The main properties of such a platform should be: a distributed architecture; security of network interaction at the basic level; and the ability to implement Internet services. The best candidates for such platforms are solutions based on the blockchain concept. Yes, we can criticize the blockchain implementation that appeared in Bitcoin in 2009 (for example, Proof-of-Work block formation from the point of view of the planet’s ecology); nevertheless, it became the starting point for the development of whole families of more technologically advanced solutions. Today, one of the most conceptually advanced solutions is the TON (Telegram Open Network) P2P network, developed by the Durov brothers for the subsequent transfer of the Telegram messenger onto it. Although, as a result of an injunction, the platform was deprived of the opportunity to join Telegram’s 400-million-user community, in May 2020, thanks to the efforts of enthusiast communities and the TON developers, TON Labs, the platform was launched under the name Free TON with the native Crystal token. What is Free TON, and why is it well suited to be the technological platform of the Internet of the future? While Bitcoin is only a payment system, Free TON is a full-fledged, scalable P2P overlay network built on an advanced blockchain 2.0 architecture.
The Free TON blockchain architecture consists of a masterchain and up to 2^32 workchains, each with its own parameters and token. Workchain number 0 operates a token named Crystal. Workchains can be divided into shardchains, up to 2^64 in total. Each shardchain independently processes transactions, parallelizing calculations and providing millions of transactions per second. Shardchains are split or merged to maintain the desired speed. Blocks are formed using a Proof-of-Stake (PoS) mechanism, and forking of the blockchain is not possible. In addition, a decentralized data storage service called TON Storage is provided within the blockchain. The smart-contract mechanism has been fully implemented and taken to its logical conclusion in Free TON: in fact, everything in the system, including user accounts, is a smart contract. Smart contracts are executed in a virtual machine, the TON VM, which is part of the system. The cost of executing a smart contract, paid in Crystal, grows with the computing resources (gas) it consumes. Free TON provides its own communication mechanism, independent of the TCP/IP protocols, with encryption at the basic level both between its own nodes and at gateways to the outside, protected from malicious interference. Within the network there is a platform for launching applications, as well as an analogue of a domain name system. In effect, Free TON is a distributed operating system. Compared with several alternative technology solutions that could be considered competitors, such as Solana (a high-speed blockchain platform), Polkadot (a platform providing a communication mechanism between other blockchains) and IPFS (a distributed file system), Free TON includes the functions of each of them. Free TON mechanisms simplify the implementation of all the necessary Internet services (means of communication, encryption, financial services, file storage, video and audio services, user authentication) and ensure their communication with the services of the “current” Internet. At the same time, at the basic level, Free TON implements a distributed governance system based on user consensus and free of the problems of the modern Internet described above. Globally, Free TON is best suited to the role of the technological foundation of Web 3.0. As a result, the development prospects of Free TON depend on the creation of its infrastructure and end tools: smart contracts, decentralized bots (DeBots), and client applications with rich capabilities for users. And that depends on us, the members of the Free TON community.
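To make the shardchain split-and-merge behaviour described above more concrete, here is a small conceptual sketch. It is not Free TON code: the class name, the load metric and the split/merge thresholds are all invented for illustration, and the real protocol splits shards by address prefix with far more machinery than this.

```python
# Conceptual sketch of load-driven shard splitting/merging.
# NOT Free TON's implementation; thresholds and names are illustrative only.
from dataclasses import dataclass
from typing import List

SPLIT_THRESHOLD = 10_000   # assumed transactions/block that trigger a split
MERGE_THRESHOLD = 2_000    # assumed combined load that lets two siblings merge

@dataclass
class ShardChain:
    prefix: str   # binary prefix of the account addresses this shard serves
    load: int     # recent transactions per block (assumed metric)

def rebalance(shards: List[ShardChain]) -> List[ShardChain]:
    """Split overloaded shards, then merge underloaded sibling pairs."""
    result: List[ShardChain] = []
    for s in shards:
        if s.load > SPLIT_THRESHOLD:
            # Split: each child serves half the address space, roughly half the load.
            result.append(ShardChain(s.prefix + "0", s.load // 2))
            result.append(ShardChain(s.prefix + "1", s.load - s.load // 2))
        else:
            result.append(s)
    merged: List[ShardChain] = []
    skipped = set()
    for i, s in enumerate(result):
        if i in skipped:
            continue
        sibling = next((j for j, t in enumerate(result)
                        if j > i and j not in skipped
                        and t.prefix[:-1] == s.prefix[:-1]), None)
        if sibling is not None and s.load + result[sibling].load < MERGE_THRESHOLD:
            merged.append(ShardChain(s.prefix[:-1], s.load + result[sibling].load))
            skipped.add(sibling)
        else:
            merged.append(s)
    return merged

if __name__ == "__main__":
    shards = [ShardChain("0", 15_000), ShardChain("10", 800), ShardChain("11", 900)]
    for s in rebalance(shards):
        print(s.prefix, s.load)   # "0" splits; "10" and "11" merge back into "1"
```

The point of the sketch is only the design idea: per-shard throughput is kept within a band by adjusting the number of shards, rather than by making any single chain faster.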
https://freeton.house/en/reasoning-on-the-topic-free-ton-technology-for-solving-internet-problems/
See Clinical Research on Page 1267. As genetics is introduced to all fields in medicine, there is a growing awareness of the dependence of genetics data on clinical information.[@bib1] Longitudinal studies combining deep phenotyping and genetic testing of diverse populations are required to ensure an evidence-based use of genetics in medicine, as both comprehensive clinical information and diverse genetic ancestries are crucial to improving the interpretation of genetic variants.[@bib2], [@bib3] Genetic research has enabled the discovery of many genes causing or significantly increasing the risk for kidney diseases and has uncovered the difficulty of clinically diagnosing certain genetic disorders in nephrology.[@bib4] For example, individuals with incomplete penetrance of Alport syndrome are easily misdiagnosed unless genetic testing is performed. Only large-scale genetic testing of individuals with chronic kidney disease will allow us to grasp the phenotypic variability of known genetic disorders. In addition, genetics is already used as an eligibility criterion in several clinical trials (Alport syndrome, Dent disease, and Fabry disease are a few examples[@bib5]; see Supplementary References). As the stratification of cohorts based on genetic markers or mutations has empowered research in other medical fields and has led to new treatments, molecular diagnoses could facilitate the design of clinical trials in nephrology. It is estimated that up to 10% of adults reaching end-stage renal disease have a Mendelian form of kidney disease. However, even for congenital forms of kidney disease, the diagnostic rate is reported to be only 10% to 15%.[@bib1] Although new genes causing Mendelian forms of kidney disease, and new genetic variants predisposing to common forms of kidney disease, are periodically identified, larger cohorts of well-characterized patients are needed to hasten the rate of discovery. Although the benefits of genetic research are clear, some ethical concerns also exist, explaining the allocation of research funds to the "ethical, legal and social implications" (ELSI) research program as part of the launch of the Human Genome Project. The risk of misuse of genetic information has also led many countries to approve laws protecting citizens undergoing genetic testing, such as the Genetic Information Nondiscrimination Act in the United States and the general mandate prohibiting genetic discrimination in the European Union. Nevertheless, not all forms of discrimination based on genetic information are covered by those legal measures. In addition, the controversial use of DNA in the criminal justice system, and the general distrust of some minorities toward the government, have been reported as preventing minorities from undergoing genetic tests and participating in genetic research,[@bib3] highlighting the importance of the informed consent process. The information provided during the consent process is crucial to build and maintain the trust of research participants and to ensure realistic expectations.
As some aspects of genetic research are unique, such as the familial implications of the results and the risk factors shared by certain minority groups (e.g., *APOL1*), the specific information that needs to be covered during the informed consent process has been thoroughly discussed and includes an in-depth discussion of risks and benefits.[@bib6] Although genetic counselors used to consent participants to genetic studies, the spread of genetic research and the shortage of genetic counselors have constrained most studies to use clinical research coordinators (CRCs) without formal training in genetics as recruiters for genetic research. Troost *et al.*[@bib7] report the recruitment of patients with chronic kidney disease into a genetic biobank as part of C-PROBE (Clinical Phenotyping and Resource BioBank Core), a prospective observational study conducted at 7 sites. A total of 1628 individuals recruited to C-PROBE were also invited to enroll in the genetic biobank. Strikingly, the vast majority (95.5%) of C-PROBE participants consented to the genetic biobank at the first approach, generating a diverse cohort of participants in terms of race, ethnicity, and educational level. This diversity is remarkable given previous reports highlighting the difficulty of recruiting minorities to research.[@bib3] In addition, as C-PROBE is a longitudinal study, participants were periodically asked to re-consent to the study as well as to the genetic biobank. This protocol enabled the investigation of the demographic, clinical, and socioeconomic factors potentially associated with specific refusal of the genetic biobank despite consent to the nongenetic components. The 73 C-PROBE participants who declined consent to the genetic biobank at the first visit can shed light on the motivations of decliners as well as on potential misunderstandings among individuals who did enroll. Similarly, although only a very small number of participants (50 individuals) changed their consent status over time, information regarding their reasons would be extremely valuable. If a possible reason to decline at the first visit may be lack of time or unwillingness to donate an additional blood sample, the decision to withdraw at a follow-up visit may uncover inconsistencies in the informed consent process, misunderstandings, and potential ethical concerns. Although surveys and qualitative interviews of these participants are desirable to answer those questions, the analysis provided by Troost *et al.*[@bib7] enables us to uncover some patterns. The recruitment site was the only factor significantly associated with the enrollment rate both at the first approach and at follow-up visits. As pointed out by the authors, there are several elements that can have an impact on site recruitment performance (Figure 1). In particular, CRC experience and confidence in consenting for genetic research can play a key role. Although Troost *et al.*[@bib7] reported a shared training protocol for their recruiters, the changes in consent status at low-performing sites, as well as the few questions raised by participants across sites, point to possibly limited discussion of genetic biobanking during the informed consent process. Likewise, the lack of difference in the consent rate of individuals with and without a family history of kidney disease may reflect a lack of understanding of the aims of genetic biobanking.
From our experience, CRCs who are more familiar with consenting for genetic studies are more likely to engage potential participants and prompt them to ask questions. A centralized, professional training for CRCs recruiting for genetic studies is desirable to provide them with the skills needed to approach potential participants adequately and uniformly. Similarly, differential provider endorsement of genetic biobanking could affect a site's performance. Genetics training for providers may also increase their capacity to discuss the benefits of genetic research with their patients. An additional element potentially affecting the consent rate is participants' genetic literacy. Even though Troost *et al.*[@bib7] did not observe a significant association between education and consent rate, education has been shown to be a poor predictor of genetic literacy. Future studies directly measuring genetic literacy may uncover its association with consent rate, as well as with fluctuations in consent status, and prompt the implementation of patient education tools as part of recruitment for genetic research. Figure 1. Factors affecting consent to genetic research and possible tools to address them. The factors affecting consent rate are listed on the left, and the possible tools to promote informed consent to genetic research are on the right. The arrows point to the relation between them. CRC, clinical research coordinator; IRB, institutional review board. As one of the explicit goals of the genetic biobank is to enroll a population diverse in race and ethnicity, it is important to carefully analyze the impact of self-reported ancestry on consent rate. As reported in previous studies, individuals self-identifying as African American declined at the highest rate (7%) of all groups, and this difference was statistically significant at the first approach. This group also had the highest rate of individuals declining biobanking at a follow-up visit despite an initial consent. However, the statistical difference based on ancestry disappeared at the follow-up visits. Longitudinal participation in a study and familiarity with the study team may alleviate factors previously reported as preventing minorities from participating in genetic studies, such as mistrust. Similarly, some studies have suggested that diverse CRC teams are more effective at recruiting diverse populations. It is worth mentioning that the option of genetic biobanking was part of the broader study consent form and may have increased the consent rate for the genetic component. On the other hand, as the authors do not report the decline rate for the C-PROBE study itself, we do not know whether the reported consent rate is an underestimate of the differential participation in research between individuals of different ancestries. Recruitment efforts like that reported by Troost *et al.*[@bib7] provide crucial resources for the implementation of genetics in nephrology.
This is one of multiple studies collecting longitudinal clinical information and biosamples, including genetic biobanking, from patients with kidney diseases (such as CureGN, CKiD, CRIC, FIND, NEPTUNE, AASK, APOLLO, and KPMP).[@bib8] Although many studies do not offer the return of genetic results to participants, the report recently issued by the National Academies of Sciences, Engineering, and Medicine (NASEM) encouraged researchers and regulators to return more information to study participants.[@bib9] Unfortunately, not all of those studies have retention mechanisms, which complicates re-contacting participants for the return of results or the collection of additional clinical information needed to refine the interpretation of genetic findings. It would be interesting to see whether the C-PROBE study, as a longitudinal study, will offer the opportunity of returning genetic results in the future. However, the option of returning results introduces an additional level of complexity to the genetic informed consent process, thus reinforcing the need for standardized procedures and policies[@bib9] (Figure 1). In conclusion, the C-PROBE experience demonstrates that, regardless of demographic and clinical factors, the vast majority of patients with kidney diseases are willing to enroll in a genetic biobank. It also points to the potential impact of site-specific factors on the consent rate, thus highlighting the need for standardized procedures for informed consent in genetic research, including educational tools for both providers and potential participants, as well as centralized training for CRCs enrolling participants in genetic studies. Finally, studies assessing the impact of the return of genetic results on the consent rate and evaluating participants' understanding and perception of genetic research in nephrology would be highly valuable. Disclosure: All the authors declared no competing interests. Supplementary Material: Supplementary References. Supplementary material is linked to the online version of the article at www.kireports.org.
Destiny doesn't ask your permission. Sixteen-year-old Niah hates seeing the dead starship captain in her face. It's a constant reminder that she only exists to complete his mission: to save the world. She was supposed to be a perfect copy, a clone, but she's not. Something went wrong, a genetic mistake making her someone new. That genetic mistake is killing her. After a two-hundred year journey, Starship Elixr is returning to Nar. They finally found the cure that will save the planet from eternal darkness. But no one counted on Niah and her sister Wish, the backup plan of a malfunctioning droid and a now crew-less ship. The people of Nar have waited lifetimes for Captain Bellamy's return... But not everyone wants to be saved. In the depth of a world's despair, Lord Oliver saw an opportunity. He created an exclusive refuge, a virtual world where, for a price, you never had to experience a sunless sky again. But the starship's return threatens everything he built. And the life of a young girl is a small price to pay for power. Niah needs to save her sister. If she can fix her spaceship she might just save the world. But can she save herself? For fans of Stargate: Atlantis, Lost in Space, and Ender's Game comes a new adventure in space... Available Now.
https://www.rjjulia.com/book/9780648228639
For other uses, see Withdrawal (disambiguation). Withdrawal can refer to any sort of separation, but is most commonly used to describe the group of symptoms that occurs upon the abrupt discontinuation of, or a decrease in the dosage of, medications, recreational drugs, and/or alcohol. In order to experience the symptoms of withdrawal, one must first have developed a physical dependence (often referred to as chemical dependency). This happens after consuming one or more of these substances for a certain period of time, which is both dose dependent and varies based upon the drug consumed. For example, prolonged use of an antidepressant is likely to cause a much different reaction when discontinued than the repeated use of an opioid, such as heroin. The route of administration, whether intravenous, intramuscular, oral or otherwise, can also play a role in determining the severity of withdrawal symptoms. There are different stages of withdrawal as well. Generally, a person will start to feel worse and worse, hit a plateau, and then the symptoms begin to dissipate. However, withdrawal from certain drugs (benzodiazepines, alcohol) can be fatal, and therefore the abrupt discontinuation of any type of drug is not recommended. The term "cold turkey" is used to describe the sudden cessation of use of a substance and the ensuing physiologic manifestations. The sustained use of many kinds of drugs causes adaptations within the body that tend to lessen the drug's original effects over time, a phenomenon known as drug tolerance. At this point, one is said to have a physical dependency on the given chemical, and this is the stage at which withdrawal may be experienced upon discontinuation. Some of these symptoms are generally the opposite of the drug's direct effect on the body. Depending on the length of time a drug takes to leave the bloodstream (its elimination half-life), withdrawal symptoms can appear within a few hours to several days after discontinuation and may also occur in the form of cravings. A craving is a strong desire to obtain and use a drug or other substance, similar to the cravings one might experience for food when hungry. Although withdrawal symptoms are often associated with the use of recreational drugs, many medications have a profound effect on the user when stopped. Withdrawal from any medication can be harmful or even fatal; hence prescription warning labels explicitly say not to discontinue the drug without doctor approval. Central to the role of nearly all commonly abused drugs is the reward circuitry, or "pleasure center", of the brain. The science behind the production of a sense of euphoria is very complex and still debated within the scientific community. While neurologists have discovered that addiction encompasses several areas of the brain, the amygdala, the prefrontal cortex, and the nucleus accumbens are specifically responsible for the pleasurable feelings one may experience when using a mind- or mood-altering substance. Within the nucleus accumbens, the key neurotransmitter is dopamine, so while specific mechanisms vary, nearly every drug either stimulates dopamine release or enhances its activity, directly or indirectly. Sustained use of the drug results in less and less stimulation of the nucleus accumbens until eventually it produces no euphoria at all. Discontinuation of the drug then produces a withdrawal syndrome characterized by dysphoria — the opposite of euphoria — as nucleus accumbens activity declines below normal levels.
Withdrawal symptoms can vary significantly among individuals, but there are some commonalities. Subnormal activity in the nucleus accumbens is often characterized by depression, anxiety and craving, and if extreme can drive the individual to continue the drug despite significant harm — the definition of addiction — or even to suicide. In general, the longer the half-life of the drug, the longer the acute abstinence syndrome is likely to last. However, addiction is to be carefully distinguished from physical dependence. Addiction is a psychological compulsion to use a drug despite harm that often persists long after all physical withdrawal symptoms have abated. On the other hand, the mere presence of even profound physical dependence does not necessarily denote addiction; for example, a patient using large doses of opioids to control chronic pain under medical supervision may be dependent without being addicted. As the symptoms vary, some people are, for example, able to quit smoking "cold turkey" (i.e., immediately, without any tapering off) while others may never find success despite repeated efforts. However, the length and degree of an addiction can be indicative of the severity of withdrawal. Withdrawal is a more serious medical issue for some substances than for others. While nicotine withdrawal, for instance, is usually managed without medical intervention, attempting to give up a benzodiazepine or alcohol dependency can result in seizures or worse if not carried out properly. An abrupt, complete stop to long-term, constant alcohol use can lead to delirium tremens, which may be fatal. An interesting side note is that while physical dependence (and withdrawal on discontinuation) is virtually inevitable with the sustained use of certain classes of drugs, notably the opioids, psychological addiction is much less common. Most chronic pain patients, as mentioned earlier, are one example. There are also documented cases of soldiers who used heroin recreationally in Vietnam during the war but who gave it up when they returned home (see Rat Park for experiments on rats showing similar results). It is thought that the severity or otherwise of withdrawal is related to the person's preconceptions about withdrawal. In other words, people can prepare to withdraw by developing a rational set of beliefs about what they are likely to experience. Self-help materials are available for this purpose. As mentioned earlier, many drugs should not be stopped abruptly without the advice and supervision of a physician, especially if the medication induces dependence or if the condition it is being used to treat is potentially dangerous and likely to return once the medication is stopped, as with diabetes, asthma, heart conditions and many psychological or neurological conditions, such as epilepsy, hypertension, schizophrenia and psychosis. To be safe, consult a doctor before discontinuing any prescription medication. Sudden cessation of the use of an antidepressant can significantly deepen feelings of depression (see "Rebound" below), and some specific antidepressants can cause a unique set of other symptoms when stopped abruptly. Discontinuation of selective serotonin reuptake inhibitors (SSRIs), the most commonly prescribed class of antidepressants (and of the related class, serotonin-norepinephrine reuptake inhibitors or SNRIs), is associated with a particular syndrome of physical and psychological symptoms known as SSRI discontinuation syndrome.
Effexor (venlafaxine) and Paxil (paroxetine), both of which have relatively short half-lives in the body, are the antidepressants most likely to cause withdrawal symptoms. Prozac (fluoxetine), on the other hand, is the least likely of the SSRI and SNRI antidepressants to cause any withdrawal symptoms, due to its exceptionally long half-life. Many substances can cause rebound effects (a significant return of the original symptom in the absence of the original cause) when discontinued, regardless of their tendency to cause other withdrawal symptoms. Rebound depression is common among users of any antidepressant who stop the drug abruptly, and their state is sometimes worse than it was before they started medication. This is somewhat similar (though generally less intense and more drawn out) to the 'crash' that users of ecstasy, amphetamines, and other stimulants experience. Occasionally, light users of opiates who would otherwise not experience much in the way of withdrawal will notice some rebound depression as well. Extended use of drugs that increase the amount of serotonin or other neurotransmitters in the brain can cause some receptors to 'turn off' temporarily or become desensitized, so when the amount of the neurotransmitter available in the synapse returns to an otherwise normal state, there are fewer receptors to attach to, causing feelings of depression until the brain re-adjusts. Many analgesics, including Advil and Motrin (ibuprofen), Aspirin (acetylsalicylic acid), Tylenol (acetaminophen or paracetamol), and some prescription but non-narcotic painkillers, can cause rebound headaches when taken for extended periods of time. Sedatives and benzodiazepines can cause rebound insomnia when used regularly as sleep aids. With these drugs, the only way to relieve the rebound symptoms is to stop the medication causing them and weather the symptoms for a few days; if the original cause of the symptoms is no longer present, the rebound effects will go away on their own. Neonatal abstinence syndrome (NAS) is a withdrawal syndrome of infants caused by the administration of drugs. There are two types of NAS: prenatal and postnatal. Prenatal NAS is caused by substance use by the mother, while postnatal NAS is caused by discontinuation of drugs given directly to the infant. The drugs involved include opioids, selective serotonin reuptake inhibitors (SSRIs), alcoholic beverages and benzodiazepines. Pseudoabstinence is a term used by some authors to describe signs of withdrawal that appear although the dose remains constant; such signs may arise with the use of benzodiazepines and amphetamines.
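The half-life comparison above can be made concrete with a rough first-order elimination calculation. The sketch below is illustrative only, not dosing or medical guidance: the half-life values are approximate literature figures supplied here as assumptions, and real pharmacokinetics (active metabolites, tapering schedules, individual variation) is considerably more complicated.

```python
# Fraction of a drug remaining under simple first-order elimination:
#   remaining = 0.5 ** (t / half_life)
# Half-lives below are approximate, assumed values for illustration only
# (venlafaxine roughly 5 h; fluoxetine roughly 4-6 days for the parent drug).
def fraction_remaining(hours_elapsed: float, half_life_hours: float) -> float:
    return 0.5 ** (hours_elapsed / half_life_hours)

for name, half_life in [("venlafaxine", 5.0), ("fluoxetine", 110.0)]:
    remaining = fraction_remaining(24, half_life)
    print(f"{name}: ~{remaining:.0%} of the last dose remains after 24 hours")

# Output: only a few percent of a venlafaxine dose remains after a day, while
# most of a fluoxetine dose is still present -- consistent with the text's point
# that short-half-life antidepressants are the most likely to cause withdrawal.
```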
http://www.thefullwiki.org/Withdrawal
Abilene Relocation Guide. Welcome to our Abilene Relocation Guide. Find everything from real estate and relocation information to home loans, career information, schools, insurance, apartments and rentals. Abilene is located in Taylor and Jones counties in west central Texas. The population was 117,063 according to the 2010 census, making it the twenty-fifth most populous city in the state of Texas. It is the principal city of the Abilene Metropolitan Statistical Area, which had a 2006 estimated population of 158,063, and it is the county seat of Taylor County. Dyess Air Force Base is located on the west side of the city. Abilene sits off Interstate 20, between exit 279 on its western edge and exit 292 on the east, 150 miles (240 km) west of Fort Worth, Texas. The city is looped by I-20 to the north, US 83/84 on the west, and Loop 322 to the east. A railroad divides the city down the center into north and south, with the historic downtown area on the north side of the railroad. The median income for a household in the city was $33,007, and the median income for a family was $40,028. Males had a median income of $28,078 versus $20,918 for females. The per capita income for the city was $16,577. About 10.9% of families and 15.4% of the population were below the poverty line, including 18.6% of those under age 18 and 9.2% of those age 65 or over.
http://123relocation.com/texas/abilene/
English language learners from non-English-speaking nations are confronting an increasingly challenging environment as they try to develop language skills that meet the competing demands of contemporary social media on one hand and those of English for Specific Purposes (ESP) on the other. Social media's explosion onto the global scene has created the need for non-English speakers to learn, in effect, two diverging contextual and communication patterns within what is supposed to be a common language. English, at least a form of English, dominates social media communications on Twitter, Instagram, Facebook, and a whole host of abbreviated-format international social media platforms. Moreover, these platforms have developed communication mechanisms that do not even conform to normally accepted conversational patterns of spoken or written English. The English of some social media platforms is informal, littered with special and unique abbreviations, grammarless, decidedly unstructured and abruptly short. The vocabulary is explicitly simple in most cases, consisting mostly of one- and two-syllable words. The introduction of "emoji" graphics (now totaling over 2,600 according to the Unicode Standard) has added image elements to the phonetic root-language vocabulary. The near-total lack of punctuation further complicates the process of learning to communicate effectively to anything other than a select audience or specific groups of people. ICT (Information and Communication Technology) tools are growing in use in education, and in language teaching in particular, with Computer Assisted Language Learning (CALL) becoming widely used to facilitate vocabulary and structural grammar development among English Language Learners (ELLs) at all levels. It has been noted that blogs and other web-based tools have significantly enhanced writing and reading skills. The young non-native English-speaking professional is simultaneously confronted with the increasing need to acquire skills in one or more forms of ESP, be it academic, occupational or both, to be a competitive member of the global economy. At the same time, the informal elements of social media ignore these demands and focus on a casual and frequently unconstrained set of language behaviors. The results of this study indicate that English for Speakers of Other Languages (ESOL) students, particularly those developing ESP skills, are confronting what could logically be construed as two languages carrying the same name. This presentation and the accompanying methodology explore the details and implications of this emerging phenomenon, supported by materials, data and recommendations addressing the challenges of diverging language pathways between social media and English for Specific Purposes.
https://apcz.umk.pl/CSNME/article/view/CSNME.2018.011
Researchers at the Johns Hopkins Bloomberg School of Public Health in Baltimore have developed new software, known as Myrna, to improve the speed at which scientists can analyze RNA sequencing data using cloud computing, according to an article published online in Genome Biology. Faster, cost-effective analysis of gene expression could be a valuable tool in understanding the genetic causes of disease, the researchers stated. To test Myrna, Ben Langmead, a research associate in the Bloomberg School's Department of Biostatistics, and colleagues Kasper Hansen, PhD, a postdoctoral fellow, and Jeffrey T. Leek, PhD, senior author of the study and assistant professor in the Department of Biostatistics, used the software to process a large collection of publicly available RNA sequencing data. Processing time and storage space were rented from Amazon Web Services. According to the authors, Myrna calculated differential expression from 1.1 billion RNA sequencing reads in less than 2 hours at a cost of about $66. "Biological data in many experiments—from brain images to genomic sequences—can now be generated so quickly that it often takes many computers working simultaneously to perform statistical analyses," concluded the authors. "The cloud computing approach we developed for Myrna is one way that statisticians can quickly build different models to find the relevant patterns in sequencing data and connect them to different diseases. Although Myrna is designed to analyze next-generation sequencing reads, the idea of combining cloud computing with statistical modeling may also be useful for other experiments that generate massive amounts of data." The Myrna software is available as a free download.
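The general pattern described in the quote, distributing many independent per-gene statistical tests across workers, can be sketched with standard Python tools. This is a toy stand-in, not Myrna's actual pipeline: the counts are simulated, a plain t-test replaces Myrna's statistical models, and a local process pool stands in for cloud instances.

```python
# Toy illustration of parallelizing per-gene differential-expression tests.
# NOT the Myrna pipeline -- only the general pattern of farming out independent
# per-gene statistics, here to a local process pool instead of cloud workers.
import numpy as np
from multiprocessing import Pool
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_genes, n_per_group = 1000, 8
group_a = rng.poisson(lam=20, size=(n_genes, n_per_group))  # simulated expression counts
group_b = rng.poisson(lam=22, size=(n_genes, n_per_group))

def test_gene(i: int) -> float:
    """Return the p-value for one gene's expression difference between groups."""
    return ttest_ind(group_a[i], group_b[i]).pvalue

if __name__ == "__main__":
    with Pool() as pool:                                   # each worker handles a slice of genes;
        p_values = pool.map(test_gene, range(n_genes))     # on a cloud cluster the slices would be shards
    print("genes with p < 0.05:", sum(p < 0.05 for p in p_values))
```

Because every gene's test is independent, the work scales out almost linearly, which is the property that lets an analysis of a billion reads finish in a couple of hours once enough rented machines are applied to it.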
http://www.healthimaging.com/topics/health-it/johns-hopkins-researchers-develop-cloud-computing-software-rna-sequencing
The groups directly or indirectly affected by landfills include nearby residents who experience the adverse effects, wildlife, governments regulating operations, and patrons of MSW service. Studies of wildlife, particularly avian scavengers, have found that landfills often do not have detrimental effects, while the availability of food wastes can even be beneficial. The resources used for landfills consist primarily of the property directly taken, which could have been put to other uses, but also include expanses of surrounding land that are then of limited value for natural or developed uses. Affected resources also include groundwater, which can be restricted for both consumption and agricultural uses. Methane gas emissions are effectively a use of air resources within and surrounding landfills. Use of Goods and Services. Landfills affect both public and private use of goods and services, as they are developed on and impact property owned by both sectors. Subtitle D, an amendment to the Resource Conservation and Recovery Act, imposed new requirements for landfills built after 1988, including pit liners, monitoring wells, and methane gas/leachate treatment systems. As a result, newer landfills are safer but must now be built as "megafills" to be economical. Thus, the market shed has shifted from local areas to crossing regional and state lines for disposal of MSW. New challenges have emerged, such as externalities from increased highway truck traffic transporting MSW and environmental justice concerns with locations often near low-income or minority communities, in addition to the standard ecological concerns of air, water and noise pollution. Exurban and rural locations now receive amounts of MSW for landfilling from major cities that are disproportionate to the wastes generated at these outer locations. The sheer size of these operations can be detrimental to nearby economic development. As of 1999, there were no reported failures of MSW landfills established since the new 1988 rule. Nevertheless, punctures of underlying barriers are possible in concentrated areas and may not be detected by monitoring wells, whereas releases from unlined landfills would be discovered owing to the more widespread infiltration of contaminants into groundwater. Localities such as King and Queen County, Virginia, have found that the economic benefits of their mega-landfills outweigh the costs. Tipping fees from the 400-acre landfill comprise a substantial portion of the County's budget, allowing it to move forward with major capital projects. The adverse effects on residents are the associated odors, large numbers of scavengers carrying and dropping trash, and the emissions and road congestion/safety effects of increased truck traffic. Trucks are at times uncovered and leak. States seeking to limit mega-landfills and the related traffic face interference with interstate commerce laws. One study found that restrictions such as quotas and surcharges can have the effect of increasing interstate shipments and could reduce overall social welfare; however, this analysis did not include monetized values for externalities such as noise and truck traffic. Incentives can be used for recycling and composting to reduce the demand for landfill space. Variable-rate pricing or "pay-as-you-throw" programs serve to alter consumer behavior by charging for the quantity of MSW disposed.
For example, Lakeshore Recycling Systems charges the following rates per week based on container size: 35-gallon, $1.43; 65-gallon, $2.86; and 95-gallon, $4.29. Stickers must be purchased for additional refuse at commensurate rates. There is no charge for the 65-gallon recycling container, as its costs are integrated into the overall MSW fee. Citizens therefore have an incentive to recycle and to keep other MSW to a minimum (a brief worked example of this rate structure appears at the end of this passage). Such programs can be mandated by the public sector or adopted by the private sector; both are motivated by reduced marginal costs and increased marginal revenues.

These "pay-as-you-throw" programs, based on volume or weight, were in place in more than 10 percent of U.S. communities as of 2001. This is a substantial improvement over the inefficiency of conventional practice, in which a set fee is imposed via billing or property taxes. The conventional method effectively makes the marginal cost of disposal faced by the household zero, while the MSW collection firm has a positive marginal collection and disposal cost. Pay-as-you-throw programs have had significant success in reducing landfilled volumes and increasing recycling; several studies have quantified landfilling reductions ranging from 6-74 percent, depending primarily on the charge rate. Weight-based systems have the advantage of higher reductions, since volume can be affected by customer compaction, although there is an added expense of measuring the weight, which can also extend collection time by about 10 percent. A downside is that recycling costs are often subsidized by municipalities via general revenues, so the public does not realize the true costs.

The incentive to reduce MSW and increase recycling can be imposed by the public sector on residents and on companies competing to handle these services. If allowed by local ordinance, private haulers have more of an incentive to initiate or participate in pay-as-you-throw if permitted to match weight-based pick-up fees with weight-based tipping fees, as opposed to mismatching weight and volume. This assumes that recycling costs are integrated into the price. Ultimately, both the public and private sectors have the motivation to impose incentive-based MSW disposal and recycling systems if they are required to account for all social costs while maximizing societal net economic benefits.

Some countries and U.S. states require private manufacturers to implement producer take-back programs, also known as extended producer responsibility (EPR), which diminish consumer obligations for final disposal of goods that have reached the end of their useful lives. Without such requirements, the only incentive companies have to do this is voluntary product stewardship, which improves the firm's public image and could increase overall sales. EPR programs are justified because they counter recycling markets that do not signal producers to account for waste and disposal in their costs.

Sustainability

Economic and environmental impacts of MSW handling should consider sustainability, or the ability to ensure the availability of natural resources for future use. Land or soil is in and of itself a finite, non-renewable resource that can be degraded by compaction, erosion, acidification, salinization and hazardous materials. Soil can take multiple generations to recover, and without intervention the degradation may continue unabated. In the interim, options for reuse are limited.
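As a brief worked illustration of the pay-as-you-throw rate structure quoted above, the sketch below computes weekly and annual refuse charges from container-based prices. The container rates are those cited for Lakeshore Recycling Systems; the per-sticker price and the example usage are hypothetical assumptions made here for illustration.

```python
# Illustrative sketch of a weekly pay-as-you-throw charge under a container-based
# rate schedule. The container prices are the Lakeshore figures quoted above; the
# per-sticker price and the example households are hypothetical assumptions.

WEEKLY_RATES = {35: 1.43, 65: 2.86, 95: 4.29}  # dollars per week by cart size (gallons)
STICKER_PRICE = 1.43  # assumed charge per extra-refuse sticker (not from the source)

def weekly_charge(cart_gallons: int, extra_stickers: int = 0) -> float:
    """Weekly refuse charge; recycling carries no separate fee in this scheme."""
    return WEEKLY_RATES[cart_gallons] + extra_stickers * STICKER_PRICE

if __name__ == "__main__":
    for size in (35, 65, 95):
        cost = weekly_charge(size)
        print(f"{size}-gallon cart: ${cost:.2f}/week (${cost * 52:.2f}/year)")
    # A household that recycles enough to downsize from a 95- to a 35-gallon cart
    # saves $2.86 per week, the marginal-cost signal a flat fee cannot send.
    print(f"65-gallon cart plus 2 stickers: ${weekly_charge(65, 2):.2f}/week")
```

The point of the exercise is the marginal-cost signal: under a conventional flat fee funded through billing or property taxes, all three example households would face the same bill and a zero marginal cost of disposal.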
Sustainability concerns also extend to water. Landfills impact groundwater, surface water and wetlands when hazardous materials and leachate migrate to other areas. Fresh water is considered a renewable resource, as rain soaks into the ground to replenish groundwater and surface water evaporates to be released again as rain. Yet only 3 percent of the earth's water is fresh, only about one-third of that is safe for drinking, and it is scarce in various locations. Natural attenuation of leachate contamination, including volatile organic compounds (VOCs), occurs over time. Landfills also release pollution into the air, such as VOCs, carbon dioxide and methane, though air is considered renewable.

Technologies available besides landfilling and recycling/composting include incineration, gasification, and pyrolysis. Incineration occurs above 850 °C, oxidizing waste and converting it to water and carbon dioxide, with remaining non-combustible residuals such as metals, glass and carbon. Gasification uses some oxygen, combustion occurs at lower temperatures (above 650 °C), and the primary product is synthesis gas (syngas), containing hydrogen, carbon monoxide, and methane. Pyrolysis is thermal degradation of MSW at lower temperatures (300-850 °C) without oxygen and also produces syngas. Syngas has about 50 percent of the energy density of natural gas and is used as a raw material and fuel for producing steam, electricity, and chemicals. Mechanical-biological processing (MBP) varies but generally is an intermediate treatment process that includes physical shredding, metals separation, and heat/steam treatment; the biological component consists of aerobic decomposition and anaerobic digestion. Outputs are recovered metals and glass, liquid digestate, a fraction for composting, and refuse-derived fuel (RDF) pellets.

Khan et al. developed a decision-making model to aid local officials in selecting the appropriate mechanism for handling MSW based upon economic and technical parameters. This includes identification of a site and optimal size for a MSW facility, the appropriate disposal method(s), transportation costs, and comparison of nine waste conversion technologies or scenarios and landfilling methodologies. For the last element, the researchers developed the FUNdamental ENgineering PrinciplEs-based ModeL for Estimation of Cost of Energy and Fuels from MSW (FUNNEL-Cost-MSW). The model is used to compute gate fees charged per ton of MSW received and the internal rate of return (IRR), the interest rate earned on the unrecovered balance, or equivalently the discount rate at which the net present value (NPV) of the cash flows is zero (a minimal numerical illustration appears at the end of this passage). Initially, site selection is evaluated through 12 separate criteria/specifications such as environmentally sensitive areas, roads, land surface gradient, and urbanized areas. For each of the nine scenarios, other potential revenues are quantified based on sales of biofuel, electricity, and compost, in addition to carbon credits (a default value for CO2 saved) and incentives. The model uses equations for capital and operating costs developed from empirical data; higher capital costs mandate increased tipping fees.

The model is applied to Parkland County, Alberta. In terms of calculated gate fees, it finds that for MSW of 25,000-50,000 tons/year composting is cheapest, due mainly to the higher capital costs of the other technologies. At MSW of 50,000-150,000 tons/year, an electricity-producing gasification facility along with composting is the cheapest option. This technology is also optimal when considering calculated IRR at MSW of 50,000-100,000 tons/year, where it yields the highest IRR (8.87-13.17 percent).
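The IRR figures reported in these comparisons can be unpacked with a minimal numerical sketch. The cash flows below are invented (an up-front capital cost recovered through net annual tipping-fee and energy revenues) and are not taken from the FUNNEL-Cost-MSW model; the code only demonstrates the NPV-equals-zero calculation that the reported percentages represent.

```python
# Minimal NPV/IRR illustration (not the FUNNEL-Cost-MSW model itself).
# Cash flows are invented: an up-front capital cost followed by net annual
# revenue from tipping fees and energy sales.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of cash_flows, where cash_flows[0] occurs today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows: list[float], lo: float = -0.99, hi: float = 1.0) -> float:
    """Find the rate where NPV = 0 by bisection (assumes a single sign change)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

if __name__ == "__main__":
    # Hypothetical facility: $12M capital cost, then $1.5M/year net revenue for 20 years.
    flows = [-12_000_000] + [1_500_000] * 20
    print(f"IRR = {irr(flows):.2%}")  # roughly 11% for these assumed figures
```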
Returning to the Khan et al. comparison, gasification producing electricity performs second best, with an IRR of 6.79-11.49 percent at 50,000-150,000 tons per year. Landfilling has the worst IRR at 70,000 tons/year and above, and gasification producing biofuel has the lowest IRR at 50,000-70,000 tons per year.

Chang et al. use life cycle assessment (LCA) to quantify environmental impacts of MSW in terms of global warming potential (GWP). The study addresses the inability of cost-effectiveness analysis to consider these impacts by improving upon a benefit-cost analysis (BCA) approach to determine optimal levels of landfilling and recycling. It focuses on operations in Lewisburg, Pennsylvania, analyzing five scenarios with the goal of minimizing total costs: 1) cost minimization; 2) benefit maximization; 3) GWP minimization; 4) a combination of 2 and 3; and 5) BCA under a carbon-regulated environment. Findings indicate that scenario 4 is optimal in the sense that benefits and GWP are balanced and maximized with increased recycling. The net benefit is -$75,400, which is negative but higher than the other alternatives.

Hellweg et al. found that when considering only the net private costs of waste treatment (total capital and operating costs less energy sales revenues), the order of performance for four MSW options is landfills, MBP, grate incineration (GI), and a staged thermal process of pyrolysis and gasification (PECK). The analysis takes into consideration co-products such as heat, electricity and metals. The researchers then identify the metric environmental cost efficiency (ECE), or net environmental benefits divided by costs, to compare MSW options, accounting for emissions, in order to determine net social benefits. They find that the life-cycle assessment (LCA) ranking from most to least ECE in the long run is PECK, MBP, GI, and sanitary landfills. The ECE rating for incineration is better than that of landfills and MBP due to superior energy recovery and lower air emissions of methane, non-methane VOCs, and nitrogen oxides. MBP outperforms landfills in all categories. PECK is less toxic than incineration, as it prevents metal emissions, but overall the two are about equal in ECE.

In recent years it has been difficult for private firms to make a profit in the recycling industry, as prices for recovered materials have dropped significantly due to the slowdown in the Chinese economy, a strong U.S. dollar, and low oil prices. The result has been closures of recycling facilities and increases in MSW deposited in landfills. Part of the problem has been aggressive recycling promotion, larger recycling bins, lighter packaging materials, and policies that do not require customer sorting. Consequently, more inappropriate materials are added to recycling bins, which increases contamination and the cost of sorting. To compensate, local governments will need to take on more of the financial responsibility for companies to continue in the recycling business. This means that increases in customer user fees and pre-sorting may be necessary. Changing from single-stream collection (no sorting), which is predominant, to a dual-stream methodology decreases contamination and streamlines facility processing. With dual stream, customers are only required to separate fiber materials such as paper and cardboard from containers such as glass, metal and plastic; the collection truck maintains the separation.
A study by Lakhan in Ontario, Canada found that areas with single-stream recycling generally recover more materials (~4 percent) than multi-stream systems, but at higher overall costs. The reason is that savings in collection costs are outweighed by increases in processing costs and reductions in sales revenues due to lower-quality recovered materials. Nevertheless, the author concludes that single-stream recycling may be desirable in larger, high-density regions, as it is more cost-efficient when processing higher volumes, while multi-stream recycling may be preferable for smaller communities hesitant to invest in more expensive and complex separation technology.

Economic Valuation Methods

Economic valuation (EV) is the monetization of benefits and costs from the impacts of policies on ecosystems based upon revealed preferences such as purchasing habits. Contingent valuation (CV) is a stated-preference approach that uses surveys to identify willingness to pay (WTP) for changes in such impacts. The downsides of CV are the time required and the difficulty people have in accurately expressing WTP. CV and EV do not include non-use values of the ecosystem.

Hite et al. studied housing values near landfills in Columbus, Ohio and found that values are expected to be 18-20 percent higher at a distance of 3.25 miles from a landfill than at 0.5 miles. Nelson et al. studied home values near a landfill in Ramsey, Minnesota and, similar to other studies, found reductions of 12 percent near the border, 6 percent at 1 mile, and no effects beyond 2-2.5 miles (a rough illustration of such a distance gradient appears at the end of this passage).

Decisions by localities to implement recycling are often based upon cost-effectiveness as opposed to net social benefits, and due to budget shortfalls they often reduce or terminate curbside recycling programs (CRP). Aadland et al. compared EV and CV to identify preferences among mandatory, voluntary and no CRP using surveys of more than 4,000 homes in 40 western U.S. cities. They found evidence of bias, and that the public's WTP, or the social net benefit of CRP, is almost zero. The results vary by area, and some CRPs appear to be inefficient. The authors surmise the differences may be due in part to the public's beliefs regarding landfill limitations, based on messaging from officials. Kinnaman analyzed a CRP in Lewisburg, Pennsylvania and found direct costs exceeded net benefits by $10.00 per home per year. This did not include the dis-benefits borne by those residing near landfills or the results of a CV survey indicating residents were willing to pay more than $90 yearly for a CRP. The author is concerned that WTP is skewed by altruistic preferences due to inaccurate information regarding landfill space. A study of paper recycling in the United Kingdom finds that it is unprofitable for the private sector but is preferable when considering social costs and benefits such as reductions in landfill scarcity and greenhouse gases.

Consumptive use value refers to the worth of market and non-market resources and products. Non-consumptive use value entails ecosystem goods and services. These include the value of plants in cycling nutrients, limiting soil erosion, and providing food, clothing and shelter. Animals are a source of food and contribute to soil fertility, pest control, and plant pollination. Bacteria contribute to the cycling of nutrients and gases, to agriculture and biotechnology, and to drugs and antibiotics. Together, these non-consumptive use values range into the trillions of dollars. Recycling is a consumptive use value, as reused paper, glass, plastic, etc. contribute to new products.
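As a rough sketch of how a reported distance gradient might be applied, the code below linearly interpolates the Nelson et al. figures quoted above (about a 12 percent value loss at the landfill border, 6 percent at one mile, and no measurable effect beyond roughly 2.5 miles) to adjust a hypothetical home value. The linear interpolation and the example home value are assumptions made here for illustration, not part of the original study.

```python
# Rough illustration of a property-value distance gradient, loosely based on the
# Nelson et al. figures quoted above (~12% loss at the landfill border, ~6% at one
# mile, no measurable effect beyond ~2.5 miles). The linear interpolation between
# those points is an assumption for illustration, not part of the original study.

def value_loss_fraction(miles_from_landfill: float) -> float:
    points = [(0.0, 0.12), (1.0, 0.06), (2.5, 0.0)]  # (distance in miles, fractional loss)
    if miles_from_landfill >= points[-1][0]:
        return 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= miles_from_landfill <= x1:
            return y0 + (y1 - y0) * (miles_from_landfill - x0) / (x1 - x0)
    return points[0][1]  # at (or effectively inside) the landfill border

if __name__ == "__main__":
    base_value = 250_000  # hypothetical home value in dollars
    for d in (0.0, 0.5, 1.0, 2.0, 3.0):
        loss = value_loss_fraction(d)
        print(f"{d:>3.1f} miles: -{loss:.0%} -> ${base_value * (1 - loss):,.0f}")
```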
Returning to use values, landfills can displace productive agricultural uses of property, which has a long-term impact on consumptive use value. The products from MBP and PECK can be considered consumptive use values because of the goods they produce. Non-consumptive use values are more applicable to the negative impacts that landfills impose upon the environment and upon mankind's enjoyment of nature.

Other valuation methods include the averting behavior model, regarding WTP to avoid environmental harm, and the Delphi method, whereby experts provide values for benefits and costs and come to consensus based upon professional economic judgement. The replacement cost method determines the value required to restore an asset damaged by pollution. Option value is the worth placed on an environmental amenity, based on WTP, so that it can be used later. Existence value is based on WTP to preserve environmental features, and intrinsic value is the worth of an environmental amenity in and of itself. These are all legitimate methods to measure environmental impacts of MSW disposal but may not be appropriate if they cannot be accurately quantified.

Recommendations

It is recommended that local governments seeking to modify existing MSW practices identify the market and social costs of the various alternatives, determine WTP through previous research or a new survey, and meld the results to inform a selection that balances optimization of both methodologies. This approach will minimize the impacts of landfills and promote efficient use of resources by closely matching marginal benefits with marginal costs. The typical MSW options are landfilling, recycling (single or dual stream), incineration, gasification, pyrolysis, MBP, and combinations of each.

To make the best use of analysis expenditures, exploration of MSW alternatives and facility siting should, at least initially, occur as part of a coordinated comprehensive planning process for land use and transportation. This establishes a long-range vision for growth and development, including potential locations for services such as MSW management. In addition, a decision-making tool such as the Khan et al. FUNNEL-Cost-MSW, benefit-cost analysis (BCA) or a comparable framework should methodically evaluate the alternatives in more depth to optimize economic and operating efficiencies while minimizing environmental and human health impacts. A potential MSW-handling methodology and site would thus need to be dictated by economic analysis that quantifies noise, emissions, transportation, and capital and operating costs, among others, to ensure efficient use of resources. To lessen the detrimental effects of a landfill, siting would likely need to be at least 2-3 miles from areas expected to become urbanized over the long term, while MSW facilities for recycling and alternative technologies may be able to function with minimal impacts in areas zoned for heavy industrial uses.

From an economic perspective, since recycling and advanced MSW technologies have higher costs, they should be justified or supported accordingly by the WTP research or local survey. In turn, estimates are needed for revenues from those options that would generate reusable materials and energy in the form of electricity or biofuels. From a financial perspective, these sales revenues, along with tipping fees or charges to customers, must be sufficient to cover operating costs to ensure long-term viability, including through market downturns for recycling products.
Local government general funds could be used to make up any difference; however, this would effectively be a subsidy that would reduce economic efficiency. To further efficient use of resources, WTP surveys and implementation of the chosen technology should consider incentives, such as charging by the volume of (unrecycled) waste, thereby enticing respondents to separate recyclables if they realize there is a cost savings. WTP for advanced technologies may be higher in larger urbanized areas, due to limited land availability and the cost of moving wastes farther to landfills, than in rural areas where space is plentiful and costs are lower. The WTP research or survey should attempt to quantify use and non-use values as appropriate for landfill and alternative MSW sites. By doing so, local officials can ensure that valuations are as comprehensive as possible and confirm that residents are willing to pay not only the private costs of MSW disposal and processing, but also the social costs such as impacts to ecosystem services, open space, agricultural production, and wildlife.

U.S. Environmental Protection Agency. (2013). Advancing Sustainable Materials Management: Facts and Figures. Viewed on November 4, 2016 via https://www.epa.gov/smm/advancing-sustainable-materials-management-facts-and-figures. Taylor D. (1999). Talking Trash. Environmental Health Perspective. 107:8, A405-A409. Viewed on November 4, 2016 via https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1566504/pdf/envhper00513-0024-color.pdf. Roberts J. (2016). Garbage: The Black Sheep of the Family. A Brief History of Waste Regulation in the United States and Oklahoma. Oklahoma Department of Environmental Quality. Viewed on November 6, 2016 via http://www.deq.state.ok.us/lpdnew/wastehistory/wastehistory.htm. Vrijheid M. (2000). Health Effects of Residence Near Hazardous Waste Landfill Sites: A Review of Epidemiological Literature. Environmental Health Perspectives. 108: 1. Viewed on November 6, 2016 via https://www.epa.gov/sites/production/files/2014-03/documents/health_effects_of_residence_near_hazardous_waste_landfill_sites_3v.pdf. U.S. Environmental Protection Agency (2016). Defining Hazardous Waste: Listed, Characteristic and Mixed Radiological Waste. Viewed on November 6, 2016 via https://www.epa.gov/hw/defining-hazardous-waste-listed-characteristic-and-mixed-radiological-wastes#listed. Bloom AD, de Serres F. (1995). Ecotoxicity and Human Health: A Biological Approach to Environmental Remediation. CRC Lewis Press: New York. p. 14. Viewed on November 6, 2016 via https://books.google.com/books?id=-Q6UekC6sQIC&pg=PA14&lpg=PA14&dq=cumulative+landfill+cleanup+costs+in+the+united+states&source=bl&ots=CkjFLhhk4y&sig=5gIOoB6u3r72N9zY0KC1D4Ap2Ak&hl=en&sa=X&ved=0ahUKEwjDsIrEpZTQAhUCWCYKHbs8BHsQ6AEIGzAA#v=onepage&q=cumulative%20landfill%20cleanup%20costs%20in%20the%20united%20states&f=false. Rumbold DG, Morrison MB, Bruner MC. (2009). Assessing Ecological Risk of a Municipal Solid Waste Landfill to Surrounding Wildlife: a Case Study in Florida. Environmental Bioindicators, 4: 246-279. Viewed on November 6, 2016 via http://www.environmentalindicatorsjournal.net/Journal/DisplayArticle/tabid/57/ArticleId/116/Assessing-the-Ecological-Risk-of-a-Municipal-Solid-Waste-Landfill-to-Surrounding-Wildlife-a-Case-Stu.aspx. Lee E, Macauley MK, Salant SW. (2000). Spatially and Intertemporally Efficient Waste Management: The Costs of Interstate Flow Control. Journal of Environmental Economics and Management.
Viewed on November 5, 2016 via http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.199.4637&rep=rep1&type=pdf. Lakeshore Recycling Systems. (2016). What You Need to Know About Garbage, Recycling & Yard Waste Collection. Customer mailing. U.S. Environmental Protection Agency. (2001). The United States Experience with Economic Incentives for Protecting the Environment. National Center for Environmental Economics. Viewed on November 5, 2016 via https://yosemite.epa.gov/ee/epa/eerm.nsf/vwAN/EE-0216B-13.pdf/$file/EE-0216B-13.pdf. Johnson J. (2003). Waste News, 8: 26. Viewed on November 6, 2016 via http://eds.b.ebscohost.com.ezproxy.snhu.edu/eds/detail/detail?sid=7e3cffe4-88c5-439a-99f4-37dcb5f3667e%40sessionmgr101&vid=19&hid=121&bdata=JnNpdGU9ZWRzLWxpdmUmc2NvcGU9c2l0ZQ%3d%3d#AN=9510103&db=f5h. Palmer K, Walls M. (2002). Economic Analysis of the Extended Producer Responsibility Movement: Understanding Costs, Effectiveness, and the Role for Policy. International Forum on the Environment. Resources for the Future. Viewed November 6, 2016 via http://www.rff.org/files/sharepoint/WorkImages/Download/RFF-RPT-prodsteward.pdf. Food and Agriculture Organization of the United Nations. (2015). Soil is a Non-renewable Resource. Viewed on November 21, 2016 via http://www.fao.org/3/a-i4373e.pdf. Reference. (2016). Why is Water a Renewable Resource? Viewed on November 21, 2016 via https://www.reference.com/science/water-renewable-resource-8aab095490f3e393#. U.S. Geological Survey. (2016). Quantifying Subsurface Biodegradation, Toxics Program Remediation Activities. Viewed on November 21, 2016 via http://toxics.usgs.gov/topics/rem_act/biodegredation_rates.html. Eganhouse RP, Cozzarelli IM, Scholl MA, Matthews, LL. (2001). Natural Attenuation of Volatile Organic Compounds in the Leachate Plume of a Municipal Landfill: Using Alkylbenzenes as Process Probes. Groundwater. 39:2. 192-202. U.S. Environmental Protection Agency. (2016). EPA Issues Final Actions to Cut Methane Emissions from Municipal Solid Waste Landfills. Viewed on November 21, 2016 via https://www.epa.gov/newsreleases/epa-issues-final-actions-cut-methane-emissions-municipal-solid-waste-landfills. Mastellone, ML. (2015). Waste Management and Clean Energy Production from Municipal Solid Waste. New York, NY: Nova Science Publishers, Inc. Department for Environment Food & Rural Affairs. (2013). Advanced Thermal Treatment of Municipal Solid Waste. Viewed on November 24, 2016 via https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/221035/pb13888-thermal-treatment-waste.pdf. Biofuel. (2016). What is Syngas. Viewed on November 24, 2016 via http://biofuel.org.uk/what-is-syngas.html. Environment Agency. (2016). The Mechanical Biological Treatment of Waste and Regulation of the Outputs. Viewed on November 24, 2016 via http://www.wastedataflow.org/documents/guidancenotes/Specific/GN12_EA_Guide_to_MBT_1.0.pdf. Khan MUH, Jain S, Vaezi M, Kumar A. (2016). Development of a decision model for the techno-economic assessment of municipal solid waste utilization pathways. Waste Management. 48. 548-564. Viewed on December 23, 2016 via http://ac.els-cdn.com.ezproxy.snhu.edu/S0956053X15301653/1-s2.0-S0956053X15301653-main.pdf?_tid=445ed074-c90c-11e6-8062-00000aab0f26&acdnat=1482496700_f3ffda5e360343b568af064fb3d3130b. Chang NB, Qi C, Islam K, Hossain F. (2012). Journal of Cleaner Production. 20. 1-13. 
Viewed on December 23, 2016 via http://ac.els-cdn.com.ezproxy.snhu.edu/S0959652611003179/1-s2.0-S0959652611003179-main.pdf?_tid=0fb53850-c91a-11e6-8f46-00000aab0f6b&acdnat=1482502625_0a630cb42477c795f392ea3e19adad42. Hellweg S, Doka G, Finnveden G, Hungerbuhler K. (2015). Assessing the Eco-efficiency of End-of-Pipe Technologies with the Environmental Cost Efficiency Indicator. A Case Study of Solid Waste Management. Journal of Industrial Ecology. 9:4. 189-203. Davis, AC. (2015). Why the U.S. recycling industry is feeling down in the dumps. Washington Post. Viewed on December 31, 2016 via https://www.theguardian.com/environment/2015/jun/27/recycling-unprofitable-oil-china-dollar. Lakhan, C. (2015). A Comparison of Single and Multi-Stream Recycling Systems in Ontario, Canada. Resources. 4: 384-397. Field BC, Field MK. (2013). Environmental Economics: An Introduction. New York, NY: McGraw-Hill Companies, Inc. 140-8. Property-Value Impacts of an Environmental Disamenity. (2001). Journal of Real Estate Finance and Economics. 22:2/3, 185-202. Viewed on November 26, 2016 via http://eds.b.ebscohost.com.ezproxy.snhu.edu/eds/pdfviewer/pdfviewer?vid=2&sid=030724a9-a82e-49c6-b808-5d89a5826958%40sessionmgr104&hid=122. Nelson AC, Genereux John, Genereux M. (1992). Land Economics. 68:4, 359-65. Viewed on November 26, 2016 via http://eds.b.ebscohost.com.ezproxy.snhu.edu/eds/pdfviewer/pdfviewer?vid=4&sid=030724a9-a82e-49c6-b808-5d89a5826958%40sessionmgr104&hid=122. Aadland D, Caplan AJ. (2006). Curbside Recycling: Waste Resource or Waste of Resources? Journal of Policy Analysis and Management. 25:4. 855-874. Viewed on November 26, 2016 via http://eds.b.ebscohost.com.ezproxy.snhu.edu/eds/pdfviewer/pdfviewer?vid=1&sid=2fe80e1e-b8c4-4f8a-a7c6-6d82e7b353b5%40sessionmgr107&hid=114. Kinnaman, TC. (2000). Explaining the Growth in Municipal Recycling Programs. The Role of Market and Non-market Factors. Public Works Management & Policy. 5:1. 37-51. Viewed on November 26, 2016 via http://pwm.sagepub.com.ezproxy.snhu.edu/content/5/1/37.full.pdf+html?. Hanley N, Slark R. (1994). Cost-Benefit Analysis of Paper Recycling: A Case Study and Some General Principles. Journal of Environmental Planning & Management. 37: 2. 189-197. Viewed on November 26, 2016 via http://eds.b.ebscohost.com.ezproxy.snhu.edu/eds/detail/detail?vid=8&sid=0617b717-946b-4281-9aed-c374531567e7%40sessionmgr120&hid=114&bdata=JnNpdGU9ZWRzLWxpdmUmc2NvcGU9c2l0ZQ%3d%3d#AN=9611147621&db=eih. University of Michigan School of Natural Resources and Energy. (1998). The Values of Biological Diversity. Lecture Summary. Viewed on November 26, 2016 via http://www.snre.umich.edu/~dallan/nre220/outline18.htm. Bein, Peter. Monetization of Environmental Impacts of Roads. B.C. Ministry of Transportation and Highways, 1997, Table 4.13 (http://www.geocities.ws/davefergus/Transportation/4CHAP4.htm). Bein. CBA Builder. (2016). Revealed Preference Methods. Viewed on November 16, 2016 via http://www.cbabuilder.co.uk/Quant3.html. King DM, Mazzotta M. (2016). Ecosystem Valuation, Essentials, Section 2. Valuation of Ecosystem Services. Viewed on November 17, 2016 via http://www.ecosystemvaluation.org/1-02.htm.
https://www.resilientlandtransportplanning.com/solid-waste
One research agenda in Malaysia covers sewage effluent reuse: identifying the potential users of treated sewage effluent and the quality they require, including the most cost-effective methods of improving effluent quality for suitable reuses.

Current practice: setting foreign waste aside, the overall recycling rate (across many types of waste) in Malaysia is estimated at 10.5%, but this appears to be mostly construction and demolition waste. For MSW specifically, the recycling rate remains largely unknown and could be very low, as domestic segregation of recyclables is not common practice in Malaysia.

In Singapore, about 1.69 million tonnes of construction debris was generated in 2013 and the recycling rate was 99%. Construction and demolition (C&D) waste is usually sorted for the recovery of materials such as wood, metal, paper and plastics, and processed into aggregates for use in construction activities.

Many opportunities exist for the beneficial reduction and recovery of materials that would otherwise be destined for disposal as waste. Construction industry professionals and building owners can educate and be educated about issues such as beneficial reuse, effective strategies for identification and separation of wastes, and economically viable means of promoting environmentally and socially …

With the environment an increasingly popular topic of discussion, recycling construction materials is more important than ever. The most common method of disposing of C&D waste in the past has been sending it to landfills.

Thousands of people live on and around the Dhapa landfill in Kolkata, India, where 4,000 tonnes of waste are dumped each day, and many make a living processing it. Construction waste recycling is the separation and recycling of recoverable waste materials generated during construction and demolition.

New Mexico set goals of diverting part of its municipal solid waste from landfills by 1995 and 50% by July 1, 2000. In order to manage waste, the Environmental Protection Agency (EPA) and the Solid Waste Act favor an integrated solid waste management strategy that includes 1) reducing the amount of solid waste generated and 2) recycling as much refuse as possible.

"Waste to wealth": recycling in Malaysia is a thriving industry driven by informal players. Its value was estimated at RM476 million in 2005 and more than RM600 million in 2011, and there is already an established informal recycling network that covers every part of the SWM value chain from storage to disposal.

Construction and demolition waste are not new terms in the construction industry; for years the industry has produced enormous amounts of waste, and the growing rate of waste generation has led to various environmental problems (see, for example, M.A. Kazerooni Sadi and others, "Reduce, Reuse, Recycle and Recovery in Sustainable Construction Waste Management," Advanced Materials Research, 2012). Household trash, industrial waste and commercial waste can all be managed in order to improve the situation; we should all try to reduce waste and store it efficiently.

The best waste management companies in Kolkata take control of the whole waste process, from collection to transportation and disposal.

One plastic waste recycling company based in Kuala Lumpur, Malaysia trades in LDPE and LLDPE in rolls; post-industrial injection and blow grades of HDPE, LDPE and PP; and PBT, PC, PC/ABS, PVC (soft and hard) and many other plastic products.

"As for construction waste, it is bulky and costly to throw into a landfill and, hence, the easy way out is to dump illegally," he says in a recent interview.

Construction waste can and should be managed in the same way as other home building operations. Reducing, reusing and recycling construction waste may save money, reduce liability, keep job sites cleaner and safer, and conserve valuable landfill space.

One study focuses on waste management at construction sites in Kuantan, covering generation of waste at source and recycling, meaning the recovery or reuse of waste materials. Malaysia is moving towards adoption of the Industrial Building System (IBS), which is said to be able to control waste generation during construction activities and to be environmentally friendly. The management of construction waste is not only the government's responsibility but also that of the developer of the particular land area. There are two ways to manage the waste; one is reuse, and the reuse of waste material is one of the important forms of pollution prevention.

YB Yeo has also assured that the ministry has imposed a freeze on the import of plastic waste categorised under HS Code 3915, which concerns the management and registration of imported plastic waste. Malaysia could cooperate with other countries to solve this crisis; the Basel Convention plays a huge role in regulating waste disposal by other countries in Malaysia.

Construction and demolition (C&D) debris is a type of waste that is not included in municipal solid waste (MSW). Materials included in C&D debris generation estimates are steel, wood products, drywall and plaster, brick and clay tile, asphalt shingles, concrete, and asphalt concrete.

To learn what your city will remove and accept at landfill, and how to prepare materials for recycling, contact your local municipality's solid waste and recycling department or your local waste/recycling haulers. Places to buy or sell reusable construction materials include Habitat for Humanity ReStores in Canada, the US, New Zealand and Australia.

Population growth has led to an increase in the generation of solid waste in Malaysia, and according to the government it has become a crucial issue to be solved. In 2005, the waste generated in Malaysia amounted to 19,000 tons per day (recycling rate: 5 percent). Eleven years later, in 2016, the quantity was 38,200 tons/day (recycling rate: 17.5 percent).

Audit services can benchmark public standards such as the Recycling Industry Operating Standard (RIOS), the Responsible Recycling (R2) Standard, or a customer's own criteria.

Recent waste characterization findings include: 1) food waste is still a major component of generated waste (45%) and contains a high organic fraction; 2) because waste is not separated, more than 30% of potentially recyclable materials such as paper, plastic, aluminum and glass are still disposed of directly in landfills; and 3) diapers have become a major component (12.1%).

With fast-growing cities and ballooning populations, developing countries like Malaysia face numerous challenges in sustainably managing wastes. The waste generated in Malaysia in 2005 was 19,000 tons per day at a recycling rate of 5%; the quantity rose to 38,000 tons per day by 2018, despite the increased recycling rate of 17.5%.

Only 14 of India's 35 regional pollution boards filed information on plastic waste generation in 2017-18, according to the latest report of the Central Pollution Control Board (CPCB). Thus, the CPCB estimate of plastic waste generated in India in 2017-18 (660,787.85 tonnes, enough to fill 66,079 trucks at 10 tonnes a truck) does not reflect the situation in more than 60% of the country.

It is therefore not surprising that the recycling of local plastic waste is encouraged by the Government through the Malaysian Investment Development Authority (MIDA). Incentives, namely Pioneer Status (PS) or the Investment Tax Allowance (ITA), are offered to further Malaysia's priority of sustainable waste management practices.

Construction is one of the industries that generates wealth for Malaysia, yet it contributes a large waste stream and faces problems with poor construction waste management.

The popular and well-known concept of the "3Rs" refers to reduce, reuse and recycle, particularly in the context of production and consumption. It calls for an increase in the ratio of recyclable materials, further reuse of raw materials and manufacturing wastes, and an overall reduction in the resources and energy used.

Characterization of reuse and recycling potential was done using descriptive statistics. It is estimated that the waste generated by the housing sector is approximately 16% of the gross materials used, about 8.8 million tons/year, and that 32% of such waste (approximately 2.8 million tons/year) has the potential for reuse and recycling.

Malaysia's recycling rate is only 10.8% according to JPSPN's waste audit. The current recycling sector in Malaysia comprises about 60 plastic manufacturers, 10 paper mills, and more than 100 e-waste recyclers.

Alternatives for waste prevention include initiatives to reduce, reuse and/or recycle the waste produced, referred to as the three Rs of construction waste management. A waste hierarchy has been widely adopted as a guide for construction managers, in line with the principles of sustainable construction. The waste hierarchy suggests that …
https://www.fish-wine.pl/38428_kolkata_reuse_construction_waste_in_malaysia.html
Unlike our archaic adolescent years of having to wait on a modem to dial up for Internet (if we even had it at all), today's teens have been raised on wifi and all the instant gratification it brings. The problem with this is that reliance on "right now" and dissatisfaction with the present doesn't stop at the Internet. It carries over into every aspect of their lives and comes with a slew of added emotional baggage. "Why didn't she text me back yet? She doesn't like me anymore!" "He checked my InstaStory DM but didn't reply." Sound familiar?

Parents of tweens and teens often shrug off such anxious and gloomy thinking as normal irritability and moodiness, because, in large part, it is. Still, the beginning of a new school year, with all of the required adjustments, is a good time to consider just how closely the habit of negative, exaggerated "self-talk" can affect academic and social success, self-esteem and happiness. And, with the rise in popularity of shows like Netflix's Selena Gomez-produced 13 Reasons Why, it's important to pay close attention to your teens' mood and be communicative as they ebb and flow through the challenges associated with growing up.

Psychological research shows that what we think can have a powerful influence on how we feel emotionally and physically, and on how we behave. Research also shows that our harmful thinking patterns can be changed. You may not be of much help when it comes to sharpening your teen's calculus skills. But you can play a huge role in helping your children to develop a critical life skill: the ability to take notice of their thoughts, to step back and view the bigger picture, and to decide how to act based on that more realistic perspective.

Taking notice of an alarmist or pessimistic inner voice is a universal experience. It has survival value; it often protects people from danger. And it's often true that a worrying thought can act as a motivating force – to study, for example. Still, the insecurities that adolescents feel as they undergo the multiple transitions necessary in growing up make them especially vulnerable to believing the worst. This tendency can lead to chronic anxiety, depression, and anger, and can interfere with relationships and success in school. Teaching children about the importance of thinking more realistically may help protect them in the real world once they leave the comfort of your nest. According to a 2016 survey by the American College Health Association, 38% of undergraduates at 50 colleges and universities reported they had felt so depressed at some time during the previous year that it was tough to function. Some 60% had experienced an episode of debilitating anxiety.

The power of thoughts to impact feelings and behavior is a foundational principle of cognitive behavioral therapy (CBT). CBT teaches people how to recognize faulty negative self-talk, to notice how it makes them feel and act, and to challenge it. Parents can practice this skill themselves, and act as models as they guide their kids to question a thought by looking at the evidence for and against it. If your child seems withdrawn, sad or angry, you may be able to identify a problematic thinking pattern by listening closely. Here are four key styles of negative self-talk to listen for:

Catastrophizing. One common thought habit is the tendency to jump to the worst-case scenario ("What if I fail the test and can't play in the playoff football game?"). Scanning constantly for disaster ahead acts as a huge contributor to anxiety.
And catastrophizing often leads teens to avoid people or become reluctant to try new things.

Focusing on the negative. Ruminating on a disappointment without taking into account the many positive and neutral aspects of one's experience is often associated with sadness and depression. A missed field goal might overshadow everything else that happens one day – the good grade on a test, the lunch with friends, the new episode of their favorite show – and consume your high-schooler for days.

The "it's not fair" complex. Interpreting every letdown as a grave injustice – the "it's not fair!" habit – often underlies teens' anger and can harm friendships and family relationships.

The "I can't" complex. Reacting habitually to difficult situations or to new opportunities with "I can't," rather than "I can try," leads to helplessness. Changing the thought to "I can try!" encourages problem-solving and a willingness to be proactive, to take positive action, both keys to being successful and resilient.

For parents, the idea is not to dismiss the negative thought. Research has found that attempting to dismiss the thought can actually make it stick longer. Rather, challenge your child to face the thought, examine it thoroughly, and swap it for a more realistic and helpful perspective. For example, if your child complains of not having any friends at their new school, pose questions that carefully and analytically examine the situation: "You had a group of friends at your old school and at camp – realistically, what are the chances you can't make friends now? What actions can you take to reach out? What would you say to somebody else who worries about this?" A helpful replacement thought might be: "It probably will take a few weeks to get to know people, but I've made friends before and there are things I can try. I can sign up for sports and after-school activities to meet people that way."

More realistic and balanced thinking leads to positive action, which, in turn, tends to bolster confidence, enhance self-esteem and result in greater happiness. If you found this article helpful, please share it with friends and family.
https://www.healthspiritbody.com/teen-depression/
World's oldest known figurative artwork found in Indonesian cave

Archaeologists have discovered what they claim to be the oldest example of figurative art made by human hands. An ochre painting of pigs, found on a cave wall in Indonesia, has been dated to be at least 45,500 years old.

The art was found above a high ledge, along the rear wall of a pristine limestone cave called Leang Tedongnge in South Sulawesi, Indonesia. Along with a pair of human handprints, it seems to show a species of wild boar that lives on the island. "It shows a pig with a short crest of upright hairs and a pair of horn-like facial warts in front of the eyes, a characteristic feature of adult male Sulawesi warty pigs," says Adam Brumm, co-leader of the research team. "Painted using red ochre pigment, the pig appears to be observing a fight or social interaction between two other warty pigs."

It can be hard to figure out how old an artwork like this may be, but in this case the archaeologists got lucky. A small deposit of calcium carbonate had formed over the rear foot of one of the pictured pigs. These deposits are somewhat easier to date, and since the painting was obviously there first, dating the deposit could return a minimum age for the human handiwork. Using uranium-series analysis, the researchers determined the deposit was at least 45,500 years old.

That makes the painting the oldest known artwork depicting a recognizable subject in the world, just pipping a previously described image from the same region by a little under 2,000 years. However, it's not the oldest known artwork full-stop; that honor currently belongs to a South African rock criss-crossed with a series of abstract red lines, dated to over 70,000 years ago.

While the newly described painting may be the oldest currently known figurative artwork, the team doesn't necessarily expect the record to remain standing for too long. Archaeological evidence shows that humans have lived in Australia for at least 65,000 years, and their most likely route into the continent was across these oceanic islands. Finding and dating other examples of rock art in Indonesia could help narrow down that window.

The research was published in the journal Science Advances.
https://newatlas.com/science/worlds-oldest-figurative-artwork-cave/
It is ideal for graduate and PhD students and working engineers interested in posing and solving problems using the tools of logico-mathematical modeling and computer simulation. Continuing its emphasis on the integration of discrete event and continuous modeling approaches, the work sheds light on DEVS and its potential to support the co-existence and interoperation of multiple formalisms in model components. New sections in this updated edition include discussions of important new extensions to the theory, including chapter-length coverage of iterative system specification and DEVS and their fundamental importance, closure under coupling for iteratively specified systems, existence, uniqueness, non-deterministic conditions, and temporal progressiveness (legitimacy). It is aimed at undergraduate and graduate students, especially PhD students, researchers, and all workers in computation-based fields who benefit from modeling and simulation, both traditional and non-traditional.

In the twenty-first century, computer simulations have become ubiquitous. It is hard to think of any sciences, from the natural to the social, to the life sciences and the humanities, that have not developed, in one way or another, methodologies involving computational tools and, in particular, computer simulations. However, what are computer simulations? Research has been devoted to the many questions raised by computer simulations, be they epistemological, political, social or economic. This special issue collects four historical case studies that focus on exemplary instances of what are today regarded as computer simulations: mathematical problem-solving with the ENIAC (Electronic Numerical Integrator and Computer); the introduction of Monte Carlo simulations in particle physics; the development of the Paris-Durham shock model in astrophysics; and the history of digital modeling in design and architecture.

Computer simulation is the process of mathematical modelling, performed on a computer, which is designed to predict the behaviour or the outcome of a real-world or physical system. Because they allow the reliability of chosen mathematical models to be checked, computer simulations have become a useful tool for the mathematical modeling of many natural systems in physics (computational physics), astrophysics, climatology, chemistry, biology and manufacturing, as well as human systems in economics, psychology, social science, health care and engineering. Simulation of a system is represented as the running of the system's model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions. Computer simulations are realized by running computer programs that can be either small, running almost instantly on small devices, or large-scale programs that run for hours or days on network-based groups of computers. The scale of events being simulated by computer simulations has far exceeded anything possible, or perhaps even imaginable, using traditional paper-and-pencil mathematical modeling. In 1997, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program.

The following documents include tutorials with step-by-step instructions for using a previous version of the STELLA software.
Although the documents have not been updated to reflect the functionality in newer versions of STELLA, the basic ideas presented are still relevant.

The importance of computers in physics and the nature of computer simulation are discussed, and the nature of object-oriented programming and various computer languages is also considered. We introduce some of the core syntax of Java in the context of simulating the motion of falling particles near the Earth's surface.

This site features information about discrete event system modeling and simulation. It includes discussions on descriptive simulation modeling, programming commands, techniques for sensitivity estimation, optimization and goal-seeking by simulation, and what-if analysis.
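The falling-particle exercise mentioned above is a convenient first simulation, and a minimal sketch of it is given below, written in Python rather than the Java used by the referenced text. The time step, drop height, and simple Euler update scheme are choices made here for illustration.

```python
# Minimal sketch (in Python rather than the Java used in the text referenced above)
# of the canonical introductory simulation: a particle falling near the Earth's
# surface, stepped forward in time with the simple Euler method.

G = 9.8  # gravitational acceleration near the Earth's surface, m/s^2

def fall_time(y0: float, v0: float = 0.0, dt: float = 0.01) -> float:
    """Return the simulated time for a particle released at height y0 (m) to reach the ground."""
    y, v, t = y0, v0, 0.0
    while y > 0.0:
        y += v * dt      # update position using the current velocity
        v -= G * dt      # update velocity from the constant downward acceleration
        t += dt
    return t

if __name__ == "__main__":
    h = 100.0  # drop height in metres (arbitrary example value)
    print(f"simulated fall time: {fall_time(h):.2f} s")
    print(f"analytic fall time:  {(2 * h / G) ** 0.5:.2f} s")
```

Comparing the simulated fall time against the analytic result (about 4.52 s for a 100 m drop) is the usual first check that the time step is small enough.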
https://honeycreekpres.org/and-pdf/148-introduction-to-computer-simulation-and-modeling-pdf-750-400.php
Counselling is a collaborative process between client and counsellor that takes place in a safe and confidential setting. It is supportive and encouraging, allowing clients to explore their thoughts, experiences, feelings and behaviours without fear of judgement, and to set goals for the changes they wish to make in their life. Psychologist Carl Rogers believed that human beings are essentially resilient and know the solutions to their own struggles. He suggested that healing and transformation happen in an environment conducive to creative thinking and growth, much like plants need the right environment in which to flourish. Counselling is therefore not advice-giving. Instead, the counsellor's role is to facilitate the client's process of self-examination and discovery, and to help them access those answers that exist within themselves.
http://cornelleellis.co.za/Counselling/
Administrative law judges work in various positions in the federal government and deliver rulings on many different areas of statutory law. The administrative law judges who work for the Social Security Administration are the ones responsible for rendering decisions on Social Security disability claims during hearings. As you can imagine, an administrative law judge's decision could make or break your case and affect your ability to collect the benefits you need.

If your Social Security Disability Insurance (SSDI) claim is denied, you may believe that there is little you can do to reverse this decision. However, if you believe that your disability claim was unfairly denied, you can appeal this decision by requesting a hearing before an administrative law judge. The administrative law judge will review your claim, including all the medical evidence you've submitted, and evaluate your disability. This hearing offers you the best chance of winning disability benefits.

Before the hearing, the administrative law judge will read your exhibit file, which contains all of your medical records, your work history, and the reason why you were initially denied benefits. Social Security hearings are not open to the public, so only you, your representative, the administrative law judge, and the hearing assistant will be present. If any expert witnesses are requested to give testimony, they will also be allowed to attend.

During the hearing, the judge will question you about your disability. The judge isn't trying to disprove your disability; they're merely trying to gather information about your medical condition, how it has affected your employment, and your past work history. The judge will use this information to determine whether you are entitled to disability benefits. It is important to answer the administrative law judge's questions honestly and thoroughly. Try to include examples of how your disability has affected your life and how it has impacted your ability to work or perform everyday activities.

Following your testimony, the judge will give your attorney the chance to speak on your behalf. If there are any expert witnesses to be called, the judge will hear their testimony at this time as well. Medical experts and vocational experts are often used by both sides to testify regarding your ability to sustain gainful employment.

Once the hearing is concluded, you will usually need to wait 3-4 weeks before learning whether you've won your appeal or been denied. In rarer situations, the judge will issue a bench decision on the day of your hearing. It is important to know, however, that if you lose your appeal and the administrative law judge does not approve your disability benefits, you have a slim chance of being approved for benefits at a later stage. As such, it is important to understand the importance of the administrative law judge in your case and to prepare thoroughly for your disability hearing.

At Fetterman & Associates, we know that collecting your disability benefits is a crucial component in providing for your family. Disability benefits, such as SSDI, can be an important part of moving forward into the future and being able to pay for medical care, medications, and long-term care. In order to collect these benefits, however, you need an experienced West Palm Beach social security disability lawyer at Fetterman & Associates on your side. Contact the law team today at (561) 845-2510 for a free initial consultation and review of your case.
We can help you after a serious disability or work injury and protect you every step of the way.
https://lawteam.com/how-an-administrative-law-judge-could-affect-your-ssdi-claim/
So You Want to be an Expert Witness?
By C. Scott Litch. March 2019, Volume LIV, Number 2.

From time to time a pediatric dentist may be asked by an attorney to provide expert testimony in a pending legal proceeding. Usually the expertise will be that of a clinical pediatric dentist, in the attorney's effort to demonstrate whether or not the standard of care was followed. Is it ethical to do so, and what are the considerations before accepting such an offer? The ADA Code of Ethics[1] advises the following:

"4.D. EXPERT TESTIMONY. Dentists may provide expert testimony when that testimony is essential to a just and fair disposition of a judicial or administrative action. ADVISORY OPINION 4.D.1. CONTINGENT FEES. It is unethical for a dentist to agree to a fee contingent upon the favorable outcome of the litigation in exchange for testifying as a dental expert."

But I suspect you might be interested in more guidance beyond these two sentences. The American Association of Endodontists ran a column[2] about this topic a couple of years ago which has some very useful and relevant points for a pediatric dentist, which I'll summarize and paraphrase below:

You should talk to another pediatric dentist who has served as an expert witness, to learn from their experiences. The legal process is lengthy, so keep full and careful track of the time you spend reviewing records and literature for a case; this will ensure you are properly paid for your effort. If your opinion or position is formalized by the attorney, then the next step is a deposition with opposing attorneys. Depositions take time (they could last all day), and you must have the ability to stay calm and confident. Cases may be complicated by the fact that there is often a family member on the plaintiff's side who is a lawyer. The key question you are addressing is the standard of care in pediatric dentistry. "As an expert witness, you play an integral role in the development of a case." This especially relates to whether a case is worth pursuing to trial versus accepting a settlement. You can select the cases in which you feel comfortable serving as an expert witness, and reject others if you feel they have no merit. "I have found that serving as an expert witness is like solving a puzzle; you have pieces and perspectives from multiple sources, and putting everything back together again is an interesting challenge."

You must have knowledge of current and classic pediatric dentistry literature. You should be aware of the procedures, treatments and philosophies taught at your local pediatric dentistry residency program(s). You should be familiar with AAPD clinical recommendations (best practices and guidelines). You should be familiar with state dental board regulations and recent rulings.

Finally, being a pediatric dentist expert witness or consultant can be very important for pediatric dentistry in regards to Medicaid dental audits or False Claims Act (FCA) cases. The key issue will usually be medical/dental necessity and appropriateness of services. The AAPD strongly encourages the relevant auditor or government agency to utilize pediatric dentist experts to review pediatric dentist cases, in order to ensure appropriate peer-to-peer review. According to a recent American Health Lawyers Association (AHLA) article, an effective expert demonstrates "independence and objectivity, proper demeanor, and relevant experience."[3] Further: "Perhaps the most important characteristic of an effective expert is having relevant clinical experience.
The medical [dental] expert should be able to explain highly technical facts and interpret complex medical [dental] issues in a manner that can be understood by others who do not have extensive provider or medical [dental] experience." Also, in terms of reviewing documentation of treatment, it is important to be able to "discern the difference between technical documentation deficiencies and the standard of care/medical necessity deficiencies." For further information contact Chief Operating Officer and General Counsel C. Scott Litch at (312) 337-2169 ext. 29 or slitch@aapd.org. This column presents a general informational overview of legal issues. It is intended as general guidance rather than legal advice. It is not a substitute for consultation with your own attorney concerning specific circumstances in your dental practice. Mr. Litch does not provide legal representation to individual AAPD members. 1ADA Principles of Ethics and Code of Professional Conduct (ADA Code), pp. 9-10. 2Marc P. Gimbel, DMD, The Making of an Endodontic Expert Witness, AAE Communique, March 2016. https://www.aae.org/specialty/2016/03/18/the-making-of-an-endodontic-expert-witness/. Accessed January 6, 2019. 3Seikaly, Hibbert, Zapolska and Stearns, The Use of Medical Experts in Medical Necessity Cases, AHLA Connections, December 2018, pp. 12-20.
http://www.pediatricdentistrytoday.org/2019/March/LIV/2/news/article/980/
A categorical assessment of peer-reviewed literature published in April 2016 found that out of 685 published papers on the impacts of unconventional gas development, '84% of public health studies indicate risks to public health, 69% of water studies show actual or potential water contamination and 87% of air quality studies indicate elevated air pollution'. Fracking for shale and tight gas is an extremely water-intensive practice. Each well may require up to ten fracks over its production life. The Australian gas industry provides a figure of 11 million litres per shale or tight gas frack. Other sources suggest that water use is often much higher. According to one UN report, a single frack operation on a shale gas well will use between 11 and 34 million litres of water, roughly 360 to 1,100 truckloads. Drilling a shale or tight gas well also requires around 1 million litres per well. The gas industry is at pains to point out that chemical additives make up only a very small proportion of fracking fluids, 'approximately' 0.5%. In reality, the amounts used range from 0.5 to 2%, and while this is a small proportion relative to the large volumes of water used, it translates to very large quantities of chemicals. A typical 15 million litre fracturing operation would use 80 to 330 tons of chemicals (a rough check of this arithmetic follows the references below). Groundwater contamination can also occur if gas and toxic flowback fluids migrate from gas wells into aquifers through natural underground faults or through fractures created during fracking operations. Recent research from the USA found higher levels of arsenic and other heavy metals, plus higher salinity, in water bores that were less than 3 km from shale gas wells. Other research has found increased methane concentrations in water bores closer to shale gas wells, creating an explosion hazard. Surface water pollution can occur when there are accidental spills of fluids or solids at the surface, when well blow-outs occur, and through discharge of insufficiently treated waste water into waterways. Studies from Duke University in the US have found high levels of radioactivity in a creek used for disposal of wastewater. According to industry sources, around 30% of the fracking fluid flows back to the surface. However, as little as 6 to 8% may be recovered. A 2012 case study in the US also found serious evidence of harm to domestic stock from shale gas drilling waste contamination, including cattle deaths, stillbirths and reproductive problems. With the four-year construction phase of the CSG production gasfields in Queensland now coming to an end, the gas 'boom towns' of Dalby, Roma and Chinchilla have seen a crippling economic downturn, with associated job losses and loss of revenue for local businesses that had initially benefited from the boom. European Parliament, Economic & Scientific Policy Dept, Impacts of shale gas and shale oil extraction on the environment and on human health. UNEP Global Environmental Alert Service: Gas Fracking: Can we safely squeeze the rocks? WA Govt: Natural gas from shale & gas fact sheet: water use & management. Frackers guzzle water as Texas goes thirsty: http://nation.time.com/2013/09/29/frackers-guzzle-water-as-texas-goes-thirsty/; Western Organization of Resource Councils: Watered Down: Oil & gas production & oversight in the west. Broomfield, Mark, Support to the identification of potential risks for the environment and human health arising from hydrocarbons operations involving hydraulic fracturing in Europe. AEA Technology, 2012.
Marcellus Shale Exposed, Antony Ingraffea, http://www.youtube.com/watch?v=7DK3fODCZ3w; Anthony R. Ingraffea, Ph.D., P.E., Fluid Migration Mechanisms Due to Faulty Well Design and/or Construction. Osborn et al., Methane contamination of drinking water accompanying gas-well drilling and hydraulic fracturing. PNAS, May 17, 2011. Western Organization of Resource Councils: Watered Down: Oil & gas production & oversight in the west; Fracking: the evidence, https://docs.google.com/file/d/0B1cEvov1OlyHdzRBRjk4dElfbVE/edit?pli=1; Hansen, Mulvaney & Betcher, Water resources reporting and water footprint from Marcellus Shale development in West Virginia & Pennsylvania. Bill Chameides, "Natural Gas, Hydrofracking and Safety: The Three Faces of Fracking Water," National Geographic, September 20, 2011. Centre for Environmental Health: Toxic and Dirty Secrets: The Truth about Fracking and Your Family's Health. Michelle Bamberger and Robert E. Oswald, Impacts of Gas Drilling on Human and Animal Health.
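The chemical-tonnage figure quoted in the fact sheet above (80 to 330 tons for a 15 million litre operation) follows directly from the stated 0.5 to 2% additive range. Below is a minimal Python sketch of that back-of-envelope arithmetic; the assumption that additives weigh roughly as much as water (about 1 kg per litre) is added here for illustration and does not come from the fact sheet.

```python
# Rough check of the chemical-quantity figures quoted in the fact sheet above.
# Assumption (not from the fact sheet): additive density is close to that of
# water, i.e. about 1 kg per litre, so litres translate roughly into kilograms.

frack_volume_litres = 15_000_000           # "typical" operation cited in the text

for share in (0.005, 0.02):                # 0.5% and 2% chemical additives
    additive_litres = frack_volume_litres * share
    additive_tonnes = additive_litres / 1000        # ~1 kg/L, 1000 kg per tonne
    print(f"{share:.1%} additives: ~{additive_litres:,.0f} L (~{additive_tonnes:,.0f} t)")

# Prints roughly 75,000 L (~75 t) at 0.5% and 300,000 L (~300 t) at 2%,
# in line with the quoted 80-330 ton range once denser-than-water additives are allowed for.
```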
http://www.frackfreewa.org.au/shale_tight_gas_fact_sheet
Policy making is a fundamental function of government: it is the process of intervention through which a government seeks to achieve results and realize its political vision. Accordingly, the study of public policy seeks to describe and explain governments' policy making and the ways in which it is shaped and changed. Different frameworks and approaches have been introduced with the aim of explaining and prescribing in the field of public policy. The common point of these approaches is attention to the dynamics of actors and contexts and their impact on policy making, from the prioritization of an issue in the policy environment to the evaluation of, and feedback on, the policies implemented for that issue. On this basis, science, technology and innovation, as one of the most important public problems directly affecting society and national competitiveness, is always influenced by government interventions, whether direct or indirect. In the field of science, technology and innovation policy, the concept of governance, because it addresses the roles and interactions of the actors that shape policy, has always been central to the policy-making process for researchers and policymakers. This paper, while analyzing the foundations and concepts of public policy and governance, highlights the implications of these two theoretical areas for the field of science, technology and innovation, and illustrates one application of these topics through a case study of the ICT sector.
http://jstp.nrisp.ac.ir/article_13687.html
Oyster Bay Arbor Hills Upholstered Bed by Lexington. Today's casual transitional styling blends lighter wood tones, natural textures and relaxed shades of ivory, taupe and gray, with designs that embody a feeling of laid-back sophistication. Oyster Bay offers a casual, comfortable and understated interpretation of luxe living. An arched crown and nailhead trim give this upholstered platform bed a refined look. A light oyster shell finish complements the gray and ivory tones of the upholstered headboard. Materials: Wood and fabric upholstery Measurements:
https://smartfurniture.com/products/oyster-bay-arbor-hills-upholstered-bed-by-lexington/
System Dynamics has proven an effective method for making mental models explicit, as a way to identify discrepancies and to induce a fruitful dialogue between parties, such as the actors in the public sector, and between them and those in the private sphere. Such a dialogue is a prerequisite for building mutual understanding, confidence and trust between these parties and for establishing a foundation for organizational learning, a key component of organizational development. Making the public sector more transparent and understandable is a prerequisite for enhancing decision makers' accountability, since it allows one to frame the impact of policies on performance. The series aims to contribute to bridging the gap between System Dynamics and (more broadly) modeling research studies, and their applications in real organizations, with a specific focus on the public sector and on performance management. Main contributions will arise from the research and teaching work of the PhD program in "Model Based Public Planning, Policy Design and Management", which is run as a double degree between the University of Bergen (Norway) and the University of Palermo (Italy). The collaboration between the two universities on this program is further enhanced by the joint "European Master in System Dynamics" degree, which is delivered by Bergen and Palermo as an "Erasmus Mundus" EU-funded program with the Radboud University of Nijmegen (The Netherlands) and the University of Lisbon (Portugal). The research associated with a PhD focused on System Dynamics modeling to support planning, policy design and management in public administrations can make an important contribution both to developing new researchers in this field and to providing public institutions with insightful perspectives that can facilitate communication and learning processes in such contexts. Such skills and methods are also relevant for private enterprises: business organizations, too, face financial and organizational constraints, due to resource scarcity, to a lack of perception of delays, and to the reluctance of different departments to cooperate with each other according to a systems view.
http://ced4.com/a-new-series-with-springer/
Running a business, and a team of people, is no easy task. There are multiple aspects that leaders need to consider when leading a team to success. Sometimes, however, it can feel frustrating when employees are disengaged and you are trying to reach your goals for success. Have you considered the possibility that you might be the one causing your team members to disengage from their work? This may not necessarily be the cause, but it might be helpful to do some retrospection and see if you truly are the cause. Here are six mistakes that leaders make that cause their employees or team members to become disengaged: 1. No recognition When you have a strong team, and they do well on all projects, you might feel that recognition is not needed. This is because you possibly assume that they know they are a great team. The Gallup Organization surveyed four million people on recognition in the workplace, and found that when employees don't experience recognition, their productivity drops and over time they disengage. When you create a culture of recognition within your team, you can keep them engaged at all times, and also make them feel that they are contributing to something bigger. This will ensure that your employees and team members perform above expectations because they get that extra pat on the back that says: "well done today, we are proud to have you on the team." 2. Unclear goals and expectations According to a survey done by Gallup, half of the employees surveyed did not know what they were expected to do in the workplace. Having a team means that you should manage and lead them in the direction that you intend them to go. You should ask yourself the question: "do my employees know what I expect from them?" Then, you need to ask your team that same question. This will give you a clear view of whether your employees are on the same track as you. Leading a team does not only entail telling people what to do; it means journeying with the team on the road to success. Each member of your team is vital to reaching the goals of the business, and if they don't have a clear vision, then how will they know if they are heading in the right direction? As a leader you should set clear expectations and goals for your team to reach. 3. No room for career growth Most people look for growth, the ability to learn new skills, and overall personal improvement when choosing a career and a company. This means that when a company does not offer these opportunities to its employees, they will more than likely disengage from the workplace. A survey conducted in Australia by the Institute of Managers and Leaders shows that around 60% of those surveyed quit their jobs because the jobs did not offer career advancement or growth opportunities. Therefore, leaders should offer their employees and team members opportunities to grow and reach new heights. By engaging more with your team members you will get a feel for what they expect from their careers, helping you ensure that they receive career growth. 4. Disengaged leaders A major factor that plays into the disengagement of employees is leaders who are disengaged themselves. When team members feel that their leaders are not responding to their cries for help or that their leaders aren't interested in what they are doing, this can cause disengagement amongst employees. Studies mainly focus on the disengagement of employees, but not many focus on how leaders are disengaged from their teams.
Disengaged leaders can have a large impact on the wellbeing of their team members and on the success of their team. 5. Stressful work environment Stress can have an impact on employee health and productivity, and can lead to mental and physical illness. Not only that, but stressful environments are one of the main reasons why employees become disengaged. An environment can be stressful due to the location, the staff within the company, the late hours, or even air conditioning that is too cold. A balance should be reached, and the best way to do it is by asking the opinion of your employees. Then, listen to them and value their opinions. Take charge and make the necessary changes to ensure that the work environment is less stressful for the team. Making sure that everyone is comfortable will lower the likelihood of having withdrawn employees and team members. 6. Overall lack of leadership When a team is led by someone, that person is expected to be the leader, take charge, and steer everyone in the right direction. Sometimes we find people in leadership positions who are inexperienced in effective leading. They may have unclear goals, they may not take responsibility, they may leave everything up to chance, or they may simply not care. People in leadership positions who do not understand what leadership is all about can cause employees and team members to feel led astray. This may cause them to operate in silos, feeling there is no point in trying to achieve something if the goal is unclear. Therefore, as a business owner, you need to ensure that your leaders are actually good at leading. And likewise if you are the leader. If you do not know how to take charge of a team and lead it to success, then perhaps you should consider hiring someone to step in, or develop the skills to do so. Whatever the case might be, it is important to realise that 'detached' employees are rarely disengaged without reason. It is critical for leaders to create the best environment for their teams so that they are happy and can prosper. By being aware of these common mistakes, we are that much closer to having engaged teams that have clear goals and can enjoy receiving incentives for great performance and work.
https://blog.paypugs.com/six-mistakes-leaders-disengages-employees/
A study of more than 27,370 adverse events self-reported by Colorado physicians, published in the October Archives of Surgery, found that 132 wrong-patient and wrong-site procedures were reported to the Colorado Physician Insurance Co. from 2002 to 2008. An independent, not-for-profit organization, The Joint Commission accredits and certifies more than 18,000 health care organizations and programs in the United States. In 2004 the Joint Commission mandated a three-step protocol requiring physicians and other health professionals to perform a pre-procedure verification process, mark the correct site for the procedure and conduct a "timeout" discussion as a final check before the procedure begins. In an unexpected finding, the study reported that peak annual numbers of reports for both categories occurred after the commission's protocol was implemented. This is not due to a flaw in the protocol but is a reflection of the fact that it is not followed rigorously. The procedure problem is not limited to the operating room. A quarter of the wrong-patient cases reported in the Archives of Surgery study involved internists, and 32% of all the incorrect procedures involved nonsurgical specialists such as radiologists and dermatologists. The one death reported was due to a chest tube being placed on the incorrect side, causing acute respiratory failure. Thirty-four patients were significantly harmed or impaired, according to the study. The Joint Commission has published a brochure for patients to help them avoid mistakes in their surgery. The brochure suggests that patients prepare a list of questions to ask their doctors concerning the surgery in the pre-operative meeting. Patients should also read the consent form to ensure that it correctly identifies them and the kind of surgery to be performed. Before the surgery a health care professional will mark the spot to be operated on. The patient should make sure that only the correct part is marked; this helps to avoid mistakes. If the patient cannot be awake for the marking, then a family member, friend or another health care worker can watch the marking to make sure the correct body part is marked. More information concerning the frequency of medical error can be found on our medical adverse event studies page.
https://www.obradovich.net/wrong-site-surgery-increases-introduction-universal-protocol/
Bushfires are a part of life in Australia, and when they have run their course we pick up where we left off and carry on. But if you happen to be a small animal, surviving the bushfire is only the start of your worries. Research now suggests that survival in recently burnt areas is not a level playing field. Higher chances of predation are putting our native fauna at risk. Bottom-up and top-down effects The effects of bushfires are felt on a large scale throughout the landscape. Big fires burn away vegetation and other ground cover, drastically simplifying and altering the structure of the habitat. This simplification, in turn, affects the distribution and abundance of animals that rely on vegetation for food and shelter. This process begins with bottom-up control of the ecosystem, where the fire effect is exerted at the level of the primary producers – the vegetation – and flows upward from there. Predators are also recognised as major shapers of ecosystems. Their influence is via top-down regulation, which means that they exert their impacts at higher levels in the food chain and the effects trickle down. These effects and their consequences are best grasped by looking at a simplified food chain containing three distinct feeding positions (trophic levels): primary producers or vegetation, herbivores, and carnivores. Each level relies on the level below for food. Under normal circumstances, the bottom levels contain more biomass than the top levels. Fire reduces vegetation biomass, which impacts higher trophic levels from the bottom up. In contrast, an increase in predators affects lower levels from the top down. When both occur simultaneously, the trophic levels in the middle are heavily impacted. Predators introduced outside their native range, such as feral species, can have particularly strong effects on native species. Indeed, the deleterious impact of feral predators in Australia, specifically cats and red foxes, has been well documented. From predator or fire to predator and fire Both bushfires and predators are subject to intense management by humans. We traditionally manage both forces individually, using separate plans of management such as trapping and baiting for predators and back-burning regimes for fire. Therein lies the crux of the problem: the reality is that they are not independent ecosystem drivers, and managing them as such is not appropriate. Fire interacts directly with predators and, in turn, predators adjust their behaviour after fire. This means much of our native fauna is threatened by the simultaneous bottom-up and top-down effects of both fire and predation. A study from the stony gibber plains of far-western Queensland provides a good example of this interaction. Populations of small native mammals such as the spinifex hopping mouse (Notomys alexis) and the long-haired rat (Rattus villosissimus) seemed quite resilient to bushfires that burnt much of the area, remaining abundant immediately after the fire. But eventually, populations declined regardless. The decline was attributed to the onset of top-down control by abundant predators, particularly cats and foxes, in the area, coupled with the bottom-up control of diminished food resources due to fire. It appears that reduction in the structural complexity of vegetation and increased openness due to fire increase the exposure of small animals to predators, making them easier to detect and capture. What about the ferals?
In one study, scientists from Deakin University investigated the interactions between different fire regimes and fox distribution. The authors warned that foxes appear to be extreme habitat generalists, able to endure just as well in recently burnt areas as in unburnt ones. This poses a very real threat to Australia's native fauna in burnt vegetation, where the fire has already reduced cover and food resources. The impact of cats is not much better. Other published research shows that feral cats actively select hunting grounds in recent fire scars. Interestingly, they only do this when the fire has been particularly intense, leaving no pockets of unburnt vegetation behind for native animals to hide in. To make matters worse, cats specifically select the burnt areas where small mammals, their preferred prey, are highly abundant. These are the areas that are of particular concern for conservation and where predators can do much damage, causing healthy mammal populations to decline rapidly. Management strategy It's clear we have a problem, but it's not all doom and gloom. There are ways to mitigate the interactive impacts of fire and predators. Firstly, we need to reduce the frequency of broad-scale, high-intensity bushfire events. Fires that burn at a lower intensity usually leave patches of vegetation unburnt. These patches can act as refuges for surviving wildlife. Mild fires are also often stopped by riparian and alluvial strips, which are again important refuge areas for a lot of small mammals post-fire. These refuges may hold the key to recovery of small animal populations, especially in places where the density of native fauna is high. Secondly, we know that predation often intensifies in the wake of a bushfire, so this should act as a trigger for intensive control of cats and foxes. Only by considering both of these actions can we plan to give our native wildlife a chance to survive both the bushfires and the attacks from predators that bushfires can attract.
Gateways determine which path is taken through a process; they control the flow of both diverging and converging Sequence Flows. That is, a single Gateway could have multiple input and multiple output flows. The term "gateway" implies that there is a gating mechanism that either allows or disallows passage through the Gateway: as tokens arrive at a Gateway, they can be merged on input and/or split apart on output as the Gateway mechanisms are invoked. If the flow does not need to be controlled, then a Gateway is not needed. Gateways, like Activities, are capable of consuming or generating additional control tokens, effectively controlling the execution semantics of a given Process. The main difference is that Gateways do not represent 'work' being done, and they are considered to have zero effect on the operational measures of the Process being executed (cost, time, etc.). All Gateways are represented with a diamond shape, with different icons within to distinguish the type of Gateway. In BPMN we can divide Gateway elements into the following categories: A diverging Exclusive Gateway (or XOR Gateway) is used to create alternative paths within a Process flow. For a given instance of the Process, only one of the paths can be taken. When the execution of a workflow arrives at this gateway, all outgoing sequence flows are evaluated in the order in which they are defined. The sequence flow whose condition evaluates to true is selected for propagating the token flow; note that only the first such flow is taken. The following diagram shows an exclusive gateway that will choose one sequence flow based on the value of a property, in this example the invoice amount. Only two flows have conditions on them, going to CFO Approval and Finance Director Approval. The last sequence flow has no condition and will be selected by default if the other conditional flows evaluate to false. The event-based gateway can also be used to instantiate a process. When this is the case, the Event-Based Exclusive Gateway icon has only a single circle within the diamond. When used to start a process, the Event-Based Exclusive Gateway allows the process to start in several ways based on the event that triggers it. A parallel gateway is used to visualize the concurrent execution of two or more activities in a business flow. A parallel gateway models a fork into multiple paths of execution, or a join of multiple incoming paths of execution. A parallel gateway can have both fork and join behavior if there are multiple incoming and outgoing sequence flows for the same parallel gateway. In this case, the gateway will first join all the incoming sequence flows, before splitting into multiple concurrent paths of execution. The following diagram shows a definition with two parallel gateways. An inclusive Gateway specifies that one or more of the available paths will be taken. They could all be taken, or only one of them. The first OR gateway represents the control of the flow of the process along one or more paths in the model. The second OR gateway represents the reconnection of those paths and the continuation of flow. An exclusive event-based gateway is used to branch a process when alternative paths are determined by events (various messages or signals) rather than by conditional flows. This can happen when the decision about one of the alternative paths is taken by someone outside the process.
A contract-signing process expects a signal regarding the client's decision during the negotiation process. Further development of the process depends on this decision. With an exclusive event-based gateway, the decision is made based on whichever associated intermediate event occurs first. A complex decision gateway allows for a more expressive decision within a business process. Multiple factors, rules, and analyses can all combine to yield results. The analysis should result in at least one path always being taken. A student takes an SAT examination. If the student scores under 800 (possible scores range from 200 to 1600), the student will enroll in an expensive class to improve his test score and then retake the exam. If the student performs moderately, he will read a low-cost book designed to help him improve his score and then retake the exam. If the student scores above 1000, he will immediately attend university. A Parallel Event-Based gateway is similar to a parallel gateway: it allows more than one path of the process to happen at the same time. It is important to note that while the Event-Based Parallel Gateway will allow multiple events to pass through and start the corresponding portion of the process, it does not wait for all of the events to arrive. That is, it does not wait and synchronize the events before the start of each processing path is permitted. Event Gateways can be used to instantiate a Process. By default the Gateway's instantiate attribute is false, but if set to true, then the Process is instantiated when the first Event of the Gateway's configuration is triggered. In this example, if your Bank Manager Approval event is triggered then the Increase Overdraft process will be executed. We now know how to use the seven different types of gateways in BPMN modeling. Gateways can define all the types of Business Process Sequence Flow behavior.
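To make the exclusive (XOR) gateway semantics described above concrete, here is a minimal Python sketch, not tied to any particular BPMN engine or its API: outgoing flows are evaluated in the order they are defined, the first condition that evaluates to true is taken, and a flow without a condition serves as the default. The invoice-amount scenario mirrors the example discussed earlier; the threshold values and the default target name are assumptions added for illustration.

```python
# Minimal sketch of exclusive (XOR) gateway semantics; illustrative only,
# not the API of any real BPMN engine.

def exclusive_gateway(token, flows):
    """flows: ordered list of (condition, target); condition=None marks the default flow."""
    default_target = None
    for condition, target in flows:
        if condition is None:
            default_target = target       # remember the default, keep evaluating
        elif condition(token):
            return target                 # first condition that evaluates to true wins
    return default_target                 # nothing matched: take the default flow

# Invoice-approval routing as in the example above; thresholds are illustrative.
flows = [
    (lambda t: t["amount"] > 50_000, "CFO Approval"),
    (lambda t: t["amount"] > 10_000, "Finance Director Approval"),
    (None, "Default Approval"),           # unconditioned default flow (name assumed)
]

print(exclusive_gateway({"amount": 75_000}, flows))   # CFO Approval
print(exclusive_gateway({"amount": 20_000}, flows))   # Finance Director Approval
print(exclusive_gateway({"amount": 5_000}, flows))    # Default Approval
```

A parallel gateway, by contrast, would activate every outgoing flow rather than stopping at the first true condition, which is why it needs no conditions at all.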
https://www.visual-paradigm.com/guide/bpmn/bpmn-gateway-types/
Keywords: Environmental performance, Concrete, Life-cycle assessment, Global warming potential (GWP), Finite element for thermal analysis. URI: https://www.researchgate.net/publication/265292901_Environmental_and_Thermal_Performance_of_a_Typical_Concrete_Based_External_Wall_System ; https://hdl.handle.net/11511/78840. Conference Name: 11th International congress on advances in civil engineering, 2014. Collections: Department of Civil Engineering, Conference / Seminar. Suggestions (OpenMETU Core): Life cycle assessment of a basic external drywall system in Turkey. Akgül, Çağla; Özcelik, Gökçe (2014-10-24). The construction sector, being one of the greatest consumers of raw materials, has been under the spotlight for many years in parallel to the increased environmental awareness. Specifically, the building of walls is of critical importance. For each wall element, there are different manufacturing steps; but, in general, the emissions from a wall element are dominated by one or two manufacturing steps. For example, for concrete, both energy use and calcination in cement manufacturing are in focus; for clay brick... Local air quality impacts due to downwash around thermal power plants: Numerical simulations of the effect of building orientation. Kayın, Serpil; Tuncel, Süleyman Gürdal; Yurteri, Coşkun (1999-09-01). One of the primary adverse environmental impacts associated with power generation facilities, and in particular thermal power plants, is local air quality. When these plants are operated at inland areas, the dry type cooling towers used may significantly increase ambient concentrations of air pollutants due to the building downwash effect. When one or more buildings in the vicinity of a point source interrupt wind flow, an area of turbulence known as a building wake is created. Pollutants emitted from relative... Impact Assessment of Urban Regeneration Practices on Microclimate in The Context of Climate Change. Akköse, Gizem; Balaban, Osman; Department of Earth System Science (2022-2-10). The annual ever-increase in population, urbanization, industrialization, and CO2 or greenhouse gas emissions brings with it important environmental crises. Climate change and urban heat islands (UHI) are crucial environmental crises that have serious consequences on the performance of the built environment and the comfort and health of users. Moreover, mass migrations from urban areas and unhabitable cities are envisaged as a result of climate change. To minimize the impacts of climate change and ensure... Parametric analysis of BIM-based building energy performance for supporting multi-objective optimization. Can, Esra; Akçamete Güngör, Aslı; Department of Civil Engineering (2022-5-11). Building energy efficiency comes into prominence as buildings constitute a significant portion of world energy consumption and CO2 emissions. To achieve energy-efficient buildings, energy performance assessments should be conducted meticulously, yet it is difficult to comprehensively estimate the buildings' energy consumption since energy performance assessments are complex multi-criteria problems that are affected by many factors such as building orientation, envelope design, climatic conditions, daylight ...
Energy consumption of a day-occupied building utilizing phase changing material incorporating gypsum boards. Akgül, Çağla; Gürsel Dino, İpek (2018-09-14). In the architecture, engineering and construction industry, there is an urgent need for a shift to low-energy, sustainable buildings to address the environmental concerns, increase occupant comfort demands and lower the operational cost of buildings. In this direction, one of the most effective technologies is using PCMs (phase change materials) as part of building envelopes. PCMs can increase building performance by increasing the thermal storing capacity of buildings by decreasing the heat transfer betwe... Citation: A. Sassani, Ç. Akgül, and O. Pasaoglu, "Environmental and Thermal Performance of a Typical Concrete Based External Wall System," presented at the 11th International congress on advances in civil engineering, 2014, Istanbul, Turkey, 2014, Accessed: 00, 2021. [Online]. Available: https://www.researchgate.net/publication/265292901_Environmental_and_Thermal_Performance_of_a_Typical_Concrete_Based_External_Wall_System.
https://open.metu.edu.tr/handle/11511/78840
The Enlightenment was an intellectual movement concerned with such things as God, nature, human beings and reason. At the center of Enlightenment thought was the use of reason, the power by which humans understand the universe and improve their own condition. The Enlightenment included a range of ideas centered on reason as the primary source of knowledge and advanced ideals such as liberty, progress, toleration, fraternity, constitutional government and the separation of church and state. In this regard, with the Enlightenment, Western civilization fortified its intellectual capacity in a broad sense. The way the West understood art, literature, philosophy and politics changed because of the Enlightenment. It was certainly an intellectual shift and improvement. Not only did these things change, however, but so did tradition and the way people view tradition, and this point deserves to be spelled out explicitly. Enlightenment thought put the human being at the center of the world. Before the Enlightenment, the center of the world was religion, its institutions and its revelation. Religion encapsulates tradition within it. The Enlightenment razed this to the ground with thoughts and ideas that included secularism, individuality, non-collectivism and an intellectual hatred of tradition. Although the Enlightenment is often seen as the sole era that changed everything, one cannot ignore the influence of previous philosophical stages such as the late medieval era, the Renaissance and the Protestant Reformation. The intellectual and political structure of Christianity, seemingly impregnable in the Middle Ages, fell in turn to the assault made on it by humanism, the Renaissance, and the Protestant Reformation. Humanism bred the experimental sciences of Francis Bacon, Copernicus, Galileo, Descartes, Leibniz and Newton. The Renaissance rediscovered much of classical culture and revived the notion of humans as creative beings, and the Reformation, more directly but in the long run no less effectively, challenged the monolithic authority of the Roman Catholic Church. For Martin Luther, as for Bacon or Descartes, the way to truth lay in the application of human reason. Received authority, whether of Ptolemy in the sciences or of the church in matters of the spirit, was to be subject to the probings of unfettered minds. Since this article is not about the Renaissance and the Reformation, it is necessary to continue with the Enlightenment era and to touch on its central figures. René Descartes' rationalist philosophy laid the foundation for Enlightenment thinking. He was a skeptical philosopher and promoted skepticism about certain things. His skepticism was refined by John Locke's Essay Concerning Human Understanding (1690) and David Hume's writings in the 1740s. As for Immanuel Kant (1724–1804), his writings were mostly about rationalism, religious belief, individual freedom and political authority, which he sought to reconcile. His main work is the Critique of Pure Reason. This work is so powerful that it is still read as a textbook in philosophy classes at many universities. He affected Western thought in a groundbreaking way. His main impact may be through the notion of human dignity. While speaking about Kant, we have to mention German idealism, in which unique philosophers such as Hegel, Fichte and Schelling took part. In spite of the fact that there are several differences between these figures, they share a commitment to idealism. Kant's transcendental idealism is important in this sense.
Transcendental idealism is about the difference between appearances and things in themselves. On the other hand, there is the notion of "absolute idealism", which Hegel, Fichte and Schelling radicalized and transformed. German idealism addressed major areas of philosophy such as moral philosophy, political philosophy and metaphysics. Although German idealism is closely related to developments in the intellectual history of Germany in the eighteenth and nineteenth centuries, such as classicism and romanticism, which involved intellectuals such as Goethe, Herder and Schiller, it is also closely related to larger developments in the history of modern philosophy. German idealism also shaped the course of philosophy for the rest of the 19th century. Later 19th-century figures, such as Marx and Freud, were thoroughly influenced by the ideas of the prominent German idealists. Even in the 20th century, the idealist theories of Kant and the post-Kantians were still widely debated and studied by modern philosophers. Although our way of thinking has come a long way since the 19th century, one can still clearly see the impact of idealism in today's philosophical and intellectual movements.
NORFOLK, Va. — Norfolk Public Library's Horace C. Downing branch honored the late Dr. Martin Luther King Jr. with a program on African-American spirituals and how the songs were intertwined with the civil rights movement. North Carolina native Calvin Earl led the program. He's a nationally renowned musician known for his African-American spirituals and for his successful lobbying of Congress to recognize them as a national treasure. He performed a selection of songs that date back to slavery and explained how these gospels became freedom songs during the civil rights movement of the 1950s and 1960s. "I shall not be moved – talking about Dr. King," he explained to the audience. "When you look at your opponent, your aggressor in the face with love – you're going to get it back, sooner or later." Audience members were encouraged to sing along. Earl paid homage to one of the most tumultuous but critical movements in U.S. history: a decades-long struggle to end legalized racial discrimination. The library's youth associate Carolyn Miller said the performance is meant to demonstrate how African American spirituals transformed into songs of the civil rights movement, and how Dr. King's message of equality is still relevant today. "We're still having turmoil in the world," Miller said. "[Earl] explained how important the music was, even from the time of slavery on up to the modern-day civil rights movement with Martin Luther King Jr." She said she hopes Dr. King's legacy will continue to inspire future generations. "I encourage the kids to have dreams and turn those dreams into something positive and something they can use and carry on in their legacy," she said. Author: Dana Smith. Published: 1:24 PM EST January 19, 2020. Updated: 7:47 PM EST January 19, 2020. Norfolk library celebrates Dr. Martin Luther King Jr. with program on African-American spirituals, civil rights movement. I most enjoyed the combination of historical content as it relates to spirituals… would love to attend a full course on the relationship of spirituals to the Underground Railroad… very relevant to my work as historian, educator and docent… a positive learning experience. I very much appreciate the honor brought to the table in recognizing the importance and cultural value in planning and presenting this program. I liked Calvin Earl's performing of the songs. His presenting the background of a song first gave better understanding, but the performance was the crown jewel. Thank you so much for bringing Mr. Earl to the library. An awesome evening of learning and enjoyment!
https://www.calvinearl.com/reviews-spirituals-concert/
Living in the Pacific Northwest means that we are close to saltwater, charismatic stand-alone mountains, and forests. These landscape features lend themselves to unique weather patterns and other notable events that we can see every year. Here are three of the most noticeable late-summer happenings. 1) Bioluminescent Plankton Have you ever seen the waters of the ocean or the Puget Sound glow during a dark summer night? When conditions are right, a type of bioluminescent phytoplankton blooms and causes waves or other water disturbances to glow a bright blue. These phytoplankton thrive in the nutrient-rich waters of the west coast. When the phytoplankton have enough energy and are present in large quantities, their glow is visible to the naked eye. The conditions behind the blooms are not well understood, but their occurrence can still be predicted. When shorelines are warm (most often in late summer), it's possible to view the glow in a number of places in oceans, bays, and the Sound. The best viewing conditions occur near high tide on dark shorelines in places where the water is relatively still. 2) Mt. Rainier's "Hat" There are times in late summer when we have cloudless days… except for one cloud. Across a stretch of blue sky, a peculiar cloud hovers over the top of Mt. Rainier. It can look like a disc, or a funnel, or a pyramid, but it clearly has a relationship with the mountain. This type of cloud is a useful indicator of the weather ahead, and is called a lenticular cloud. Lenticular clouds form when moderately moist air rises up and over a peak. Around Mt. Rainier, air can rise from all sides of the mountain and condense into a lens-shaped cloud above it. When the air is moved by wind, the movement of air pushed over the peak mimics a wave. Imagine the mountain as a bump in the air's road. The air rises on one side and then descends on the opposite side of the peak. If the temperature at the top is cool enough, moisture-filled air in an otherwise cloudless sky rises and condenses into a cloud, and then sinks again. If you're near the lenticular clouds, it can indicate that rain may be in the forecast. These clouds often appear in late summer or early fall because they provide a visual cue that moisture is moving into the area after a dry period. Depending on how far away you are, these clouds can be an early warning sign of coming rain. 3) Summertime Haze Why do we often see hazy skies in the summer? Haze is typically thought of as a diffuse layer of suspended particles in the atmosphere that restricts visibility. The answer is pretty straightforward. Haze is the visual result of air pollution, and it is more noticeable during the summer. Due to the recent increase in wildfires, we've come to expect smoky skies at some point every summer. Smoke and combustion particulates from fires deflect light and create a hazy or smoky look in the sky. When smoke, dust, or combustion particulates are suspended in the air, that is considered a "dry haze". But haze doesn't have to be connected to a forest fire. A major cause of haze is human-emitted air pollution. Human-induced haze is worse in the summer because sunlight and humid air favor the creation of chemically stable forms of pollution, called "wet haze". Chemicals from manufacturing, combustion, or other sources form tiny droplets surrounded by water in the atmosphere. Sulfur dioxide, a common air pollutant, is an example of a chemical that undergoes this process. In droplet form, the pollutants become stable in the atmosphere.
These droplets stay in the air for days, deflecting light and creating a hazy look in the sky. Haze is also more prevalent during the summer because it is more visible: light-deflecting pollution stands out in the sky because there are fewer clouds to hide it and less rain to pull it out of the atmosphere. It's worth noting that the atmosphere on any given day can contain a mix of smoke, fog, pollution, and more. Haze may be part of the picture, but not the only thing impairing visibility. The term "smog" often refers to human-induced pollution, or to the whole atmospheric soup of visibility-impairing pollution. Interestingly, blue light is preferentially deflected in haze because it has a shorter wavelength. The reduction of blue light is why sunlight appears more red through haze or smoke. When you see bioluminescence, lenticular clouds, or haze in late summer or fall, remember that they are the effects of large climate processes at work.
https://capitollandtrust.org/understanding-climate-3-recognizable-late-summer-phenomena/
Imagine this: you're swimming at your favorite beach and you feel something slide across your foot. You panic, but only for a moment, because you realize that what you were touching was just a long, spindly water plant. Sure, you may have seen such plants washed up on beaches, or maybe you have removed them from a boat as you left the water for the day. But have you ever stopped to think about what these plants actually do? It turns out they support an entire ecosystem under the water! The term used for a rooted aquatic plant that grows completely under water is submerged aquatic vegetation (SAV). These plants occur in both freshwater and saltwater, but in estuaries, where fresh and saltwater mix together, they can be an especially important habitat for fish, crabs, and other aquatic organisms. SAV is a great habitat for fish, including commercially important species, because it provides them with a place to hide from predators and it hosts a buffet of small invertebrates and other prey. SAV essentially forms a canopy, much like that of a forest but underwater. Burrowing organisms, like clams and worms, live in the sediments among the roots, while fish and crabs hide among the shoots and leaves, and ducks graze from above. It has been estimated that a single acre of SAV can be home to as many as 40,000 fish and 50 million small invertebrates! SAV in the Chesapeake Bay One of the places we work to protect these aquatic plants, and other habitats important for fish, is the Chesapeake Bay (Maryland and Virginia). The Bay is home to several different species of SAV. They live in the relatively fresh waters near the head of the Bay and down to its mouth, which is as salty as the ocean. Approximately 90 percent of the historical extent of SAV disappeared around the mid-20th century due to poor water quality, coastal development activities, and disease. Since then, there have been major efforts to reduce pollution to the Bay and help SAV reestablish in areas where it was historically found. We regularly work with the U.S. Army Corps of Engineers to ensure that coastal projects avoid damaging this important habitat. For example, the Corps might propose to issue a permit to a private landowner to build a structure such as a pier or breakwater in SAV. We would then make recommendations to avoid these areas. If the areas are unavoidable, we advocate for different construction approaches to minimize impacts such as shading or filling. Dead Zones Giving You Heartburn? Have an Antacid! One amazing recent finding is that SAV actually changes the acidity of near-shore waters. A recent study in the journal Nature describes this phenomenon in the Chesapeake Bay. SAV located at the head of the Bay reduces the acidity of water in areas downstream. Areas of low oxygen form as fish, bacteria, and other aquatic organisms respire: as they breathe, they take up oxygen and release carbon dioxide as part of normal biological functioning. These "dead zones" are areas with oxygen levels below what is necessary to support fish, shellfish, and other aquatic life. During the warm summer months in the Chesapeake Bay, there are many aquatic organisms respiring. This results in much of the available oxygen being consumed, leaving an excess of dissolved carbon dioxide. Another effect of all this carbon dioxide is that parts of the Bay become more acidic. This is stressful for many organisms, especially those with shells like oysters and mussels. That's where SAV comes in.
During the growing season, SAV absorbs dissolved carbon dioxide. With help from the sun's rays, the plants turn that carbon into leaves, shoots, and roots much like other plants. In the process, oxygen is released into the water, as well as small crystals of calcium carbonate. These crystals essentially behave as antacids as they flow into acidic waters downstream. This is a great example of how conservation of one resource can have cascading effects. SAV's carbon filtration benefits commercial fisheries such as oyster aquaculture and, ultimately, the entire Chesapeake Bay ecosystem. SAV as a Carbon Sponge Increasing carbon dioxide levels in the atmosphere are also a major contributing factor in global climate change. There is a lot of interest in harnessing the power of our natural biological environment to soak up this excess carbon. SAV is an important piece in this puzzle. Aquatic plants are highly productive, which means they absorb a lot of carbon dioxide. The carbon captured by these plants has been termed "blue carbon" since it is primarily stored in the water. Blue carbon has been receiving a lot of attention lately as scientists have discovered that aquatic plants are very efficient at storing carbon in sediments. They also keep it there over long periods of time. Studies have estimated that underwater grasses globally can store approximately 10 percent of the carbon in the entire ocean in the form of rich aquatic soils. Ultimately, this means that efforts to protect and restore SAV can also help to reduce the effects of climate change. Take a Second Look at SAV Maybe next time you feel a spindly plant brush your foot in the water, you won't run away. Instead, dive down and see what critters may be hiding among the underwater grasses! You might be surprised to find a crab lumbering through the stems or a school of young fish cruising through the green leaves.
https://www.fisheries.noaa.gov/feature-story/submerged-aquatic-vegetation-habitat-worth-sav-ing
My professional resume, including my main projects completed at each organization. Education Certification in Nonprofit Management University of Texas, 2014 Master's Degree in Art History University of North Texas, 2010 Bachelor's Degree in Creative Writing Texas A&M University, 2008 VOLUNTEER WORK Communications Chair Lena Pope Young Professional Advocates (YPAs), Aug. 2016-Sept. 2017 Volunteer Curator / Display Designer Camp Fire First Texas, Dec. 2013 Keynote Speaker Tarleton University's Texas Social Media Research Institute Conference, Nov. 2013 Résumé PROFESSIONAL SUMMARY • Earned a master's degree in Art History and has more than a decade's experience doing PR, branding and marketing across traditional and new media for museums, nonprofits and government. • Specialties include brand development, community engagement, and inspiring people to care about organizations or causes through relatable, well-written copy. • Described by a supervisor as "dedicated" and "achievement-oriented", having developed a broad, successful track record that includes fundraising, sales and organizational awareness campaigns using a combination of traditional and new media. • Built and currently operates a side project dedicated to making museum experiences more relatable and engaging to different museum audiences, and visits museums around the country to keep current with the industry. City of Fort Worth COMMUNICATIONS OFFICER (Sep. 2014 - Present) CONTENT STRATEGY SPECIALIST (Jan. 2014 - Sep. 2014) • Strategized and wrote the organization-wide communication standards and branding guidelines for the sixteenth-largest city in the U.S., while ensuring alignment with the city's strategic communications goals. • Cross-departmentally manages marketing for the Park & Recreation Department and other departments without a dedicated communication staff. • Developed and implemented strategic communication protocol and standard operating procedures for all Parks department marketing, including social media. • Developed, organized and art directed the first quarterly Park & Recreation program catalog, now produced quarterly by the organization. • Oversees the creation of creative content for smaller campaigns (local community center events) and larger city-wide campaigns, providing art direction, organizing photo shoots and making data-supported media buys. • Fundraised for past Parks events, raising $1,700 in a single week. • Writes stories that are pitched to the media, and assists with the fulfillment of media inquiries. • Writes and manages the City of Fort Worth's internal communications. • Assists with communication strategy for the city website, and updates the website using a combination of HTML and CSS. FREELANCE MARKETING & DIGITAL MEDIA CONSULTANT THE AGGIE CENTURY TREE PROJECT (Dec. 2015-Present) • Completely re-developed the organization's business model for its transition from fundraising project to online business. • Designed, branded and built the AggieCenturyTreeProject.com website. • Managed email marketing campaigns, created and purchased print ads, and implemented automated infrastructure to simplify organizational processes. • Designed and executed a two-week targeted ad campaign that generated one-third of the business' total sales for 2016. THE BOTANICAL RESEARCH INSTITUTE OF TEXAS / BRIT (Jan. 2013 - Dec.
2013) • Developed, designed and executed a social media campaign for North Texas Day of Giving that raised $35,000 in one day – more than any other organization in Fort Worth’s Cultural District. • Organized, curated and installed BRIT’s Festive by Nature holiday exhibit, including all marketing and wall text copy. • Improved engagement on BRIT’s social media channels, updated website, wrote press releases, and wrote feature articles for the 2013 annual report. Modern Art Museum of Fort Worth ONLINE MEDIA COORDINATOR (July 2010 - Aug. 2012) ASSISTANT TO THE DIRECTOR (July 2010 - Dec. 2010) COMMUNICATIONS INTERN (Aug. 2009 - July 2010) • Researched and wrote The Modern Blog, creating engaging, educational content that thoughtfully interpreted the museum’s collection and exhibitions with a focus on audience relatability. • Managed museum social media, developing content calendars and strategy that led to the Modern’s Facebook page averaging 3,505 new fans per year, while its Twitter presence gained an average of 140 new followers per month. • Gave public tours of the museum’s collection and exhibits after completing docent training. • Served for 6 months as an assistant to the director, coordinating board meetings and travel arrangements, and organizing correspondence. UNIVERSITY OF NORTH TEXAS TEACHING ASSISTANT (Jan. 2009-May 2010) • Assisted with an art appreciation class, History of Communication Design class, and postmodernism class. • Evaluated student performance, supported professor during lectures, assisted with student questions. The Association of Former Students AT TEXAS A&M UNIVERSITY STUDENT ASSISTANT COORDINATOR FOR AGGIE MUSTER (April 2005 - Aug. 2008) • Worked closely with over 300 organizers worldwide to attract 250,000 Texas A&M former students to their local Aggie Muster through the coordination of event communications. • Wrote and distributed instructional handbooks, built relationships and answered questions via phone and email; oversaw the design and mailing of Muster invitations specific to over 150 Muster events. Interested in actual examples of my work? View my online portfolio.
https://www.museumpalooza.com/resume
- Teams start with $100,000 in their portfolio and $50,000 on margin.
- Teams have ten weeks to invest.
- To enter an order, a ticker symbol, the number of shares and the type of order are required.
- Stocks must have a price of at least $3.00 per share.
- Buy and sell orders can be in any quantity.
- Teams may not invest more than 25% of their total equity in any one company.
- 3x4 rule: by the end of the 4th week, each team should have at least 3 stocks in its portfolio and should continue to hold at least three for the remainder of the program. Teams that do not meet this requirement will not be eligible for awards.
- A 1% broker's fee is charged every time the team buys or sells. For example, if you buy 100 shares of a stock at $10 per share, you must pay 1% of $1,000, or $10, on top of the purchase price (see the sketch below).
- Teams may buy stocks on margin (borrow money). Interest of 8% is charged weekly on borrowed funds.
- No more than 100 buys or sells can be made.
- Teams are allowed to short sell and short cover.
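For teams checking the arithmetic, the broker's fee and margin-interest rules above can be sketched in a few lines of Python. This is an illustrative sketch only, not part of the official simulation platform; the function names and the $5,000 borrowed in the example are made up for the demonstration.

# Minimal sketch of the fee and margin arithmetic described in the rules above.
def trade_cost(shares, price):
    """Cash needed to buy `shares` at `price`, including the 1% broker's fee."""
    if price < 3.00:
        raise ValueError("Stocks must have a price of at least $3.00 per share.")
    principal = shares * price
    return principal + 0.01 * principal  # 1% fee charged on every buy or sell

def weekly_margin_interest(borrowed):
    """Interest charged each week on funds borrowed on margin (8% weekly)."""
    return 0.08 * borrowed

print(trade_cost(100, 10.00))          # 1010.0 -> $1,000 of stock plus a $10 fee
print(weekly_margin_interest(5000.0))  # 400.0  -> on a hypothetical $5,000 borrowed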
https://edu.stocktrak.com/icee/stock-market-simulation-rules/
Antique items, such as communion chalices, teapots or pendant lights, bear witness to the everyday lives of people in the past. Traces left by the work of earlier silversmiths or other professionals indicate the methods of production employed at the time; scratches, dents and other signs of wear and tear confirm that the artefact was actually used. While restoration seeks to return silver items to a specific historical condition, repairs are designed to make them functional again. Both procedures entail intervening in the object’s history, while at the same time opening a new chapter. With expertise and care, I restore, repair and conserve antique silver, liturgical hollowware, tableware and items made of non-precious metals. I maintain contacts with colleagues and collaborate with professionals in other disciplines. I would be happy to discuss your requirements.
https://www.barbaraamstutz.ch/en/info/restaurierung/
This would not be unusual if the vocabulary in an accounting course increases and becomes discipline-specific as students delve further into the course. Opportunities to get involved with sport at university are plentiful, and the UK has some world-class sporting institutions. What are the sources of noise in the classroom communication setting? Effective interpersonal communication equips business professionals with the skills they need to efficiently instruct employees on both the technical and soft skills necessary for them to carry out their duties. The skills students need for technological fluency. As other researchers also emphasize, the idea is not to simplify the concepts in content areas but rather to simplify the language used to convey the concepts taught (Chevalier et al.). The modal class is 3 and the mean response is 2. The test is designed to assess one's most intimate relationships, that being with a partner, friend, or family member. One reason is that these people are making a good impression on other group members instead of listening to the demands with full attention and self-discipline. There should be some room for flexibility. In her study she also found, when studying two groups of people, that the group who preferred online communication was perceived as less socially skilled than those who preferred face-to-face interactions (Munoz). The study found that heredity, mannerisms, accents and purpose of information are important determinants of a teacher's communication skills. The problem is so severe that it has led to the widely acknowledged decline in the standard of education in Ogun State and Nigeria at large. Olusegun is happily married with children. Until recently, communication skills were rarely seen as central to the overall learning process. Even in the most advanced schools in developed countries, communication skills are generally not considered central to the teaching and learning process. Language can have a significant effect on the ability to commit information to memory, which can greatly impact academic performance. He is prepared to answer questions that may not be answered throughout the lesson notes. The availability of instant communication seems to distract us from the communication opportunities in front of us. Gulf Perspectives, 3(1). Let us know in the comments section below. In this role, the teacher provides an opportunity to get things started. Influence of Communication (Avery): this sample represents the millennial generation and their knowledge of mobile technologies. Training: a significant amount of employee training occurs internally within a business organization. How can the effect of noise on the integrity of information in a classroom setting be reduced? Therefore, since the internet naturally offers all ICT users anonymity if they choose it, iPredators actively design online profiles and diversionary tactics to remain undetected and untraceable. The interpersonal form of communication is characterized by the presence of two or more individuals who have the capacity to supply information for others to act upon in a social and academic context. The teacher is the liaison that links the classroom with outside sources. It also affects our psychological well-being. Factors influencing communication skills. With the growth and expansion of social media, Dr. Limited research has emphasized the importance of language proficiency in enhancing performance in content-based cognitive skills.
This is because it determines whether the message has been effective and whether the desired action has taken place. Declarative knowledge involves knowing what something is, but excludes the ability to use that knowledge. The instrument was divided into two parts, section A and section B. Comprehension cognitive skills involve grasping the meaning of concepts and terms, while knowledge skills involve the ability to recognize items. Cyberbullying can also happen accidentally. It was recommended that steps be taken to improve teachers' communication skills. Tom Jenkins: chasing deadlines and running late to lectures are the most strenuous forms of exercise many students engage in. Research instrument: as stated above, the researcher intends to carry out a purposive study that will bear comprehensive results. Teacher-student communication may also affect students' extrinsic motivation. Regular teacher-parent communication provides parents with information about their child's performance. Possession of different skills, such as oral communication skills, has a great impact on the learning process itself, thus affecting students' academic performance. Impacts of oral communication competence: oral channels of communication are prompt and valuable in obtaining and exchanging information. Good communication skills will help you get hired, land promotions, and be a success throughout your career. Top 10 communication skills: want to stand out from the competition? These are the top 10 communication skills that recruiters and hiring managers want to see on your resume and cover letter. Deficits in skills such as pragmatics, organizational skills, sequencing information, critical thinking, making judgments and inferences, social appropriateness of interactions, and nonverbal communication affect listening, problem solving, reading comprehension, study skills, oral and written language, and social interactions. "The effect of new communication technology in amplifying political uses of academic research is discussed." "Public School Uniforms: Effect on Perceptions of Gang Presence, School Climate, and Student Self-Perceptions."
https://besawicuma.omgmachines2018.com/how-communication-skills-affect-academic-performance-10204ts.html
The Austrian theory of the competitive process argues that competition is a process of dynamic coordination among independently acting market participants, which promotes discovery by those participants of the availability of and need for exchangeable goods and services (Kallay 2004, p.82). In this respect, goods and services play a very important role in enhancing the competitive process under the Austrian theory. Therefore, producers focus on innovating and improving their goods and services to gain a competitive advantage over their rivals in the market. Thus, goods and services are vital in setting price levels, as opposed to the influence of dominant firms in the market. On the other hand, the Post-Keynesian theory of the competitive process stipulates that the prices of goods and services are determined by the dominant firms in a particular market rather than by the movement of goods and services in those markets. Therefore, innovation and further development of goods and services play a minimal role in setting competitive prices. Consistent with this, competition under the Post-Keynesian theories is inherently about dominance in the market. Note that the dominant firms in the market play a vital role in setting the stage for competition. In line with this, competing firms under the Post-Keynesian theory must possess a certain influence in the market rather than depending on self-determining market prices. Therefore, the Post-Keynesian theory of the competitive process relies on mark-up pricing as the mechanism through which the competitive process operates. By contrast, the Austrian theory of the competitive process argues that the market process is driven by entrepreneurial activity. Furthermore, competition is seen through the lens of entrepreneurial rivalry. In other words, entrepreneurial rivalry is one of the most important factors that promote the competitive process. According to Prinz, Steenge & Vogel (2001, p.16), competition should be conceived as an ongoing process of entrepreneurial rivalry rather than as the existence of perfect competition. Uncertainty is one of the aspects of the competitive process under the Austrian theories. In other words, firms are uncertain about the future but have to speculate on market conditions, leading to a spontaneous process of market activity that ultimately results in competition. Prinz, Steenge & Vogel (2001, p.17) argue that for the entrepreneurs involved, competition implies a discovery process or 'a voyage of exploration into the unknown'. However, these firms are motivated by the prospect of profits from participating in this uncertain process. Furthermore, creativity and innovation are paramount if the firms involved desire to remain in the competitive market. Note that creativity contributes to changes in the prices of goods and services offered on the market, with firms changing their prices every time they introduce a new product or modify an existing one (Kuper 2003, p.201). This contributes to competition. By contrast, the Post-Keynesian theories insist that competition cannot be achieved under conditions of creativity, innovation and uncertainty. In this regard, there is an argument that many of the greatest economic evils of our time are the fruits of risk, uncertainty and ignorance (Dunn 2008, p.84).
Furthermore, particular individuals fortunate in situation or abilities are able to take advantage of uncertainty and ignorance, and this contributes to increased unfairness in the market rather than promoting competitive activities and behaviours (p.84). In line with this, the Post-Keynesian theory of the competitive process dismisses the notion that the uncertainty perceived in the market is the result of a competitive environment. To Post-Keynesian theorists, uncertainty promotes an unfair competitive environment, making it impossible to develop the competitive process in a particular market.
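To make the Post-Keynesian side of the contrast concrete, mark-up pricing is commonly summarized as price = unit cost x (1 + mark-up rate), with the mark-up administered by the dominant firm rather than discovered through rivalry. The short Python sketch below only illustrates that formula; the cost and mark-up figures are hypothetical and are not drawn from the sources cited in the essay.

# Illustrative sketch of Post-Keynesian mark-up (administered) pricing.
def markup_price(unit_cost, markup_rate):
    """Administered price = unit cost * (1 + mark-up rate)."""
    return unit_cost * (1.0 + markup_rate)

# Hypothetical figures: a $40 unit cost with a 30% mark-up gives a $52 price.
print(markup_price(40.0, 0.30))  # 52.0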
https://exclusivepapers.com/essays/economics/theories-of-the-competitive-process.php