Two-dimensional cubic nonlinear coupling estimation in nonzero-mean multiplicative noise

This paper describes an algorithm based on cyclic statistics for estimating the two-dimensional (2-D) frequencies of harmonic processes in which cubic nonlinear coupling exists. We define a fourth-order time-average moment spectrum, which can be applied to obtain the coupled and coupling frequencies in nonzero-mean multiplicative noise. The method places no constraints on the distribution or the color of the noise. Simulation examples illustrate the algorithm.
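As a loose illustration only (not the paper's cyclic-statistics estimator, which uses a fourth-order time-average moment spectrum), the basic task of recovering a 2-D frequency pair from a harmonic field can be sketched by peak-picking a 2-D periodogram. The grid sizes and frequencies below are invented for the example.

```python
import numpy as np

# Simplified sketch: locate a single 2-D harmonic's frequency pair
# from the peak of its 2-D periodogram (noise-free, on-bin case).
N1, N2 = 64, 64
f1, f2 = 13 / 64, 22 / 64                       # assumed normalized frequencies
n1, n2 = np.meshgrid(np.arange(N1), np.arange(N2), indexing="ij")
x = np.exp(2j * np.pi * (f1 * n1 + f2 * n2))    # complex 2-D harmonic field

P = np.abs(np.fft.fft2(x)) ** 2                 # 2-D periodogram
k1, k2 = np.unravel_index(np.argmax(P), P.shape)
f1_hat, f2_hat = k1 / N1, k2 / N2               # peak bin -> frequency estimates
```

The real algorithm replaces the periodogram with a fourth-order cyclic moment spectrum so that the coupling relation survives the multiplicative noise, but the peak-search step is conceptually the same.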
// All retrieves all items of the specified kind, with optional caching.
func (w *FeatureStoreWrapper) All(kind ld.VersionedDataKind) (map[string]ld.VersionedData, error) {
	if w.cache == nil {
		return w.core.GetAllInternal(kind)
	}
	cacheKey := featureStoreAllItemsCacheKey(kind)
	if data, present := w.cache.Get(cacheKey); present {
		if items, ok := data.(map[string]ld.VersionedData); ok {
			return items, nil
		}
	}
	items, err := w.core.GetAllInternal(kind)
	if err != nil {
		return nil, err
	}
	return w.filterAndCacheItems(kind, items), nil
}
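The Go wrapper above implements a read-through cache: consult the cache if one is configured, fall through to the underlying store on a miss, and populate the cache on the way out. A minimal Python sketch of the same pattern (class and method names here are illustrative, not part of any real SDK):

```python
class CachedStore:
    """Read-through cache over a slower backing store."""

    def __init__(self, core, use_cache=True):
        self.core = core                    # object with a get_all(kind) method
        self.cache = {} if use_cache else None

    def all(self, kind):
        if self.cache is None:
            return self.core.get_all(kind)  # caching disabled: always hit the core
        key = ("all", kind)
        if key in self.cache:
            return self.cache[key]          # cache hit
        items = self.core.get_all(kind)     # cache miss: fetch from the core
        self.cache[key] = items             # populate for subsequent calls
        return items


class FakeCore:
    """Backing store stub that counts how often it is queried."""

    def __init__(self):
        self.calls = 0

    def get_all(self, kind):
        self.calls += 1
        return {"item": kind}
```

With caching enabled, repeated calls for the same kind hit the backing store only once; the `w.cache == nil` branch in the Go code corresponds to constructing with `use_cache=False`.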
In spurts and bursts and flashes, a sublime novelist at work reveals herself. In Annie Proulx’s new novel, there are breath-taking pages and set pieces of extraordinary power. A man on board a ship, as the temperature plummets, sees all those around him embedded in ice before the catastrophe falls on him; a logging run down a river blocks, builds and explodes with the force of missiles; a wall of fire sweeps across a forested wildness. There are individual chapters of great dramatic force, as Proulx’s people confront the possibilities before them and produce their own solutions. But are those flashes enough? Barkskins in the end seems to me a work of profound error, in which a novelist’s conscious decisions have done a good deal to suppress what that novelist can do best. Proulx is an unpredictable writer, whose career has sometimes tested the patience of even the most sympathetic admirer. In my view, she has written two of the great American novels of the last 25 years, in The Shipping News and That Old Ace in the Hole. They are books primarily of place, where a dauntless spirit of inventive improvisation and passionate devotion to arcane detail rules over more ordinary virtues. But there is, too, a serious disappointment in Accordion Crimes, where, as in her promising first novel, Postcards, an arbitrary decision about the novelistic structure cramps her freedom. When she is good, as in her great short story ‘Brokeback Mountain’, it’s because her novelist’s eye follows a character into their world, not quite knowing what they will find there; when she fails, it is because she has decided that it would be interesting to fill a predetermined shape, like a model village, with suitable inhabitants. Proulx’s story begins with two indentured servants arriving in Canada from France in 1693. They are effectively white slaves to a landowner, Trépagny, for a theoretically limited period, after which they will have earned their freedom and the means to support themselves. 
One, René Sel, stays where he is, marrying into the local Indian people, the Mi’kmaq. The other, Charles Duquet, soon escapes and inserts himself into the global networks of trade and commodity, running timber and furs from Canada to China. The novel concerns itself with the two characters’ descendants and their engagements with trade and land as the world changes around them. One family is destroyed by the white man’s imperatives; the other rules the world through despoliation. Three hundred years is a colossal period for any narrative to try to cover. Another recent novel by an ambitious American woman novelist, Jane Smiley, also tried to narrate a long stretch of time — in her case a century. It poses a number of problems for the novelist. No character will be given much space to breathe, and some are born and die of old age with disconcerting rapidity. Proulx’s novel succeeds when she allows herself to follow a single life with some relaxed spaciousness: those of the two original men, for instance, or of Lavinia Duke, who (as 19th-century heroines tend to be these days) is a strong woman making her way in a man’s world — in this case, the logging business. But when Proulx runs quickly through quite similar generations — René Sel’s Mi’kmaq children and grandchildren, for instance, or the rush of plutocrats that follow Lavinia — we struggle. A novel can’t just be about trees, and at the end — when the Mi’kmaq descendants are reduced to explaining the state of forestry and conservation to each other — the limits of Proulx’s approach are all too apparent. On this scale, and with this kind of treatment, whole groups of people are inevitably flattened out — I regret to say, in accordance with preconceived ideas. In the first half of the book, the British are simply remote villains, disgusting thugs and murderers who are never worth engaging with; by the end, the novel is still at it, and ‘English women have no sense of humour’, preposterously. 
Native Indians, on the other hand, are invariably wise, understand everything from saplings onwards, live in wikuoms (not wigwams, please) and are particularly good at medicine. One of the characters dies from an infected wound from a nail in his boot. I would have thought that the novelist might admit that the forest medicine of her wise women in wikuoms would be less effective in this case than the antibiotics that the thuggish English would go on to discover, but there you go. On the other hand, one important part of American history goes almost unnoticed; it is extraordinary how little impact the institution of slavery and the fight to abolish it makes on Proulx’s world. Of course, the novelist is not bound to give a full account of history, and Proulx is a famous exponent of detail. Here, the historical detail takes an interesting form: she is interested in pioneers, and repeatedly appears to be teasing the reader with apparent anachronisms. Almost always, they turn out to be (just about) possible. The invention of mayonnaise is noted; ‘newspapers’ in Boston in 1710; a sea captain prescribing ‘scurvy-preventing lemon juice’ in the 1720s; and a 17th-century wedding serves champagne and whisky. (‘Champagne’ would not, I think, have been called that exactly, and would certainly not have resembled the modern drink.) Could there really have been a medical doctor in 18th-century America called Mukhtar? There were a few Asians in North America at that time, but as far as I know they were all domestic servants. Did loggers in camps in the 1750s really drink tea, and offer it to each other? The one indubitable anachronism I found was that no one would have described houses in London as ‘Georgian’ in 1825 — but that falls into the territory of the despised enemy. But if Captain Luther Pearfowle’s storm was imaginary, the tempest that caught Bludwesp was terrifyingly real. Great seas rose and fell on them with shuddering crashes. 
The bare masts groaned and the rigging ropes howled. A black monster swelled on the horizon, raced towards them…. Really, it would have been better just to write ‘There was a storm’ rather than extend the description in this half-hearted manner. So, when Proulx is detailed and leisurely, she can be most compelling — and what is That Old Ace in the Hole but a gigantic shaggy-dog story? But when she feels she has to get through 50 years in 100 pages, and can’t afford to dawdle, the writing grows generalised and summary — the vision of the statistician, the historian of society, the ecologist, and not that of the novelist fascinated by the individual case. I finished this book with real regret: that such an interesting novelist should have devoted her time to a project that was not just beyond her powers to bring off, but one actually inimical to what the novel form does best.
Go-Jek, the on-demand motorbike taxi service that rivals Uber and Grab in Indonesia, has closed $550 million in new funding, a source close to the company told TechCrunch. The deal values Go-Jek at $1.3 billion (post-money) and could be announced as soon as this week, we understand. Go-Jek did not respond to a request for comment or confirmation. Update: The round has been confirmed. The company plans to spend the money growing its services businesses, and continuing to compete with its fierce rivals in Indonesia. It doesn’t appear that this round will fund an expansion outside of Indonesia, according to our source. The Wall Street Journal last month reported that Go-Jek was in talks to raise $400 million, with KKR and Warburg Pincus among the potential new backers. The startup’s existing investors include Sequoia Capital, DST Global and Singapore-based NSI Ventures. The deal makes Go-Jek one of the few unicorns in Southeast Asia. Other tech companies valued in excess of $1 billion include games firm Garena ($3.75 billion), Go-Jek rival Grab (estimated $1.5 billion-$1.6 billion) and Amazon-like shopping site Lazada ($1.5 billion valuation). Go-Jek was founded in 2010, but it didn’t begin to take off in a major way until 2014, as this profile explains. It then grew faster after it introduced a mobile booking app in early 2015. Go-Jek claims 200,000 motorbike drivers — known as “ojeks” in Indonesia — in its fleet across Indonesia, which is the world’s fourth most populous country, with more than 250 million people. The company is best known for hailing motorbike taxis on demand, a type of transportation popular in parts of Southeast Asia where heavy urban traffic makes two wheels faster than four. Demand is particularly high in Jakarta, which is home to some 30 million people and is one of the planet’s most congested cities. In addition to regular rides, Go-Jek offers on-demand services like food, shopping and package deliveries. 
The company said recently that it processed 20 million booking requests in June 2016, or around 667,000 per day. Internal documents seen by TechCrunch show it fulfilled 256,000 rides per day, as of April 2016. Grab introduced a similar service, GrabBike, to Indonesia last year, and Uber’s UberMoto competitor showed up in the country this year — but Go-Jek is widely acknowledged to be leading the pack in Indonesia. This new fundraising comes at an interesting time for Southeast Asia’s on-demand services. Uber’s decision to sell its China operations to Didi Chuxing is likely to mean that the U.S. company — valued at $66 billion — will divert more resources into Southeast Asia and India, potentially increasing the competition with Grab in its six markets and Go-Jek in Indonesia. Uber has already started launching new services across Southeast Asia and reached operational profitability in two countries. Grab, meanwhile, welcomed Didi’s deal with Uber as evidence that the U.S. ride-hailing giant can be defeated by local rivals. The Didi-Uber deal seemed to throw Grab’s alliance with Didi, which invested in the Singapore-based company last year, into jeopardy, but Didi is reportedly leading a new round of investment in Grab, according to both Bloomberg and The Wall Street Journal. That round could reportedly rise to $1 billion, while Grab has said that it has yet to touch the $350 million Series E round that it raised one year ago. Internal documents viewed by TechCrunch show that Go-Jek had $104 million in cash on its books as of March and that it spent $73 million over the previous six-month period. Given the increased competition it is likely to face, this new raise is hugely important if it is to continue to compete with its cash-rich rivals on subsidies and marketing.
A carbon dot doped lanthanide coordination polymer nanocomposite as a ratiometric fluorescent probe for the sensitive detection of alkaline phosphatase activity. The development of sensitive methods for alkaline phosphatase (ALP) activity analysis is an important analytical topic. Based on a stimulus-responsive lanthanide coordination polymer, a simple ratiometric fluorescence sensing strategy was proposed to detect ALP activity. A carbon dot (CD) doped fluorescent supramolecular lanthanide coordination polymer (CDs@Tb-GMP) was prepared from Tb3+ and the nucleotide ligand guanosine monophosphate (GMP). To construct a ratiometric fluorescence biosensor, the fluorescence of Tb-GMP was used as the response signal, and the fluorescence of the CDs was used as the reference signal owing to its good stability. When excited at 290 nm, the polymer network Tb-GMP emits characteristic fluorescence at 545 nm, while the CDs encapsulated in the polymer network emit fluorescence at 370 nm. After adding ALP to the system, the substrate GMP is hydrolyzed by ALP, resulting in the destruction of the polymer network. Accordingly, the fluorescence of Tb-GMP significantly decreased, while the fluorescence of the CDs slightly increased owing to their release from the polymer network. By relating the fluorescence intensity ratio of the two signals to the concentration of ALP, sensitive detection of ALP could be achieved, with a linear range from 0.5 to 80 U L-1 and a detection limit of 0.13 U L-1. Furthermore, the proposed ratiometric sensing system was applied to the detection of ALP in human serum samples with desirable results, indicating potential application in clinical diagnosis.
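The ratiometric readout described above amounts to dividing the ALP-responsive Tb-GMP signal (545 nm) by the stable CD reference (370 nm) and fitting that ratio linearly against ALP activity. The sketch below uses entirely hypothetical intensities, made up for illustration; the real calibration curve would come from the instrument data in the paper.

```python
import numpy as np

# Hypothetical calibration points (invented numbers, not from the paper).
alp_cal = np.array([0.5, 10.0, 20.0, 40.0, 80.0])        # ALP activity, U/L
i_tb    = np.array([995.0, 900.0, 800.0, 600.0, 200.0])  # Tb-GMP emission, 545 nm
i_cd    = np.full(5, 200.0)                              # CD reference, 370 nm (stable)

# Ratiometric signal: responsive channel over reference channel.
ratio = i_tb / i_cd

# Linear calibration: ratio = slope * [ALP] + intercept.
slope, intercept = np.polyfit(alp_cal, ratio, 1)

# Invert the calibration line to quantify an unknown sample.
ratio_unknown = 3.0
alp_unknown = (ratio_unknown - intercept) / slope
```

The point of the ratio is that fluctuations affecting both channels equally (lamp drift, probe concentration) cancel, which is why the stable CD emission serves as the internal reference.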
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <gbpLib.h>
#include <gbpMath.h>
#include <gbpHalos.h>
#include <gbpTrees_build.h>

void assign_depth_first_index_vertical_recursive(tree_node_info *halo, int *depth_first_index) {
    // Set and increment the depth-first index of this halo
    halo->depth_first_index = (*depth_first_index)++;
    // Walk the tree
    tree_node_info *current_progenitor = halo->progenitor_first;
    while(current_progenitor != NULL) {
        assign_depth_first_index_vertical_recursive(current_progenitor, depth_first_index);
        current_progenitor = current_progenitor->progenitor_next;
    }
}
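The C routine above numbers tree nodes in pre-order: a node takes the next counter value, then each progenitor subtree is visited in turn. A self-contained Python sketch of the same numbering scheme (a generic `Node` class stands in for `tree_node_info`):

```python
class Node:
    """Minimal tree node standing in for tree_node_info."""

    def __init__(self, children=None):
        self.children = children or []   # ordered progenitors
        self.depth_first_index = None

def assign_depth_first_index(node, counter=0):
    """Pre-order numbering: parent first, then each child subtree in order.
    Returns the next unused index, mirroring the C code's int* counter."""
    node.depth_first_index = counter
    counter += 1
    for child in node.children:
        counter = assign_depth_first_index(child, counter)
    return counter
```

Returning the updated counter plays the role of the `int *depth_first_index` out-parameter in the C version.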
// File: app/src/main/java/ro/ao/benchmark/task/custom_tasks/FloatingPointTestTask.java
package ro.ao.benchmark.task.custom_tasks;

import android.util.Log;

import java.util.Random;

import ro.ao.benchmark.model.benchmark.synthetic_testing.SyntheticTest;
import ro.ao.benchmark.model.benchmark.synthetic_testing.SyntheticTestResult;
import ro.ao.benchmark.task.AOThread;
import ro.ao.benchmark.task.TaskListener;
import ro.ao.benchmark.util.Constants;

public class FloatingPointTestTask extends AOThread.AOTask {

    private volatile long startTime, endTime;
    private TaskListener listener;

    public FloatingPointTestTask(TaskListener listener) {
        this.listener = listener;
    }

    @Override
    public void doAction() {
        startTime = System.currentTimeMillis();
        try {
            double a = new Random().nextDouble();
            double b = new Random().nextDouble();
            for (double i = 0.1; i <= 999999999.0; i += 1.21) {
                b += a * 2.3412 + b * 33412.12;
                a *= b / 2.1234213;
                a /= b * i;
                a += b * 31283.11 / i;
                b -= 129381912.3121 - i;
            }
        } catch (Exception e) {
            Log.e(Constants.TAG_SYNTHETIC_TESTS, "FloatingPoint Exception", e);
            listener.onException(e);
        }
    }

    @Override
    public void onFinish() {
        endTime = System.currentTimeMillis();
        SyntheticTestResult result = new SyntheticTestResult(
                SyntheticTest.FLOATING_POINT_TEST,
                "Perform math operations using huge floating numbers",
                (endTime - startTime), startTime, endTime);
        Log.d(Constants.TAG_SYNTHETIC_TESTS, "FloatingPoint TEST RESULT: " + result);
        listener.onTaskDone(result);
    }
}
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
// Code generated by Microsoft (R) AutoRest Code Generator.

package com.azure.resourcemanager.security.models;

import com.azure.core.util.ExpandableStringEnum;
import com.fasterxml.jackson.annotation.JsonCreator;
import java.util.Collection;

/** Defines values for ReportedSeverity. */
public final class ReportedSeverity extends ExpandableStringEnum<ReportedSeverity> {
    /** Static value Informational for ReportedSeverity. */
    public static final ReportedSeverity INFORMATIONAL = fromString("Informational");

    /** Static value Low for ReportedSeverity. */
    public static final ReportedSeverity LOW = fromString("Low");

    /** Static value Medium for ReportedSeverity. */
    public static final ReportedSeverity MEDIUM = fromString("Medium");

    /** Static value High for ReportedSeverity. */
    public static final ReportedSeverity HIGH = fromString("High");

    /**
     * Creates or finds a ReportedSeverity from its string representation.
     *
     * @param name a name to look for.
     * @return the corresponding ReportedSeverity.
     */
    @JsonCreator
    public static ReportedSeverity fromString(String name) {
        return fromString(name, ReportedSeverity.class);
    }

    /** @return known ReportedSeverity values. */
    public static Collection<ReportedSeverity> values() {
        return values(ReportedSeverity.class);
    }
}
Head Motion Controlled Wheelchair for Physically Disabled People

Physically disabled people face difficulties in daily life because of body impairments present from birth or caused by accident or illness. The project's goal is to design a wheelchair that can serve a disabled person who cannot move other parts of the body correctly, operated with the help of head movements. Medical equipment manufactured to assist disabled people is often complicated, limited, and costly. The head motion controlled wheelchair is an intelligent wheelchair with facilities for navigating, recognizing obstacles, and moving automatically by managing sensors and motors. The prototype wheelchair is driven by head motion through a microcontroller, and the motion data are captured with the help of an accelerometer. The controller filters the signal and actuates the wheelchair for navigation. An ultrasonic sensor helps it avoid obstacles. Such equipment is usually expensive, but we have designed ours at a low cost so that ordinary people from underdeveloped or developing countries can use it. After identifying the start signal, the system memorizes the head gesture for further reference as the stable gesture, or neutral position. The DC motors drive the wheelchair during control-mode gestures; when the head is in the neutral position, the motors do not run and the wheelchair stays still.
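The control loop described above (calibrated neutral position, dead zone around it, tilt direction mapped to a motor command) can be sketched as a pure decision function. This is not the paper's firmware; the threshold values and axis conventions below are assumptions for illustration.

```python
def head_command(pitch, roll, neutral=(0.0, 0.0), dead_zone=15.0):
    """Map accelerometer tilt (degrees) to a wheelchair command.

    Tilts within `dead_zone` of the memorized neutral position mean
    "stop" (motors off); otherwise the dominant tilt axis picks the
    direction. All numbers here are illustrative assumptions.
    """
    dp = pitch - neutral[0]          # forward/backward tilt relative to neutral
    dr = roll - neutral[1]           # left/right tilt relative to neutral
    if abs(dp) < dead_zone and abs(dr) < dead_zone:
        return "stop"                # head near neutral: wheelchair does not run
    if abs(dp) >= abs(dr):           # pitch dominates: move along forward axis
        return "forward" if dp > 0 else "backward"
    return "right" if dr > 0 else "left"
```

In the real system this function would run inside the microcontroller loop, fed by filtered accelerometer samples, with the ultrasonic sensor able to override any motion command when an obstacle is detected.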
/**
 * Returns the unit instance for the given long (un)localized name.
 * This method is somewhat the converse of {@link #symbolToName()}, but also recognizes
 * international and American spellings of unit names in addition to localized names.
 * The intent is to recognize "meter" as well as "metre".
 *
 * <p>While we said that {@code UnitFormat} is not thread safe, we make an exception for this method
 * for allowing the singleton {@link #INSTANCE} to parse symbols in a multi-thread environment.</p>
 *
 * @param uom the unit symbol, without leading or trailing spaces.
 * @return the unit for the given name, or {@code null} if unknown.
 */
private Unit<?> fromName(String uom) {
    /*
     * Before searching in resource bundles, check for degree units. The "deg" unit can be both angular
     * and Celsius degrees. We try to resolve this ambiguity by looking for the "C" suffix. We perform a
     * special case for the degree units because SI symbols are case-sensitive and unit names in resource
     * bundles are case-insensitive, but the "deg" case is a mix of both.
     */
    final int length = uom.length();
    for (int i=0; ; i++) {
        if (i != DEGREES.length()) {
            if (i != length && (uom.charAt(i) | ('a' - 'A')) == DEGREES.charAt(i)) {
                continue;
            }
            if (i != 3 && i != 6) {
                break;
            }
        }
        if (length == i) {
            return Units.DEGREE;
        }
        final int c = uom.codePointAt(i);
        if (c == '_' || Character.isSpaceChar(c)) {
            i += Character.charCount(c);
        }
        if (length - i == 1) {
            switch (uom.charAt(i)) {
                case '\u212A':                  // Unicode "KELVIN SIGN".
                case 'K': return Units.KELVIN;
                case 'C': return Units.CELSIUS;
                case 'N':                       // Degrees toward North or East.
                case 'E': return Units.DEGREE;
            }
        }
        break;
    }
    /*
     * At this point, we determined that the given unit symbol is not degrees (of angle or of temperature).
     * Remaining code is generic to all other kinds of units: a check in a HashMap loaded when first needed.
     */
    Map<String,Unit<?>> map = nameToUnit;
    if (map == null) {
        map = SHARED.get(locale);
        if (map == null) {
            map = new HashMap<>(128);
            copy(locale, symbolToName(), map);
            if (!locale.equals(Locale.US))   copy(Locale.US,   getBundle(Locale.US),   map);
            if (!locale.equals(Locale.ROOT)) copy(Locale.ROOT, getBundle(Locale.ROOT), map);
            /*
             * The UnitAliases file contains names that are not unit symbols and are not included in the
             * UnitNames property files either. It contains longer names sometimes used (for example
             * "decimal degree" instead of "degree"), some plural forms (for example "feet" instead of
             * "foot") and a few common misspellings (for example "Celcius" instead of "Celsius").
             */
            final ResourceBundle r = ResourceBundle.getBundle("org.apache.sis.measure.UnitAliases",
                    locale, UnitFormat.class.getClassLoader());
            for (final String name : r.keySet()) {
                map.put(name.intern(), Units.get(r.getString(name)));
            }
            map = Collections.unmodifiableMap(map);
            /*
             * Cache the map so we can share it with other UnitFormat instances.
             * Sharing is safe if the map is unmodifiable.
             */
            synchronized (SHARED) {
                for (final Map<String,Unit<?>> existing : SHARED.values()) {
                    if (map.equals(existing)) {
                        map = existing;
                        break;
                    }
                }
                SHARED.put(locale, map);
            }
        }
        nameToUnit = map;
    }
    /*
     * The 'nameToUnit' map contains plural forms (declared in UnitAliases.properties),
     * but we make a special case for "degrees", "metres" and "meters" because they
     * appear in numerous places.
     */
    uom = uom.replace('_', ' ').toLowerCase(locale);
    uom = CharSequences.replace(CharSequences.replace(CharSequences.replace(CharSequences.toASCII(uom),
            "meters", "meter"), "metres", "metre"), DEGREES, "degree").toString();
    /*
     * Returns the unit with application of the power if it is part of the name.
     * For example this method interprets "meter2" as "meter" raised to power 2.
     */
    Unit<?> unit = map.get(uom);
    appPow: if (unit == null) {
        int s = uom.length();
        if (--s > 0 && isDigit(uom.charAt(s))) {
            do if (--s < 0) break appPow;
            while (isDigit(uom.charAt(s)));
            if (uom.charAt(s) == '-') {
                if (--s < 0) break appPow;
            }
            unit = map.get(uom.substring(0, ++s));
            if (unit != null) {
                unit = unit.pow(Integer.parseInt(uom.substring(s)));
            }
        }
    }
    return unit;
}
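The trailing-power rule at the end of `fromName` can be restated compactly: scan digits from the end of the name, allow one optional minus sign before them, and refuse a token that is all digits. This Python sketch mirrors that logic in isolation, without the unit-map lookup:

```python
def split_power(uom):
    """Split a unit name with an optional trailing integer power.

    "meter2"  -> ("meter", 2)
    "meter-1" -> ("meter", -1)
    "metre"   -> ("metre", 1)
    An all-digit token is left alone, matching the Java `break appPow`.
    """
    s = len(uom) - 1
    if s <= 0 or not uom[s].isdigit():
        return uom, 1                    # no trailing digits: implicit power 1
    while s >= 0 and uom[s].isdigit():   # skip the digit run from the right
        s -= 1
    if s >= 0 and uom[s] == '-':         # optional sign just before the digits
        s -= 1
    if s < 0:
        return uom, 1                    # nothing left: token was only digits/sign
    return uom[:s + 1], int(uom[s + 1:])
```

In the Java version the name part is then looked up in `map` and `unit.pow(...)` is applied; here the split itself is the whole point.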
// Autogenerated from CppHeaderCreator on 7/27/2020 3:09:49 PM
// Created by Sc2ad
// =========================================================================
#pragma once
#pragma pack(push, 8)
// Begin includes
#include "utils/typedefs.h"
// Including type: System.MulticastDelegate
#include "System/MulticastDelegate.hpp"
// Including type: System.Runtime.Remoting.Lifetime.Lease
#include "System/Runtime/Remoting/Lifetime/Lease.hpp"
#include "utils/il2cpp-utils.hpp"
// Completed includes
// Begin forward declares
// Forward declaring namespace: System
namespace System {
    // Skipping declaration: IntPtr because it is already included!
    // Skipping declaration: TimeSpan because it is already included!
    // Forward declaring type: IAsyncResult
    class IAsyncResult;
    // Forward declaring type: AsyncCallback
    class AsyncCallback;
}
// Forward declaring namespace: System::Runtime::Remoting::Lifetime
namespace System::Runtime::Remoting::Lifetime {
    // Skipping declaration: ILease because it is already included!
}
// Completed forward declares
// Type namespace: System.Runtime.Remoting.Lifetime
namespace System::Runtime::Remoting::Lifetime {
    // Autogenerated type: System.Runtime.Remoting.Lifetime.Lease/RenewalDelegate
    class Lease::RenewalDelegate : public System::MulticastDelegate {
      public:
        // public System.Void .ctor(System.Object object, System.IntPtr method)
        // Offset: 0x104C2C8
        static Lease::RenewalDelegate* New_ctor(::Il2CppObject* object, System::IntPtr method);
        // public System.TimeSpan Invoke(System.Runtime.Remoting.Lifetime.ILease lease)
        // Offset: 0x104C514
        System::TimeSpan Invoke(System::Runtime::Remoting::Lifetime::ILease* lease);
        // public System.IAsyncResult BeginInvoke(System.Runtime.Remoting.Lifetime.ILease lease, System.AsyncCallback callback, System.Object object)
        // Offset: 0x104C2DC
        System::IAsyncResult* BeginInvoke(System::Runtime::Remoting::Lifetime::ILease* lease, System::AsyncCallback* callback, ::Il2CppObject* object);
        // public System.TimeSpan EndInvoke(System.IAsyncResult result)
        // Offset: 0x104C4E8
        System::TimeSpan EndInvoke(System::IAsyncResult* result);
    }; // System.Runtime.Remoting.Lifetime.Lease/RenewalDelegate
}
DEFINE_IL2CPP_ARG_TYPE(System::Runtime::Remoting::Lifetime::Lease::RenewalDelegate*, "System.Runtime.Remoting.Lifetime", "Lease/RenewalDelegate");
#pragma pack(pop)
Curl-crested aracari

Description
It measures 40–45 cm (16–18 in) long and weighs 190–280 g (7–10 oz). On account of its relatively long tail and curly crest (the latter only visible up close), it was formerly placed in the monotypic genus Beauharnaisius.

Distribution and habitat
The curl-crested aracari is found in the south-western section of the Amazon Basin, with the Amazon River being its northern range limit. Near the Amazon River, its range extends east to about the Madeira River, while in the southern half of its range it extends east to the Xingu River. It is generally rare to uncommon, but regularly seen at several localities, including the Tambopata National Reserve in Peru, the Noel Kempff Mercado National Park in Bolivia, and the Cristalino State Park near Alta Floresta in Brazil. Its natural habitat is tropical moist lowland forests.

Behaviour
It is primarily a frugivore, but will also take nestlings of birds such as the yellow-rumped cacique.

Status and conservation
Due to its extensive range, it is considered to be of least concern by BirdLife International and consequently the IUCN.
def changed_update_freq(self, v):
    self.update_freq = v
    self.input_update_freq_label.setText(
        Lang.TEXT["update_every"][self.parent.lang] + " {} %".format(self.update_freq))
Q: Can a gentile go to synagogue on Shabbat, and what is forbidden on Shabbat for a gentile? Is a gentile allowed to participate in Shabbat prayers in a synagogue (of course he/she will not count for a minyan) if they do so in order to gain knowledge of Jewish customs for the purpose of conversion? As far as I know it is forbidden for a gentile to 'observe Shabbat', but I do not know what that exactly means, and I would like someone with more knowledge to clarify what a gentile is forbidden to do on Shabbat that Jews must do. Also, how should a gentile behave in a synagogue — are there any special rules different from the 'normal' rules applicable to Jews? As a prospective convert I was told by a rabbi to start celebrating Jewish holidays and customs, but what am I forbidden from doing on Shabbat? A: Gentiles can certainly attend synagogue services on Shabbat (or at any other time). I know many converts, and all of them were required by their rabbis to start doing this fairly early on in the process. Conversion is in part about joining a community, so you'd better get to know it. Also, while you can practice prayers on your own, you need the experience of praying in that setting to really start to "get" those skills. Further, sometimes non-Jewish friends of the family attend a bar mitzvah on Shabbat morning. If it's ok for them, who are there out of friendship and not out of belief, then how much more so should it be ok for someone who wants to join with the Jewish people? It is forbidden for a gentile to fully observe Shabbat (see discussion here, thanks @ba). So long as there is one aspect that you violate, you're not doing that. If there's no eiruv where you live, carrying your keys in your pocket would satisfy that. As far as behaving in a synagogue in general, decline any honors that are offered to you (like an aliyah), and you already know they can't count you in the minyan. Beyond that, you can just do what other people are doing. 
It is usually expected that men cover their heads; it's ok to use a kippah to do that. (Kippot are not restricted to Jews.) For more on head-covering see this question (thanks @ba again). I wish you well on your journey.
The ITPRIPL1-CD3 axis: a novel immune checkpoint controlling T cell activation

The immune system is critical to fighting infection and disease. Molecular recognition of harmful entities takes place when antigen-presenting cells (APCs) harboring major histocompatibility complex (MHC) molecules bound to peptides derived from harmful antigens (the ligand) dock on a specific T cell receptor (TCR)-CD3 complex (the receptor) at the surface of CD8+ T cells. The discovery of a general immune checkpoint mechanism that avoids the harmful impact of T cell hyperactivation provoked a paradigm shift. The clinical relevance of this mechanism is highlighted by the fact that PD-1 and PD-L1 inhibitors are very effective at boosting immune reactions. Still, immune evasion frequently happens. The observation that some PD-1/PD-L1-negative tumors mount a poor immune response opens the door to identifying novel immune checkpoint mechanisms. Here, we discovered that ITPRIPL1, a gene of previously unknown function, impairs T cell activation. Surprisingly, we found that CD3 is the direct receptor of ITPRIPL1. This novel immune checkpoint was validated as a drug target using ITPRIPL1 knockout mice and monoclonal antibodies. Thus, targeting the ITPRIPL1-CD3e axis, especially in PD-1/PD-L1-negative patients, is a promising therapeutic strategy to reduce immune evasion.
Honeypreet Insan was a constant companion of Gurmeet Ram Rahim Singh right until he was jailed

Highlights
- Honeypreet is the 'adopted daughter' of Gurmeet Ram Rahim Singh
- Honeypreet charged with sedition, plotting Ram Rahim's escape
- Haryana police put out lookout notice for Honeypreet Insan

Honeypreet was seen accompanying Ram Rahim Singh in the chopper in which he was flown to Rohtak. The policemen were also suspicious when Ram Rahim and his "adopted daughter" kept lingering in the corridor of the Panchkula court complex and appeared to be buying time. It appeared that they were waiting for mobs of supporters. On her Twitter and Facebook accounts, she describes herself as 'PAPA's ANGEL'. Honeypreet Insan, the adopted daughter of so-called spiritual leader Gurmeet Ram Rahim Singh, is being hunted by the Haryana police, which today issued an alert at airports and other exit points for her. A constant companion of Ram Rahim right until he was jailed last week for raping two followers, Honeypreet Insan has been charged with sedition for allegedly plotting the self-appointed guru's escape from a court in Panchkula where he was held guilty. Another senior aide of the Dera Sacha Sauda sect headed by Ram Rahim, Aditya Insan, has also been charged. Honeypreet was seen with the 50-year-old Dera Sacha Sauda chief in court, holding his bags, and she stayed with him as he was flown to a Rohtak jail in a special government chopper. Reports suggest she also clashed with the jail authorities, demanding to be allowed to stay with the Baba in jail. 
Ram Rahim claimed that she was an "acupressure specialist" who would help ease his chronic back pain. The Haryana police have revealed how they stymied an attempt by Ram Rahim's "commandos" to free him on Friday, when his conviction led to violence by mobs of Dera supporters in which 32 people were killed. Soon after he was declared guilty by the court, Ram Rahim demanded a "red bag" that he had brought from Sirsa, the main base of his Dera Sacha Sauda sect. "The Dera chief demanded the bag, saying his clothes were in it. It was actually a signal for his men to spread the news of his conviction among supporters so that they could resort to causing disturbance," Inspector General of Police KK Rao told reporters. When the police tried to shift Ram Rahim from his SUV to the car of a police officer, his security guards ringed around him. A route change was devised to take Ram Rahim to the chopper site without incident. It was a tense cat-and-mouse game until Ram Rahim was inside the special chopper that was to take him to the Rohtak jail. Honeypreet's original name is Priyanka Taneja. On social media, she describes herself as "Papa's angel, philanthropist, director, editor and actress". She also refers to the Dera chief as "Rockstar Papa".
/* ===-- clear_cache.c - Implement __clear_cache ---------------------------===
 *
 *                     The LLVM Compiler Infrastructure
 *
 * This file is dual licensed under the MIT and the University of Illinois Open
 * Source Licenses. See LICENSE.TXT for details.
 *
 * ===----------------------------------------------------------------------===
 */

#include "int_lib.h"

#if __APPLE__
  #include <libkern/OSCacheControl.h>
#endif
#if defined(__NetBSD__) && defined(__arm__)
  #include <machine/sysarch.h>
#endif
#if defined(ANDROID) && defined(__mips__)
  #include <sys/cachectl.h>
#endif
#if defined(ANDROID) && defined(__arm__)
  #include <asm/unistd.h>
#endif
#if defined(__aarch64__) && !defined(__APPLE__)
  #include <stddef.h>
#endif

/*
 * The compiler generates calls to __clear_cache() when creating
 * trampoline functions on the stack for use with nested functions.
 * It is expected to invalidate the instruction cache for the
 * specified range.
 */

COMPILER_RT_ABI void __clear_cache(void *start, void *end) {
#if __i386__ || __x86_64__
    /*
     * Intel processors have a unified instruction and data cache
     * so there is nothing to do
     */
#elif defined(__arm__) && !defined(__APPLE__)
    #if defined(__NetBSD__)
        struct arm_sync_icache_args arg;

        arg.addr = (uintptr_t)start;
        arg.len = (uintptr_t)end - (uintptr_t)start;

        sysarch(ARM_SYNC_ICACHE, &arg);
    #elif defined(ANDROID)
        register int start_reg __asm("r0") = (int) (intptr_t) start;
        const register int end_reg __asm("r1") = (int) (intptr_t) end;
        const register int flags __asm("r2") = 0;
        const register int syscall_nr __asm("r7") = __ARM_NR_cacheflush;
        __asm __volatile("svc 0x0"
            : "=r"(start_reg)
            : "r"(syscall_nr), "r"(start_reg), "r"(end_reg), "r"(flags));
        if (start_reg != 0) {
            compilerrt_abort();
        }
    #else
        compilerrt_abort();
    #endif
#elif defined(ANDROID) && defined(__mips__)
    const uintptr_t start_int = (uintptr_t) start;
    const uintptr_t end_int = (uintptr_t) end;
    _flush_cache(start, (end_int - start_int), BCACHE);
#elif defined(__aarch64__) && !defined(__APPLE__)
    uint64_t xstart = (uint64_t)(uintptr_t) start;
    uint64_t xend = (uint64_t)(uintptr_t) end;

    // Get Cache Type Info
    uint64_t ctr_el0;
    __asm __volatile("mrs %0, ctr_el0" : "=r"(ctr_el0));

    /*
     * dc & ic instructions must use 64bit registers so we don't use
     * uintptr_t in case this runs in an IPL32 environment.
     */
    const size_t dcache_line_size = 4 << ((ctr_el0 >> 16) & 15);
    for (uint64_t addr = xstart; addr < xend; addr += dcache_line_size)
        __asm __volatile("dc cvau, %0" :: "r"(addr));
    __asm __volatile("dsb ish");

    const size_t icache_line_size = 4 << ((ctr_el0 >> 0) & 15);
    for (uint64_t addr = xstart; addr < xend; addr += icache_line_size)
        __asm __volatile("ic ivau, %0" :: "r"(addr));
    __asm __volatile("isb sy");
#else
    #if __APPLE__
        /* On Darwin, sys_icache_invalidate() provides this functionality */
        sys_icache_invalidate(start, end - start);
    #else
        compilerrt_abort();
    #endif
#endif
}
Music for Galway has been running a mid-January chamber music festival for the last 15 years. It took place at Taibhdhearc na Gaillimhe from 2004 to 2006, relocated to its current venue, the Town Hall Theatre, in 2007, and has been branded as the Midwinter Festival since 2012, the last year that Jane O’Leary programmed Music for Galway’s activities. In 2013, pianist Finghin Collins became the organisation’s artistic director, and he has used his position to make Music for Galway’s output more consistently thematic, both across full seasons – this year’s is called Wanderlust – and also within the Midwinter programmes, which this year is entitled Swansong – Intimations of Mortality. His biggest innovation is planned for 2020, when Galway will share the title European City of Culture with Rijeka in Croatia. Music for Galway is using the Galway 2020 celebrations as a “springboard” for Cellissimo. This cello-focused festival is intended to become a triennial event and, in the words of the publicity, “will have it all: star performers, performances big and small, for big and small”. This year’s Midwinter programme didn’t really stay close to the implications of the “intimations of mortality” subtitle. If it had, it would at the very least have had to include Schoenberg’s late string trio, completed after the composer had a near-death experience. Schoenberg’s heart stopped beating after complications from benzedrine taken as a treatment for asthma. He incorporated into the trio references to the hypodermic needle that was used to make an injection directly into his heart, and, he told Thomas Mann, he even dealt with “the male nurses and all the other oddities of American hospitals”. The idea of swansong goes back to the Greeks, and comes from the belief that mute swans sing only just before they die. Swansong is probably best captured in music in Orlando Gibbons’s madrigal, The Silver Swan, published at the safe remove of 13 years before the composer’s death. 
The Oxford English Dictionary defines swansong as “a song like that fabled to be sung by a dying swan; the last work of a poet or musician, composed shortly before his death”. Take for instance the three late and light children’s songs by Mozart, beautifully delivered by soprano Ailish Tynan with Collins himself at the piano. No intimations of mortality there. Nor in the often relentlessly busy Piano Trio (played by Bogdan Sofei, violin, Adrian Mantu, cello, and Leon McCawley, piano) that Fanny Mendelssohn wrote in 1846, intended as a birthday present for her younger sister Rebeckah. An American reviewer in 1856 praised it as being “vigorous, so full of fire”. The composer was close to death but didn’t know it. She survived the trio’s first performance in 1847 by just over a month. The day she died she is reputed to have said, “It is probably a stroke, just like mother’s,” just before she lost consciousness after the second of two strokes on the one day. The first had rendered her unable to play the piano; the second killed her. Her famous brother, Felix, would also die of a stroke just months later. Concern with mortality is not an obvious feature of the trio, nor is it of the Grosse Fuge composed by Beethoven in 1825, two years before his death from cirrhosis of the liver. This fugue is one of the most daunting ever written, energetic, not quite graspable, a piece that still, nearly two centuries on, presses at the boundaries of the possible. That pressure was very evident in the performance by the ConTempo String Quartet. On the other hand Richard Strauss, who would die of kidney failure after a heart attack in 1949, certainly had death on his mind when he wrote the songs that were published posthumously as Vier letzte Lieder (Four Last Songs). He even changed the text of one of them, replacing a key word in the final line of Eichendorff’s Im Abendrot (In Sunset). Eichendorff’s original Ist das etwa der Tod? (Is that perhaps death?) 
was turned into something personal, Ist dies etwa der Tod? (Is this perhaps death?). The Vier letzte Lieder, beautifully and achingly handled by Tynan and Collins, is one of the greatest expressions of the reflective and autumnal in music, as is Brahms’s late Clarinet Quintet, written after the composer was prompted out of retirement by the playing of the clarinettist Richard Mühlfeld. Brahms wrote it in the summer of 1891, some six years before his death from cancer at the age of 63. English clarinettist Michael Collins joined the ConTempo String Quartet in a performance that dwelt lovingly on the music’s often floating reflectiveness. The other standout performance in this always thought-provoking festival featured Tynan, and the two Collinses (no relation) in Schubert’s hauntingly virtuosic Der Hirt auf dem Felsen. And the ConTempos gave a performance of John Kinsella’s Fifth Quartet of 2013 that was much more abstract in its effect than the premiere at the West Cork Chamber Music Festival by the Vanbrugh Quartet. Kinsella, who will turn 87 in April, was present for an alert and stimulating pre-performance interview with Finghin Collins. The composer is thinking about his next symphony, but he wasn’t asked and didn’t say whether intimations of mortality would have any bearing on it.
Sonic boon: ultrasound enhances angiogenic cell therapy. This editorial refers to Ultrasound stimulation restores impaired neovascularization-related capacities of human circulating angiogenic cells by Y. Toyama et al., pp. 448–459, this issue. Peripheral vascular disease (PVD) affects ∼8 million people per year in the USA alone, and causes significant morbidity and mortality worldwide. Amputation remains the definitive treatment for severe PVD, despite the numerous negative effects the loss of a limb can have for a patient with PVD.1 Recent advances in molecular and cellular therapeutics offer the possibility that therapeutic angiogenesis may improve patient outcomes and raise the standard of care for patients with PVD.1,2 Gene therapy approaches have been developed to increase neovascularization in models of critical limb ischaemia (CLI) by expressing gene(s) associated with angiogenesis. These approaches have demonstrated much promise at the preclinical level, and have led to numerous efficacy and safety studies with a variety of therapeutic genes. Clinical gene therapy studies have been challenged by the relatively short duration of gene expression in some systems, as well as by the suboptimal distribution of gene expression, and have failed to demonstrate a significant benefit of gene therapy over placebo.
package com.github.xuyh.tacos.domain.service;

import com.github.xuyh.tacos.domain.model.Taco;

import java.util.List;

public interface TacoService {

    Taco save(Taco taco);

    List<Taco> findAll();

    Taco findById(Long id);

    List<Taco> recentTacos();
}
Berkshire or Rail Volume: Which Is King? Berkshire Hathaway's purchase of Burlington Northern sparks debate on whether Warren Buffett can single-handedly move the rails -- or whether rail volumes hold greater sway over the sector. NEW YORK (TheStreet) -- Pop quiz, hotshot: Which is more important to the short-term fortunes of railroad stocks, rail volume data or Warren Buffett? To many sector watchers, there is no debate; the primacy of rail volume data is irrefutable. But it's a testament to the power of Berkshire Hathaway's (BRK.A) acquisition of Burlington Northern (BNI) that it may have derailed conventional long-term sector wisdom -- at least temporarily. Indeed, even though the November acquisition had an immediate and potentially negative secondary impact -- some institutional investors will be forced to divest Berkshire holdings once Burlington's stock is folded within it -- buy-and-hold railroad investors received their wish in the form of a big boost to the railroad industry. Reinvestment is likely to come in the true railroad stocks left on the landscape anyway, stocks that investors also hope will benefit from Buffett's bet. Be careful what you wish for, though: the law of unintended (market) consequences has resulted in fears of mini-bubbles in the rail sector being triggered by Buffett's bet, as outlined in a UBS Investment Research report on the market reaction to Berkshire Hathaway's three major moves related to Burlington. While the UBS report argues that momentum investing in the stodgy railroad sector led investors to price levels beyond what the fundamentals were able to justify, some of the leading institutional investors respectfully disagree. Even as they accept the empirical nature of the UBS data, they just don't agree with the conclusions reached about Warren Buffett's ability to inflate -- and deflate -- the sector. 
Rick Paterson, the UBS analyst in charge of the research, said the team spoke with a range of institutional investors after it released the report, and there was definitely a divide in the reaction. "Most of my institutional clients read it, and a lot were surprised by the analysis and tended to agree, as it triggered their memory around these events. About 40% were not in agreement," Paterson said. Some institutions simply chalked it up to a market coincidence. Other institutions had already stopped adding to railroad positions -- sometimes called a stealth sale position. The most negative reaction came from holders of Union Pacific (UNP) shares. UBS downgraded Union Pacific as a result of its research, citing fears that the stock had become too expensive. The railroad companies, of course, also put the blinders on when it comes to any short-term negative market movement, especially when the analysis precipitated a downgrade of their own stock. "Generally speaking, Union Pacific's stock price has outperformed the broader market for more than three years ... we view Berkshire Hathaway's decision to purchase all of the outstanding shares of BNSF Railway as a tremendous vote of confidence in the future of America's freight rail industry," a Union Pacific spokeswoman said when asked about the UBS report. And she may, in fact, be correct in her assessment. Union Pacific is far from alone in viewing the long-term outlook for rails as positive. Notably, UBS's Paterson is among those who agree with that outlook, even if the bubble analysis led him to downgrade Union Pacific stock. Longbow Research rail analyst Lee Klaskow said that from a long-term perspective Buffett's buy actually reduces volatility in the sector, though he also conceded that in the short term, a move by Buffett that strengthens stocks would lead some investors to consider whether it was worth selling the stock from a position of strength. 
The most direct and spirited attack on the UBS data, however, comes from institutional investors who believe that changes in rail volume can explain away any notion that the rail sector grows or contracts -- at least in the short term -- on the investment whims of the Oracle of Omaha. "I can't argue with the data that UBS has for the history of Buffett's moves," said Nathan Brown, an investment analyst at Waddell & Reed. "I can't argue with what happened to the stocks. I just argue that weekly rail volume data is a more likely cause," Brown said. He noted that rail volume data worsened at several points in time linked to the Buffett stake increases in Burlington, and the subsequent weakness in the stocks can be attributed to the weakness in the rail volume numbers. "That's the more important force," Brown said. UBS's Paterson says that while volumes do matter, the UBS data is really a separate animal: "All we are saying is that for a few days immediately following Buffett's investments, out to a few weeks following the moves, the valuations of these things become frothy." Indeed, rail volumes, while clearly a driver of stock performance, can't explain away mini-bubbles in stocks that occur within a time frame of days. "I don't disagree about the importance of rail volume, and I'm not saying that rails have outperformed for the last six months because of these three Buffett-inspired events, but volumes move in a more gradual pattern, over months," Paterson argued. In the end, both those supporting the mini-bubble argument and those opposing it have reached a similar long-term conclusion about the rail sector outlook. "These stocks are a little ahead of themselves given the bubble we talked about, but on a one-year to two-year basis, we like the group," Paterson said. Waddell & Reed's Brown noted that the investor analyzes rail volume on a sequential basis, and saw the largest sequential increase in rail volume between the second and third quarters. 
"That would suggest that we have already seen the baseline of recovery in the sector, and anything further is upside to those numbers."
from flask import g, redirect, session, url_for


def logout():
    # Clear the request-scoped project/admin references.
    g.project = None
    g.admin = None
    # Drop any project ids held in the session.
    if 'project_id' in session:
        session.pop('project_id', None)
    if 'admin_project_id' in session:
        session.pop('admin_project_id', None)
    return redirect(url_for('index'))
Periodic limb movements of sleep and the restless legs syndrome. Periodic limb movements of sleep and the restless legs syndrome are not diagnoses but rather an indication that there is some CNS disturbance, and they are associated with an ever-growing number of conditions. They are very common, exist in many forms and are often overlooked by physicians. It is the author's opinion that they are parts of what has been called an akathisia syndrome in the most severe situations and may involve the same mechanisms that underlie attention disorders, chronic fatigue syndrome and "sun-downing." They are likely parts of a syndrome caused by dysfunction in a complex brainstem center. This center's normal function is to maintain a smooth electrical template on which discrete neuronal impulses sculpt the rich repertoire we recognize as sensory and motor function while awake, and to effect a smooth "switching" mechanism allowing sleep to occur without motor and sensory input invading consciousness (awakening). While the dopaminergic (DA-ergic) CNS pathways have been thought to be the primary neurotransmitter systems involved, with the opioid pathways secondary, there is mounting evidence that the situation is far more complicated and that many neurotransmitters, including excitatory and inhibitory amino acids, play a part. These patients agonize over their indisposition but can be helped by various treatments. Treatment alleviates not only the distress caused by the symptoms but also the devastating insomnia and excessive daytime sleepiness associated with them.
One of a series of photographs from Why Are The Beautiful Ones Always Insane by Sinisa Savic. See all the works in the project. Imagine if you put fourteen artists from seven different countries in a room together. What would they talk about? What would they learn? What would they reveal? Simply put, that's what imagine art after is all about. We can't put those artists together in one room - they're in locations as far-flung as Tehran and Tirana, London and Lagos - but, using the web, we can showcase their work, put them in touch with each other and get them to talk. This project isn't simply about exhibiting art that has already been produced: it's about enabling the artists involved to communicate, to share ideas and - hopefully - to develop their own projects side by side with artistic partners who may be thousands of miles away. Half the artists selected, though now based in Britain, arrived as immigrants from Afghanistan, Albania, Ethiopia, Iran, Iraq, Nigeria and Serbia Montenegro - countries whose people, according to the Home Office, make an unusually high number of applications for asylum in the UK. Each is being linked up with a partner from their country of origin, and the pairs provided with a forum for discussion on the Guardian Unlimited talkboards. For the next six weeks, they are going to be talking to each other about their ideas, their lives, and how it feels to work as artists in societies that are often wildly different. Curator Breda Beban, herself an artist who left Croatia for Britain in 1991 at the outbreak of the civil war, developed the concept for imagine art after and hand-picked the individuals taking part. "It's all about inviting artists who live at the hard edge of political change to talk to each other, and explore how that change has impacted on their lives and work," she explains. "For me the most interesting thing is how much we change and how much we are influenced by different geopolitical environments." 
The project's name, Beban says, squares with its aims. "After going through experiences that many of the artists taking part have gone through, living in exile, it's difficult to imagine how art can take place. Yet one of the strongest themes of the project is how the process of making art becomes in itself a survival strategy, how imagining art is absolutely necessary to keep your sanity." From a viewer's perspective, too, the aim is to challenge preconceptions. Beban says: "I really want to give everyone taking part a much broader sense of how contemporary art works. We have artists from Nigeria, Albania, Iraq, coming from very different traditions and working in very different styles. All of them have something fresh to add." The diversity encapsulated by the project is evident not merely in the different biographical experiences each artist has brought to the project, but in the sheer variety of artistic approaches they employ. One of the artists from Serbia Montenegro taking part, Tatjana Strugar, works primarily in video - a recent piece, a self-portrait filmed while giving birth, forms one of the exhibition's most startling and moving experiences. Her artistic partner is Sinisa Savic, a London-based photographer, who contributes a powerful series of images that, in his own words, explore "masculinity and its manifestation in gesture, attire, action and inaction". Elsewhere Awni Sami, an Iraqi living in the far north of the country who creates intricately worked abstract graphics, is being partnered with the 19-year-old Estabrak Al-Ansari, an emigre living in London whose simple, bold photographs focus on disarmingly domestic themes. As well as contributing samples of their work and brief accounts about what lies behind it, each artist has also provided a photograph of what they consider to be their favourite place and a short explanation why. 
The range of locations that appear - from a sandwich bar in the City of London to a park in the middle of Tirana, from photos of studio spaces and street scenes to a favourite painting at the National Gallery - gives some hint of how different experiences have shaped each artist's career. All this contributes, Beban explains, to the project's concern with notions of belonging. "I'm very interested by how we perceive what we call home," she says. "What is that place? Does it change? How real is it?" The idea for imagine art after has its origins in an earlier project launched by producer Julia Farrington from Index Arts, an organisation operating under the umbrella of Index on Censorship, the pioneering magazine set up in 1972 to defend the right to free speech. That project, which took place during Refugee Week in 2003, saw the opening of a digital gallery and free internet cafe for the use of refugees at the Union Chapel arts venue in north London. Building on the idea that the internet could be of particular value to people who were geographically separated yet culturally related, Farrington discussed with Jo Confino, executive editor at the Guardian, how to develop this with refugee artists and create links with their communities of origin. Farrington then asked Beban to act as consultant, and imagine art after was born. Farrington says: "That earlier project sparked a lot of ideas about the way the internet could be used to connect people. There's something very straightforward about this idea, but it's very powerful nonetheless. Finding Breda, who brought her experience as an artist and curator to bear on imagine art after, really galvanised the whole project. "One of the most exciting things, too, has been realising how important the internet already is to many of the artists who are involved. It's a major part of their life." 
Ultimately, Beban and Farrington feel the project will work if it encourages communication over what Farrington calls the "digital divide". "The most important thing is to get them talking," says Beban, "sharing ideas and sparking off each other. That's what imagine art after is really about."
PAULA ZAHN, CNN ANCHOR: Christiane Amanpour is standing by in Bagram to give us a better sense of what's going to happen there as commemorations are just about ready to get under way -- Christiane, good morning.

CHRISTIANE AMANPOUR, CNN CORRESPONDENT: Good morning. And that's exactly right. In about 10 minutes from now, a remembrance ceremony here will be held for those who were killed in that attack in New York and Washington and Pennsylvania one year ago today. You know, there are 7,000 or more U.S. men and women in the military who are continuing this war on terror. It is not quite over yet, although they do have al Qaeda and the Taliban in disarray and on the run. These have been capable of mounting small attacks. There was one in Kabul last week and it killed and wounded scores of people. There was, of course, that assassination attempt, as well, on the life of the president that failed, the president, Hamid Karzai. Now, here they will be remembering the victims of al Qaeda and Taliban. Behind me, they are already gathered in formation. The general in command of all U.S. forces here will address the men and women here and there will be a message from President Bush read out. There will be "Taps." There will be a moment of silence at the exact moment that the first plane went into those World Trade Center towers a year ago. And then this ceremony will be over. In Kabul earlier today, the U.S. Embassy held a small ceremony, a solemn ceremony, in which they unveiled a plaque. It was a plaque that covers a piece of the World Trade Center building that had been buried there many months ago. And on that plaque, a very simple inscription: "Here lie the remains of the World Trade Center and those who perished. We serve the cause that they cannot." So all over Afghanistan, wherever U.S. forces are and coalition forces, remembrances today and a determination and a knowledge that at least here in Afghanistan this war against terrorism continues with an operation under way right now called Champion Strike. 
ZAHN: Thanks so much, Christiane.
Factors influencing 20-hour increments after platelet transfusion. The 20-hour post-transfusion platelet count determines transfusion policy for patients requiring platelet support, and yet factors influencing the 20-hour count have been poorly defined. The clinical factors influencing both the 1- and 20-hour corrected count increment (CCI) were studied in 623 human leukocyte antigen (HLA)-unmatched platelet transfusions in 108 patients. The 1- and 20-hour CCIs were highly correlated (r = 0.67, p < 0.001). On average, the 20-hour CCI was 64 percent of the 1-hour CCI. Multiple linear regression analyses identified splenectomy, bone marrow transplantation, disseminated intravascular coagulation, administration of amphotericin B, palpable spleen, and HLA antibody grade as the major factors influencing the 20-hour post-transfusion CCI. Platelet-specific antibodies, number of concurrent antibiotics, clinical bleeding, and temperature did not significantly influence the 20-hour post-transfusion CCI. The 1-hour CCI was the only significant factor influencing the 20-hour CCI in a regression model containing the 1-hour CCI and the above factors. Thus, the same clinical factors exert a major influence on the CCI at both 1 and 20 hours after platelet transfusion, with no evidence that any factor has more influence at 20 hours after transfusion than at 1 hour.
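The abstract reports CCIs without spelling out the formula. The standard definition of the corrected count increment — the platelet increment per microlitre, multiplied by body surface area in square metres and divided by the number of platelets transfused in multiples of 10^11 — can be sketched as below. The sample counts and dose are hypothetical, chosen so that the 20-hour CCI comes out at 64 percent of the 1-hour value, the study's reported average.

```python
def corrected_count_increment(pre_count, post_count, bsa_m2, platelets_transfused):
    """Corrected count increment (CCI), standard definition.

    pre_count, post_count: platelet counts in platelets per microlitre
    bsa_m2: body surface area in square metres
    platelets_transfused: absolute number of platelets in the dose
    """
    increment = post_count - pre_count
    return increment * bsa_m2 / (platelets_transfused / 1e11)

# Illustrative numbers (not from the study): a 4x10^11 platelet dose
# given to a patient with a body surface area of 1.8 m^2.
cci_1h = corrected_count_increment(10_000, 30_000, 1.8, 4e11)   # 1-hour CCI
cci_20h = corrected_count_increment(10_000, 22_800, 1.8, 4e11)  # 20-hour CCI
```

With these numbers, the 1-hour CCI is 9,000 and the 20-hour CCI 5,760, i.e. 64 percent of the 1-hour value, matching the study's average ratio.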
Physics-based Numerical Modeling for SiC-MOSFET Devices The SiC-MOSFET has emerged as one of the most promising power-electronics devices owing to its excellent all-round performance. Even so, the challenge of electrical and thermal stress still restricts its wider technical application. The internal characteristics of a SiC-MOSFET are closely related to its structural parameters and doping concentrations, together with the physical parameters of its material. To explore the stress limits of the SiC-MOSFET further, a physics-based numerical model of a specific SiC-MOSFET device is established in TCAD (Technology Computer Aided Design) in this work. To verify the accuracy of the model, a simple but effective parameter-calibration procedure is developed that needs only the public datasheet provided by the manufacturer. A short-circuit simulation is also performed to verify the accuracy of the model. With a credible physics-based numerical model of the SiC-MOSFET, it becomes possible to improve its performance and broaden its application scenarios.
#ifndef DTITYPES_H
#define DTITYPES_H

// ITK Data types
#include <itkImage.h>
#include <itkVectorContainer.h>
#include <itkVector.h>
#include <itkCovariantVector.h>
#include <itkDiffusionTensor3D.h>
#include <itkAffineTransform.h>
#include <itkDTITubeSpatialObject.h>
#include <itkGroupSpatialObject.h>
#include <itkRGBPixel.h>
#include <itkVectorImage.h>

// VNL Includes
#include <vnl/vnl_matrix.h>
#include <vnl/vnl_vector_fixed.h>

// Define necessary types for images
typedef double RealType;
typedef double TransformRealType;
typedef unsigned char LabelType;

const unsigned int DIM = 3;

typedef unsigned short ScalarPixelType;
typedef itk::DiffusionTensor3D<double> TensorPixelType;
typedef itk::Vector<double, 3> DeformationPixelType;
typedef itk::CovariantVector<double, 3> GradientPixelType;

typedef itk::VectorImage<ScalarPixelType, DIM> VectorImageType;
typedef itk::Image<TensorPixelType, DIM> TensorImageType;
typedef itk::Image<DeformationPixelType, DIM> DeformationImageType;
typedef itk::Image<GradientPixelType, DIM> GradientImageType;
typedef itk::Image<RealType, DIM> RealImageType;
typedef itk::Image<ScalarPixelType, DIM> IntImageType;
typedef itk::Image<LabelType, DIM> LabelImageType;
typedef itk::Image<itk::RGBPixel<unsigned char>, 3> RGBImageType;

typedef TensorImageType::SizeType ImageSizeType;
typedef TensorImageType::SpacingType ImageSpacingType;

typedef itk::AffineTransform<TransformRealType, 3> AffineTransformType;

typedef vnl_vector_fixed<double, 3> GradientType;
typedef itk::VectorContainer<unsigned int, GradientType> GradientListType;

enum InterpolationType { NearestNeighbor, Linear, Cubic };
enum TensorReorientationType { FiniteStrain, PreservationPrincipalDirection };
enum EigenValueIndex { Lambda1 = 0, Lambda2, Lambda3 };

typedef itk::DTITubeSpatialObject<3> DTITubeType;
typedef DTITubeType::TubePointType DTIPointType;
typedef DTITubeType::PointListType DTIPointListType;

typedef itk::GroupSpatialObject<3> GroupType;
typedef GroupType::ChildrenListType ChildrenListType;

#endif
Insulation is probably not the first thing you think of when you wake up in the morning. To put it bluntly, it’s a bit unsexy: No house guest is going to be bowled over by the mineral wool you’ve installed in the walls; there’s none of the visual heft of a redecorated lounge. But in terms of worthwhile home improvements, insulation can improve a homeowner’s experience in more ways than one. Three main reasons: Comfort, cash, and carbon dioxide. An insulated home retains heat far more effectively, so will help ensure you feel warm and snug and sealed off from the elements when winter strikes. Effective insulation can also save you a great deal of money in the longer term – potentially hundreds of pounds a year. Draught-proofing can be a simple but effective measure too – and DIY-friendly. After all, all you’re doing is, quite literally, papering over the cracks. Self-adhesive foam strips work wonders for covering gaps around windows, door frames and loft hatches – a cheap and easy buy from your local hardware store. Consider a keyhole cover for your front and back door – a small metal plate that stops the wind from whistling in – and shore up your letterbox with a letterbox brush. Cracked walls can be mended with a dollop of cement or hard-setting fillers, while a professional could install a suitable draught-stop into an out-of-service chimney. Remember, when blitzing cracks and crevices, it’s essential not to block any of the intentional ventilation needed to air out your home. Extractor fans, underfloor grilles and trickle vents should all be left undisturbed. Portable draught excluders for the bottoms of doors are cheap, available off-the-shelf and come shaped as tube trains, sausage dogs and any number of other fluffy animals. Despite their simplicity, they work pretty well. Alternatively, pay a professional to draught-proof your whole home in one fell swoop. 
It’s comparatively costly – the Energy Saving Trust estimate a £200 price tag for a typical semi-detached property – but will save on time and effort, and, depending on your skill level, could yield far sturdier results. Simple measures aside, the biggest heat losses will be coming from the within the very fabric of your home. Warm air rises, and estimates suggest that an uninsulated home suffers roughly a quarter of all heat loss through the roof. According to the Energy Saving Trust, loft insulation in an uninsulated, gas-heated semi-detached house costs an average of £300, but can save an average £135 in energy bills annually – paying for itself in two to four years. A pretty good deal when you consider that well-installed loft insulation should last around 40 years. If your loft space is for storage only, you can simplify the process by insulating the floor with strips of mineral wool. DIY-savvy homeowners can attempt this themselves – but remember, this is no IKEA flat-pack. Research the process thoroughly and, if in doubt, send for the specialists. Insulating the roof itself should always be done by a professional, usually with rigid insulation board or spray-on foam. If loft insulation is your home’s snug bobble hat, wall insulation is its cosy winter coat. First of all, you need to know what sort of wall your property has. Most UK homes are either ‘solid wall’ – single slabs of brick or stone – or cavity walls, that leave a space between two layers of concrete or brick. Cavity walls installed in the last 15 or 20 years probably already have some insulation, otherwise both types can benefit. Solid walls insulate extremely poorly and haemorrhage heat and money, but can cost in the thousands to insulate properly. Cavity walls are much more effective insulators, but householders may still benefit from insulation that can be installed at a fraction of the cost. 
Homeowners should not even consider attempting these tasks themselves – call in the professionals and get an assessment up front. Once you’ve ticked off roof and walls, completionists can even insulate their floors. “It’s not done a huge amount because, practically, it’s difficult, as you’ve got to get underneath your property, which isn’t always possible,” explains Robson. “If you live in a park home, it’s easier to do and a really good idea.” Most modern floors have at least a dash of insulation built in during construction, which takes the edge off the morning shock of a cold bathroom floor. Uninsulated water tanks and pipes leak heat too, driving up the energy – and money – required to warm your radiators and shower. A jacket for your water tank costs £15-20, and should come with fitting instructions. Pipes can be protected with foam tubing that you simply slip on, provided you buy the right size. Potential pitfalls come from accessibility – if your waterworks are hard to reach, you may need a professional after all. According to the Energy Saving Trust, DIY insulation of a water tank should pay back its price in around three months.
/**
 * Copyright 2008 - 2015 The Loon Game Engine Authors
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not
 * use this file except in compliance with the License. You may obtain a copy of
 * the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations under
 * the License.
 *
 * @project loon
 * @author cping
 * @email:<EMAIL>
 * @version 0.5
 */
package loon.html5.gwt;

import loon.font.Font;

class GWTFont {

	public static final Font DEFAULT = new Font("sans-serif", Font.Style.PLAIN, 12);

	public static String toCSS(Font font) {
		String name = font.name;
		if (name == null) {
			name = "monospace";
		}
		String familyName = name.toLowerCase();
		float size = font.size;
		// On some browsers (mainly mobile) the requested font may be
		// unavailable; include the font's CSS yourself if needed.
		if (Loon.self != null && !Loon.self.isDesktop()) {
			if (familyName.equals("serif") || familyName.equals("timesroman")) {
				name = "serif";
			} else if (familyName.equals("sansserif") || familyName.equals("helvetica")) {
				name = "sans-serif";
			} else if (familyName.equals("monospaced") || familyName.equals("courier")
					|| familyName.equals("dialog") || familyName.equals("黑体")) {
				name = "monospace";
			}
		}
		if (!name.startsWith("\"") && name.contains(" ")) {
			name = '"' + name + '"';
		}
		String style = "";
		switch (font.style) {
		case BOLD:
			style = "bold";
			break;
		case ITALIC:
			style = "italic";
			break;
		case BOLD_ITALIC:
			style = "bold italic";
			break;
		default:
			break;
		}
		return style + " " + size + "px " + name;
	}
}
/**
 * Purpose: Updates each cell in the 2D array of cell objects by first
 * promoting every cell to its next state and then applying the game rules
 * via updateCell, which sets the string value by cell location.
 *
 * Assumptions: This method should only be called on instances of the
 * simulation subclass; calling it on other objects will fail.
 *
 * Return: N/A
 */
public void updateGrid() {
	takenSpots = new ArrayList<>();
	antsMoved = new ArrayList<>();
	if (antCount < maxAnt) {
		antSpawn();
	}
	for (int i = 0; i < getRows(); i++) {
		for (int j = 0; j < getCols(); j++) {
			antGrid[i][j] = antGrid[i][j].getNextState();
			if (antGrid[i][j].getName().equals("tl_ant") || antGrid[i][j].getName().equals("tr_ant")
					|| antGrid[i][j].getName().equals("bl_ant") || antGrid[i][j].getName().equals("br_ant")) {
				System.out.println("ANT: " + i + " " + j);
				// Debug output: list the cell's four toroidal neighbors.
				ArrayList<AntCell> list = antGrid[i][j].get4NeighborsTorroidal(
						antGrid[i][j].x, antGrid[i][j].y, antGrid, new ArrayList<>());
				for (AntCell l : list) {
					System.out.println("NEIGHBOR: " + l.x + " " + l.y);
				}
			}
			antGrid[i][j].setNextState(null);
		}
	}
	for (int i = 0; i < getRows(); i++) {
		for (int j = 0; j < getCols(); j++) {
			updateCell(antGrid[i][j]);
		}
	}
	updateStringArray();
}
In a standard bicycle or other crank-powered vehicle, a crank arm is rotated to provide the force required to propel the vehicle. The most common example is a bicycle crank arm connected to a chainring, which rotates a chain, which, in turn, rotates a cog operatively connected to a drive wheel. The length of crank arms can be increased to allow the rider to transfer more torque through the crank arms for a given amount of force exerted by taking advantage of the longer lever provided by a longer crank arm. In fact, crank arms for many bicycles can be purchased in lengths varying from 165 millimeters up to and past 185 millimeters, generally in 2.5 millimeter increments. Elite cyclists will, on occasion, change the crank arm length of their bicycles to provide for more effective power transfer in events which are either hilly or flat, or which require a steady effort, such as a time trial event. Longer crank arms do, however, have drawbacks. The average cyclist is unlikely to change their crank arms to obtain a different effective crank length due to the difficulties associated with disassembling portions of the drivetrain. In addition, the constant use of longer crank arms has been associated with increased rates of injury among cyclists because of the corresponding larger range of motion induced by longer crank arms in the cyclists' knees and other joints. Attempts have been made to provide devices which vary the effective crank length during the pedal stroke to obtain an increase in the effective crank length during the power phase of the stroke. Most of the attempts have focused on converting the rotating motion of the crank arm and attached pedal to linear motion to provide the increase in effective crank length. One example of such an attempt is disclosed in U.S. Pat. No. 4,625,580 to Burt. In that device, the pedal incorporates a floating body and cam mechanism which moves the body of the pedal along a linear path during rotation of the pedal. 
The linear motion is defined by a combination of slots and pins, with the pins riding in the slots to guide the body of the pedal during rotation. That linear motion of the pedal body provides an increase in effective crank length in the Burt device. U.S. Pat. No. 4,882,945 to Trevizo discloses a device for transferring the rotary motion of the pedal relative to the crank to linear motion in which the effective crank length is increased. During rotation, portions of the Trevizo crank arms telescope to provide the increased effective crank length. U.S. Pat. Nos. 4,446,754 and 4,519,271 to Chattin also disclose a telescoping crank arm and cam device which provide increased effective crank lengths through the use of telescoping crank arms during rotation. U.S. Pat. No. 1,714,134 to Poyser discloses a crank arm including a longitudinal slot formed in its end. The slot receives a set of rotating elements to which the axle of the pedal is attached. As the crank arm is rotated, the pedal moves along the longitudinal slot to increase the effective crank arm length. The effective crank length provided by the Poyser device is at its maximum at the 3 o'clock position and a minimum at the 6 o'clock position. All of the attempts at providing an increase in effective crank length described above incorporate linear or longitudinal motion into the system. The use of linear motion to increase effective crank length is problematic because it results in increased friction. Wear associated with that friction will eventually lead to premature failure of those devices. Furthermore, devices such as the Poyser device provide a maximum crank length at the 3 o'clock position which does not take into account the advantages associated with maximizing the effective crank length proximate the 12 o'clock position. The device disclosed in U.S. Pat. No. 626,045 to Behan describes a device used in one attempt to increase effective crank length using purely rotary motion. 
As described there, the Behan device includes an attachment for a crank arm which provides an outer stationary housing in which a pair of rotating inner disks are housed. The pedal is attached to the rotating disks and, therefore, rotates during rotation of the crank arm. Because the inner disks to which the pedal is attached are freely rotating during use, the Behan device provides a minimum effective crank length at the 12 o'clock position and a maximum effective crank length at the 6 o'clock position. That particular combination is the least desirable for maximizing power transfer between the rider and crank arms because the effective crank length is minimized for a majority of the power phase of the pedal stroke. As a result, a need exists for a device which can provide an increase in effective crank length through purely rotary motion and which provides a maximum effective crank length proximate the 12 o'clock position and a minimum effective crank length proximate the 6 o'clock position to maximize power transfer through the crank arms.
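The mechanics discussed across this background reduce to two pieces of arithmetic: torque scales linearly with crank length (torque = force × lever length), and the value of a variable-length crank depends on where in the stroke the length peaks. The sketch below illustrates both with invented numbers; the sinusoidal length profile is a toy model of ours, not an equation taken from any cited patent.

```python
import math

def crank_torque(force_n, crank_length_mm):
    """Torque (N*m) from a pedal force applied at the end of the crank."""
    return force_n * (crank_length_mm / 1000.0)

# The same 300 N pedal force on the two crank lengths mentioned above:
short, long_ = crank_torque(300.0, 165.0), crank_torque(300.0, 185.0)
print(round(long_ / short - 1.0, 3))  # about 12% more torque from 185 mm cranks

def effective_length(theta_deg, base_mm=175.0, amp_mm=10.0, peak_deg=0.0):
    """Toy variable-length crank: length oscillates around base_mm, peaking
    at crank angle peak_deg (0 = 12 o'clock, measured clockwise)."""
    return base_mm + amp_mm * math.cos(math.radians(theta_deg - peak_deg))

# Peak at 12 o'clock: maximum leverage entering the power phase,
# minimum at 6 o'clock -- the combination the text argues for.
assert effective_length(0.0) > effective_length(180.0)

# A Behan-style peak at 6 o'clock gives the opposite, least desirable profile.
assert effective_length(0.0, peak_deg=180.0) < effective_length(180.0, peak_deg=180.0)
```

The toy model only shows why the placement of the maximum matters: shifting the peak away from 12 o'clock gives up leverage exactly where the power phase begins.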
package dev.sheldan.abstracto.statistic.config; import dev.sheldan.abstracto.core.config.FeatureDefinition; /** * Features available in the statistic module. */ public enum StatisticFeatureDefinition implements FeatureDefinition { /** * Feature responsible to track the emotes used in a message on a server. */ EMOTE_TRACKING("emoteTracking"); private final String key; StatisticFeatureDefinition(String key) { this.key = key; } @Override public String getKey() { return this.key; } }
package com.datx02_18_35.android;

import android.content.Intent;
import android.graphics.Point;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.support.v7.widget.LinearLayoutManager;
import android.support.v7.widget.RecyclerView;
import android.util.Log;
import android.view.Display;
import android.view.MotionEvent;
import android.view.View;
import android.widget.TextView;

import com.datx02_18_35.controller.Controller;
import com.datx02_18_35.controller.dispatch.ActionConsumer;
import com.datx02_18_35.controller.dispatch.actions.Action;
import com.datx02_18_35.controller.dispatch.actions.controllerAction.RefreshLevelsAction;
import com.datx02_18_35.controller.dispatch.actions.viewActions.RequestLevelsAction;
import com.datx02_18_35.controller.dispatch.actions.viewActions.RequestStartNewSessionAction;
import com.datx02_18_35.model.GameException;
import com.datx02_18_35.model.level.Level;
import com.datx02_18_35.model.level.LevelCategory;

import game.logic_game.R;

public class Levels extends AppCompatActivity implements View.OnClickListener {

    // callback
    LevelsCallback callback;

    // views
    RecyclerView recyclerView;
    LinearLayoutManager linearLayoutManager;
    LevelsAdapter adapter;

    boolean levelbeenstarted = false;
    public int categoryIndex = 0;
    public int categorySize = -1;

    private float xDown, xUp;
    private static final float percentMovedIfSwipe = 0.25f;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_levels);
        recyclerView = (RecyclerView) findViewById(R.id.levels_recyclerview);
        linearLayoutManager = new LinearLayoutManager(getApplicationContext());
        recyclerView.setLayoutManager(linearLayoutManager);
        adapter = new LevelsAdapter(this);
        recyclerView.setAdapter(adapter);
        // right and left arrow listeners
        findViewById(R.id.level_left_arrow).setOnClickListener(this);
        findViewById(R.id.level_right_arrow).setOnClickListener(this);
    }

    // All touch events pass through here.
    @Override
    public boolean dispatchTouchEvent(MotionEvent e) {
        switch (e.getAction()) {
            case MotionEvent.ACTION_DOWN:
                xDown = e.getRawX();
                break;
            case MotionEvent.ACTION_UP:
                xUp = e.getRawX();
                // Get the screen width in pixels.
                Display display = getWindowManager().getDefaultDisplay();
                Point size = new Point();
                display.getSize(size);
                int x = size.x;
                // Treat the gesture as a swipe if it covered enough of the screen.
                if (Math.abs(xDown - xUp) > x * percentMovedIfSwipe) {
                    if (xDown - xUp < 0) {
                        // Rightward swipe: previous category.
                        findViewById(R.id.level_left_arrow).performClick();
                    } else {
                        // Leftward swipe: next category.
                        findViewById(R.id.level_right_arrow).performClick();
                    }
                }
                break;
        }
        // Don't consume the event.
        return super.dispatchTouchEvent(e);
    }

    @Override
    protected void onResume() {
        super.onResume();
        levelbeenstarted = false;
        callback = new LevelsCallback();
        try {
            Controller.getSingleton().handleAction(new RequestLevelsAction(callback));
        } catch (GameException e) {
            e.printStackTrace();
        }
    }

    public void startLevel(Level level) {
        try {
            if (!levelbeenstarted) {
                Controller.getSingleton().handleAction(new RequestStartNewSessionAction(callback, level));
                Intent intent = new Intent(this, GameBoard.class); // create intent
                startActivity(intent); // start intent
                levelbeenstarted = true;
            }
        } catch (GameException e) {
            e.printStackTrace();
        }
    }

    private void refreshLevels() {
        try {
            Controller.getSingleton().handleAction(new RequestLevelsAction(callback));
        } catch (GameException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onClick(View v) {
        if (categorySize == -1) {
            // Log the error instead of throwing and immediately catching an exception.
            Log.e("Levels", "onClick: category size not initialized");
            return;
        }
        switch (v.getId()) {
            case R.id.level_left_arrow:
                if (categoryIndex > 0 && categoryIndex < categorySize) {
                    categoryIndex -= 1;
                    refreshLevels();
                }
                break;
            case R.id.level_right_arrow:
                if (categoryIndex + 1 < categorySize) {
                    categoryIndex += 1;
                    refreshLevels();
                }
                break;
        }
    }
    public class LevelsCallback extends ActionConsumer {
        @Override
        public void handleAction(Action action) {
            if (action instanceof RefreshLevelsAction) {
                RefreshLevelsAction refreshLevelsAction = (RefreshLevelsAction) action;
                categorySize = refreshLevelsAction.levelCollection.getCategories().size();
                LevelCategory levelCategory = refreshLevelsAction.levelCollection.getCategories().get(categoryIndex);
                ((TextView) findViewById(R.id.level_top)).setText(levelCategory.getName());
                adapter.updateLevels(levelCategory, refreshLevelsAction.levelProgressionMap,
                        refreshLevelsAction.categoryProgressionMap.get(levelCategory));
                // Hide an arrow when there is no category in that direction.
                findViewById(R.id.level_left_arrow).setVisibility(categoryIndex == 0 ? View.GONE : View.VISIBLE);
                findViewById(R.id.level_right_arrow).setVisibility(categoryIndex == categorySize - 1 ? View.GONE : View.VISIBLE);
            }
        }
    }

    // If the app sits in the background long enough, the system may trim its
    // memory and the app would crash; instead, release resources by finishing
    // the activity in a controlled way.
    @Override
    public void onTrimMemory(int level) {
        super.onTrimMemory(level);
        if (TRIM_MEMORY_COMPLETE == level) {
            finish();
        }
    }
}
package fr.vidal.oss.jaxb.atom.core; import org.junit.Test; import org.xml.sax.InputSource; import javax.xml.bind.JAXBContext; import javax.xml.bind.JAXBException; import javax.xml.bind.Marshaller; import javax.xml.bind.Unmarshaller; import java.io.IOException; import java.io.StringReader; import java.io.StringWriter; import java.util.TimeZone; import static java.util.TimeZone.getTimeZone; import static org.assertj.core.api.Assertions.assertThat; public class ExtensionElementAdapterTest { private static final Namespace ANY_NAMESPACE = Namespace.builder("http://foo.bar.net/-/any/").withPrefix("any").build(); private static final Attribute XMLNS_ATTRIBUTE = Attribute.builder("xmlns", "http://www.w3.org/2005/Atom").withNamespace(Namespace.builder("http://www.w3.org/2000/xmlns/").build()).build(); private static final Namespace XMLNS_NAMESPACE = Namespace.builder("http://www.w3.org/2000/xmlns/").withPrefix("xmlns").build(); @Test public void marshal_single_simple_element() throws Exception { ExtensionElement element = element("element", "with value"); String xml = marshalElement(element); assertThat(xml).isXmlEqualTo("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" + "<root xmlns=\"http://www.w3.org/2005/Atom\">\n" + " <any:element xmlns:any=\"http://foo.bar.net/-/any/\">with value</any:element>\n" + "</root>"); } @Test public void marshal_structured_element_with_single_child() throws Exception { ExtensionElement element = element("wrapper", element("element", "with value") ); String xml = marshalElement(element); assertThat(xml).isXmlEqualTo("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" + "<root xmlns=\"http://www.w3.org/2005/Atom\">\n" + " <any:wrapper xmlns:any=\"http://foo.bar.net/-/any/\">\n" + " <any:element>with value</any:element>\n" + " </any:wrapper>\n" + "</root>"); } @Test public void marshal_structured_element_with_multiple_children() throws Exception { ExtensionElement element = ExtensionElements .structuredElement("wrapper", element("element", "with 
value")) .addChild(element("sub-wrapper", element("second-level-child", "value"))) .withNamespace(ANY_NAMESPACE).build(); String xml = marshalElement(element); assertThat(xml).isXmlEqualTo("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" + "<root xmlns=\"http://www.w3.org/2005/Atom\">\n" + " <any:wrapper xmlns:any=\"http://foo.bar.net/-/any/\">\n" + " <any:element>with value</any:element>\n" + " <any:sub-wrapper>\n" + " <any:second-level-child>value</any:second-level-child>\n" + " </any:sub-wrapper>\n" + " </any:wrapper>\n" + "</root>"); } @Test public void unmarshal_single_simple_element() throws Exception { String xml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" + "<root xmlns=\"http://www.w3.org/2005/Atom\">\n" + " <any:element xmlns:any=\"http://foo.bar.net/-/any/\">with value</any:element>\n" + "</root>"; ExtensionElement result = unmarshalElement(xml); assertThat(result).isEqualTo(ExtensionElements.simpleElement("element", "with value") .withNamespace(ANY_NAMESPACE) .addAttribute(XMLNS_ATTRIBUTE) .addAttribute(Attribute.builder("any", "http://foo.bar.net/-/any/") .withNamespace(XMLNS_NAMESPACE) .build()) .build()); } @Test public void unmarshal_structured_element_with_single_child() throws Exception { String xml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" + "<root xmlns=\"http://www.w3.org/2005/Atom\">\n" + " <any:wrapper xmlns:any=\"http://foo.bar.net/-/any/\">\n" + " <any:element>with value</any:element>\n" + " </any:wrapper>\n" + "</root>"; ExtensionElement result = unmarshalElement(xml); assertThat(result).isEqualTo(ExtensionElements.structuredElement("wrapper", element("element", "with value")) .withNamespace(ANY_NAMESPACE) .addAttribute(XMLNS_ATTRIBUTE) .addAttribute(Attribute.builder("any", "http://foo.bar.net/-/any/") .withNamespace(XMLNS_NAMESPACE) .build()) .build()); } @Test public void unmarshal_structured_element_with_multiple_children() throws Exception { String xml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" + "<root 
xmlns=\"http://www.w3.org/2005/Atom\">\n" +
            " <any:wrapper xmlns:any=\"http://foo.bar.net/-/any/\">\n" +
            " <any:element>with value</any:element>\n" +
            " <any:sub-wrapper>\n" +
            " <any:second-level-child>value</any:second-level-child>\n" +
            " </any:sub-wrapper>\n" +
            " </any:wrapper>\n" +
            "</root>";

        ExtensionElement result = unmarshalElement(xml);

        assertThat(result).isEqualTo(ExtensionElements.structuredElement("wrapper", element("element", "with value"))
            .withNamespace(ANY_NAMESPACE)
            .addAttribute(XMLNS_ATTRIBUTE)
            .addAttribute(Attribute.builder("any", "http://foo.bar.net/-/any/")
                .withNamespace(XMLNS_NAMESPACE)
                .build())
            .addChild(element("sub-wrapper", element("second-level-child", "value")))
            .build());
    }

    private String marshalElement(ExtensionElement element) throws IOException, JAXBException {
        try (StringWriter writer = new StringWriter()) {
            Marshaller marshaller = marshaller();
            marshaller.marshal(new Root(element), writer);
            return writer.toString();
        }
    }

    private ExtensionElement unmarshalElement(String xml) throws JAXBException {
        Unmarshaller unmarshaller = context().createUnmarshaller();
        Root root = (Root) unmarshaller.unmarshal(new InputSource(new StringReader(xml)));
        return root.element;
    }

    private Marshaller marshaller() throws JAXBException {
        TimeZone.setDefault(getTimeZone("Europe/Paris"));
        Marshaller marshaller = context().createMarshaller();
        // Access the JAXB constants statically rather than through the instance.
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        marshaller.setProperty(Marshaller.JAXB_ENCODING, "UTF-8");
        return marshaller;
    }

    private JAXBContext context() throws JAXBException {
        return JAXBContext.newInstance(
            ExtensionElement.class, StructuredElement.class, SimpleElement.class, Root.class);
    }

    private ExtensionElement element(String tag, String value) {
        return ExtensionElements.simpleElement(tag, value).withNamespace(ANY_NAMESPACE).build();
    }

    private ExtensionElement element(String tag, ExtensionElement value) {
        return ExtensionElements.structuredElement(tag, value).withNamespace(ANY_NAMESPACE).build();
    }
}
// Connections accepted by a server
class DCT_ServerConnection : public DCT_BaseSocket {
public:
	DCT_ServerConnection();
	virtual ~DCT_ServerConnection();
};
// PrivateFrameworks/SiriTasks.framework/STAlarmShowAlarmAndPerformActionRequest.h
/* Generated by RuntimeBrowser
   Image: /System/Library/PrivateFrameworks/SiriTasks.framework/SiriTasks
 */

@interface STAlarmShowAlarmAndPerformActionRequest : AFSiriRequest {
    STAlarmAction * _action;
}

+ (bool)supportsSecureCoding;

- (void).cxx_destruct;
- (id)_initWithAction:(id)arg1;
- (id)action;
- (void)encodeWithCoder:(id)arg1;
- (id)initWithCoder:(id)arg1;

@end
BLUe's Letters

With love

The background, use, design, and coding of BLUe's Letters format are discussed. The purpose is to format a letter, merge it with address(es) from a database, and typeset it all with the appropriate background, such as a logo and the like, completely within TEX. Separate labels can be obtained too, either specified by name or searched for by pattern.
import java.io.*;
import java.util.*;
import java.text.*;
import java.math.*;
import java.util.regex.*;

public class Solution {

    public static class MyQueue<S> {
        Stack<S> stack1 = new Stack<S>();
        Stack<S> stack2 = new Stack<S>();

        public void enqueue(S value) {
            stack1.push(value);
        }

        public S peek() {
            shiftStacks();
            return stack2.peek();
        }

        public S dequeue() {
            shiftStacks();
            return stack2.pop();
        }

        // Move elements to stack2 only when it is empty, reversing the order
        // so the oldest element ends up on top.
        private void shiftStacks() {
            if (stack2.isEmpty()) {
                while (!stack1.isEmpty()) {
                    stack2.push(stack1.pop());
                }
            }
        }
    }

    public static void main(String[] args) {
        MyQueue<Integer> queue = new MyQueue<Integer>();
        Scanner scan = new Scanner(System.in);
        int n = scan.nextInt();
        for (int i = 0; i < n; i++) {
            int operation = scan.nextInt();
            if (operation == 1) {
                queue.enqueue(scan.nextInt());
            } else if (operation == 2) {
                queue.dequeue();
            } else if (operation == 3) {
                System.out.println(queue.peek());
            }
        }
        scan.close();
    }
}
SRC participant, Onyi Ozoma, presenting his research at the SRC Symposium. During the summer, five undergraduates have the opportunity to participate in the Summer Research College (SRC). The Summer Research College is designed to foster close intellectual exchange by engaging students in research with a faculty member on a new or ongoing research project. The program is a unique opportunity for undergraduate students from diverse disciplines to undertake research with a faculty mentor while being paid for their work. Participants will work directly with a faculty mentor for ten weeks of full-time research and receive a $7,500 stipend. Participants must be current undergraduates at Stanford. Co-term students and seniors are eligible only if the bachelor’s degree will not be conferred before the end of the research appointment. Students will be expected to work 40 hours per week during the program, which will run from Monday, June 24th through Friday, August 30th, 2019. Students and faculty will present their collaborative research in a symposium at the end of the program, which you are expected to attend. STS does not offer course credit for Summer Research College, and you are only eligible to receive the full summer stipend. Students planning to take summer courses may not enroll in courses that exceed 5 credits and must get prior approval from the faculty member with whom they are working. Students who apply for on-campus summer housing are responsible for room, board, house dues, and other academic expenses. Students are responsible for paying their university summer bill, which will include any other academic expenses incurred. 
Students may review the summer room and board rates on the Housing Assignment Services website. Applications for SRC 2019 are due Friday, February 22, 2019 at 3 pm. SRC will take place over the Summer of 2019 from Monday, June 24th through Friday, August 30th, 2019. View the STS Summer Research College Project Descriptions to identify your top topic choice(s). Once you've found your topic(s) of interest, fill out the STS Summer Research College Preference Form to express your preference regarding faculty mentors and research projects. Write a cover letter for each project you are applying for following the STS Summer Research College Coverletter Guidelines. Email the Preference Form, cover letters, resume, and unofficial transcript in PDF format to Emily Van Poetsch ([email protected]). IMPORTANT NOTE: If your application is approved, you may be contacted to set up an interview. The program is only accepting applications via email. Application deadline is Friday, February 22, 2019 at 3 pm. Questions? Please contact Emily Van Poetsch.
class LengthEntry: """An entry in a (comma-separated) len attribute""" NULL_TERMINATED_STRING = 'null-terminated' MATH_STRING = 'latexmath:' def __init__(self, val): self.full_reference = val self.other_param_name = None self.null_terminated = False self.number = None self.math = None self.param_ref_parts = None if val == LengthEntry.NULL_TERMINATED_STRING: self.null_terminated = True return if val.startswith(LengthEntry.MATH_STRING): self.math = val.replace(LengthEntry.MATH_STRING, '')[1:-1] return if val.isdigit(): self.number = int(val) return # Must be another param name. self.param_ref_parts = _split_param_ref(val) self.other_param_name = self.param_ref_parts[0] def __str__(self): return self.full_reference def get_human_readable(self, make_param_name=None): assert(self.other_param_name) return _human_readable_deref(self.full_reference, make_param_name=make_param_name) def __repr__(self): "Formats an object for repr(), debugger display, etc." return 'spec_tools.attributes.LengthEntry("{}")'.format(self.full_reference) @staticmethod def parse_len_from_param(param): """Get a list of LengthEntry.""" len_str = param.get('len') if len_str is None: return None return [LengthEntry(elt) for elt in len_str.split(',')]
Use of Remote Sensing Data to Identify Air Pollution Signatures in India

Air quality has a major impact on a country's socio-economic position, and identifying major air pollution sources is at the heart of tackling the issue. Acquiring spatially and temporally distributed air quality data across a country as varied as India has been a challenge for such analysis. The launch of the Sentinel-5P satellite has enabled the observation of a wider variety of air pollutants than measured before, at a global scale and on a daily basis. In this chapter, spatio-temporal multi-pollutant data retrieved from the Sentinel-5P satellite is used to cluster states as well as districts in India, and the average monthly pollution signatures and trends depicted by each of the clusters are derived and presented. The clustering signatures can be used to identify states and districts based on the types of pollutants emitted by various pollution sources.

Introduction

Air pollution is one of the major health hazards in a developing country such as India. Hence, it is necessary to study the composition of the air over the districts of the country to understand how to tackle pollutants at an individual level. Zheng et al. have shown that the concentration of particulate matter less than 2.5 micrometers in diameter (PM2.5) has a direct positive correlation with the number of cases of lung diseases, such as asthma, in patients. Other pollutants which directly affect the human respiratory system are nitrogen dioxide, sulphur dioxide, formaldehyde and tropospheric ozone. High concentrations of carbon monoxide can cause dizziness, confusion, severe brain damage or even death. Methane is another important gas which contributes to the greenhouse effect, and it is 80 times more harmful than CO2 while it persists in the air. 
Hence, there is a dire need to equip policy makers and policy enforcers with data-driven knowledge of major pollutant sources and their geolocation, and to increase awareness of the ill effects of these pollutants among the general public to ensure the well-being of society. Air-pollutant sources include road traffic, industries such as brick kilns, and portable power sources such as diesel generators. The literature details currently used methods for identifying pollution sources. These methods can be expensive and may fail to detect all sources, as sources may be spread over geographically wide areas. Recent efforts to identify pollution sources have involved the use of low-cost sensors. However, such methods carry the additional overhead of sensor installation and maintenance at each site. The aim of this chapter is to present clustering methods on air pollution data derived from coarser-granularity satellite data and to identify the sources behind the pollution in each cluster. Grouping of the various states and districts in India has been done based on the pollution signatures emitted from different regions. It is cost-effective to use satellite data since it does not involve additional installation overheads and provides a more holistic, wide-area picture. Such an approach can also provide insight for selecting sites for low-cost sensor installation where more accurate and localised studies are required. The outcome will aid the government in reforming policies to help alleviate pollution levels effectively. Once it is determined which sources are responsible for emitting a particular pollutant, government bodies can take the necessary steps to reduce that pollutant at the specific geolocation by placing restrictions targeting only the relevant sources. 
This will help the government efficiently formulate targeted and more effective strategies to combat individual pollutant levels if there is an alarming spike in the level of a specific pollutant. The rest of the chapter is arranged as follows: section 2 touches upon the most notable work done in the area to date; section 3 details the method employed to collect the data used in this study; section 4 explains how the data was prepared and preprocessed for further analysis; section 5 presents the clustering algorithms employed along with a discussion of the results achieved; and section 6 concludes with brief directions for taking the research forward.

Literature Review

Air pollution monitoring has been a major challenge faced by the Government of India, especially considering that 26 of the 50 most polluted cities in the world are located in India as of 2019. PM2.5, particulate matter of aerodynamic diameter less than 2.5 μm, was found to damage the air passages, cause cardiovascular impairments, increase the likelihood of diabetes mellitus in humans and cause adverse effects in infancy. Thus, with the advent of low-cost sensors, it has become possible to evaluate the air quality at a given location by taking into consideration only the PM2.5 levels. But there are other pollutants as well that can cause adverse health effects, such as NO2, SO2 and CO, which can be measured by a high-quality air pollution monitoring system, but at a higher installation and maintenance cost. As of 2020, there are only 804 pollution monitoring stations placed across 344 cities in India as part of the National Air Quality Monitoring Program (NAMP). This means that on average there is only one pollution monitoring station available per 1.6 million people in the country, with the density varying from state to state and the North Eastern states having the fewest stations. 
In contrast, there are about 700 Air Quality Monitoring Areas in the UK, which translates to about one site per 100,000 individuals. Even though the NAMP has been running since 1984, data from the sensors has been publicly available only since 2016. Stations covering rural areas are very sparse, and even the data collected through the monitoring stations is patchy and prone to errors, as some of it is manually collected and uploaded. There have been attempts to perform spatial interpolation of ground-based sensor readings to fill the gaps in areas where pollution monitoring sites are not present. Various techniques have been employed to create these pollution maps, such as the Kriging algorithm and Inverse Distance Weighting; more recently, Artificial Neural Networks (ANNs) and Long Short-Term Memory (LSTM) networks have been used to increase the accuracy of these interpolations. But these models have various drawbacks: Inverse Distance Weighting cannot estimate values that fall outside the range of the training data, plain neural networks do not consider spatio-temporal associations, and LSTMs require a large set of tagged historical data and long training times. Thus the lack of finer-resolution data points makes it hard to use spatial interpolation techniques to monitor air quality effectively in India. The alternative approach is to use remote sensing data to monitor the air quality over a region. Satellites measure the particulate matter present in the atmosphere using spectroscopic retrieval methods. A significant positive correlation of 0.96 has been observed between the Aerosol Optical Thickness (AOT) measured by the MODIS satellite and ground-based PM2.5 measurements. This gives a wider coverage of pollution measurements than ground-based sensors, but at a lower spatial resolution, meaning the data is not adequate for pinpointing the source of the pollution.
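Inverse Distance Weighting, mentioned above as one of the interpolation techniques, can be sketched in a few lines of Python. The station layout and PM2.5 values below are invented for illustration; note how the estimate can never leave the range of the station values, the drawback noted above.

```python
import math

def idw_interpolate(stations, query, power=2):
    """Estimate a value at `query` as the inverse-distance-weighted
    mean of known station values. `stations` is a list of
    ((x, y), value) pairs."""
    num, den = 0.0, 0.0
    for (x, y), value in stations:
        d = math.hypot(query[0] - x, query[1] - y)
        if d == 0:               # query coincides with a station
            return value
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Hypothetical PM2.5 readings (ug/m3) at three monitoring sites.
stations = [((0.0, 0.0), 40.0), ((10.0, 0.0), 80.0), ((0.0, 10.0), 60.0)]
estimate = idw_interpolate(stations, (5.0, 5.0))
```

Since the estimate is a weighted mean of the station values, it is always bounded between the minimum and maximum observed readings, which is exactly why IDW cannot extrapolate pollution spikes outside the training range.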
Spatial scaling techniques have been employed to enhance the resolution of the AOT product from 10 km to 1 km, which helped gain finer insight. Temporal AOT data from the Moderate Resolution Imaging Spectroradiometer (MODIS) has been used to perform back-trajectory analysis, by which the transport of particulate matter across borders can be traced and the source of the pollutant determined. The Global Ozone Monitoring Experiment (GOME), launched in 1995, was able to measure tropospheric ozone and NO2 on a global scale for the first time, but it had a very poor spatial resolution of 80 x 40 km² per pixel. Moreover, MODIS has a 1-2 day temporal resolution and GOME a monthly one, which means data is not available as frequently as the 4-hour or 8-hour intervals provided by ground-based sensors. The launch of Sentinel-5P by the European Space Agency (ESA) in October 2017 brought an increase in spectral range as well as spatial resolution. The on-board TROPOMI sensor can measure O3, NO2, SO2, CH4, CO, HCHO and the aerosol index (AER AI) at a spatial resolution about 6 times better than GOME's, and it also improves sensitivity by an order of magnitude. Sentinel-5P has a temporal resolution of 1 day, meaning it covers the entire surface of the earth once every day. Thus, there is now a wider spectrum of pollutants being measured at very good spatial and temporal resolution, though not at as fine a granularity as a ground-based sensor. Since Sentinel-5P has a wider spectral range and a finer spatial resolution than previous satellites, it helps study the air quality over an area in much finer detail and is the preferred data source for the experiments carried out in this chapter. Studies have been conducted to analyse the NO2 pollutant in particular, such as the one by Kaplan et al., which correlates it with statistical indicators such as population density.
In this study, multiple pollutants gathered from remote sensing data have been taken into consideration and used to perform a clustering of Indian states and districts based on their pollution signatures.

Data Collection

This section provides a brief description of how the remote sensing data was obtained. The images were retrieved using Google Earth Engine's Level-3 products for Sentinel-5P. The method by which the Level-2 products released by the ESA are processed by Earth Engine is described in the next section. A yearly average of the Level-3 NO2, SO2, CO, AER AI, O3 and HCHO products was taken over the latitudinal and longitudinal extents of India from January 2019 to December 2020, and was then processed for further analysis. The following subsections describe the various Level-2 products released by the ESA, explain some of their retrieval methods, and note their significance in terms of their contribution to air pollution.

Nitrogen Dioxide (NO2)

Sentinel-5P has two sub-products for NO2, namely the tropospheric column and the total column. In this study the tropospheric NO2 column, which is the NO2 between the surface of the earth and the tropopause, is used, as it plays a major role in determining the level of photochemical ozone. It must be noted that TROPOMI underestimates the NO2 level compared to ground-based sensors, but the correlation coefficient was found to be 0.84, which with appropriate calibration makes the product accurate enough to be used for analysis. The data measures trace gas concentrations in mol/m².

Sulphur Dioxide (SO2)

The sources of sulphur dioxide pollution in the atmosphere are both natural and man-made. The majority of the pollution (70%) arises from coal power plants, smelting industries and mines. Apart from incurring both long-term and short-term effects on climate, it affects vegetation and water quality when it washes down as acid rain.
Aerosol UV Index (AER AI)

This product is calculated from the spectral contrast in the ultraviolet range between the 354 nm and 388 nm wavelengths. It is a long-established air quality index and is useful in tracking the plumes released by dust, biomass burning and volcanic ash.

Carbon Monoxide (CO)

Carbon monoxide is an important atmospheric trace gas for the understanding of tropospheric chemistry, and in certain urban areas it is a major atmospheric pollutant. In the 2.3 µm spectral range of the shortwave infrared (SWIR) part of the solar spectrum, TROPOMI clear-sky observations provide CO total columns with sensitivity to the tropospheric boundary layer. This data has been validated against the TCCON and NDACC ground-based networks and the MOPITT satellite, with a resulting bias of less than 10%.

Formaldehyde (HCHO)

Formaldehyde is an intermediate gas in almost all oxidation chains of non-methane volatile organic compounds (NMVOC), leading eventually to CO2. HCHO satellite observations are used in combination with tropospheric chemistry transport models to constrain NMVOC emission inventories in so-called top-down inversion approaches. This data has a mean bias of 50% with respect to ground-based sensors and other satellites such as GOME-2 and OMI, and is measured in mol/cm².

Ozone (O3)

Ozone in the tropical troposphere plays various important roles. The intense UV radiation and high humidity in the tropics stimulate the formation of the hydroxyl radical (OH) through the photolysis of O3. The O3 tropospheric column gives the measurement of tropospheric ozone between the surface and the 270 hPa pressure level. It is based on the convective cloud differential (CCD) method.

Data Preprocessing

Earth Engine uses the Level-2 products from the ESA and filters the pixels based on a minimum pixel quality level corresponding to each scene. These images are then broken into tiles according to the orbit number to make ingestion and retrieval easier.
Quality Assurance Filtering

The quality of the individual observations depends on many factors, including cloud cover, surface albedo, presence of snow or ice, saturation, geometry, etc. These observations are filtered in order to avoid misinterpretation of the data and to avoid the effects of sun glint. Each of the satellite products comes with a qa (quality assurance) band which can be used to filter out less accurate values. Different qa thresholds are chosen for different products, as defined in the Sentinel-5P Product User Manual. For making the Level-3 products, Google Earth Engine filters out pixels with a qa value below 0.8 for the AER AI product, 0.75 for tropospheric NO2 and 0.5 for all other products, except O3, for which no quality filtering is done. This takes care of erroneous scenes and problematic retrievals. Fig. 1a shows a Level-2 scene of the aerosol index for a single day as released by the ESA. In Fig. 1b the missing pixels can be observed after filtering, corresponding to those that had a qa value of less than 0.8. A yearly average of these Level-3 products was taken, in which the missing pixels were filled with values from those scenes which had data over that region.

Regional Masking

The district-level and state-level administrative boundaries for India were used to mask the Level-3 products. The CH4 column contained a lot of missing data, as the retrieval of this pollutant was of low quality; this pollutant was therefore dropped from the subsequent analysis. The average pixel values of NO2, SO2, CO, AER AI, O3 and HCHO were calculated for each masked district and stored in tabular format. A total of 594 districts with six pollutant values each was used to frame the pollution data set. This was further cleansed, and rows containing null values were removed.
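The qa-value filtering and gap-filling average described above can be sketched as follows. The per-product thresholds mirror those quoted from the Sentinel-5P Product User Manual, but the pixel values themselves are invented for illustration.

```python
# Per-product qa-value thresholds used for the Level-3 composites,
# as quoted in the text; O3 is not quality-filtered at all.
QA_THRESHOLDS = {"AER_AI": 0.8, "NO2_trop": 0.75, "SO2": 0.5, "CO": 0.5}

def filter_scene(pixels, product):
    """Drop pixels whose qa value falls below the product threshold.
    `pixels` is a list of (value, qa) pairs; filtered-out pixels
    become None so the grid keeps its shape (the 'missing pixels')."""
    threshold = QA_THRESHOLDS.get(product)
    if threshold is None:            # e.g. O3: keep every pixel
        return [value for value, _qa in pixels]
    return [value if qa >= threshold else None for value, qa in pixels]

def yearly_average(scenes):
    """Average each pixel over the scenes that have data for it,
    filling the gaps left by qa filtering."""
    out = []
    for column in zip(*scenes):
        values = [v for v in column if v is not None]
        out.append(sum(values) / len(values) if values else None)
    return out

# Hypothetical AER AI pixels as (value, qa) pairs for two scenes.
scene1 = filter_scene([(1.2, 0.9), (0.8, 0.4), (1.5, 0.85)], "AER_AI")
scene2 = filter_scene([(1.0, 0.95), (0.6, 0.9), (1.1, 0.2)], "AER_AI")
composite = yearly_average([scene1, scene2])
```

A pixel filtered out of one scene is filled from any other scene that retained it, which is how the yearly composite ends up without the holes visible in a single Level-2 scene.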
Similarly, the Sentinel-5P data was also masked using state border shapefiles to extract the state-wise average pollutant values, which were converted into a tabular data set. The state-wise data had 33 rows and 6 features.

Standardization

The dataset was then re-scaled by first removing the mean of each pollutant and scaling each column by its standard deviation, as shown in Eqn. 1. This ensures that the features are of comparable magnitudes before running the clustering algorithms on them.

Clustering

Clustering is a method of grouping based on patterns in the data. This technique is primarily used to find clusters of data points with inherent similarity in unlabelled datasets. In this case, the unlabelled dataset consists of the different pollutants emitted (NO2, SO2, CO, AER AI, O3 and HCHO) for each state or district in India. The aim is to find clusters comprising states or districts that emit a similar pollution signature, which will help isolate pollution sources more easily.

Clustering Methods

Unsupervised clustering was performed on the pollution datasets using three different algorithms, namely K-Means clustering, Agglomerative clustering and DBSCAN. The distance measure used in all three methods was the L2 norm (Euclidean distance).

K-Means Clustering

Partitioning-based unsupervised clustering methods reallocate data points by moving them from one cluster to another, starting from an initial partitioning. This algorithm works by initializing K points as cluster centers. In each iteration, every point in the dataset is assigned to the cluster whose center it is closest to (using the L2 distance). Each cluster center is then reinitialized to the mean of its cluster, and the iteration repeats until convergence is achieved.

Ward Agglomerative Clustering

Hierarchical clustering constructs clusters by recursively splitting or combining the data points.
In agglomerative clustering (a method within the broader class of hierarchical clustering methods), clusters are built by iteratively merging smaller clusters, starting from individual data points, until the required number of clusters is reached. Ward's linkage merges, at each step, the pair of clusters that minimizes the increase in the total within-cluster sum of squared distances. Here, x_i represents the center of the i-th cluster and ∆(A, B) represents the cost of merging clusters A and B.

DBSCAN Clustering

Density-based clustering groups points that lie in dense regions of the feature space. This algorithm does not take the number of clusters as an input; instead it uses two parameters, min_pts and epsilon, to form clusters. Epsilon determines the maximum distance between two points up to which they can be grouped within the same cluster, and min_pts determines the minimum number of points that must fall within an epsilon-neighbourhood for a point to not be designated as noise. The algorithm is useful in detecting noise and outliers in the data.

Optimal Number of Clusters

The elbow method is a technique used to determine the optimal number of clusters based on heuristics such as inter-cluster and intra-cluster similarity. In the method presented here, the number of clusters is iteratively increased from 2 to 15, and the point at which the graph of the cost function has the highest curvature is taken as the elbow, or optimal number of clusters. The elbow method was performed using the distortion score as the cost function to find the optimal number of clusters for K-Means, and the silhouette score was used to ensure that the intra-cluster similarity was optimal. Both scores are explained below. The same number of clusters as derived from the elbow method was used to cut the hierarchical clustering, in order to compare the results of the two algorithms.

Distortion Score

This metric gives information about the overall cluster dissimilarity.
It is calculated as the mean sum of squared distances to the cluster centers. In Eqn. 3, x_i represents the i-th row of the dataset and x̄ represents the mean of the cluster it belongs to. The lower the value of S, the lower the dissimilarity and the more optimal the solution. Fig. 3 shows the plot of the distortion scores obtained by fitting the K-Means model to the dataset while varying K from 2 to 15. The elbow method analysis results in the elbow line being drawn at K=5, as this is the point of maximum curvature in the curve.

Silhouette Score

The silhouette score represents the intra-cluster similarity. It is calculated as the mean ratio of intra-cluster and next-nearest-cluster distance. In Eqn. 4, a is the mean distance between a sample and all other points in the same cluster and b is the mean distance between a sample and all points in the next nearest cluster. The score is higher when clusters are dense and well separated. The elbow method using the distortion score gave 5 as the optimal number of clusters. From the silhouette score plot, it can be inferred that the intra-cluster similarity at K=5 was 0.379, not far from the optimal silhouette score of 0.394. Hence, K=5 was chosen as the optimal number of clusters, to prevent over-fitting the data and to obtain well-rounded clusters.

Cluster Validation

The silhouette method was used to determine the intra-cluster similarity of the clusters formed by the three algorithms mentioned in the previous section. It gives an idea of how closely related a state or district is to the cluster it has been assigned to by the respective method. Values close to 1 indicate a high affinity to that cluster, and negative values imply that the data point might have been wrongly assigned. Each color in Fig. 5 represents the distribution of the silhouette scores of the states that fall within a cluster. The wider the graph for a cluster, the more states fall into it.
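The pipeline described in the preceding sections (z-score standardization, K-Means, and the distortion score used for the elbow method) can be sketched in pure Python. The six-dimensional pollutant rows are replaced here by made-up two-dimensional points, purely for illustration.

```python
import math
import random

def standardize(rows):
    """Z-score each column: subtract its mean and divide by its
    standard deviation (the re-scaling of Eqn. 1)."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [math.sqrt(sum((x - m) ** 2 for x in c) / len(c))
            for c, m in zip(cols, means)]
    return [[(x - m) / s for x, m, s in zip(row, means, stds)]
            for row in rows]

def dist2(a, b):
    """Squared Euclidean (L2) distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean_point(rows):
    return [sum(c) / len(c) for c in zip(*rows)]

def kmeans(rows, k, iters=50, seed=0):
    """Lloyd's algorithm: assign each point to its nearest center,
    then move each center to the mean of its cluster."""
    rng = random.Random(seed)
    centers = rng.sample(rows, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for row in rows:
            nearest = min(range(k), key=lambda c: dist2(row, centers[c]))
            clusters[nearest].append(row)
        centers = [mean_point(cl) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

def distortion(rows, centers):
    """Mean sum of squared distances to the nearest center (Eqn. 3)."""
    return sum(min(dist2(r, c) for c in centers) for r in rows) / len(rows)

# Two made-up groups of regions; in the chapter each row would hold
# six pollutant averages rather than two coordinates.
data = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
        [5.0, 5.0], [5.2, 5.1], [5.1, 4.9]]
X = standardize(data)
scores = {k: distortion(X, kmeans(X, k)) for k in (1, 2, 3)}
```

For the real datasets, the scores for K = 2 to 15 would be plotted and the point of maximum curvature read off as the elbow (K = 5 in the chapter).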
The average silhouette score was 0.379 for K-Means and 0.385 for Ward Agglomerative. The lower-polluting and higher-polluting states had silhouette coefficients of almost 0.5, indicating higher intra-cluster similarity. Both K-Means and Ward Agglomerative clustering resulted in a similar distribution of states across clusters; only the state of Telangana was placed in cluster 1 by K-Means and in cluster 0 by Ward Agglomerative. DBSCAN does not require the number of clusters to be specified; instead, the parameter min_pts was fixed to 3 and epsilon to 1.7. DBSCAN classified a few states as outliers, since their pollution signatures did not match any other state, but the remaining clusters it formed were consistent with those from K-Means and Ward Agglomerative clustering. The results from K-Means clustering are presented and used for analysis in the following section.

Analysis of Pollution Signatures and Clustering Results across Indian States and Districts

In this section, the various results from state-wise as well as district-wise clustering, and the corresponding pollution signatures obtained for the different clusters, are presented and analysed in depth. The bar plots in Fig. 6 represent the average pollution signatures for every cluster obtained from clustering the pollution data across states and districts. These pollution signatures serve as a unique representation of each cluster: each shows the average pollutant magnitudes for a given cluster. States and districts in cluster 0 have, on average, the lowest pollution profile, and those in cluster 4 emit the highest level of pollutants, as can be seen in Fig. 6. The varying trends for each pollutant across the clusters, for both state-wise and district-wise clustering, are shown in Fig. 7.
Fig. 8a shows the state-wise cluster map and Fig. 8b the district-wise cluster map across India, as obtained from the K-Means clustering algorithm. Each cluster's behaviour can be explained in terms of its corresponding pollution signature and trends, as follows. Cluster 0 - As can be seen in Fig. 6, cluster 0 has the lowest concentration of pollutants. States such as Jammu & Kashmir, Uttarakhand, Himachal Pradesh and Arunachal Pradesh fall under this cluster. These states have a very low population density and consequently low vehicular traffic, as they mainly comprise mountainous terrain. They also have significantly lower industrial activity and hence do not contribute much to air pollution. Cluster 1 - On average, the regions in this cluster emit higher levels of all pollutants than cluster 0, except for SO2, as can be seen in Fig. 7. This cluster comprises the southern states and a majority of the north-eastern states. Cluster 2 - The western and central states such as Rajasthan, Uttar Pradesh, Gujarat, Madhya Pradesh and Maharashtra fall under this cluster. It can be noted that in the district-wise clustering in Fig. 8b, the city of Chennai falls under cluster 2, whereas it came under cluster 1 along with the state of Tamil Nadu in Fig. 8a. Cluster 3 - This is the second-highest-polluting cluster, and most of the states in it have a large industrial presence, to which most of the emissions are attributed. From Fig. 7 it can be seen that this cluster has the highest CO emissions. States such as Jharkhand, Bihar, Orissa, Telangana, Chhattisgarh and West Bengal fall under it in the state-wise clustering. Cluster 4 - This is the cluster with the highest average levels of SO2 and NO2, as can be seen in Fig. 7. These states have some of the highest population densities in India and contain some of the worst-polluting cities in the world.
It is worth noting here that the results are consistent with the report from TERI (The Energy and Resources Institute), in which the states belonging to clusters 3 and 4 are those with the highest PM10 emissions from brick kilns as well as from coal and iron ore mining. The biggest difference between state-level and district-level clustering can be seen in the regions which fall under this cluster. In Fig. 8a the states of Punjab, Haryana and Delhi fall under cluster 4, but in the district-wise clustering shown in Fig. 8b a majority of the districts of Uttar Pradesh and Bihar now also come under cluster 4, which represents the highest overall pollution signature. Further comparing the two maps in Fig. 8a and Fig. 8b, the districts of the state of Telangana fall under cluster 3, indicating that they belong to a lower-polluting category. Upon finer inspection of the districts, it can be noted that Greater Mumbai and Hyderabad, which belonged to clusters 2 and 3 respectively in the state-wise clustering, now fall under cluster 4, the highest-polluting cluster. This is in line with expectations, since these urban cities tend to emit higher levels of pollution. Similarly, the city of Chennai falls under cluster 2 in the district-wise clustering as compared to cluster 1 in the state-wise clustering, as mentioned earlier. Thus the district-wise clustering yields a finer classification of regions than the state-wise clustering.

Conclusions and Future Work

Acknowledging the importance of managing pollution levels so that they do not affect the socio-economic and health status of the general population, it is important to understand the precise pollutant signature over a given area and the possible sources of the pollutants. The presented study explored three clustering algorithms on data retrieved from ESA's Sentinel-5P satellite to address this issue.
The clustering algorithms were used to assign unique pollutant signatures to states and districts across India, and the results have proved promising. To take this work further, it is planned to improve on the clustering algorithms to see whether similar or higher accuracy can be attained at an even finer granularity than the district level. In addition, studies need to be conducted to correlate pollution levels with the socio-economic factors of a region. Furthermore, there is a need to study the effect of other variables, such as wind and atmospheric pressure, to understand the transport of pollutants from their sources. This will help identify pollution sources with higher accuracy.
def compute_leaving_time(self, time, boarding_time=0, alighting_time=0,
                         loading_time=0, unloading_time=0):
    # Accumulate passenger service time and push back the passenger
    # leaving time accordingly.
    self.total_service_time_passengers += boarding_time + alighting_time
    self.service_time_passengers += boarding_time + alighting_time
    self.boarding_time_passengers += boarding_time
    self.alighting_time_passengers += alighting_time
    if self.leaving_time_passengers:
        if self.leaving_time_passengers <= time:
            self.leaving_time_passengers = time + boarding_time + alighting_time
        else:
            self.leaving_time_passengers += boarding_time + alighting_time
    else:
        self.leaving_time_passengers = time + boarding_time + alighting_time

    # Accumulate cargo service time and push back the cargo leaving
    # time in the same way.
    self.total_service_time_cargo += loading_time + unloading_time
    self.service_time_cargo += loading_time + unloading_time
    self.loading_time_cargo += loading_time
    self.unloading_time_cargo += unloading_time
    if self.leaving_time_cargo:
        if self.leaving_time_cargo <= time:
            self.leaving_time_cargo = time + loading_time + unloading_time
        else:
            self.leaving_time_cargo += loading_time + unloading_time
    else:
        self.leaving_time_cargo = time + loading_time + unloading_time

    # The vehicle leaves once both passenger and cargo service are done,
    # but never before its scheduled departure time.
    expected_leaving_time = max(self.leaving_time_passengers, self.leaving_time_cargo)
    scheduled_time = self.get_scheduled_time()
    estimated_delay = expected_leaving_time - scheduled_time
    if estimated_delay >= 0:
        new_leaving_time = expected_leaving_time
    else:
        new_leaving_time = scheduled_time

    # Decide whether there is still slack left for further cargo service.
    if config.stop_load_max_dwell and self.leaving_time_cargo - self.arrival >= self.max_service:
        self.cargo_service_time_left = False
    elif config.stop_load_delay and new_leaving_time == self.leaving_time_cargo:
        self.cargo_service_time_left = False
    else:
        self.cargo_service_time_left = True

    # Unchanged departure time: log it and signal "no update" to the caller.
    if self.leaving_time == new_leaving_time:
        sim_log.info(f"{self.typ} {self.index}: leaves at {self.leaving_time}. "
                     f"(scheduled departure: {scheduled_time}, estimated delay: {estimated_delay}).")
        return False

    self.leaving_time = new_leaving_time
    self.standing_time += (self.leaving_time - self.arrival) - self.dwell
    self.dwell = self.leaving_time - self.arrival
    sim_log.info(f"{self.typ} {self.index}: leaves at {self.leaving_time}. "
                 f"(scheduled departure: {scheduled_time}, estimated delay: {estimated_delay}).")
    return self.leaving_time
A Useful Metaheuristic for Dynamic Channel Assignment in Mobile Cellular Systems The prime objective of a Channel Assignment Problem (CAP) is to assign the appropriate number of required channels to each cell so as to achieve both efficient frequency spectrum utilization and minimization of interference effects (by satisfying a number of channel reuse constraints). Dynamic Channel Assignment (DCA) assigns channels to cells dynamically according to traffic demand, and can hence provide higher capacity (or lower call blocking probability), fidelity and quality of service than fixed assignment schemes. Channel assignment problems are formulated as combinatorial optimization problems and are NP-hard. Devising a DCA scheme that is practical and efficient, and that can generate high-quality assignments, is challenging. Though metaheuristic search techniques like Evolutionary Algorithms, Differential Evolution and Particle Swarm Optimization prove effective for Fixed Channel Assignment (FCA) problems, they still require high computational time and may therefore be inefficient for DCA. A number of approaches have been proposed for the solution of the DCA problem, but their high complexity makes them unsuitable or less efficient for practical use. Therefore, this paper presents an effective and efficient Hybrid Discrete Binary Differential Evolution algorithm (HDB-DE) for the solution of the DCA problem. I. INTRODUCTION It is important to efficiently utilize the scarce radio spectrum using a proper channel assignment scheme. The aim of a channel assignment algorithm is to determine a spectrum-efficient assignment of channels to cells such that the traffic demand is met as far as possible while satisfying the channel reuse constraints: co-channel constraints, channel separation constraints, and co-site constraints.
A channel can be reused, that is, the same channel can be assigned to multiple cells simultaneously, because of radio propagation path loss. Channel assignment algorithms can be classified as static or dynamic. In a static approach, commonly called Fixed Channel Assignment (FCA), the channels are allocated and prefixed to each cell during setup according to the traffic intensity estimated by the designer for that cell. FCA is still in use because it requires only a moderate amount of base station (BS) radio equipment and a simple monitoring procedure. However, FCA cannot attain high efficiency of total channel usage over the whole service area if the traffic varies dynamically from cell to cell. To solve this problem, DCA has been an active area of work for the last twenty years. In DCA, channels are assigned dynamically over the cells in accordance with the traffic load. FCA, being a static technique, can afford to spend more time generating a better solution and is also easier to implement in practice. However, from a resource utilization point of view, DCA is preferable, as it adjusts resource assignment according to traffic demand and can hence support a higher capacity (or lower call blocking probability). The advantages of DCA are a lower blocking rate than FCA at low traffic intensity and a lower forced call termination rate than FCA when the blocking rates are equal. The DCA problem is NP-hard, and its effectiveness depends on its algorithm's efficiency in determining a fast, good-quality solution and on its ease of implementation. Thus, devising a DCA scheme that is practical and efficient, and that can generate high-quality assignments, is a challenging problem. Metaheuristic search techniques prove effective for the DCA problem. A number of approaches [17,19,20,21,22] have been proposed for the solution of the channel assignment problem.
The initial efforts toward the solution of FCA were based on deterministic methods, but as the problem is NP-hard these methods proved ineffective and inefficient for practical implementation in the next generation of mobile systems, in which higher traffic demand was expected. To overcome the difficulties associated with deterministic methods, other heuristic methods such as Simulated Annealing, Tabu Search, Neural Networks and Genetic Algorithms were used for the solution of the FCA problem. Later, Feedforward Neural Networks, Hopfield Neural Networks, Genetic Algorithms, Combinatorial Evolution Strategy, and Particle Swarm Optimization were used for the solution of the DCA problem. However, the ever-increasing number of mobile cellular users and the increasing demand for bandwidth call for ever more efficient Dynamic Channel Assignment strategies. Therefore, this paper presents an effective and efficient Hybrid Discrete Binary Differential Evolution algorithm (HDB-DE) for the solution of the DCA problem. HDB-DE is a discrete binary version of Differential Evolution, which is an effective stochastic parallel-search evolutionary algorithm for global optimization. The problem formulation and the implementation of HDB-DE take care of the soft constraints as well as the hard constraints, and hence focus the search only on the feasible regions of the search space. II. PROBLEM FORMULATION FOR DCA A. Constraints in the Channel Assignment Problem In any cellular network, interference occurs whenever two cells use the same channel, when two cells use channels adjacent to each other on the spectrum, or when two channels are assigned to the same cell; these types of interference are called co-channel interference, adjacent channel interference and co-site interference respectively. They lower the signal-to-noise ratio at the receiving end, leading to deterioration of system performance.
Though computing the actual level of interference is difficult, primarily owing to its dependence on the topology of the real environment, experiments show that the effect of interference is reasonably low if the following three constraints are satisfied. Co-Channel Constraint (CCC): the same channel cannot be simultaneously allocated to a pair of cells unless there is a minimum geographical separation between them. Adjacent Channel Constraint (ACC): adjacent channels cannot be assigned to a pair of cells unless there is a minimum distance between them. Co-Site Constraint (CSC): a pair of channels can be employed in the same cell only if there is a minimum separation in frequency between them. These constraints are called electromagnetic compatibility constraints and, together with the traffic demand constraint, are known as hard constraints. Apart from the hard constraints, another set of constraints, called soft constraints, is also considered, which may be described as follows. The packing condition requires that a channel in use in one cell be reused in another cell as close as possible (but obviously not interfering with the former), so that the number of channels used by the network is minimal, thereby lowering the probability of future call blocking in other cells. The resonance condition tries to ensure that the same channels are assigned to cells belonging to the same reuse scheme, as far as possible. Another soft constraint is that, when a call arrives, a minimum number of channel reassignment operations should be performed, because excessive reassignment in a cell may increase the blocking probability. A solution to the CAP must satisfy the hard constraints, whereas a soft constraint may be violated; the latter only helps maximize the utilization of resources and/or improve the quality of service.
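The co-channel hard constraint described above lends itself to a direct feasibility check; a candidate assignment can be rejected outright if any two nearby cells share a channel. The sketch below illustrates this; the three-cell distance matrix and the channel sets are invented for illustration.

```python
def violates_ccc(assignment, distance, reuse_distance):
    """Return True if any two cells closer than the reuse distance
    share a channel. assignment[i] is the set of channels in use at
    cell i; distance[i][j] is the normalized inter-cell distance."""
    n = len(assignment)
    for i in range(n):
        for j in range(i + 1, n):
            if distance[i][j] < reuse_distance and assignment[i] & assignment[j]:
                return True
    return False

# Three cells: 0 and 1 are adjacent (distance 1), cell 2 is far away.
distance = [[0, 1, 3],
            [1, 0, 3],
            [3, 3, 0]]
ok = [{1}, {2}, {1}]    # channel 1 reused only across distance 3
bad = [{1}, {1}, {2}]   # channel 1 reused across distance 1
```

In an energy-function formulation such as the one in this paper, the same pairwise test appears as a penalty term rather than a hard rejection, so that infeasible candidates simply receive worse (larger) fitness values.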
Apart from the traffic demand constraint, the only other hard constraint that we have taken into account is the co-channel constraint; other sources of interference are assumed to be absent, as reported in the literature by Vidyarthi et al., Battiti et al., and Chakraborty et al. B. Assumptions Pertaining to the Cellular Model and Call Arrival 1) The topological model is a group of hexagonal cells that form a parallelogram shape (an equal number of cells along the x-axis and y-axis), as shown in figure 1. We set a minimum "reuse distance", which represents the minimum allowable normalized distance between two cells that may use the same channel at the same time, as shown in figure 3. 8) A call is blocked if the entire set of channels in the network is in use in the cell involved in the call arrival and its neighborhood, that is, if there is no channel that satisfies the co-channel constraint. 9) Existing calls in a cell involved in a new call arrival may be rearranged. C. Formulation of the Fitness Function Equations 1, 2, 3 and 4 given below correspond to the different conditions, i.e. the co-channel interference, packing, resonance, and excessive-rearrangement-discouraging conditions. These equations are combined to constitute a quadratic energy function (equation 5) whose minimization leads to an optimal solution for DCA. Equation 1 expresses the hard condition, where V_k is the output vector for cell k, with dimension equal to the number of channels (CH); V_k,j = 1 if channel j is assigned to cell k, and V_k,j = 0 otherwise. Here k signifies the cell in which the call arrives. The energy function increases if a channel j assigned in a cell i is selected by cell k and interferes with i; it thus ensures that solutions with no interference give better (smaller) fitness values. A_i,j is the ij-th element of the assignment table A, which is 1 if channel j is assigned to cell i, and 0 otherwise. Equation 2 expresses the packing condition.
The energy decreases if a channel j assigned to cell k is also selected by a cell i with interf(i, k) = 0, and the energy reduction depends on the distance between i and k: a channel in use in one cell is thereby reused in another cell as close as possible (but not interfering with the former), so that the number of channels used by the network is minimal, lowering the probability of future call blocking in other cells. Here res(i, k) is a function whose value is 1 if cells i and k belong to the same reuse scheme, and 0 otherwise. Equation 3 expresses the resonance condition, which tries to ensure that the same channels are assigned to cells belonging to the same reuse scheme, as far as possible. The last term subtracts 1 whenever a channel already being used by cell k, before the arrival of the new call, is retained in the candidate solution (i.e., in the new configuration), so that a mobile terminal being served need not change its channel too often. Finally, the fitness function F(X) is given by equation 5. III. HDB-DE HDB-DE is a discrete binary version of Differential Evolution (DE), an effective stochastic parallel-search evolutionary algorithm for global optimization. Unlike in DE, the individuals here are initialized as binary strings. The HDB-DE algorithm consists of three major operations, namely mutation, crossover and selection, which are carried out for each member of the population (called the target vector). Mutation on each target vector of the population generates a new mutant vector uniquely associated with it. The resultant mutant vector is no longer binary because of the difference operator and the control parameter; therefore, a discretization from the real continuous space to the binary space is performed. The crossover operation then generates a new trial vector using the mutant vector and the target vector itself.
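A minimal sketch of how the four terms might be combined into the energy function F(X), assuming simple weights w and illustrative helpers (interf, res, dist); the paper's exact coefficients and equation forms are not reproduced in the extracted text, so this is only one plausible reading.

```python
def fitness(V_k, A, k, interf, dist, res, V_old, w):
    """Energy F(X) for candidate V_k of cell k: penalize co-channel
    interference (hard term), reward close reuse (packing), reward
    same-reuse-scheme channels (resonance), and reward keeping ongoing
    calls on their current channels (few reassignments). Smaller is better."""
    n_cells, n_ch = len(A), len(V_k)
    hard = packing = resonance = reassign = 0.0
    for j in range(n_ch):
        if not V_k[j]:
            continue
        for i in range(n_cells):
            if i == k or not A[i][j]:
                continue
            if interf(i, k):
                hard += 1                    # conflicting reuse: penalize
            else:
                packing -= 1.0 / dist[i][k]  # closer reuse: bigger reward
            if res(i, k):
                resonance -= 1               # same reuse scheme: reward
        if V_old[j]:
            reassign -= 1                    # channel kept: no reassignment
    return w[0]*hard + w[1]*packing + w[2]*resonance + w[3]*reassign
```

Minimizing this function trades off the hard constraint (which the weights should make dominant) against the three soft conditions.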
In the selection phase, the fitness of the trial vector is compared with that of the target vector, and the vector with the better fitness replaces the target vector in the population for the next iteration. A. Pseudo-Code of HDB-DE 1) Initialize parameters: t = 0, NP (the number of individuals in the population), CR and F. 2) Initialize the target population X^t. 3) Evaluate each individual i in the population using the objective function. 4) Obtain the mutant population: a mutant individual is generated from X_ai^t, X_bi^t and X_ci^t, three randomly chosen individuals from the population such that ai ≠ bi ≠ ci ≠ i. 5) Perform the discretization from the real continuous space to the binary space. 6) Perform crossover to obtain the trial population (j_rand refers to a randomly chosen dimension, j = 1, 2, ..., n). 7) Evaluate the trial population. 8) Selection: based on survival of the fittest between the trial population and the target population. 9) Repeat steps 2 to 8 while the termination condition is not reached. 10) Output the best solution. IV. IMPLEMENTATION DETAILS OF HDB-DE FOR DCA The Dynamic Channel Assignment problem is specified in the literature in terms of the number of cells in the network (N_ce), the number of channels in the pool (N_ch) and a demand vector D, whose i-th element denotes the traffic demand in cell i, i = 1, 2, ..., N_ce. We assume that the new call demand is placed at cell k, which is already serving demand(k) calls, where demand(k) denotes the total ongoing traffic load in cell k at time t, and that no ongoing call is terminated in the entire network. Our problem is to assign an available channel to the incoming call, with possible reassignment of channels to the calls in progress in cell k. A. Solution Representation The candidate solution to the problem is represented as a binary string X, which is the representation of V_k mentioned earlier, where k signifies the cell in which the call arrives.
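The pseudo-code above can be sketched as follows. For brevity the crossover and discretization steps are folded into one loop; the sigmoid-based discretization rule and the parameter defaults are assumptions, since the paper's equations are not reproduced in the extracted text. Smaller fitness is better, matching the energy-minimization formulation.

```python
import math
import random

def hdb_de(fitness, n_bits, NP=20, F=0.8, CR=0.9, iters=100, seed=0):
    """Sketch of HDB-DE: classical DE mutation in continuous space,
    followed by a sigmoid-based discretization back to {0, 1}."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(NP)]
    fit = [fitness(x) for x in pop]
    for _ in range(iters):
        for i in range(NP):
            # three mutually distinct individuals, all different from i
            a, b, c = rng.sample([p for p in range(NP) if p != i], 3)
            jrand = rng.randrange(n_bits)
            trial = []
            for j in range(n_bits):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # real mutant
                    s = 1.0 / (1.0 + math.exp(-v))               # sigmoid
                    trial.append(1 if rng.random() < s else 0)   # discretize
                else:
                    trial.append(pop[i][j])                      # keep target bit
            f = fitness(trial)
            if f <= fit[i]:                                      # minimization
                pop[i], fit[i] = trial, f
    best = min(range(NP), key=lambda i: fit[i])
    return pop[best], fit[best]
```

On a toy "one-max" objective (minimize the negated bit count) the sketch quickly drives the population toward all-ones strings, which is enough to check that the mutation/discretization/selection loop behaves as described.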
The size of the vector X is equal to the number of available channels (N_ch), and X_{k,j} = 1 if channel j is assigned to cell k, otherwise X_{k,j} = 0. The number of 1s in each solution vector is equal to demand(k) + 1, i.e., the total ongoing calls plus the call that arrives at the concerned time instant. B. Mutation A new mutation operator has been designed for HDB-DE which is more effective than the one used earlier for the mutation of solutions with binary representation in DE. The effectiveness of the operator can be seen from the fact that if both X_bi^t and X_ci^t assume the value 1, then in the earlier mutation operation the difference becomes 0 and therefore does not result in any change in the value of X_ai^t, whereas the newly designed mutation operation avoids this cancellation and leads to the generation of better mutants and thereby faster convergence. V. COMPUTATIONAL RESULTS The HDB-DE algorithm was implemented in Matlab and the following benchmark problems were used for its evaluation: 1) The first benchmark problem, CSys 49, consists of 49 hexagonal cells arranged to form a parallelogram structure, with 70 channels available to the system. 2) The second and third benchmark problems, HEX 1 and HEX 3, are based on a 21-cell system. 3) The last four benchmark problems, EX 1, EX 2, KUNZ 1 and KUNZ 2, are based on 4-, 5-, 10- and 15-cell systems respectively. The details of the benchmark problems and the demand vectors used are summarized in Table I; for example, KUNZ 2 has 15 cells, 44 channels and demand vector 10, 11, 9, 5, 9, 4, 4, 7, 4, 8, 8, 9, 10, 7, 7. For each of the considered problems it has been assumed that all cells are arranged in the form of a parallelogram; the given N_ce of each problem is expressed in the form r × c, where r and c are integers, which determines the configuration of the cellular network by setting the number of rows to r and the number of columns to c.
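The cancellation problem described here is easy to demonstrate per bit. The paper's exact new operator is not reproduced in the extracted text; the OR-based alternative below is only one illustrative way to avoid the cancellation, not the authors' formula.

```python
def standard_mutant_bit(xa, xb, xc, F=0.8):
    """Classical DE difference: when xb == xc (e.g. both 1),
    the perturbation is 0 and xa is left unchanged."""
    return xa + F * (xb - xc)

def alt_mutant_bit(xa, xb, xc, F=0.8):
    """Illustrative alternative (NOT the paper's elided formula):
    an OR of xb and xc, so two equal 1-bits still perturb xa."""
    return xa + F * (xb | xc)
```

With xb = xc = 1 the classical operator leaves the target bit untouched, while the alternative still produces a shifted real value to be discretized, which is the behavior the authors motivate for faster convergence.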
A cell is arbitrarily selected, and it is assumed that just before a call demand arrives in this cell at time t, demand(i) calls were already in progress in the i-th cell, i = 1, 2, ..., N_ce. Accordingly, an N_ce × N_ch assignment matrix avoiding co-channel interference has been manually determined; it describes the status of ongoing calls in each cell before the new call arrival and thus represents the initial condition. The assignment table used for the HEX 3 problem is given in Table II. Whenever a simulation results in a solution that violates the Co-Channel Constraint (CCC), the call is rejected. The Call Rejection Probability (CRP) is used as a parameter for determining the effectiveness of the proposed method: CRP = N_rejected / N_total, where N_rejected is the number of simulations in which the incoming call is rejected in the host cell considered, and N_total is the total number of simulations. Thus, CRP is the cumulative proportion of simulations, in the long run, for which the call is rejected. This parameter is based on, but different from, the call blocking probability used in the literature: the former characterizes a particular cell under a given initial condition, while the latter characterizes the cellular network as a whole. The simulation results obtained by HDB-DE, and those obtained by the Simple Genetic Algorithm (SGA), the Modified Genetic Algorithm (MGA) and PSO, for the different benchmark problems are shown in Table III. The results in Table III indicate that the performance of HDB-DE and MGA is good compared to PSO. A comparison of the convergence curves and the average number of evaluations expended to yield the best solution over a good number of runs will further throw light on the efficiency, efficacy and consistency of the algorithms. He is an active researcher who guides research, has published papers in journals of national and international repute, and is involved in R&D projects.
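The CRP metric is a straightforward ratio over a batch of independent simulations, e.g.:

```python
def call_rejection_probability(results):
    """CRP = N_rejected / N_total, where each entry of `results` is
    True if the incoming call was rejected in that simulation run."""
    n_total = len(results)
    n_rejected = sum(1 for r in results if r)
    return n_rejected / n_total
```

Because each run starts from the same manually determined initial assignment, CRP characterizes the selected host cell under that initial condition rather than the network as a whole.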
He is a member of IEEE and a life member of Systems Society of India (SSI). He is also the Publication Manager of Literary Paritantra (Systems)-An International Journal on Literature and Theory.
History-Assisted Energy-Efficient Spectrum Sensing for Infrastructure-Based Cognitive Radio Networks Spectrum sensing is a prominent functionality for enabling dynamic spectrum access (DSA) in cognitive radio (CR) networks. It protects primary users (PUs) from interference and creates opportunities for spectrum access by secondary users (SUs). It should be performed efficiently to reduce the number of false alarms and missed detections. Continuous sensing for a long time incurs cost in terms of increased energy consumption; thus, spectrum sensing ought to be energy efficient to ensure the prolonged existence of CR devices. This paper focuses on the use of history to achieve energy-efficient spectrum sensing in infrastructure-based CR networks. The scheme employs an iteratively developed history-processing database that is used by CRs to make decisions about spectrum sensing, subsequently resulting in reduced spectrum scanning and improved energy efficiency. Two conventional spectrum sensing schemes, i.e., energy detection (ED) and cyclostationary feature detection (CFD), are enriched by history to demonstrate the effectiveness of the proposed scheme. System-level simulations are performed to investigate the sensitivity of the proposed history-based scheme through a detailed energy consumption analysis of the aforementioned schemes. Results demonstrate that the employment of history resulted in improved energy efficiency due to reduced spectrum scanning. This paper also suggests which spectrum sensing scheme can be the best candidate in a particular scenario by examining computational complexity, before a comparative analysis with other state-of-the-art schemes is presented.
Structural characterization of functionally important chloride binding sites in the marine Vibrio alkaline phosphatase Enzyme stability and function can be affected by various environmental factors, such as temperature, pH and ionic strength. Enzymes that are located outside the relatively unchanging environment of the cytosol, such as those residing in the periplasmic space of bacteria or secreted extracellularly, are challenged by greater fluctuations in the aqueous medium. Bacterial alkaline phosphatases (APs) are generally affected by the ionic strength of the medium, but this varies substantially between species. An AP from the marine bacterium Vibrio splendidus (VAP) shows complex pH-dependent activation and stabilization in the 0 to 1.0 M range of halogen salts and has been hypothesized to specifically bind chloride anions. Here, using X-ray crystallography and anomalous scattering, we have located two chloride binding sites in the structure of VAP, one in the active site and another at a peripheral site. Further characterization of the binding sites using site-directed mutagenesis and small angle X-ray scattering (SAXS) showed that upon binding of chloride to the peripheral site, structural dynamics decreased locally, resulting in thermal stabilization of the VAP active conformation. Binding of the chloride ion in the active site did not displace the bound inorganic phosphate product, but it may promote product release by facilitating rotational stabilization of the substrate-binding Arg129. Overall, these results reveal the complex nature and dynamics of chloride binding to enzymes through long-range modulation of electrostatic potential in the vicinity of the active site, resulting in increased catalytic efficiency and stability.
Implants and all-ceramic restorations in a patient treated for aggressive periodontitis: a case report A 23-year-old female with aggressive periodontitis was treated using dental implants and the LAVA system. The severely compromised teeth were extracted despite initial conservative periodontal treatment. An implant-supported overdenture with 4 implants was fabricated for the maxilla and all-ceramic restorations for the mandible. Esthetic and functional goals were achieved with a team approach involving periodontists and prosthodontists. This case report describes a treatment procedure for a generalized aggressive periodontitis patient with severe bone resorption. INTRODUCTION Aggressive periodontitis, an uncommon and destructive periodontal disease, is characterized by the following: rapid attachment loss and bone destruction in an otherwise clinically healthy patient, an amount of microbial deposits inconsistent with disease severity, and familial aggregation of diseased individuals. 1 It usually occurs in the early decades of life. The disease has been classified into two types: localized and generalized. 2 The distinction between the localized and generalized forms is based on the distribution of the periodontal destruction in the mouth. Localized aggressive periodontitis is characterized by circumpubertal onset of disease, localized first molar or incisor disease with proximal attachment loss on at least two permanent teeth, and a robust serum antibody response to infecting agents. Generalized aggressive periodontitis is characterized by generalized proximal attachment loss affecting at least three teeth other than first molars and incisors, a pronounced episodic nature of periodontal destruction, and a poor serum antibody response to infecting agents, usually affecting persons under 30 years of age. 3 There was a controversy over the use of dental implants in aggressive periodontitis patients.
Initially, the use of dental implants was suggested and implemented with much caution in patients with aggressive periodontitis because of an unfounded fear of bone loss. However, evidence to the contrary appears to support the use of dental implants in patients with aggressive periodontal disease. 4,5 Currently, the use of dental implants must be considered in the overall treatment plan for patients with aggressive periodontitis. 6 In the patient with aggressive periodontitis, the approach to restorative treatment should be made based on a single premise: extract severely compromised teeth early, and plan treatment to accommodate future tooth loss. The teeth with the best prognosis should be identified and considered when planning the restorative treatment. The lower cuspids and first premolars are generally more resistant to loss, probably because of their favorable anatomy and easier access for patient oral hygiene. 6,7 The risk of further bone loss is an even greater concern when preserving bone for implant placement and treatment success. The use of dental ceramics has increased in young patients, because the demand for dental materials that fulfill esthetic requirements has increased. Dental ceramics with high esthetics are considered to be chemically stable with high biocompatibility. The biofilm on the prostheses stimulates the gingival inflammatory response: growth of the biofilm results in an enhancement of the gingival crevicular fluid and subsequent clinical signs of gingivitis. 8 Ceramic materials have been reported to be biocompatible and showed lower bacterial adhesion compared with metallic materials. Zirconia specimens accumulated a significantly smaller amount of biofilm than titanium specimens in vivo. 9 Together with esthetics, one of the important considerations for all-ceramic restorations is the strength of the prosthesis.
Currently, CAD/CAM systems using zirconia-based ceramics for frameworks are available. These ceramic systems have improved mechanical properties, and it is claimed that they are strong enough to produce up to four-unit fixed dental prostheses to replace missing molars. 10 Loss of teeth due to aggressive periodontitis is one of the most common reasons for requiring complete denture prosthetics or full mouth rehabilitation in young patients. This clinical report describes the team approach for oral rehabilitation using dental implants and all-ceramic restorations for a young lady with generalized aggressive periodontitis. CASE REPORT This report presents a case of aggressive periodontitis in a 23-year-old female who had previously received periodontal therapy. She presented to the Department of Periodontics, Seoul National University Dental Hospital in 2004 with the chief complaint that her gums had been swelling (Fig. 1). She requested dental treatment to address the gum swelling and tooth mobility. Her medical history was unremarkable. Subsequent clinical and radiographic examination led to the diagnosis of generalized aggressive periodontitis (Fig. 2, 3). The patient reported becoming aware of swollen gums and mobile teeth at the age of 13. At that time, she had received scaling and root planing in conjunction with systemic antibiotics, which were periodically repeated through the years with no definitive results. In 2008, the patient was referred to the Department of Prosthodontics for evaluation and treatment planning. The objectives of treatment were patient motivation and education, improvement of oral hygiene, improvement of esthetics, and a stable and predictable outcome.
All teeth in the maxilla, including the canines, as well as the left mandibular lateral incisor through the right mandibular canine and the left mandibular second premolar, were extracted. Initially, a fixed prosthesis using implants in the maxilla was planned, but rapid and severe bone resorption after extraction was observed. Because the maxillary lip required additional support as a consequence of the bone loss, the fixed prosthesis treatment plan for the maxilla was changed to an implant-supported overdenture. Facebow transfer and mounting of the casts on the articulator were performed for the diagnostic wax-up procedure. The remaining teeth in the mandible were prepared for the fixed partial prosthesis. A provisional complete denture in the maxilla and provisional fixed partial dentures in the mandible were delivered. A computerized tomography scan with an implant stent was taken to select suitable implant sites in the maxilla. In the maxilla, US II external-type implants (Osstem, Seoul, Korea) were placed at the sites of the right maxillary first premolar, right maxillary lateral incisor, left maxillary first premolar, and left maxillary first premolar, following a two-stage delayed-loading schedule. After the first implant surgery, the interim complete denture in the maxilla was relined with Coe-Soft TM (GC America Inc., Alsip, IL, USA). After 7 months of healing, impressions of the implants in the maxilla were made. An individual tray was fabricated. Pickup impression copings were connected and splinted with DuraLay resin (Reliance Dental Mfg. Co., Worth, IL, USA). The functional impression technique using polyvinylsiloxane impression material was used. The occlusal plane was evaluated using the TRUBYTE TM occlusal plane plate (Dentsply, York, PA, USA). Facebow transfer and mounting of the maxillary cast on the articulator were performed. A bar for the clip attachment was incorporated in the maxilla. Wax denture try-in for the maxilla and zirconia framework try-in for the mandible were done.
The all-ceramic restoration between the left mandibular canine and right mandibular first premolar was designed as a six-unit restoration, because a lack of mesiodistal space was expected. The other all-ceramic restorations in the mandible were fabricated separately. Lower all-ceramic restorations using the Lava system TM (3M ESPE, St. Paul, MN, USA) were completed (Fig. 4). Final cementation was carried out using resin cement (Multilink, Ivoclar Vivadent Inc., Schaan, Liechtenstein). A Hader bar (Attachments International Inc., San Mateo, CA, USA) for the maxillary implant-supported overdenture was fabricated (Fig. 5). The marginal fit of the Hader bar was evaluated with the one-screw test, the screw resistance test, Fit Checker II (GC Corporation, Tokyo, Japan) and a periapical radiograph. After delivery of the final prostheses, soft tissue profiles were evaluated in the frontal and lateral views (Fig. 6, 7). Daily maintenance care by the patient's own effort was instructed using interdental cleaning aids (Fig. 8). A regular maintenance program was instituted, with periodontal recall every 3 months following delivery of the definitive restorations. DISCUSSION A team approach involving prosthodontists and periodontists is required in the planning and treatment process to rehabilitate patients with severe, complicated periodontal situations. In this particular aggressive periodontitis patient, an interdisciplinary approach was essential to evaluate, diagnose, and restore the function and esthetic problems using a combination of prosthodontic and periodontic treatments. A periodontist consulted with the prosthodontist before and after extraction of the teeth, which made it possible to discuss and change prosthetic options in this case. The implant stent was fabricated by the prosthodontist, and the prosthodontist participated in the surgery for the implant positioning.
Periodic maintenance care by the prosthodontist and periodontist has been conducted to enhance the success of the prostheses and soft tissue after the prosthetic reconstruction. The long-term success of osseointegrated implants has been recorded in numerous studies. 11,12 Studies 13,14 revealed that the long-term implant prognosis in patients with a history of chronic periodontitis was equivalent to that in patients without periodontal disease. It was also demonstrated that osseointegrated implants in generalized aggressive periodontitis patients can be placed successfully. 4 Implants in this patient with aggressive periodontitis can accommodate the successful use of the prosthesis and help prevent future bone loss. An overdenture in the maxilla was chosen to restore the masticatory function of this patient because she had deficient bone to house a sufficient number of implants. In addition, severe bone atrophy in the anterior area left esthetic problems, such as insufficient lip support, if restored with a fixed dental prosthesis. A passive fit is an important prerequisite to ensure long-term success for implant-supported prostheses, so passive fit should be evaluated when the implant framework is delivered. Marginal fit of the implant framework was evaluated by a combination of several methods: alternative finger pressure, direct vision and tactile sensation, periapical radiograph, the one-screw test, the screw resistance test, and disclosing media using fit checker, pressure indicating paste, and disclosing wax. 15 Multiple methods including periapical radiographs were used to check the passive fit of the implant framework in this case. Development of the physical properties of the dental materials in ceramic systems enables all-ceramic restorations to restore the posterior area. The high esthetics and suitable strength of zirconia frameworks make them more popular. 10 Also, another advantage of ceramic compared to metal is its biocompatibility.
9 All-ceramic restorations using zirconia frameworks ensure strength and esthetics in this young female patient. CONCLUSION This clinical case report describes a patient restored with an implant-supported overdenture for the maxilla and all-ceramic restorations for the mandible. The results showed significant improvement in the esthetics and function of the masticatory system. Considering the psychological problems that these patients face during the early stages of their life, this alternative implant treatment and esthetic restoration may provide a better opportunity to meet this patient's needs. A team approach in the evaluation and treatment planning will be necessary to improve the esthetic and functional outcomes in aggressive periodontitis patients.
Ballroom Dancing for Community-dwelling Older Adults: A 12-month Study of the Effect on Well-being, Balance and Falls Risk ABSTRACT Physical activities that involve muscle strengthening, balance and coordination skills, such as ballroom dancing, are encouraged for older adults to assist with the maintenance of functional autonomy and prevention of falls. Twenty-three community-dwelling older adults engaged in regular ballroom dancing completed a 12-month study assessing well-being, falls risk and balance using a set of clinical outcome measures. Those attending ballroom dancing classes were more likely to be active older adults, with lower levels of BMI and obesity compared to the general population. Participants scored lower in the falls risk tests than normative values. Some of the results suggest a possible substantive finding for clinical practice and indicate that ballroom dancing is an activity with low attrition and good adherence rates among community-dwelling older adults that can improve well-being and balance and reduce falls risk as part of an active lifestyle. Introduction The World Health Organization predicts that between 2000 and 2050 the world's population of adults over the age of 60 years will double from 11% to 22%. In the United Kingdom (UK), the percentage of over 65-year-olds has risen from 15% in 1984 to 18% in 2014, and the number of adults in the 'oldest old' age bracket, above 80 years, has risen from 19% of the over-65 population in 1984 to 22% in 2014 (Office for National Statistics, 2015). There is an acknowledged age-related risk of many chronic health conditions, such as coronary heart disease, type 2 diabetes and cancer (Department of Health, Physical Activity, Health Improvement and Protection, DHPAHIP, 2011), and progressive functional decline due to aging may have a significant impact on many individuals' quality of life.
Amongst those aged 75 years and over, it is thought that around 25% live in ill-health and are, therefore, particularly at risk of poor mental health and well-being or social exclusion issues. Evidence suggests that promoting a healthy aging process is dependent upon multiple factors including appropriate policies at national level to combat the known risks associated with aging, such as those highlighted above. For example, in the UK, "social prescribing" aims to use networks commissioned by Clinical Commissioning Groups (CCGs) and local councils to link health service users with community groups to provide non-medical interventions that aim to reduce social isolation, loneliness and improve physical and mental health (Department of Health, 2019), an example being referral to community-based physical activity classes. In relation to falls, the National Health Service's (NHS) National Service Framework (NSF) for Older People highlighted 'falls prevention' as a priority. Introduced in 2001, the NSF suggested that integrated falls services involving local authorities and health and social care provision are needed, as this has been shown to reduce the number of falls and their negative impact by up to 30% (Department of Health, 2003, p. 8). However, NHS falls pathways tend to be fairly short-lived meaning upon discharge, individuals need to then self-manage conditions and transition to community-based physical activity programs to maintain the benefits of physical activity and avoid return to one's physical state pre-rehabilitation. Falls are a leading cause of both injury and death in the over 65-year-old age group (Lueckenotte & Conley, 2009;Mitty & Flores, 2007;) and over 75-year-old age group and can lead to increased need for long-term care amongst older adults (Campbell & Robertson, 2003). 
The National Institute for Health and Care Excellence estimates the cost of falls to the NHS to be £2.3 billion per year, and it is predicted that the incidence of falls will rise 2% each year with increasing numbers of people aged 60 and over. The American Geriatrics Society/British Geriatrics Society (AGS/BGS) has also echoed the importance of falls prevention programs, and suggest, given the consequences of hospitalization, functional decline and reliance on nursing home care, that falls prevention is "an important public health objective" (AGS/ BGS, 2011, p. 148). The AGS/BGS also emphasizes the effectiveness of multifactorial fall intervention programs for community-dwelling older adults when they include an exercise component that varies in intensity and includes resistance, balance, gait and coordination training of more than 12 weeks' duration (American Geriatrics Society/British Geriatrics Society, 2011, p. 151). Therefore, transition from NHS-led pathways and uptake of community-based physical activities for older adults must be encouraged by health-care professionals. Physical activity for older adults Maintaining physical activity for older people is important in encouraging social interaction, preventing chronic diseases, decreasing cognitive decline and maintaining physical independence (Wurm, Tomasik & Tesch-Romer, 2010). Whilst research promotes the importance of physical activity to support successful aging (Flynn & Stewart, 2013) and guidelines for exercise 'dose' exist for older adults from associations such as the American College of Sports Medicine, the specific types of exercise or activity that are recommended for community-dwelling older-adults are less clear. 
There is some suggestion that walking, jogging, cycling and gardening are suitable activities, but there is criticism that this remains 'vague' (Paterson & Warburton, 2010) and that the methodologies reviewed contained insufficient detail on the minimum volume of physical activity recommended to enable any strong conclusions to be formed. Stevens et al. acknowledge that all of the physical activity interventions in the studies they reviewed were left to participants to organize, and as the studies were heterogeneous in design and at times poorly reported, the types and quantity of activity were difficult to compare to identify the most beneficial interventions. Aerobic and strengthening exercises were found to be of benefit for older adults in the studies considered by Paterson and Warburton, and balance and resistance exercises were included alongside these guidelines. The Department of Health, Physical Activity, Health Improvement and Protection guidelines also recommend that older adults perform physical activity to improve muscle strength, balance and co-ordination (particularly if at risk of falls) on at least two days a week. As ballroom dancing includes steps and routines that would improve muscle strength, balance and coordination, it is proposed as a specific physical activity that has the potential to assist with physical improvements and a reduction in falls risk, with the additional benefit of enhancing the social and mental health components that are also important for successful aging. Social ballroom dancing and health in older adults Evidence in the literature on ballroom dancing as an intervention for health improvements tends to include ballroom dancing classes for periods of time ranging from 2 weeks (Hackney & Earhart, 2009a) to ongoing involvement in dancing classes and competitions (Kattenstroth, Kalisch, Kolankowska, & Dinse, 2011).
A short-duration feasibility study undertaken by Hackney and Earhart (2009a) revealed significant findings for positive changes in balance, concentration and reaction times. The majority of studies reviewed included intervention programs of around 2 to 3 months' duration. During such an intervention period, positive results were found for significant improvements in function and gait measurements, motor, cognitive and perceptual performance, cardiac efficiency, balance and physical performance, functional autonomy, a reduction in depression and improvements in self-esteem, and observations of increased physical participation (Belardinelli, Lacalaprice, Ventrella, Volpe, & Faccenda, 2008; Gomes da Silva; Haboush, Floyd, Caron, LaSota, & Alvarez, 2006; Hackney & Earhart, 2009b; Hackney, Kantaorovich, Levin & Earhart, 2007). However, these studies largely used clinical populations, and the lack of detail in some studies of baseline measurements, the plethora of different ballroom dances studied, and the heterogeneous quantitative outcome measures and qualitative research methodologies used within each of these studies make comparison and interpretation of findings across the studies problematic. Falls risk assessment There has been much debate about the relevance of formal 'falls' risk assessments for community-dwelling individuals or inpatients in hospital settings (Donoghue, Graham, Gibbs, Mitten-Lewis, & Blay, 2003; Muir, Berg, Chesworth, Klar, & Speechley, 2010; Scott, Votova, Scanlan, & Close, 2007). An accurate method of assessment is important to predict those at risk of falls and for an appropriate prevention program to be implemented, but Scott et al. suggest that there is no single test that can be recommended for implementation in all settings, as assessments, whilst shown to be valid and reliable in some settings, have yet to be tested in more than one setting.
Therefore, questions remain regarding the inter-rater reliability, validity and specificity of the numerous tests that have been suggested. Indeed, some authors have found there to be no difference between the accuracy of some validated tests, which are time-consuming to complete, and nurses' own clinical judgments, or even the self-reporting of balance problems. The numerous outcome measures with their different indications for use mean that careful consideration is needed prior to their implementation. Ambulatory capacity, age, balance, lower limb strength and range of joint motion have been cited as risk factors for falls, and the tests chosen in this study assessed some of these intrinsic factors (Hansma, Emmelot-Vonk & Verhaar, 2010; Lueckenotte & Conley, 2009; Mitty & Flores, 2007; Wolfe, Lourenco & Mukland, 2010). The outcome measurement tests used, as outlined below, are all commonplace in clinical practice. In summary, social ballroom dancing is an activity that can provide older adults with improvements in functional independence, balance, cognitive performance and sociality, even over relatively short durations of time. Kattenstroth et al. (2011, p. 7) suggest that dance can enrich the environment for individuals due to its "unique combination of physical activity, rhythmic motor coordination, emotion, affection, balance, memory, social interaction and acoustic stimulation." Despite its popularity, there remains a paucity of literature supporting the use of ballroom dancing as an engaging activity beneficial to physical and social health and mental wellbeing in community-dwelling, non-symptomatic older adults. The aim of this study was to investigate the effect of social ballroom dancing on well-being, balance and falls risk outcome measures in community-dwelling older adults.
Participants

A purposive sample of 26 older adults was recruited, via approaches made to local dance tutors leading community-based ballroom dance classes, from a population of community-dwelling 'older people' who participated in social ballroom dance at least once a week. Inclusion criteria were: aged 58 years and over (to allow for the inclusion of partners of those aged 60 years and above, on whom many outcome measures were based); living in their own home (not residing in assisted living or nursing home accommodation); and participating in ballroom dancing for at least 1 hour per week. Exclusion criteria included a diagnosis of a dementia-related pathology. The aims of the study and the commitment required from individuals to participate were explained on a detailed information sheet, and there was an opportunity for potential participants to have any questions answered.

Data collection

Participants completed the physical tests at baseline and at 3, 6, 9 and 12 months after baseline. If a participant felt unable to continue any of the tests on that day for health reasons, the tests were not carried out or, if started, ceased immediately. Following each session, participants were contacted via e-mail or telephone to arrange a convenient time for the next visit. The age, height and weight of all participants were collected at baseline. Information relating to known medical problems, any diagnosis of dementia or dementia-related illness, current medication use, any history of falls, frequency of ballroom dancing and participation in any other form of exercise was collected at baseline and updated at subsequent visits.
Well-being, falls risk and balance outcome measures

Clinical well-being was measured using the Clinical Outcomes in Routine Evaluation (CORE-GP), a 14-item measure including questions covering issues such as general well-being, depression, anxiety, self-esteem and risk, and life/social functioning, which has been validated for use within a non-clinical general population. Higher scores indicate greater levels of distress. Scores of 0-20 on the CORE-GP are considered to be within a 'healthy' range, scores of 21-33 depict a 'low level of client distress', and a national 'clinical cut-off' has been established at a score of 10. Falls risk was measured using the Falls Efficacy Scale-International (FES-I), a commonly used 16-item measure with questions focussing on social activities and the impact of a fear or concern of falling upon one's social life. Balance was measured using the Timed Up and Go test (TUGT), designed to assess elements of functional ability, motor performance, balance, gait and transfers for older adults in a variety of settings, and via an Overall Stability Index (OSI) of dynamic balance obtained from the Biodex Balance System SD (BBS) (Biodex Medical Systems Inc., Shirley, New York). For each participant, this latter measure was compared to age-dependent normative data, with scores higher than normative values being suggestive of deficiencies in strength, proprioception, vestibular function or vision and, therefore, of individuals being at risk of falls (Biodex Medical Systems Inc., n.d. b). All physical tests were performed at a university exercise laboratory on five occasions over the course of a 12-month period. The CORE-GP, FES-I, TUGT and Biodex Overall Stability Index instruments used in the study are all validated instruments with demonstrably high reliability (CORE-IMS, 2014; Dawson, Dzurino, Karleskint, & Tucker, 2018; Fawcett, 2008).
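The CORE-GP banding just described can be expressed as a small helper. The function below is purely illustrative: the names `core_gp_band` and `above_clinical_cutoff` and the return labels are our own, not part of the CORE scoring materials; only the numeric bands come from the text above.

```python
def core_gp_band(score: float) -> str:
    """Band a CORE-GP total score using the ranges reported above:
    0-20 'healthy', 21-33 'low-level distress'."""
    if score < 0:
        raise ValueError("CORE-GP scores are non-negative")
    if score <= 20:
        return "healthy"
    if score <= 33:
        return "low-level distress"
    return "above banded range"


def above_clinical_cutoff(score: float) -> bool:
    """True when a score exceeds the national clinical cut-off of 10."""
    return score > 10


# The study's mean scores stayed below the cut-off throughout:
print(core_gp_band(8), above_clinical_cutoff(8))    # healthy False
print(core_gp_band(24), above_clinical_cutoff(24))  # low-level distress True
```

This makes the interpretation in the Results section mechanical: any score at or below 20 counts as healthy, and only scores above 10 approach the clinical cut-off.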
The TUGT has been validated as a suitable functional outcome measure to predict falls in community-dwelling older adults and is commonly used in clinical practice (AGILE, n.d.).

Rejected tests

Additional testing procedures considered included the Functional Reach Test, the Performance Orientated Assessment of Mobility (POAM) (Mitty & Flores, 2007; Raîche, Hébert, Prince, & Corriveau, 2000) and the Four Square Step Test (FSST) to measure dynamic standing balance in an adult population. These tests were found to show poor discriminative capability between participants and were rejected from further consideration.

Statistical analysis

The sample was summarized descriptively. Mean and standard deviation (SD) values were derived for numerical measures at baseline and after 12 months' dancing activity. The significance of any differences from baseline in key outcome measures was assessed using paired-samples t-testing. The significance of any differences between functional outcome study data and age-standardized normative data was assessed using single-sample t-testing.

Ethical issues

Ethical approval was sought and gained from the University's School Research Ethics Panel to ensure compliance with regulations for the use of human subjects and data storage. Written informed consent was obtained from all participants and all data were anonymized. Any risks of harm during physical testing procedures were mitigated; for example, crash mats were positioned around the testing equipment where participants might have been at risk of falling off it.

Demographic data

Twenty-six participants agreed to participate in the study. Three participants withdrew after the data collection at baseline, with analysis conducted on the remaining 23 participants (88%) who completed all 5 data collection sessions at baseline and at 3, 6, 9 and 12 months. Of these participants, 18 were either married or life partners dancing together, and 5 were single.
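The paired-samples and single-sample t-testing described in the statistical analysis above reduce to the same t statistic applied to different inputs. The sketch below uses only the standard library, and the sample numbers are made-up illustrative values, not the study's raw scores, which were not published:

```python
import math
from statistics import mean, stdev


def one_sample_t(sample, popmean):
    """t statistic for a single-sample t-test against a fixed mean
    (e.g. an observed OSI sample against an age-group normative value)."""
    n = len(sample)
    return (mean(sample) - popmean) / (stdev(sample) / math.sqrt(n))


def paired_t(before, after):
    """t statistic for a paired-samples t-test: a one-sample test
    on the within-participant differences against zero."""
    diffs = [b - a for b, a in zip(before, after)]
    return one_sample_t(diffs, 0.0)


# Illustrative OSI sample compared with a normative mean of 2.3:
# the sample mean is well below 2.3, so the t statistic is negative.
osi = [1.1, 0.9, 1.5, 1.3, 1.0, 1.6, 1.2, 1.4]
t_osi = one_sample_t(osi, 2.3)

# Under a Bonferroni correction for k comparisons, each test is
# assessed against alpha / k rather than alpha itself.
alpha, k = 0.05, 2
adjusted_alpha = alpha / k
```

In practice one would use a library routine (for example `scipy.stats.ttest_1samp` and `ttest_rel`) to obtain p-values as well; the hand-rolled versions here only show where the statistics come from.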
Participant characteristics are summarized in Table 1.

Outcome Measures

Outcome measures recorded at baseline and 12 months are summarized in Table 2. The range of scores for the CORE-GP was 0-24 points, indicating at worst low-level client distress. Low-level distress was indicated on only 3 of 115 measurement occasions (2.61%) and 'healthy' values were recorded on 112 of 115 occasions (97.4%). Mean scores for the TUGT were close to the normative values in all age groups.

Significance testing

A paired-samples t-test conducted on CORE-GP scores of completing dancers at baseline and 12 months revealed no evidence of a significant effect at the 5% significance level (p = .07; 95% confidence interval for the difference (−0.17, 3.36)). However, the change of 1.6 points (a 25% reduction from baseline) suggested an improvement of substantive importance. Normative values for the falls risk OSI were 2.3 (SD = 1.4) for individuals aged 54 to 71 years and 3.0 (SD = 1.0) for those aged 72 to 89 (Biodex, n.d. b). At baseline, the mean score for the 54- to 71-year-old age group was 1.23 (SD = 0.53) and the mean score for the 72- to 89-year-old age group was 1.88 (SD = 1.07). At 12 months, the corresponding OSI scores were 1.17 (SD = 0.489) for the 54- to 71-year-old age group and 1.96 (SD = 1.17) for the 72- to 89-year-old age group. Between testing at baseline and 12 months, there was no consistent increase or decrease at an individual level in the falls risk OSI scores. The mean difference between the observed and normative values was 0.99 (95% CI (0.74, 1.24)) in the 54- to 71-year-old age group and 1.14 (95% CI (0.165, 2.12)) in the 72- to 89-year-old age group. Single-sample t-tests revealed these differences to be statistically significant at the 5% significance level (p < .001 for the 54- to 71-year-old age group; p = .030 for the 72- to 89-year-old age group).
However, under the application of a Bonferroni correction for multiple comparisons, this latter result would not be considered significant.

Discussion

The quantitative element of this study was performed in the context of a feasibility study and was thus not powered to detect significant effects. Therefore, significant differences were not particularly expected, given the small sample size. Nonetheless, some significant effects were observed, alongside effects that did not reach significance at standard significance levels but appeared to be of some substantive importance. Tests were performed to potentially identify 'promising' effects even in small samples and have demonstrated that the data are amenable to this form of testing if repeated in future, larger-scale studies.

Effect of dancing and concomitant activities

It appears that ballroom dancing attracts people who are 'active' in their lifestyle, and is an activity which, once taken up, is generally maintained: 17 of the 23 participants continued dancing consistently for the 12-month period, and those who did not maintain the activity constantly over the 12-month period reported factors unrelated to dancing, such as worsening illness or injury, as the cause of stopping; this also meant their partners stopped dancing. These adherence rates are slightly higher than those found in the systematic review of community-based group exercise programs by Farrance, Tsofliou, and Clark, who found adherence rates at 6 to 12 months of 69.1% in adults with a mean age of 73.8 years. While the majority of participants (20; 87%) reported being involved in other forms of exercise at the start of the trial, participation in these other exercises was generally not as regular as the weekly ballroom dancing, for which consistent attendance at a constant weekly frequency was recorded.
The low attrition rate recorded in this study is consistent with the findings of other studies (Hackney & Earhart, 2009a; Pinniger, Brown, Thorsteinsson, & McKinley, 2012). As ballroom dancing is a partnered activity, the act of being in a 'couple,' as well as being in the group as a whole, might help to enhance the social aspect of dance. Therefore, the sociality of ballroom dancing may assist with improving one's emotional and social health status and with reducing social isolation, as well as providing physical assistance for frailer older adults. This sample suggests that those attending dancing classes were likely to be more active older adults, as their levels of participation in physical activity were greater than those previously reported by Flynn and Stewart, who suggested only 13% of 65- to 74-year-olds and 6% of over-75-year-olds participate in vigorous physical activity, with ballroom dancing previously being classified as such (Blanksby & Reidy, 1988; Lima & Vieira, 2007). The study sample comprised individuals with lower levels of BMI and a lower proportion of obese participants compared with the general population, in which adult obesity levels are around 26.9% in England (Public Health England, 2015). Aydoğ, Bal, Aydoğ and Çakçı note that in individuals with rheumatoid arthritis, age and BMI were the most important factors affecting dynamic postural balance. Furthermore, Greve, Alonso, Bordini, and Camanho have previously found a link between increased BMI and reduced postural balance in adults with a BMI greater than 30 kg/m². Calculations for this were not performed within this sample population as there was only 1 participant with a BMI over 30 kg/m². There is no evidence that the study sample is systematically different from the age strata of the population from which it was drawn.
Twenty-one participants (91%) were taking regular medication, similar to reported figures for the incidence of medication use by older adults in the UK, whereby 75% of 65- to 74-year-olds and 84% of those aged 75 plus were taking regular medication (Chen, Dewey & Avery, 2001). At baseline, 5 of the 23 participants had reported having a previous fall, with a further 8 falls reported in 6 participants over the course of the study (26% of participants falling over the 12-month period). Yoshida noted similar findings: in community-dwelling older adults over the age of 64, 28-35% fall each year, and in those over 70, 32-42% fall each year.

Well-being measures

For the CORE-GP test, 97.4% of participant scores were within a healthy mental well-being range. Three of the possible 115 scores were between 21 and 33, indicating low-level client distress. The mean scores for all participants at each testing period were always less than 10, indicating a healthy range of well-being throughout the study. The lack of statistical significance between CORE-GP scores at baseline and at 12 months is likely to be because the sample was generally already in a healthy range of well-being at baseline and no participants were diagnosed with a mental health condition. Also, as mentioned above, as a feasibility study the trial was not powered to detect significant changes, and significant differences were not strongly expected. Some of the higher scores recorded in the FES-I questionnaire appeared to be seasonal, in that participants were more concerned about walking outside during the heavy snow and ice periods that occurred during the course of the study, and this did coincide with higher scores for certain individuals. However, possibly of note is the reduction of the mean FES-I score from baseline to the end of the follow-up period of 1.40 points, representing a reduction of about 7% from baseline values.
Whilst there were no significant findings or changes over the 12-month study period in the FES-I outcome measure, Oliver et al. (2008, p. 626) note that one should be cautious about the use of falls risk prediction tools and that clinicians should not be "seduced by the attractiveness of an 'off the shelf' solution to the problem of falls"; they suggest that the most successful falls risk programs involve post-fall assessment and treatment plans rather than the use of falls-risk indicators.

Biodex Balance System measures

There is a paucity of evidence for normative data for the BBS equipment. That which does exist has often been generated from small numbers or with younger participants; for example, Akbari, Karimi, Farahini and Faghihzadeh, who studied 30 symptomatic male athletes aged 20 to 35 with ankle sprains; Testerman and Griend (1999), who studied 10 individuals under 30 years of age with ankle sprains; Aydoğ et al., who evaluated dynamic postural balance using the BBS with participants with rheumatoid arthritis; and Dawson et al., who studied 105 healthy subjects with a mean age of 24.5 years. Whilst the Biodex manual provides normative means and ranges for the falls risk test, at present there are no standard guidelines for other tests, other than the suggestion that, in terms of a person's overall stability, proprioception and dynamic balance, it is better to have a lower Overall Stability Index (OSI) (Biodex Medical Systems Inc., n.d. b). This is perhaps because the BBS provides a number of levels of difficulty for each test, and hence normative data are likely to vary at each level and for each age group. The BBS user's manual suggests reliability levels of platform stability settings such as those provided by Lephart, Pincivero and Henry (n.d., cited by Biodex Medical Systems Inc., n.d. a).
These guidelines suggest a bilateral stance platform level setting of 8 and that two test trials are performed for participants to familiarize themselves with the BBS, after which all data can be assumed reliable (Lephart et al., n.d., cited by Biodex Medical Systems Inc., n.d. a). However, that study was performed using only 10 university-aged students and on one platform setting. Whilst there might have been a learning effect involved in the present study, the tests were performed using the default three attempts for each set of the balance tests, and the authors suggest that any more than three attempts at each test would have led to results being affected by elements of fatigue, given the age of participants and their subjective feedback that it was not an easy test to perform. In addition, the platform stability settings were reconsidered after the pilot study and, after discussions with the clinical technician staff, it was decided to use a mid-range, 'moderate' setting of 6. A higher setting might have been easier for participants and produced different results; however, OSI scores in this participant sample were better than those suggested in the normative data despite the more difficult settings used within this study. Finn, Alvarez, Jett, Axtell & Kemler (1999, cited by Biodex Medical Systems Inc., n.d. a: 3-8) studied 200 participants (106 males and 94 females) aged between 18 and 89 years. In 54- to 71-year-olds, the mean OSI was 2.57 (SD 0.78) and in 72- to 89-year-olds it was 2.70 (SD 0.80). In a further study (Finn, 2010, cited by Biodex Medical Systems Inc., n.d. b), the normative OSI was 2.3 (SD 1.4) in a sample of 50 54- to 71-year-olds and 3.0 (SD 1.0) in a sample of 17 individuals aged 72 years or above (with no defined upper age limit). In the current study, the platform was set at 6-2 for the majority of participants; therefore, this trial's settings required a greater balance skill level.
Despite this, and the small sample size of the study, the mean OSI score for study participants was significantly lower in both age groups, suggesting a beneficial effect of dancing on balance.

Future research and implications for professional practice

With regard to the future feasibility of a larger randomized controlled trial into the physical health benefits of ballroom dancing for community-dwelling older adults, this study demonstrated promising findings in terms of attrition and adherence to both ballroom dancing and the 12-month study length itself. Larger participant numbers would need to be sought from additional dancing classes to increase the sample size, but preliminary findings from this study indicate the potential for significantly better balance, and hence a lower falls risk, in older adults who participate in ballroom dancing. Further work is needed to establish normative values for measures from the Biodex Balance System equipment to provide more workable comparisons with a general population. Overall, participants were found to perform the tests at higher levels of function; thus a ceiling effect was evident in the scores. The CORE-GP test demonstrated healthy well-being scores, and a decline of 1.4 points over the 12-month follow-up period was seen in the FES-I test. Although no statistical significance was revealed, the findings over 12 months suggest a reduction in scores that might be a substantive finding in clinical practice. Whilst these tests were unable to discriminate between the participants in this sample, since they are previously validated and commonly used measures in clinical practice, it is suggested that they are of some worth in terms of ethical practice value; that is, they might be used as a baseline 'family of outcome tests' to illustrate homogeneity within a sample and to demonstrate participants' risk levels for inclusion in a study.
The issue of sensitivity raises a wider question about the use of outcome measures in clinical practice in terms of cost-effectiveness. As noted above, Oliver et al. (2008, p. 626) advise that clinicians should be cautious about the use of falls risk prediction tools. Perera, Mody, Woodman, and Studenski also question what constitutes a clinically meaningful change in the field of clinical geriatrics. They argue that if this were known, it would be useful for planning, evaluating and comparing treatment interventions that use outcome measures, and that this information could be used to inform power calculations for sample sizes in research trials. This point might have had an impact upon this study; if clinically meaningful change parameters were known, power calculations might have been made to formulate an ideal sample size. It is of note that the normative data for the TUGT are single figures and not a range of 'normative' values, meaning it is harder to interpret the findings for individuals who lie even 0.1 seconds either side of the normative scores. In addition, clinicians should be mindful that it is not possible to totally separate the effects of some of the confounding variables of the study. For example, in dancing pairs, the amount of dancing is dependent on each individual being willing and able to participate; if one person were to fall ill and not attend dancing, their partner would also be unlikely to attend for that duration.

Conclusions

This research study aimed to explore the influence of ballroom dancing on health and well-being for older, community-dwelling adults over a 12-month period. Specifically, it aimed to assess well-being and functional activity in terms of balance and falls risk. The results of this study appear to indicate positive findings in comparison to normative data. However, the sensitivity of these outcome measures for an older adult population is difficult to establish in this specific group of active older adults.
Although these tests are validated for an older adult, community-dwelling population (AGILE, n.d.), findings suggest that a ceiling effect may have been reached in the study sample, with participants generally scoring at or just under the best scores for these tests. This may limit the predictive value of these tests in the study population. The economic burden of the aging population worldwide has led to a sense of 'moral obligation' for older adults to maintain or improve their health and remain physically active. A central role of the present-day healthcare professional working with older adults is to promote health and encourage individuals to realize their potential and remain functionally independent for as long as possible by tailoring physical activity programs to an individual's personal interests. The findings of this study suggest that ballroom dancing is an activity pursued by healthy older adults: they are naturally active individuals with healthy levels of well-being, low anxiety about falling and a lower falls risk than normative data for the parent population would suggest.
#!/usr/bin/env python # -*- coding: utf-8 -*- """Convenience module importing everything from backports.configparser.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from backports.configparser import ( RawConfigParser, ConfigParser, SafeConfigParser, SectionProxy, Interpolation, BasicInterpolation, ExtendedInterpolation, LegacyInterpolation, NoSectionError, DuplicateSectionError, DuplicateOptionError, NoOptionError, InterpolationError, InterpolationMissingOptionError, InterpolationSyntaxError, InterpolationDepthError, ParsingError, MissingSectionHeaderError, ConverterMapping, DEFAULTSECT, MAX_INTERPOLATION_DEPTH, ) from backports.configparser import Error, _UNSET, _default_dict, _ChainMap # noqa: F401 __all__ = [ "NoSectionError", "DuplicateOptionError", "DuplicateSectionError", "NoOptionError", "InterpolationError", "InterpolationDepthError", "InterpolationMissingOptionError", "InterpolationSyntaxError", "ParsingError", "MissingSectionHeaderError", "ConfigParser", "SafeConfigParser", "RawConfigParser", "Interpolation", "BasicInterpolation", "ExtendedInterpolation", "LegacyInterpolation", "SectionProxy", "ConverterMapping", "DEFAULTSECT", "MAX_INTERPOLATION_DEPTH", ] # NOTE: names missing from __all__ imported anyway for backwards compatibility.
package cn.mrerror.one.controller.jsonp;

import cn.mrerror.one.entity.User;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

/**
 * Controller for resolving cross-origin requests via JSONP.
 */
@RequestMapping("/jsonp")
@Controller
public class JsonpController {

    @RequestMapping("/users")
    @ResponseBody
    public User jsonp2() {
        return new User(1, "zhangsan", "zhangsan");
    }
}
// ProcessExpectation decodes and executes expectations.
func ProcessExpectation(exp *tv.Expectation) bool {
	log.Debug("In ProcessExpectation")
	switch {
	case exp.GetConfigExpectation() != nil:
		ce := exp.GetConfigExpectation()
		return processConfigExpectation(ce)
	case exp.GetControlPlaneExpectation() != nil:
		cpe := exp.GetControlPlaneExpectation()
		return processControlPlaneExpectation(cpe)
	case exp.GetDataPlaneExpectation() != nil:
		dpe := exp.GetDataPlaneExpectation()
		return processDataPlaneExpectation(dpe)
	case exp.GetTelemetryExpectation() != nil:
		te := exp.GetTelemetryExpectation()
		return processTelemetryExpectation(te)
	default:
		log.Info("Empty expectation")
		return false
	}
}
import sys

t = int(input())


def deploy(i, j):
    """Deploy at (i, j) until every cell of the 3x3 block centred there
    has been prepared, or the judge signals completion or an error."""
    prepared = set()
    while len(prepared) < 9:
        # deployment coordinates must go to stdout, flushed immediately
        # (not stderr, or the judge never sees them)
        print(i, j)
        sys.stdout.flush()
        ni, nj = map(int, input().split())
        if (ni, nj) == (-1, -1):
            # judge reported a protocol error: stop immediately
            sys.exit(0)
        if (ni, nj) == (0, 0):
            # all required cells have been prepared
            return True
        # record which of the 9 cells around (i, j) was just prepared
        prepared.add((ni - i + 1) * 3 + (nj - j + 1))
    return False


def go_gopher():
    A = int(input())  # number of cells still to be prepared
    i = 2
    while True:
        # fill one 3x3 block at a time, then move the centre 3 columns on
        if deploy(i, 2):
            break
        A -= 9
        i += 3


for case in range(t):
    go_gopher()
# -*- coding: utf-8 -*-
"""
Handle storing documents on the local disk.

This module knows nothing about potential remote storage and syncing.

When loaded, we read a json document that describes all available
docs, the index. The index can be generated from the document
directory.

"""
from typing import List, Optional, Dict

import datetime
import json
import os
import re
import shutil
import sys
import traceback
import uuid

import click
import configparser
from requests.exceptions import ConnectionError

from .utils import (
    tar_directory,
    delete_directory,
    err,
    get_sha_digest,
    get_short_uid,
    is_binary_file,
    is_binary_string,
    is_short_uuid,
    is_uuid,
    modification_date,
    slugify,
    to_utc,
)
from .document import Document
from .tag import Tag, TagDoc
from . import file_system as fs
from .settings import Preferences
from . import utils


def read_document_index(user_directory) -> List:
    """Read document index into list."""
    path = os.path.join(user_directory, "index.json")
    if not os.path.exists(path):
        return list()
    try:
        with open(path) as f:
            return json.load(f)
    except json.decoder.JSONDecodeError:
        # A corrupt index must not leave the caller with None;
        # fall back to an empty index so the store stays usable.
        print(f"Could not get document index. Check file: {path}")
        return list()


def match(frag, s, exact):
    """Match a search string frag with string s.

    This is how we match document titles/names.
    """
    if not exact:
        return re.search(frag, s, re.IGNORECASE)
    return re.match(f"^{frag}$", s)


def doc_from_data(store, data):
    return Document(store, data["uid"], data["title"], data["kind"])


def touch(path):
    """Update/create the access/modified time of the file at path."""
    with open(path, "a"):
        os.utime(path, None)


class YewStore(object):
    """Our data store.

    Handle storage of documents.
""" def __init__(self, username=None): """Init data required to find things on this local disk.""" self.username = fs.get_username(username) self.yew_dir = fs.get_user_directory(self.username) self.prefs = Preferences(self.username) self.offline = False self.location = "default" self.index = read_document_index(self.yew_dir) # this gets injected later by remote, but let's use a default self.digest_method = utils.get_sha_digest def get_gnupg_exists(self): """Retro fit this.""" fs.get_gnupg_exists() def get_counts(self): return len(self.index) def get_or_create_tag(self, name): """Create a new tag. Make sure it is unique.""" return None def sync_tag(self, tagid, name): """Import a tag if not existing and return tag object.""" return None def get_tag(self, tagid): """Get a tag by id.""" return None def get_tags(self, name=None, exact=False): """Get all tags that match name and return a list of tag objects. If exact is True, the result will have at most one element. """ return None def get_tag_associations(self): """Get tag associations.""" return None def parse_tags(self, tag_string): """Parse tag_string and return a list of tag objects.""" return None def associate_tag(self, uid, tagid): """Tag a document.""" return None def dissociate_tag(self, tagid, uid): """Untag a document.""" return None def delete_document(self, doc: Document) -> None: """Delete a document and its associated entities.""" uid = doc.uid # remove files from local disk path = doc.directory_path # sanity check if not path.startswith(self.yew_dir): raise Exception(f"Path for deletion is wrong: {path}") # unlink the document file because it might be a symlink # whatever method we use for deleting the dir tree might follow the link os.unlink(doc.path) if os.path.exists(path): shutil.rmtree(path) # remove from index self.index = list(filter(lambda d: d["uid"] != doc.uid, self.index)) self.write_index() # remember we don't want this anymore deleted_index = self.get_deleted_index() 
        deleted_index.append(uid)
        self.write_deleted_index(deleted_index)

    def change_doc_kind(self, doc, new_kind):
        """Change type of document.

        We just change the extension and update the index.
        """
        path_src = doc.path
        doc.kind = new_kind
        path_dest = doc.get_path()
        os.rename(path_src, path_dest)
        self.reindex_doc(doc)
        return doc

    def rename_doc(self, doc, new_name):
        """Rename document with name."""
        old_name_path = doc.path
        doc.name = new_name
        new_name_path = doc.path
        os.rename(old_name_path, new_name_path)
        return self.reindex_doc(doc)

    def get_doc(self, uid):
        """Get a doc or throw exception."""
        return doc_from_data(
            self, list(filter(lambda d: d["uid"] == uid, self.index))[0]
        )

    def write_index(self) -> None:
        """Write list of doc dicts."""
        path = os.path.join(self.yew_dir, "index.json")
        with open(path, "wt") as f:
            f.write(json.dumps(self.index, indent=4))

    def get_deleted_index(self) -> List:
        """Read deleted document index into list."""
        path = os.path.join(self.yew_dir, "deleted_index.json")
        if not os.path.exists(path):
            return list()
        try:
            with open(path) as f:
                return json.load(f)
        except json.decoder.JSONDecodeError:
            # fall back to an empty list rather than returning None
            print(f"Could not get deleted document index. Check file: {path}")
            return list()

    def write_deleted_index(self, deleted_index) -> None:
        """Write list of deleted uids."""
        path = os.path.join(self.yew_dir, "deleted_index.json")
        with open(path, "wt") as f:
            f.write(json.dumps(deleted_index, indent=4))

    def get_docs(
        self,
        name_frag: Optional[str] = None,
        tags: Optional[List] = None,
        exact=False,
        encrypted=False,
    ) -> List[Document]:
        """Get all docs using the index.

        Does not get remote.
""" if not name_frag and not tags: return [doc_from_data(self, data) for data in self.index] if name_frag: matching_docs = filter( lambda doc: match(name_frag, doc["title"], exact), self.index ) else: matching_docs = self.index docs = list() for data in matching_docs: if tags: doc_tags = data.get("tags", []) if not bool(set(doc_tags) & set(tags)): continue docs.append(doc_from_data(self, data)) return docs def verify_docs(self, prune=False) -> List: """Check that docs in the index exist on disk. Return uids of missing docs. Update the index if prune=True. """ docs = self.get_docs() missing_uids = list() for doc in docs: if not os.path.exists(doc.path): print(f"Does not exist: {doc.uid} {doc.name}") missing_uids.append(doc.uid) if prune: self.index = list( filter(lambda d: d["uid"] not in missing_uids, self.index) ) self.write_index() return missing_uids def generate_doc_data(self, write=False): """This generates the index data by reading the directory of files for the given user name. In case the index.json is corrupted or missing. 
""" data = list() base_path = os.path.join(self.yew_dir, self.location) for uid_dir in os.scandir(base_path): path = os.path.join(base_path, uid_dir.name) for f in os.scandir(path): if f.is_file(): if not ( f.name.startswith( ( "~", "#", ) ) or f.name.endswith( ( "~", "#", ) ) or f.name.startswith("__") ): file_path = os.path.join(path, f.name) with open(file_path, "rt") as fp: digest = get_sha_digest(fp.read()) base, ext = os.path.splitext(f.name) doc = self.get_doc(uid_dir.name) data.append( { "uid": uid_dir.name, "title": base, "kind": ext[1:], "digest": digest, "tags": doc.get_tag_index(), } ) break if write: path = os.path.join(self.yew_dir, "index.json") with open(path, "wt") as f: f.write(json.dumps(data, indent=4)) return data def generate_archive(self) -> str: """Create archive file in current directory of all docs.""" archive_file_name = f"yew_{self.username}-{datetime.datetime.now().replace(microsecond=0).isoformat()}.tgz" tar_directory(self.yew_dir, archive_file_name) return archive_file_name def index_doc(self, uid: str, name: str, kind: str) -> Document: """Enter document into db for the first time. We assume the document exists in the directory not the index. But we handle the case where it is in the index. """ try: doc = self.get_doc(uid) self.reindex_doc(doc) except Exception: # we expect to be here data = dict() data["uid"] = uid data["title"] = name data["kind"] = kind doc = doc_from_data(self, data) self.index.append(doc.serialize(no_content=True)) self.write_index() return doc def reindex_doc(self, doc: Document, write_index_flag=True) -> Document: """Refresh index information. The doc object has new information not yet in the index. 
""" for d in self.index: if d.get("uid") == doc.uid: d["title"] = doc.name d["kind"] = doc.kind d["digest"] = doc.digest d["tags"] = doc.get_tag_index() if write_index_flag: self.write_index() break return doc def get_short(self, s) -> Optional[Document]: """Get document but with abbreviated uid.""" if not is_short_uuid(s): raise Exception("Not a valid short uid.") for d in self.index: if d.get("uid").startswith(s): return self.get_doc(d.get("uid")) return None def create_document(self, name, kind, content=None, symlink_source_path=None): """Create a new document. We might be using a symlink. """ uid = str(uuid.uuid4()) path = os.path.join(self.yew_dir, self.location, uid) if not os.path.exists(path): os.makedirs(path) p = os.path.join(path, f"{name.replace(os.sep, '-')}.{kind.lower()}") if symlink_source_path: # we are symlinking to an existing path # we need an absolute path for this to work symlink_source_path = os.path.abspath(symlink_source_path) os.symlink(symlink_source_path, p) else: # the normal case touch(p) if os.path.exists(p): doc = self.index_doc(uid, name, kind) if content: doc.put_content(content) return self.get_doc(uid) def import_document(self, uid: str, name: str, kind: str, content: str) -> Document: """Create a document in storage using a string. We already know the uid. """ name = name.replace(os.sep, "-") path = os.path.join(self.yew_dir, self.location, uid) if not os.path.exists(path): os.makedirs(path) p = os.path.join(path, f"{name.replace(os.sep, '-')}.{kind.lower()}") touch(p) if os.path.exists(p): self.index_doc(uid, name, kind) doc = self.get_doc(uid) doc.put_content(content) return doc
/**
 * Register a <code>Loader</code> for a given type of object. Note that more than one
 * <code>Loader</code> can be associated to a given type of object.
 *
 * @param type   The type of object the registered <code>Loader</code> is associated to
 * @param loader The <code>Loader</code> instance.
 */
public synchronized static void addLoader(String type, Loader loader) {
    List l = (List) loaders.get(type);
    if (l == null) {
        l = new ArrayList();
        loaders.put(type, l);
    }
    l.add(loader);
}
A Neural Model of Memory Impairment in Diffuse Cerebral Atrophy Background Computer-supported neural network models have been subjected to diffuse, progressive deletion of synapses/neurons, to show that modelling cerebral neuropathological changes can predict the pattern of memory degradation in diffuse degenerative processes such as Alzheimer's disease. However, it has been suggested that neural models cannot account for more detailed aspects of memory impairment, such as the relative sparing of remote versus recent memories. Method The latter claim is examined from a computational perspective, using a neural associative memory model. Results The neural network model not only demonstrates progressive memory deterioration as diffuse network damage occurs, but also exhibits differential sparing of remote versus recent memories. Conclusions Our results show that neural models can account for a large variety of experimental phenomena characterising memory degradation in Alzheimer's patients. Specific testable predictions are generated concerning the relation between the neuroanatomical findings and the clinical manifestations of Alzheimer's disease.
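The kind of graceful degradation the abstract describes can be reproduced in a toy simulation. The sketch below is not the authors' model: it is a generic Hopfield-style associative memory with Hebbian storage, random synapse deletion, and sign-threshold recall, and the network size, pattern count, and deletion fraction are all arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    # Hebbian outer-product learning; no self-connections
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def lesion(W, fraction, rng):
    # diffuse damage: zero out a random fraction of synapses
    mask = rng.random(W.shape) >= fraction
    return W * mask

def recall(W, probe, steps=5):
    # synchronous sign-threshold updates from a corrupted cue
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

n, m = 200, 10                                   # 200 neurons, 10 stored patterns
patterns = rng.choice([-1.0, 1.0], size=(m, n))
W = store(patterns)

# corrupt 5% of the bits of pattern 0 and try to recall it
probe = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
probe[flip] *= -1

# overlap of 1.0 means perfect recall
overlap_intact = recall(W, probe) @ patterns[0] / n
overlap_lesioned = recall(lesion(W, 0.8, rng), probe) @ patterns[0] / n
```

With the network well below capacity, the intact memory recovers the pattern from the corrupted cue, while deleting 80% of the synapses degrades (but does not abolish) recall, in line with the progressive deterioration the model exhibits.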
package chain

import (
	sdk "github.com/Conflux-Chain/go-conflux-sdk"
	"github.com/Conflux-Chain/go-conflux-sdk/types"
	"github.com/pkg/errors"
)

// ConvertToNumberedEpoch converts named epoch to numbered epoch if necessary.
func ConvertToNumberedEpoch(cfx sdk.ClientOperator, epoch *types.Epoch) (*types.Epoch, error) {
	if _, ok := epoch.ToInt(); ok { // already a numbered epoch
		return epoch, nil
	}

	epochNum, err := cfx.GetEpochNumber(epoch)
	if err != nil {
		return nil, errors.WithMessagef(
			err, "failed to get epoch number for named epoch %v", epoch,
		)
	}

	return types.NewEpochNumber(epochNum), nil
}
#include "method.h"
#include "function_p.h"

phpcxx::Method::Method(const char* name, InternalFunction c, std::size_t nreq, const Arguments& args, bool byRef)
    : Function(name, c, nreq, args, byRef)
{
    this->setAccess(Method::Public);
}

phpcxx::Method::Method(const Method& other)
    : Function(other)
{
}

phpcxx::Method&& phpcxx::Method::setAccess(Method::Access access)
{
    unsigned int flag = static_cast<unsigned int>(access);
    this->d_ptr->m_fe.flags = (this->d_ptr->m_fe.flags & ~ZEND_ACC_PPP_MASK) | flag;
    return std::move(*this);
}

phpcxx::Method&& phpcxx::Method::setStatic(bool v)
{
    if (v) {
        this->d_ptr->m_fe.flags |= ZEND_ACC_STATIC;
    }
    else {
        this->d_ptr->m_fe.flags &= ~ZEND_ACC_STATIC;
    }

    return std::move(*this);
}

phpcxx::Method&& phpcxx::Method::setAbstract(bool v)
{
    if (v) {
        this->d_ptr->m_fe.flags |= ZEND_ACC_ABSTRACT;
    }
    else {
        this->d_ptr->m_fe.flags &= ~ZEND_ACC_ABSTRACT;
    }

    return std::move(*this);
}

phpcxx::Method&& phpcxx::Method::setFinal(bool v)
{
    if (v) {
        this->d_ptr->m_fe.flags |= ZEND_ACC_FINAL;
    }
    else {
        this->d_ptr->m_fe.flags &= ~ZEND_ACC_FINAL;
    }

    return std::move(*this);
}
People with epilepsy have to learn to cope with the unpredictable nature of seizures – but that could soon be a thing of the past. A new brain implant can warn of seizures minutes before they strike, enabling them to get out of situations that could present a safety risk. Epileptic seizures are triggered by erratic brain activity. The seizures last for seconds or minutes, and their unpredictability makes them hazardous and disruptive for people with epilepsy, says Mark Cook of the University of Melbourne in Australia. Like earthquakes, “you can’t stop them, but if you knew when one was going to happen, you could prepare”, he says. With funding from NeuroVista, a medical device company in Seattle, Cook and his colleagues have developed a brain implant to do just that. The device consists of a small patch of electrodes that measure brain wave activity. Over time, the device’s software learns which patterns of brainwave activity indicate that a seizure is about to happen. When it detects such a pattern, the implant then transmits a signal through a wire to a receiver implanted under the wearer’s collarbone. This unit alerts the wearer by wirelessly activating a handheld gadget with coloured lights – a red warning light, for example, signals that a seizure is imminent. Cook’s team tested the device in 15 people with epilepsy over four months. In 11 of them, the system correctly predicted “red light” seizures – those likely to occur in a minimum of 4 minutes – more than 65 per cent of the time. In two of these people the device was able to predict every seizure, but it didn’t work as well in the remaining four, two of whom experienced side effects. Although the implant would only be used in severe cases, advance notice of a seizure could give those individuals a chance to stop driving, get out of social situations and avoid hazards, says Cook. “Just being able to predict them could improve people’s independence enormously,” he says. 
The device could also be linked to deep-brain-stimulation implants, which deliver small electric currents to the brain in order to halt seizures. These implants switch on automatically when seizures start. Triggering them in advance could prevent seizures more effectively, says Cook. An early warning system for seizures could also improve the effectiveness of anti-epileptic drugs such as benzodiazepines. They take 15 minutes to act and are usually ineffective once a seizure starts, says Christian Elger of the University of Bonn in Germany, who was not involved in the research. If seizure prediction devices prove effective, they could encourage research and development of more of these types of drugs. For some people with epilepsy, the device can detect seizures that they themselves aren't aware of. One volunteer's implant had recorded 102 seizures, but the individual himself only recalled 11 of them, since the seizures disrupted the memory centre of his brain. Other studies have shown that patients only report about a quarter of their seizures, says Elger. More accurate monitoring could allow physicians to better assess how well a drug is working, he says. Elger has high hopes for the device, but cautions that it is too early to tell whether it prevents or lessens the effects of seizures, and whether patients can tolerate false alarms. Theoden Netoff at the University of Minnesota in Minneapolis agrees that the results are promising. Given that the people in the study had epilepsy that was particularly difficult to treat, the device may prove even more effective in those with milder forms of the condition. "It may be life-changing even if it has limited predictive power," he says.
Consensus-based distributed control for economic dispatch problem with comprehensive constraints in a smart grid The economic dispatch problem (EDP) has become more complex and challenging in power systems due to the introduction of smart grids. In a smart grid, it is expensive and unreliable for the existing centralized controller to achieve a minimum cost when generating a certain amount of power. In this work, we define a quadratic cost function and comprehensive constraints to improve the consensus algorithm. We propose a distributed control approach based on the improved consensus algorithm to solve the EDP in a smart grid. Different from the centralized approach, the proposed approach enables each generator to collect the mismatch between power demands and generations in a distributed manner. The mismatch in power is used as feedback for each generator to adjust its power generation. The incremental cost of the generator is selected as the consensus quantity that converges to a common value eventually. Simulation results of different case studies are provided to demonstrate the effectiveness of the proposed algorithm. The comparisons between the proposed approach and the existing ones are also presented.
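The update scheme described in the abstract can be illustrated with a toy sketch. This is not the paper's algorithm or data: the cost coefficients, demand, consensus weights, and feedback gain below are invented; generator limits are omitted; and the power mismatch is computed globally here for brevity, whereas the paper collects it in a distributed manner. Each generator averages its neighbours' incremental costs and adds a mismatch-feedback term.

```python
# Generator i has quadratic cost C_i(P) = a_i P^2 + b_i P, so its
# incremental cost is lambda_i = dC_i/dP = 2 a_i P_i + b_i.
a = [0.10, 0.08, 0.12]        # illustrative cost coefficients
b = [2.0, 3.0, 2.5]
demand = 150.0                # total power demand (MW)
n = len(a)

# Doubly-stochastic consensus weights over a fully connected 3-node graph
# (an assumption for this sketch).
W = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]

eps = 0.01                    # feedback gain on the power mismatch
lam = list(b)                 # start each incremental cost at its no-load value

for _ in range(1500):
    # each generator sets its output from its current incremental cost
    P = [(lam[i] - b[i]) / (2 * a[i]) for i in range(n)]
    # mismatch between demand and total generation
    mismatch = demand - sum(P)
    # consensus step on incremental costs plus mismatch feedback
    lam = [sum(W[i][j] * lam[j] for j in range(n)) + eps * mismatch
           for i in range(n)]

P = [(lam[i] - b[i]) / (2 * a[i]) for i in range(n)]
```

At convergence all incremental costs agree on a common value and total generation meets the demand, which is exactly the equal-incremental-cost optimality condition for the unconstrained quadratic dispatch.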
import * as React from 'react';
import { SFC } from 'react';

import { Example } from 'docs/common';
import { Checkbox } from 'components';

const description = 'Checkbox can be checked as its initial state.';

const code = `import React from 'react';
import { Checkbox } from 'jsc-react-ui';

const CheckboxExample = () => <Checkbox checked>Cinema</Checkbox>;`;

export const CheckedExampleCheckbox: SFC<any> = () => (
  <Example
    title="Checked"
    description={description}
    code={code}
  >
    <Checkbox checked>Cinema</Checkbox>
  </Example>
);

CheckedExampleCheckbox.displayName = 'CheckedExampleCheckbox';
/********************************************************************************************************
 * @file     uart.c
 *
 * @brief    for TLSR chips
 *
 * @author   telink
 * @date     Sep. 30, 2010
 *
 * @par      Copyright (c) 2018, Telink Semiconductor (Shanghai) Co., Ltd.
 *           All rights reserved.
 *
 *           The information contained herein is confidential property of Telink
 *           Semiconductor (Shanghai) Co., Ltd. and is available under the terms
 *           of Commercial License Agreement between Telink Semiconductor (Shanghai)
 *           Co., Ltd. and the licensee or the terms described here-in. This heading
 *           MUST NOT be removed from this file.
 *
 *           Licensees are granted free, non-transferable use of the information in this
 *           file under Mutual Non-Disclosure Agreement. NO WARRENTY of ANY KIND is provided.
 *
 *******************************************************************************************************/
#include "hal/soc/soc.h"
#include "hal/soc/uart.h"
#include "drivers/8258/compiler.h"
#include "drivers/8258/uart.h"
#include "drivers/8258/dma.h"
#include "drivers/8258/irq.h"
#include "k_api.h"
#include "common/ring_buffer.h"
#include "tl_common.h"
#include "drivers.h"

#define UART_DMA    1   // uart use dma
#define UART_NDMA   2   // uart not use dma
#define UART_MODE   UART_DMA

volatile unsigned char uart_rx_flag = 0;
volatile unsigned char uart_dmairq_tx_cnt = 0;
volatile unsigned char uart_dmairq_rx_cnt = 0;
volatile unsigned int  uart_ndmairq_cnt = 0;
volatile unsigned char uart_ndmairq_index = 0;

#if (UART_MODE == UART_DMA)
#define UART_DATA_LEN_RX    (160-4)  // data max ? (UART_DATA_LEN+4) must 16 byte aligned, if UART_DATA_LEN<140 cli have error!!!
#define UART_DATA_LEN_TX    (160-4)  // data max ? (UART_DATA_LEN+4) must 16 byte aligned, if UART_DATA_LEN<140 cli have error!!!

typedef struct {
    unsigned int  dma_len;  // dma len must be 4 byte
    unsigned char data[UART_DATA_LEN_RX];
} uart_data_rx_t;

typedef struct {
    unsigned int  dma_len;  // dma len must be 4 byte
    unsigned char data[UART_DATA_LEN_TX];
} uart_data_tx_t;

_attribute_aligned_(16) uart_data_rx_t rec_buff  = {0, {0, }};
_attribute_aligned_(16) uart_data_tx_t send_buff = {0, {0, }};

#if HW_UART_RING_BUF_EN
unsigned char uart_send_rb_buffer[512];  // at least 256; // 0x75
ring_buffer_t uart_send_rb = {
    .buf  = uart_send_rb_buffer,
    .size = sizeof(uart_send_rb_buffer),
    .wptr = 0,
    .rptr = 0,
};
#endif

#elif (UART_MODE == UART_NDMA)
#define rec_buff_Len    16
#define trans_buff_Len  16
__attribute__((aligned(4))) unsigned char rec_buff[rec_buff_Len] = {0};
__attribute__((aligned(4))) unsigned char trans_buff[trans_buff_Len] = {0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, \
                                                                        0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff, 0x00};
#endif

//typedef void (* uart_rx_cb_t)(void);
void hal_uart_send_print(u16 len);
void uart_rx_proc(void);
//fixme Registering Callbacks at irq funtion!!!!
//uart_rx_cb_t uart_rx_handler_cb = NULL;
//fixme Registering Callbacks at irq funtion!!!!
//extern uart_rx_cb_t uart_rx_handler_cb;

#define USE_AOS_RING_BUF_FLOW_EN    0

#if USE_AOS_RING_BUF_FLOW_EN
#define MAX_BUF_UART_BYTES  256
kbuf_queue_t g_buf_queue_uart;
char g_buf_uart[MAX_BUF_UART_BYTES];
#else
#define MAX_BUF_UART_BYTES  128
unsigned char uart_recv_rb_buffer[MAX_BUF_UART_BYTES];  // at least 256; // 0x75
ring_buffer_t uart_recv_rb = {
    .buf  = uart_recv_rb_buffer,
    .size = sizeof(uart_recv_rb_buffer),
    .wptr = 0,
    .rptr = 0,
};
#endif

int32_t hal_uart_init(uart_dev_t *uart)
{
    // note: dma addr must be set first before any other uart initialization! (confirmed by sihui)
    uart_recbuff_init((unsigned short *)&rec_buff, sizeof(rec_buff));

    uart_gpio_set(DMA_UART_TX, DMA_UART_RX);  // uart tx/rx pin set

    uart_reset();  // will reset uart digital registers from 0x90 ~ 0x9f, so uart setting must set after this reset

#if (CLOCK_SYS_CLOCK_HZ == 16000000)
//  uart_init(118, 13, PARITY_NONE, STOP_BIT_ONE);  // baud rate: 9600
//  uart_init(9, 13, PARITY_NONE, STOP_BIT_ONE);    // baud rate: 115200
//  uart_init(6, 8, PARITY_NONE, STOP_BIT_ONE);     // baud rate: 256000
    uart_init(0, 15, PARITY_NONE, STOP_BIT_ONE);    // baud rate: 1024000
    //uart_init(0, 7, PARITY_NONE, STOP_BIT_ONE);   // baud rate: 2048000
#elif (CLOCK_SYS_CLOCK_HZ == 24000000)
//  uart_init(12, 15, PARITY_NONE, STOP_BIT_ONE);   // baud rate: 115200
//  uart_init(5, 15, PARITY_NONE, STOP_BIT_ONE);    // baud rate: 256000
    uart_init(1, 11, PARITY_NONE, STOP_BIT_ONE);    // baud rate: 1024000
    //uart_init(0, 11, PARITY_NONE, STOP_BIT_ONE);  // baud rate: 2048000
#elif (CLOCK_SYS_CLOCK_HZ == 32000000)
//  uart_init(30, 8, PARITY_NONE, STOP_BIT_ONE);    // baud rate: 115200
//  uart_init(24, 4, PARITY_NONE, STOP_BIT_ONE);    // baud rate: 256000
    uart_init(1, 15, PARITY_NONE, STOP_BIT_ONE);    // baud rate: 1024000
    //uart_init(0, 15, PARITY_NONE, STOP_BIT_ONE);  // baud rate: 2048000
#elif (CLOCK_SYS_CLOCK_HZ == 48000000)
//  uart_init(25, 15, PARITY_NONE, STOP_BIT_ONE);   // baud rate: 115200
//  uart_init(12, 15, PARITY_NONE, STOP_BIT_ONE);   // baud rate: 230400
//  uart_init(12, 7, PARITY_NONE, STOP_BIT_ONE);    // baud rate: 460800
    uart_init(12, 3, PARITY_NONE, STOP_BIT_ONE);    // baud rate: 921600
    //uart_init(1, 11, PARITY_NONE, STOP_BIT_ONE);  // baud rate: 2048000
#endif

#if (UART_MODE == UART_DMA)
    uart_dma_enable(1, 1);  // uart data in hardware buffer moved by dma, so we need enable them first
    irq_set_mask(FLD_IRQ_DMA_EN);
    dma_chn_irq_enable(FLD_DMA_CHN_UART_RX | FLD_DMA_CHN_UART_TX, 1);  // uart Rx/Tx dma irq enable
    uart_irq_enable(0, 0);  // uart Rx/Tx irq no need, disable them
#elif (UART_MODE == UART_NDMA)
    uart_dma_enable(0, 0);
    irq_clr_mask(FLD_IRQ_DMA_EN);
    dma_chn_irq_enable(FLD_DMA_CHN_UART_RX | FLD_DMA_CHN_UART_TX, 0);
    uart_irq_enable(1, 0);            // uart RX irq enable
    uart_ndma_irq_triglevel(1, 0);    // set the trig level. 1 indicate one byte will occur interrupt
#endif

#if USE_AOS_RING_BUF_FLOW_EN
    krhino_buf_queue_create(&g_buf_queue_uart, "buf_queue_uart", g_buf_uart, MAX_BUF_UART_BYTES, 1);
#endif

    //uart_rx_handler_cb = uart_rx_proc;
    //irq_enable();
    return 0;
}

_attribute_ram_code_ int32_t hal_uart_send(uart_dev_t *uart, const void *data, uint32_t size, uint32_t timeout_us)
{
    //int i = 0;
    //uint8_t byte;
    int8_t send_flag = 0;

#if (UART_MODE == UART_DMA)
  #if HW_UART_RING_BUF_EN
    uint8_t r = irq_disable();
    send_flag = ring_buffer_write(&uart_send_rb, data, size);
    if ((size > 1)  // aos may call hal uart send directly in cli_putstr
        || ring_buffer_get_count(&uart_send_rb) > min(sizeof(uart_send_rb_buffer)/2, UART_DATA_LEN_TX)) {
        hal_uart_send_print(size);
    }
    irq_restore(r);
  #else
    send_buff.dma_len = size;
    memcpy(&send_buff.data, (unsigned short *)data, size);
    while (1) {
        send_flag = uart_dma_send((unsigned short *)&send_buff);
        if ((0 == send_flag) && timeout_us) {
            sleep_us(1);
            timeout_us--;
        } else {
            break;
        }
    };
  #endif
#else
    for (i = 0; i < size; i++) {
        byte = *((uint8_t *)data + i);
        uart_ndma_send_byte(byte);
    }
#endif

    if (send_flag == 0) send_flag = -1;
    return send_flag;
}

#if HW_UART_RING_BUF_EN
void hal_uart_send_loop(int wait_flag)
{
    if (wait_flag) {
        unsigned int tick = clock_time();
        while (!(reg_uart_status1 & FLD_UART_TX_DONE) && !clock_time_exceed(tick, 2000)) {
            // busy
        }
    }

    uint8_t r = irq_disable();
    if (reg_uart_status1 & FLD_UART_TX_DONE) {
        if (ring_buffer_get_count(&uart_send_rb) > 0) {
            send_buff.dma_len = ring_buffer_read(&uart_send_rb, send_buff.data, sizeof(send_buff.data));
            uart_dma_send((unsigned short *)&send_buff);
        }
    } else {
        // static unsigned short RingBufUartBusy; RingBufUartBusy++;
    }
    irq_restore(r);
}

void hal_uart_send_print(u16 len)
{
    static unsigned short RingBufSimPrintfMax;
    if (len > RingBufSimPrintfMax) { RingBufSimPrintfMax = len; }
    hal_uart_send_loop(0);  // no need to wait.
}
#endif

int32_t hal_uart_recv(uart_dev_t *uart, void *data, uint32_t expect_size, uint32_t timeout)
{
    uint8_t *pdata = (uint8_t *)data;
    //int i = 0;
    uint32_t rx_count = 0;
    int32_t ret = -1;
    //int32_t rev_size;

    if (data == NULL) {
        return -1;
    }

#if USE_AOS_RING_BUF_FLOW_EN
    for (i = 0; i < expect_size; i++) {
        ret = krhino_buf_queue_recv(&g_buf_queue_uart, RHINO_NO_WAIT, &pdata[i], &rev_size);
        if ((ret == 0) && (rev_size == 1)) {
            rx_count++;
        } else {
            break;
        }
    }
#else
    rx_count = ring_buffer_read(&uart_recv_rb, pdata, expect_size);
#endif

    if (rx_count != 0) {
        ret = 0;
    } else {
        ret = -1;
    }
    return ret;
}

int32_t hal_uart_recv_II(uart_dev_t *uart, void *data, uint32_t expect_size, uint32_t *recv_size, uint32_t timeout)
{
    uint8_t *pdata = (uint8_t *)data;
    //int i = 0;
    uint32_t rx_count = 0;
    int32_t ret = -1;
    //int32_t rev_size;

    if (data == NULL) {
        return -1;
    }

#if USE_AOS_RING_BUF_FLOW_EN
    for (i = 0; i < expect_size; i++) {
        ret = krhino_buf_queue_recv(&g_buf_queue_uart, RHINO_NO_WAIT, &pdata[i], &rev_size);
        if ((ret == 0) && (rev_size == 1)) {
            rx_count++;
        } else {
            break;
        }
    }
#else
    rx_count = ring_buffer_read(&uart_recv_rb, pdata, expect_size);
#endif

    if (recv_size) *recv_size = rx_count;

    if (rx_count != 0) {
        ret = 0;
    } else {
        ret = -1;
    }
    return ret;
}

int32_t hal_uart_finalize(uart_dev_t *uart)
{
    return 0;
}

_attribute_ram_code_ void uart_rx_proc(void)
{
#if (UART_MODE == UART_DMA)
    unsigned char uart_dma_irqsrc;
    // 1. UART irq
    uart_dma_irqsrc = dma_chn_irq_status_get();  /// in function, interrupt flag have already been cleared, so need not to clear DMA interrupt flag here
    if (uart_dma_irqsrc & FLD_DMA_CHN_UART_RX) {
        dma_chn_irq_status_clr(FLD_DMA_CHN_UART_RX);
        uart_dmairq_rx_cnt++;
#if USE_AOS_RING_BUF_FLOW_EN
        for (int i = 0; i < rec_buff.dma_len; i++)
            krhino_buf_queue_send(&g_buf_queue_uart, &(rec_buff.data[i]), 1);
#else
        ring_buffer_write(&uart_recv_rb, rec_buff.data, rec_buff.dma_len);
#endif
    }

    if (uart_dma_irqsrc & FLD_DMA_CHN_UART_TX) {
        dma_chn_irq_status_clr(FLD_DMA_CHN_UART_TX);
        uart_dmairq_tx_cnt++;
    }

#elif (UART_MODE == UART_NDMA)
    uint8_t bytedata[1] = {0x0a};
    static unsigned char uart_ndma_irqsrc;
    uart_ndma_irqsrc = uart_ndmairq_get();  /// get the status of uart irq.
    if (uart_ndma_irqsrc) {
        // cycle the four registers 0x90 0x91 0x92 0x93, in addition reading will clear the irq.
        if (uart_rx_flag == 0) {
            bytedata[0] = rec_buff[uart_ndmairq_cnt++] = reg_uart_data_buf(uart_ndmairq_index);
            _printf("uart_ndmairq_index %d\t bytedata:%x\n", uart_ndmairq_index, rec_buff[uart_ndmairq_cnt]);
            //bytedata = rec_buff[uart_ndmairq_cnt];
            //gpio_toggle(GPIO_PA3);
            //krhino_buf_queue_send(&g_buf_queue_uart, bytedata, 1);
            uart_ndmairq_index++;
            uart_ndmairq_index &= 0x03;  // cycle the four registers 0x90 0x91 0x92 0x93, it must be done like this for the design of SOC.
            if (uart_ndmairq_cnt % 4 == 0 && uart_ndmairq_cnt != 0) {
                uart_rx_flag = 1;
#if USE_AOS_RING_BUF_FLOW_EN
                krhino_buf_queue_send(&g_buf_queue_uart, rec_buff, 16);
#else
                ring_buffer_write(&uart_recv_rb, rec_buff, 16);
#endif
            }
        } else {
            read_reg8(0x90 + uart_ndmairq_index);
            uart_ndmairq_index++;
            uart_ndmairq_index &= 0x03;
            uart_ndmairq_cnt = 0;  // Clear uart_ndmairq_cnt
            uart_rx_flag = 0;
        }
    }
#endif
}
import os
import time
import datetime
import cv2

# suppress warning.
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)

import torch

import _init_paths
from opts import opts
import ref
from utils.utils import adjust_learning_rate
from datasets.utils import create_dataset
from datasets.junction import *
from model.inception import inception
from pathlib import Path
import importlib

project_root = ref.root_dir
debug_dataset = False


def init_trainer(trainer_name, opt, model, criterion, optimizer):
    trainerLib = importlib.import_module('trainer.{}_trainer'.format(trainer_name))
    return trainerLib.Trainer(opt, model, criterion, optimizer)


def init_criterion(criterion_name, opt):
    criterionLib = importlib.import_module('loss.{}_loss'.format(criterion_name))
    return criterionLib.Loss(opt)


def init_optimizer(opt, model):
    if opt.optimizer == 'sgd':
        optimizer = torch.optim.SGD(model.parameters(),
                                    lr=opt.lr,
                                    momentum=0.9,
                                    weight_decay=1e-6)
    elif opt.optimizer == 'adam':
        optimizer = torch.optim.Adam(model.parameters(),
                                     lr=opt.lr,
                                     betas=(opt.momentum, 0.999),
                                     eps=1e-8,
                                     weight_decay=opt.weightDecay)
    elif opt.optimizer == 'rms':
        optimizer = torch.optim.RMSprop(model.parameters(),
                                        alpha=0.9,
                                        lr=opt.lr,
                                        eps=1e-6,
                                        weight_decay=1e-6)
    return optimizer


# init
def init_dataset(H, split, size_info=True):
    if split == 'train':
        return Junction(H, split='train')
    elif split == 'val':
        return Junction(H, split='val')
    elif split == 'test':
        return JunctionTest(H, split='test', size_info=size_info)
    else:
        raise NotImplementedError


def init_dataloader(opt):
    test_loader = torch.utils.data.DataLoader(
        init_dataset(opt.hype, 'test'),
        batch_size=opt.batch_size,
        shuffle=False,
        num_workers=1
    ) if opt.test else None

    train_loader = torch.utils.data.DataLoader(
        init_dataset(opt.hype, opt.split),
        batch_size=opt.batch_size,
        shuffle=True,
        num_workers=max(4, opt.batch_size),
        pin_memory=True
    ) if not opt.test else None

    if train_loader is not None:
        print("length of train_loader: {}".format(len(train_loader)))
    return train_loader, test_loader


def check_dataset(opt, split):
    ## options for creating.
    H = opt.hype
    if opt.create_dataset:
        create_dataset(H, split, use_mp=True)
        print("finished creating dataset.")
        return True
    return False


def init_folder(opt):
    if not os.path.isdir(opt.saveDir):
        os.mkdir(opt.saveDir)


def main():
    opt = opts().parse()
    H = opt.hype
    if check_dataset(opt, 'train'):
        return

    init_folder(opt)
    torch.backends.cudnn.benchmark = True
    os.environ['CUDA_VISIBLE_DEVICES'] = '{}'.format(opt.gpu)

    lr_param = {}
    lr_param['lr_policy'] = 'step'
    lr_param['steppoints'] = H['steppoints']  # [9, 12, 15], [8, 10, 12], [20, 24, 26]
    opt.lr_param = lr_param

    net = opt.net
    opt.cuda = True  # use cuda by default

    # build model/cuda
    if torch.cuda.is_available() and not opt.cuda:
        print("WARNING: You have a CUDA device, so you should probably run with --cuda")

    if net == 'inception':
        model = inception(['-', '+'], opt)
    elif net == 'resnet':
        raise NotImplementedError

    # use cuda by default.
    if opt.cuda:
        model.cuda()

    ## Init
    optimizer = init_optimizer(opt, model)
    criterion = init_criterion(opt.criterion, opt)
    trainer = init_trainer(opt.trainer, opt, model, criterion, optimizer)  # Trainer(model, opt)
    train_loader, test_loader = init_dataloader(opt)

    ## Train
    if not opt.test:
        trainer.train(train_loader, val_loader=None)
    else:
        epoch_test = opt.checkepoch
        checkpoint = ref.output_root / "{expID}/model_{epoch_test}.pth".format(
            expID=opt.exp, epoch_test=epoch_test)
        trainer.test(test_loader, epoch_test, checkpoint)


if __name__ == '__main__':
    main()
Ambiguity in the Dural Tail Sign on MRI Background: Meningiomas give rise to the dural tail sign (DTS) on contrast-enhanced magnetic resonance imaging (CEMRI). The presence of DTS does not always qualify for a meningioma, as it is seen in only 60-72% of cases. This sign has been described in various other lesions like lymphomas, metastasis, hemangiopericytomas, schwannomas and very rarely glioblastoma multiforme (GBM). The characteristics of dural-based GBMs are discussed here, as only eleven such cases are reported in the literature till date. Here we discuss the unique features of this rare presentation. Case Description: A 17-year-old male presented to the emergency department (ED) with complaints of headache, recurrent vomiting, vision loss in the right eye and altered sensorium. On examination, the patient was drowsy with right hemiparesis, secondary optic atrophy in the right eye and papilledema in the left eye. MRI of the brain showed a heterogeneous, predominantly solid-cystic lesion with a central hypo-intense core suggestive of necrosis, heterogeneous enhancement and a positive DTS. The patient underwent emergency left parasagittal parieto-occipital craniotomy and gross total tumor excision including the involved dura and the falx. On opening the dura, the tumor was surfacing, invading the superior sagittal sinus and the falx; it was greyish, soft to firm in consistency with central necrosis, and highly vascular, suggesting a high-grade lesion. Postoperative computed tomography (CT) of the brain showed evidence of gross total tumor excision (GTR). The postoperative course of the patient was uneventful. Histopathological analysis revealed GBM with PNET-like components. The dura as well as the falx were involved by the tumor. Conclusion: GBMs can arise in typical locations along with DTS, mimicking meningiomas. Excision of the involved dura and the falx becomes important in this scenario, so as to achieve GTR.
Hence, a high index of suspicion preoperatively, aided by magnetic resonance spectroscopy (MRS), can help distinguish GBMs from meningioma, thereby impacting upon the prognosis. INTRODUCTION Dural tail sign (DTS) is considered the hallmark for the radiological diagnosis of a meningioma. It is seen in 60-72% of cases of meningiomas and would represent either direct tumor invasion or reactive changes surrounding the tumor itself. [1,2,4,6, Dural tail has been reported in the literature in nonmeningiomatous pathologies such as lymphomas/chloromas, dural-based metastasis, hemangiopericytomas, schwannoma, chordomas, pleomorphic xantho-astrocytomas, and very rarely glioblastoma multiforme (GBM). Literature regarding GBMs presenting with dural tail mimicking meningiomas is sparse. Here, we report a rare case of GBM with dural tail mimicking a posterior one-third parasagittal meningioma and review the relevant literature. CASE REPORT A 17-year-old male with no comorbidities presented to the emergency department (ED) with complaints of headache and recurrent vomiting for 2 weeks, vision loss in the right eye for 1 week, and altered sensorium for 2 days. On examination, the patient was drowsy but arousable, with right hemiparesis grade 4/5, right-sided secondary optic atrophy, and left-sided papilledema (pseudo-Foster Kennedy syndrome). Magnetic resonance imaging (MRI) of the brain showed, on T1-weighted images, a heterogeneous, predominantly solid (iso-intense) cystic lesion with a central hypo-intense core suggestive of necrosis. T2-weighted images showed a solid (iso-intense) cystic (hyper-intense) lesion with a hyper-intense central core suggestive of necrosis. On contrast administration, the lesion demonstrated heterogeneous enhancement with central necrosis and a positive DTS. The patient was taken up for emergency surgery; a left parasagittal parieto-occipital craniotomy was fashioned and gross total tumor excision was done.
On opening the dura, the tumor was seen surfacing and invading the superior sagittal sinus as well as the falx, with infiltration into the adjacent brain parenchyma. The tumor was greyish, soft to firm in consistency, with central necrosis, and highly vascular, suggestive of a high-grade lesion. Peroperatively, the patient had a transient episode of hypotension, which was managed. Approximate blood loss was 2 liters. Postoperative computed tomography scans showed complete tumor removal. Postoperative recovery was uneventful and the patient was discharged in a stable condition. Final biopsy revealed GBM with primitive neuro-ectodermal (PNET)-like components. DISCUSSION The presence of dural tail sign on MRI is highly suggestive of a meningioma but not a pathognomonic sign. The presence of a dural tail in GBM is very rare, and a thorough review of English literature revealed 10 cases of GBM exhibiting DTS mimicking meningioma. The demographic and clinical data are listed in Table 1. All except 2 patients, including the index case, were elderly, suggesting its common occurrence in that age group. Meningiomas are extra-axial tumors, arising from arachnoid cap cells; they parasitize the dural blood supply with subsequent invasion. Approximately 60-72% of meningiomas show classical DTS. Controversy exists regarding the nature of the dura showing the tail sign, with the majority of the published studies claiming it to be reactive changes, whereas few studies have shown it to be due to actual tumoral involvement. [1,2,4,6, GBMs are intra-axial lesions exhibiting ring-like contrast enhancement with areas of central necrosis and gross perilesional edema.
GBMs presenting as an extra-axial mass with DTS are unusual, thereby leading to a diagnostic dilemma in the preoperative period. The criteria for the diagnosis of a dural tail were given by Aoki et al. and include: a. linear enhancement present along the dura mater, originating from and extending outward from the tumor margin; b. enhancement greater than elsewhere along the dura; c. findings present in two different imaging planes; d. agreement among three observers. All the criteria were fulfilled in our case, confirming the DTS and leading to the provisional diagnosis of a meningioma. Wilms et al. first reported the significance of the DTS in GBMs through histopathological confirmation of the involved dura. None of the five patients with a final biopsy of GBM reported in their series showed invasion of the dural tail by the tumor. Hence, they concluded the dural tail to be just a reactive change rather than actual infiltration by the tumor. Ten cases of GBMs with a DTS have been reported in the literature to date, and, except for the series of Wilms et al., none of the reports included a dural biopsy. In the index case reported here, we have histological confirmation of involvement of both the falx and the dura by the tumor beyond its attachment. There were lytic changes on the inner table of the overlying bone, which were drilled away. Unlike meningiomas, GBMs are highly vascular and aggressive lesions invading normal brain, deriving their blood supply from the pial vasculature. Because the vessels of the dura mater rarely feed GBMs, the enhancing dural tail is likely to develop from vascular congestion or proliferation. On the other hand, meningiomas derive their blood supply from dural vessels, mostly from the external carotid artery (ECA) circulation, with few exceptions. Patel et al. demonstrated tumor blush and ECA supply in both their cases on angiography, which misled the preoperative diagnosis toward meningioma.
Both of their patients underwent angio-embolization followed by surgery, and blood loss was less than 500 ml in each case. Similarly, in one of the cases reported by Wilms et al., an angiogram demonstrated feeders from the middle meningeal artery (MMA), similar to meningiomas. An angiogram was not performed in our case in view of the emergency setting, but the tumor had parasitized the falx and the convexity dura for its nutrition, as noted during surgery. The massive blood loss encountered during surgery might have been reduced by prior angio-embolization, as in the cases of Patel et al., but for the emergency situation. Magnetic resonance spectroscopy (MRS) is a useful adjunct in preoperatively differentiating meningiomas from GBMs. Majos et al. studied the role of proton MRS in differentiating various tumors and found a large lipid/lactate resonance to be characteristic of GBMs and large alanine peaks to be characteristic of meningiomas. Hsieh et al. reported a case of GBM mimicking meningioma with the classical dural tail, in which MRS revealed the characteristic lipid/lactate peak strongly suggesting GBM, subsequently confirmed on histopathology. The origin of extra-axial GBMs has been a matter of debate, and two mechanisms have been proposed. One hypothesis, involving GBM of the cranial nerves (CN), states that GBM arises primarily from CNS tissue lying within the proximal parts of the CN itself; CNS tissue may extend well into the CN, and isolated islands of CNS tissue may even be found within the CN at a considerable distance from its exit point. The second hypothesis is that the tumor originates primarily in heterotopic neuroglial cell nests in the leptomeninges of the adjacent brain. In the index case, the dural and falcine invasion is best explained by the second hypothesis.
CONCLUSION For lesions arising in typical locations for meningiomas but with atypical appearances, GBM should be considered in the differential diagnosis. MRS is a very valuable method of differentiating GBMs from meningiomas. Preoperative angiography appears to have a role in reducing blood loss, although it was not performed in the present case. A high index of suspicion prior to surgery and excision of the involved dural elements should lead to a better outcome. Declaration of patient consent The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
/** * user1 allowed because he's member of group1 */ @Test public void user1_allowed_g_voter_u2g1_750() throws Exception { updateClient(org1Users[1]); queryHiveTableOrView(db_general, g_voter_u2g1_750); }
// hash calculates the body content hash
func (body *luaBody) hash() (h string) {
	b := body.buffer()
	if b.Len() == 0 {
		// Return early: the original fell through and recomputed the hash below,
		// making this branch dead code.
		return hashStr("")
	}
	return hashStr(b.String())
}
package com.example.donation.Activities; import android.content.Intent; import android.os.Bundle; import android.util.Log; import android.view.MenuItem; import android.view.View; import android.view.View.OnClickListener; import android.widget.TextView; import android.widget.Toast; import androidx.appcompat.app.AppCompatActivity; import androidx.appcompat.widget.Toolbar; import androidx.appcompat.widget.Toolbar.OnMenuItemClickListener; import androidx.preference.PreferenceManager; import androidx.recyclerview.widget.LinearLayoutManager; import androidx.recyclerview.widget.RecyclerView; import androidx.recyclerview.widget.RecyclerView.LayoutManager; import com.android.volley.AuthFailureError; import com.android.volley.Request.Method; import com.android.volley.Response.ErrorListener; import com.android.volley.Response.Listener; import com.android.volley.VolleyError; import com.android.volley.toolbox.StringRequest; import com.example.donation.Adapters.RequestAdapter; import com.example.donation.DataModels.RequestDataModel; import com.example.donation.R; import com.example.donation.Utils.Endpoints; import com.example.donation.Utils.VolleySingleton; import com.google.gson.Gson; import com.google.gson.reflect.TypeToken; import java.lang.reflect.Type; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Objects; public class MainActivity extends AppCompatActivity { private RecyclerView recyclerView; private List<RequestDataModel> requestDataModels; private RequestAdapter requestAdapter; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); TextView make_request_button = findViewById(R.id.make_request_button); make_request_button.setOnClickListener(new OnClickListener() { @Override public void onClick(View view) { startActivity(new Intent(MainActivity.this, MakeRequestActivity.class));
} }); requestDataModels = new ArrayList<>(); Toolbar toolbar = findViewById(R.id.toolbar); toolbar.setOnMenuItemClickListener(new OnMenuItemClickListener() { @Override public boolean onMenuItemClick(MenuItem item) { if (item.getItemId() == R.id.search_button) { //open search startActivity(new Intent(MainActivity.this, SearchActivity.class)); } return false; } }); recyclerView = findViewById(R.id.recyclerView); LayoutManager layoutManager = new LinearLayoutManager(this, RecyclerView.VERTICAL, false); recyclerView.setLayoutManager(layoutManager); requestAdapter = new RequestAdapter(requestDataModels, this); recyclerView.setAdapter(requestAdapter); populateHomePage(); TextView pick_location = findViewById(R.id.pick_location); String location = PreferenceManager.getDefaultSharedPreferences(this) .getString("city", "no_city_found"); if (!location.equals("no_city_found")) { pick_location.setText(location); } } private void populateHomePage() { final String city = PreferenceManager.getDefaultSharedPreferences(getApplicationContext()) .getString("city", "no_city"); StringRequest stringRequest = new StringRequest( Method.POST, Endpoints.get_requests, new Listener<String>() { @Override public void onResponse(String response) { Gson gson = new Gson(); Type type = new TypeToken<List<RequestDataModel>>() { }.getType(); List<RequestDataModel> dataModels = gson.fromJson(response, type); requestDataModels.addAll(dataModels); requestAdapter.notifyDataSetChanged(); } }, new ErrorListener() { @Override public void onErrorResponse(VolleyError error) { Toast.makeText(MainActivity.this, "Something went wrong:(", Toast.LENGTH_SHORT).show(); Log.d("VOLLEY", Objects.requireNonNull(error.getMessage())); } }) { @Override protected Map<String, String> getParams() throws AuthFailureError { Map<String, String> params = new HashMap<>(); params.put("city", city); return params; } }; VolleySingleton.getInstance(this).addToRequestQueue(stringRequest); } }
LMI Design of Sliding Mode Robust Control for Electric Linear Load Simulator For the electric linear load simulator (ELLS) of a linear steering gear, a sliding-mode variable-structure control strategy designed via linear matrix inequalities (LMIs) is proposed. First, to address the redundant-force disturbance that arises during the actual dynamic loading process, the state-space equation of the ELLS is established and an LMI-based sliding-mode variable-structure controller is designed, whose compensation requires only the solution of an LMI. Second, to avoid the high-frequency noise introduced when traditional sliding-mode designs differentiate the measured output, the LMI-based sliding-mode controller (SMC) achieves accurate control using only the measured system output, and the convergence of the designed controller is proved with a Lyapunov function. Finally, a Simulink model verifies the accurate control of the system by the LMI-based SMC.
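As a hedged illustration of the general design pattern the abstract describes (generic placeholder matrices, not the paper's actual system), a linear sliding-mode design is typically posed as:

```latex
% Plant, sliding surface, and Lyapunov reaching condition (generic sketch)
\dot{x}(t) = A x(t) + B u(t) + B d(t)   % state-space model with matched disturbance d
s(t) = G x(t)                           % linear sliding surface; G chosen so that GB is nonsingular
V(t) = \tfrac{1}{2}\, s^{\top}(t)\, s(t), \qquad
\dot{V}(t) = s^{\top}(t)\, \dot{s}(t) < 0   % reaching condition enforced by the control law
```

Stability of the reduced-order dynamics on $s = 0$ can then be expressed as a standard LMI feasibility problem, e.g. finding $P = P^{\top} \succ 0$ such that $A_{eq}^{\top} P + P A_{eq} \prec 0$, which off-the-shelf LMI solvers handle directly.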
import requests

def authenticate(username, password):
    API_AUTH = "https://api2.xlink.cn/v2/user_auth"
    auth_data = {'corp_id': "1007d2ad150c4000", 'email': username, 'password': password}
    # API_TIMEOUT and LaurelException are assumed to be defined elsewhere in this module.
    r = requests.post(API_AUTH, json=auth_data, timeout=API_TIMEOUT)
    try:
        body = r.json()
        return (body['access_token'], body['user_id'])
    except (ValueError, KeyError):
        # ValueError covers a non-JSON response; KeyError a JSON error payload.
        raise LaurelException('API authentication failed')
Reusing Educational Material for Teaching and Learning: Current Approaches and Directions In this article we survey current approaches in electronic-document technologies for finding, reusing, and adapting documents for teaching or learning purposes. We describe how research in structured documents, document representation and retrieval, semantic representation of document content and relationships, and ontologies could provide solutions to the problem of reusing educational material for teaching and learning.
Knowledge Cities: The Future of Cities in the Knowledge-based Economy Nowadays, all major international development agencies and the nations with the highest levels of overall development have adopted knowledge-based development (KBD) policies. In this way, the new theme of "knowledge cities" (KCs) came to the fore as an important topic of interest and discussion. The main purpose of this paper is to present, in a coherent and comprehensive way, the basic milestones of the author's research related to KBD and, mainly, to the concept of KCs.
1. Field of the Invention The present invention relates generally to an improved data processing system and in particular to responding to events on a server. Still more particularly, the present invention relates to responding to attacks or errors on a server that degrade an ability of the server to perform an intended purpose. 2. Description of the Related Art A continuing problem in modern computing systems is attacks by remote clients on servers. Attacks can come in many forms, but are generally categorized as either denial of service (DOS) attacks or takeover attacks. Denial of service attacks seek to disable a service, such as a banking Internet service system. Usually, denial of service attacks seek to overwhelm one or more servers or software programs by flooding the servers or software programs with bogus data packets. Takeover attacks are more rare, but are more insidious as they seek to actually take over the server or software programs. In addition to attacks, servers or software programs handling network traffic can have relatively normal operating problems. For example, execution of an uncommon code path, a programming error, and an exception during execution of a program operating on one of the servers or clients can also cause a suspension of service. However, whether the event is malicious or benign, the effect can be the same: denial of service. In the case of large business enterprises, the cost of a service being down can be in the millions of dollars even if the service is down for only a few hours.
The Tampa Bay Rowdies were able to close out the spring season unbeaten at home with a convincing 3-0 victory over the Atlanta Silverbacks. The Rowdies got the goals and the shutout they were hoping for in their match, but a hectic 3-3 shootout in New York between the Cosmos and Armada FC meant the spring title narrowly slipped out of their grasp. Head coach Thomas Rongen rewarded the squad that performed superbly against FC Edmonton last week with another start this week. The only exception was Stefan Antonijevic, who returned to his regular starting center back role in place of Gale Agbossoumonde. After serving red card suspensions last week, leading scorer Maicon Santos and assist leader Georgi Hristov started the match on the bench for the first time this season. The passing in the midfield area was once again crisp and the attacking unit showed positive signs connecting while moving up the field. However, the energy in the first 45 didn’t quite match the level against Edmonton and the forwards found the sound Silverbacks back line much more difficult to break down. Despite dominating in possession, the Rowdies failed to produce many quality chances on net. The best chance of the first half came in the 37th minute when Brian Shriver found space on the right side of the penalty area to fire off a shot. Keeper Steward Ceus, though, had the angle closed down and was able to deflect the attempt away. With some words of encouragement from Rongen at halftime the players were able to break through and find the necessary edge to take the victory. “It was a flat first half,” said Rongen. “We addressed a few issues at halftime and the second half was much better, from an emotional standpoint. The tempo was better and obviously we were better in the final third, which was nice to see.” Along with Rongen’s halftime pep talk, the Rowdies also received a wakeup call in the 56th minute.
A cross from the left flank was whiffed on by Atlanta’s Matt Horth, but then fell into the path of a sprinting Shaka Bangura. The forward laid out to beat left back Ben Sweat and keeper Matt Pickens to the ball and his shot bounced off the crossbar and then the far post before bouncing out of danger. The Rowdies answered only four minutes later on a sequence that demonstrated the great awareness that Corey Hertzog and Darwin Espinal have developed with one another on the training field. Espinal penetrated the Silverbacks penalty area and beat his man to the endline before sending in a chip directly to Corey Hertzog on the near post. The ball came to him in an awkward position, but Hertzog showed great resourcefulness by redirecting the ball into the net with the upper right corner of his chest. “All week, the coaches have been telling me to go to the near post,” said Hertzog. “When I saw Darwin taking his man on, the first thing that came to my head was to get to the near post. He shot it at me and I just tried to get anything on it. … I hadn’t scored since the first game and I’ve been working hard all season to get the second goal. It feels good to go into that break with confidence.” After a relatively quiet night, midfielder Martin Núñez got the whole crowd at Al Lang Field on their feet in the 75th minute with a sublime free kick from 22 yards out that Ceus knew absolutely nothing about. The goal was Núñez’s third of the season, which puts him in a tie with Maicon Santos. The Rowdies continued to dominate and things only got easier after Atlanta’s Jaime Chavez received two yellow cards in the span of one minute and was ejected from the match in the 86th minute. Hristov would grab the third goal off the bench when he converted a penalty earned by Robert Hernandez in the 89th minute. “We put ourselves in a good position going into the Fall Season,” said Rongen. “We haven’t lost at home, which is great. We’ve got a tough place for teams to play at now.
There are a lot of positives from this 10-game Spring Season.” Rongen is justified in crowing about the Rowdies home record. The Rowdies have won their last four matches in St. Petersburg and drew the one before that, earning 13 out of a possible 15 points. For comparison, the club only managed 13 points at home in the entire 2014 season. A big reason for that turnaround at home this year is the much-improved defense. The Rowdies have recorded four shutouts and are tied with the Cosmos for second fewest goals allowed (9). Veteran Tamika Mkandawire has been resolute while playing every available minute of the spring season at center back and has donned the captain’s armband for many of those minutes as well. “We’re trying to turn Al Lang into a fortress and make it difficult for teams to come here. We’ve done that so far and long may it continue,” said Mkandawire. “I think we’ve worked very hard to keep clean sheets and to look after each other on the pitch. We defended as a team, starting from the front. I just think we’ve worked harder. We’ve got a great set of guys. We’ve been working very hard on the training pitch with all the coaches.” The spring title may have eluded them, but there are many encouraging signs for the Rowdies heading into the Fall Season. The late surge of youngsters Espinal and Hernandez, midfielder Juan Guerra finding good form in the absence of Marcelo Saragosa for the final three matches, and one of the most consistent defenses in the league are all reasons to believe this team is a serious contender. For all the talk of the Cosmos running away with the spring, the Rowdies only finished one point behind them on the table. A terrific showing for a totally overhauled roster that Rongen believes hasn’t hit its best stride yet. Had only one or two close results (which are a frequent occurrence in this league) fallen another way, the Rowdies would have been the ones who punched their playoff ticket in June.
Rongen and company will be content to sit alone in second place heading into the Fall Season, though, as they look to secure one of the three remaining playoff berths. They’ll kick off the fall campaign against none other than Atlanta at home on July 4th.
import numpy as np
from scipy.stats import randint as sp_randInt
from scipy.stats import uniform as sp_randFloat
from lightgbm import LGBMClassifier
from sklearn.model_selection import RandomizedSearchCV
import warnings

warnings.filterwarnings("ignore")

CV = 5
N_ITER = 150
RANDOM_SEED = 32


class LGBM_model:
    def __init__(self, n_jobs=15):
        self.n_jobs = n_jobs
        self._best_model_params = None
        self.model = LGBMClassifier(random_state=RANDOM_SEED, silent=True)
        self.metric_learner = None

    def add_metric_learner(self, metric_learner):
        self.metric_learner = metric_learner

    def get_best_model_configuration(self, X, y):
        # Fit the optional metric learner first, then search in the transformed space.
        if self.metric_learner:
            self.metric_learner.fit(X, y)
            X = self.metric_learner.transform(X)
        estimator = LGBMClassifier(random_state=RANDOM_SEED, silent=True)
        parameters = {'max_depth': sp_randInt(2, 8),
                      'learning_rate': sp_randFloat(),
                      'num_leaves': sp_randInt(2, 15)}
        # scoring must be a scorer name (or make_scorer object), not a bare
        # metric function such as accuracy_score.
        decision = RandomizedSearchCV(estimator=estimator,
                                      param_distributions=parameters,
                                      cv=CV, n_iter=N_ITER,
                                      n_jobs=self.n_jobs, verbose=1,
                                      scoring='accuracy',
                                      random_state=RANDOM_SEED)
        decision.fit(X, y)
        # The original wrapped the fitted search in a second Pipeline containing
        # the metric learner, which re-applied the transform at predict time on
        # top of the manual transform in predict_proba. Keep them separate instead.
        self._best_model_params = decision.best_params_
        return decision

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.model = self.get_best_model_configuration(X, y)

    def predict_proba(self, X):
        # The model was fitted on metric-learner-transformed data, so apply the
        # same transform here exactly once.
        if self.metric_learner:
            X = self.metric_learner.transform(X)
        return self.model.predict_proba(X)

    def get_params(self):
        return self._best_model_params
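A self-contained sketch of the hyperparameter-search pattern the class above relies on. scikit-learn's GradientBoostingClassifier stands in for LGBMClassifier so the example runs without lightgbm installed, and the dataset is synthetic:

```python
import numpy as np
from scipy.stats import randint
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.datasets import make_classification

# Synthetic binary classification problem.
X, y = make_classification(n_samples=200, n_features=10, random_state=32)

# Randomized search over integer-valued hyperparameters, scored by name.
params = {"max_depth": randint(2, 8), "n_estimators": randint(10, 50)}
search = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=32),
    param_distributions=params,
    n_iter=5, cv=3, scoring="accuracy", random_state=32,
)
search.fit(X, y)

# The fitted search object exposes best_params_ and acts as the best estimator.
proba = search.predict_proba(X[:5])
```

Passing `scoring="accuracy"` (a scorer name) rather than the `accuracy_score` function is the key detail: RandomizedSearchCV expects a scorer, not a bare metric.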
package consensus import ( "encoding/hex" "encoding/json" "testing" "github.com/qlcchain/go-qlc/common/types" "github.com/qlcchain/go-qlc/p2p/protos" ) var ( utilsblock1 = `{ "type": "state", "addresses": "<KEY>", "previous": "0000000000000000000000000000000000000000000000000000000000000000", "representative": "qlc_3oftfjxu9x9pcjh1je3xfpikd441w1wo313qjc6ie1es5aobwed5x4pjojic", "balance": "00000000000000000000000000000060", "link": "D5BA6C7BB3F4F6545E08B03D6DA1258840E0395080378A890601991A2A9E3163", "token": "<KEY>", "signature": "AD57AA8819FA6A7811A13FF0684A79AFDEFB05077BCAD4EC7365C32D2A88D78C8C7C54717B40C0888A0692D05BF3771DF6D16A1F24AE612172922BBD4D93370F", "work": "13389dd67ced8429" } ` utilsblock2 = `{ "type": "state", "addresses": "<KEY>", "previous": "0000000000000000000000000000000000000000000000000000000000000000", "representative": "<KEY>", "balance": "00000000000000000000000000000050", "link": "D5BA6C7BB3F4F6545E08B03D6DA1258840E0395080378A890601991A2A9E3163", "token": "<KEY>", "signature": "AD57AA8819FA6A7811A13FF0684A79AFDEFB05077BCAD4EC7365C32D2A88D78C8C7C54717B40C0888A0692D05BF3771DF6D16A1F24AE612172922BBD4D93370F", "work": "13389dd67ced8429" } ` ) func TestIsAckSignValidate(t *testing.T) { blk1 := new(types.StateBlock) if err := json.Unmarshal([]byte(utilsblock1), &blk1); err != nil { t.Fatal("Unmarshal block error") } blk2 := new(types.StateBlock) if err := json.Unmarshal([]byte(utilsblock2), &blk2); err != nil { t.Fatal("Unmarshal block error") } var seedstring = "DB68096C0E2D2954F59DA5DAAE112B7B6F72BE35FC96327FE0D81FD0CE5794A9" s, err := hex.DecodeString(seedstring) if err != nil { t.Fatal("hex string error") } seed, err := types.BytesToSeed(s) if err != nil { t.Fatal("bytes to seed error") } ac, err := seed.Account(0) if err != nil { t.Fatal("seed to account error") } var va protos.ConfirmAckBlock va.Sequence = 0 va.Hash = append(va.Hash, blk1.GetHash()) va.Hash = append(va.Hash, blk2.GetHash()) va.Account = ac.Address() hbytes := make([]byte, 0) 
for _, h := range va.Hash { hbytes = append(hbytes, h[:]...) } hash, _ := types.HashBytes(hbytes) va.Signature = ac.Sign(hash) verify := IsAckSignValidate(&va) if verify != true { t.Fatal("verify error") } }
Facebook is placing a big emphasis on live video, and today the company took the next step by announcing the Livestream Mevo, the first broadcast camera that officially supports Facebook Live. Not only is the Mevo the only camera approved by Zuckerberg & Co., but the live broadcaster allows users to edit as they stream by tapping on automatically generated targets. The Livestream Mevo is available for pre-order today for $299, which is discounted from its $399 final retail price. At 1.1 inches tall and 2 inches wide, the 1.8-ounce Mevo is an ultra-portable camera that you can keep on you at all times. The Mevo requires an iPhone 5s or later running iOS 9 or higher, which you'll use as its editing bay while streaming. The camera automatically detects faces and moving bodies, and you can set it to jump from one subject to another by tapping the targets it's placed over them. While the Mevo is limited to only 150 degrees of movement, it performs these cuts so smoothly that audiences could be fooled into thinking the stream is being shot by a series of cameras, and not just one. The camera captures video at 4K, which it then breaks down into separate 1080p live feeds. Its recorded video is limited to 720p at up to 30fps. The Mevo can also edit for you, changing its focal point and zooming as it senses movement and sound. This way, if you're more of a performer than a director, you don't need another crew member. Just be aware that you're not guaranteed any perfect edits when you hand over creative control to your hardware. The combination of subject-tracking capability and seamless editing will be a big step up for anyone who's currently streaming on Facebook Live, as they've been limited to the lenses on the front and back of their smartphones. Mevo streamers can also perform manual controls like pinch to zoom in and out and drag to pan.
While the Mevo only has enough juice for an hour of streaming, you can extend its life to up to 10 hours with the external battery sold in the Pro bundle. The combination costs $549 in pre-orders, as opposed to the $649 final retail price. Each Mevo comes with a 16GB microSD card for storing recorded footage, and you can also share video to Instagram, Twitter, and Vimeo. If you'd rather record and publish later, the Mevo also works as a standard video camera. Business Insider is reporting that Facebook will not charge a fee for users to stream video through its Live service, though Livestream used to charge $9 per month to stream on its platform. The social network also announced a more elaborate camera, the Facebook Surround 360, which records -- you guessed it -- 360-degree video. The company plans to release the plans for this 17-camera rig on GitHub as an open-source project. The Surround 360 will record and export immersive video in resolutions up to 8K that can be viewed on devices including the Gear VR and Oculus hardware. The rig will have 14 wide-angle cameras on its rim, one fish-eye lens camera on its top and two more on its underside.
Using Configuration Semantic Features and Machine Learning Algorithms to Predict Build Result in Cloud-Based Container Environment Container technologies are being widely used in large-scale production cloud environments, of which Docker has become the de facto industry standard. In practice, Docker builds often break, and a large amount of effort is put into troubleshooting broken builds. Prior studies have evaluated the rate at which builds in large organizations fail; however, there is still a lack of early-warning methods for predicting the Docker build result before the build starts. This paper makes a first attempt with an automatic method named PDBR, which uses configuration semantic features extracted from the abstract syntax tree (AST) together with machine learning algorithms to predict build results in the cloud-based container environment. Evaluation experiments based on more than 36,000 collected Docker builds show that PDBR achieves 73.45%-91.92% in F1 and 29.72%-72.16% in AUC. We also demonstrate that the choice of ML classifier has significant and large effects on PDBR's AUC performance.
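The features-plus-classifier pipeline the abstract describes can be sketched as follows. This is a hypothetical toy stand-in: it counts Dockerfile instructions as features rather than performing the AST-based extraction the paper uses, and the corpus and labels are invented for illustration:

```python
# Toy sketch of configuration-feature build prediction (not the paper's PDBR).
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

INSTRUCTIONS = ["FROM", "RUN", "COPY", "ADD", "ENV", "EXPOSE", "CMD"]

def dockerfile_features(text):
    # Count how often each instruction appears; return a fixed-length vector.
    counts = Counter(line.split()[0] for line in text.splitlines() if line.strip())
    return [counts.get(i, 0) for i in INSTRUCTIONS]

# Invented training corpus: (dockerfile text, build passed?)
corpus = [
    ('FROM python:3.9\nRUN pip install flask\nCMD ["app"]', 1),
    ('FROM node:14\nCOPY . .\nRUN npm install\nCMD ["start"]', 1),
    ('RUN apt-get install gcc\nCOPY src /src', 0),
    ('FROM ubuntu\nRUN make\nRUN make install\nEXPOSE 80', 0),
]
X = [dockerfile_features(text) for text, _ in corpus]
y = [label for _, label in corpus]

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
pred = clf.predict([dockerfile_features('FROM python:3.9\nRUN pip install flask\nCMD ["x"]')])
```

In the paper, the feature extraction operates on the configuration's AST rather than raw instruction counts, but the train-then-predict shape is the same.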
The power capacity of semiconductor electronic devices is limited by temperature. Semiconductor electronic devices are destroyed when their internal temperature reaches a particular limit. The limit depends upon the nature of the electronic device. In order to increase the electrical capacity of such devices, they have to be cooled. The manner of cooling depends on the amount of power available for the cooling process. Different cooling fluids are used, depending on the application and depending on the density of the electronic devices. In some cases, finned cold plates carry the electronic devices and convection cooling by ambient air is sufficient to prevent overheating of the electronic devices. In other cases, liquids are used. In some cases, they are boiling liquids such as fluorinated hydrocarbon refrigerants, which are delivered to the cold plate in liquid form and are boiled therein to remove heat. Such systems perhaps have the highest heat removal rate for a limited cold plate area, but require a considerable amount of power to operate. In other systems, a cold liquid is circulated through the cold plate and the cold liquid may be refrigerator cooled, evaporatively cooled, or convectively cooled to the atmosphere. At relatively low pumping rates, with consequent low pumping power, a significant boundary layer builds on the various surfaces within the cold plate to reduce heat transfer rates. In an attempt to increase heat transfer, the mounting plate often carried fins thereon in order to increase interface area. However, as stated above, the velocity of the flow and, therefore, the forced convection heat transfer coefficient is controlled by the flow channel configuration. High liquid coolant velocities are achieved with small channels, but the fin and channel sizes are limited by manufacturing capabilities. 
Furthermore, high-efficiency localized cooling was not possible in these structures because the coolant liquid had to travel along the full length of the fins and the mounting plate. With the boundary layer built up over that large area, heat transfer was inhibited.
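The velocity dependence described above follows from standard forced-convection relations (textbook correlations, not taken from the patent itself):

```latex
q = h\,A\,(T_s - T_f)   % Newton's law of cooling: heat rate from surface to fluid
\mathrm{Nu} = \frac{h\,D_h}{k} \approx 0.023\,\mathrm{Re}^{0.8}\,\mathrm{Pr}^{0.4}
              % Dittus--Boelter correlation, turbulent flow in a channel
```

Since $\mathrm{Re} = \rho v D_h / \mu$, the coefficient $h$ grows roughly as $v^{0.8}$, which is why small channels (high velocity at a given flow rate) yield high heat transfer coefficients at the cost of pumping power, as the passage notes.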
''' Manages the Tile screen ''' #Import needed modules import os import pygame as pg from pygame.locals import * from modules.gamestate import Gamestate class Tile(Gamestate): def __init__(self): super(Tile, self).__init__() self.box_w = 600 self.box_topleftx = self.screen_centerx - self.box_w/2 self.btn_w = 120 #redefine because different box dimensions compared to other screens using the close and back buttons self.close_btntxt_rect = self.close_btntxt.get_rect(midright=(self.box_topleftx+self.box_w-self.box_borderdist/2, self.box_toplefty+3*self.box_borderdist/4)) self.back_btntxt_rect = self.back_btntxt.get_rect(midleft=(self.box_topleftx+self.box_borderdist/2, self.box_toplefty+3*self.box_borderdist/4)) self.xaligndist = 30 self.xadjust = 20 self.ydist = 80 self.yadjust = 10 self.xalign1 = self.box_topleftx + self.xaligndist + self.xadjust self.xalign2 = self.screen_centerx - self.xadjust self.xalign3 = self.xalign1 + self.box_w/2 - self.xadjust #670 self.xalign4 = self.xalign2 + self.box_w/2 - self.xadjust self.tc_mdtopx = self.screen_centerx - self.box_w/4 + self.xadjust # tc = tier current self.tier_mdtopy = self.screen_centery + 3*self.yadjust self.earntxt_y = self.tier_mdtopy + self.ydist - self.yadjust self.maintxt_y = self.earntxt_y + self.ydist - 2*self.yadjust self.sellremovebuy_centerx = (self.xalign1 + self.tc_mdtopx)/2 self.sellremoveupgbuy_centery = self.box_toplefty + self.box_h - self.btn_h self.sellbuybtn_topleftx = self.sellremovebuy_centerx - self.btn_w/2 self.btn_topy = self.sellremoveupgbuy_centery - self.btn_h/2 self.img_s = 285 self.img_x = self.box_topleftx + 2*self.xadjust #340 + 30 + 270 = 640 self.img_y = self.box_toplefty + 2*self.xadjust #60 + 280 self.titleflavor_x = self.xalign3 + self.xadjust self.flavor_y = self.title_rect_centery + self.ydist/3 self.buyupg_msgtxta = self.font_body.render('Insufficient Money', True, self.c_lightgray) self.buyupg_msgtxta_rect = 
self.buyupg_msgtxta.get_rect(center=(self.screen_centerx, self.msgbox_3line_ya)) self.buyupg_msgtxtb = self.font_body2.render('At Least $1 Needed', True, self.c_lightgray) self.buyupg_msgtxtb_rect = self.buyupg_msgtxtb.get_rect(center=(self.screen_centerx, self.msgbox_3line_yb)) self.buyupg_msgtxtc = self.font_body2.render('Click to Return', True, self.c_lightgray) self.buyupg_msgtxtc_rect = self.buyupg_msgtxtc.get_rect(center=(self.screen_centerx, self.msgbox_3line_yc)) self.msgbox = pg.Surface((self.buyupg_msgtxta_rect.w+self.msgbox_bordershift, self.buyupg_msgtxta_rect.h+self.msgbox_bordershift)) self.msgbox.fill(self.c_darkgray) self.greenhouse_required_msgtxta = self.font_body.render('Greenhouse Required', True, self.c_lightgray) self.greenhouse_required_msgtxta_rect = self.greenhouse_required_msgtxta.get_rect(center=(self.screen_centerx, self.msgbox_3line_ya)) self.greenhouse_required_msgtxtb = self.font_body2.render('Crop Not in Season', True, self.c_lightgray) self.greenhouse_required_msgtxtb_rect = self.greenhouse_required_msgtxtb.get_rect(center=(self.screen_centerx, self.msgbox_3line_yb)) self.greenhouse_required_msgtxtc = self.font_body2.render('Click to Return', True, self.c_lightgray) self.greenhouse_required_msgtxtc_rect = self.greenhouse_required_msgtxtc.get_rect(center=(self.screen_centerx, self.msgbox_3line_yc)) self.silo_required_msgtxta = self.font_body.render('Silo Required', True, self.c_lightgray) self.silo_required_msgtxta_rect = self.silo_required_msgtxta.get_rect(center=(self.screen_centerx, self.msgbox_3line_ya)) self.silo_required_msgtxtb = self.font_body2.render('Livestock Need Silo for Food', True, self.c_lightgray) self.silo_required_msgtxtb_rect = self.silo_required_msgtxtb.get_rect(center=(self.screen_centerx, self.msgbox_3line_yb)) self.silo_required_msgtxtc = self.font_body2.render('Click to Return', True, self.c_lightgray) self.silo_required_msgtxtc_rect = self.silo_required_msgtxtc.get_rect(center=(self.screen_centerx, 
self.msgbox_3line_yc)) self.sellremoveupg_msgtxt_yes_x = self.screen_centerx-self.msgbox.get_width()/6 self.sellremoveupg_msgtxt_no_x = self.screen_centerx+self.msgbox.get_width()/6 self.sellremoveupg_msg_yesno_y = self.screen_centery+3*self.msgbox_bordershift/4 self.sellremoveupg_msgtxt = self.font_body.render('Are You Sure?', True, self.c_lightgray) self.sellremoveupg_msgtxt_rect = self.sellremoveupg_msgtxt.get_rect(center=(self.screen_centerx, self.screen_centery)) self.sellremoveupg_msgtxt_yes = self.font_body2.render('Yes', True, self.c_lightgray) self.sellremoveupg_msgtxt_yes_rect = self.sellremoveupg_msgtxt_yes.get_rect(center=(self.sellremoveupg_msgtxt_yes_x, self.sellremoveupg_msg_yesno_y)) self.sellremoveupg_msgtxt_no = self.font_body2.render('No', True, self.c_lightgray) self.sellremoveupg_msgtxt_no_rect = self.sellremoveupg_msgtxt_no.get_rect(center=(self.sellremoveupg_msgtxt_no_x, self.sellremoveupg_msg_yesno_y)) def startup(self, persistent): self.persist = persistent self.grid = self.persist['grid'] self.grid_tilecycle = self.persist['grid_tilecycle'] self.money = self.persist['money'] self.select_col = self.persist['select_col'] self.select_row = self.persist['select_row'] self.buytile = self.persist['buytile'] self.plowingtiles_dict = self.persist['plowingtiles_dict'] self.plowingtile_counter = self.persist['plowingtile_counter'] self.plowingtimer = self.persist['plowingtimer'] self.current_season = self.persist['current_season'] self.sfxvol = self.persist['sfxvol'] self.musicvol = self.persist['musicvol'] self.background_img = pg.image.load(os.path.join('resources/temp', 'background.png')).convert() self.can_upgrade = False self.sellremoveupg_msg = False self.insufficent_money = False self.greenhouse_required = False self.silo_required = False self.subtitle = None self.subtitle_rect = None if self.buytile is None: if not(self.grid[self.select_row][self.select_col] == 'Dirt0' or self.grid[self.select_row][self.select_col] == 'Field0' or 
self.grid[self.select_row][self.select_col] == 'Construct0'): self.tile = self.grid[self.select_row][self.select_col] #Set title self.title = self.tiles[self.tile].displayname #Set img self.img = self.tiles[self.tile].img0.convert() #Set flavor text self.flavortxta = self.tiles[self.tile].flavora self.flavortxtb = self.tiles[self.tile].flavorb #Set season text self.seasontxt = self.tiles[self.tile].season else: for self.plowingtile in self.plowingtiles_dict.keys(): if self.plowingtiles_dict[self.plowingtile][1] == self.select_row and self.plowingtiles_dict[self.plowingtile][2] == self.select_col: break self.tile = self.plowingtiles_dict[self.plowingtile][0] #Set title self.title = self.tiles[self.tile].displayname #Set subtitle if self.tiles[self.plowingtiles_dict[self.plowingtile][0]].tiletype == 'Crop' and self.grid[self.select_row][self.select_col] == 'Dirt0': self.subtitle = self.tiles['Field0'].displayname elif (self.tiles[self.plowingtiles_dict[self.plowingtile][0]].tiletype == 'Livestock' or self.tiles[self.plowingtiles_dict[self.plowingtile][0]].tiletype == 'Structure') and self.grid[self.select_row][self.select_col] == 'Dirt0': self.subtitle = self.tiles['Construct0'].displayname else: self.subtitle = self.title if len(self.tiles[self.grid[self.select_row][self.select_col]].displayname+' --> '+self.subtitle) > 26: self.subtitle = self.font_ibody3.render(self.tiles[self.grid[self.select_row][self.select_col]].displayname+' --> '+self.subtitle, True, self.c_black) else: self.subtitle = self.font_ibody2.render(self.tiles[self.grid[self.select_row][self.select_col]].displayname+' --> '+self.subtitle, True, self.c_black) self.subtitle_rect = self.subtitle.get_rect(midleft=(self.titleflavor_x, self.title_rect_centery-2*self.ydist/5)) #Set img self.img = self.tiles[self.grid[self.select_row][self.select_col]].img0.convert() #Set flavor text self.flavortxta = self.tiles[self.grid[self.select_row][self.select_col]].flavora self.flavortxtb = 
self.tiles[self.grid[self.select_row][self.select_col]].flavorb #Set season text self.seasontxt = self.tiles[self.tile].season #Set remove price self.removep = self.tiles[self.grid[self.select_row][self.select_col]].sellprice else: self.tile = self.buytile #Set title self.title = self.tiles[self.tile].displayname #Set img self.img = self.tiles[self.tile].img0.convert() #Set flavor text self.flavortxta = self.tiles[self.tile].flavora self.flavortxtb = self.tiles[self.tile].flavorb #Set season text self.seasontxt = self.tiles[self.tile].season #Set name to be used for determining the name of the tile 1 tier above self.name = self.tiles[self.tile].name[0:-1] #Set sell price self.sellp = self.tiles[self.tile].sellprice #Set upgrade price self.buyp = self.tiles[self.tile].buyupgprice #Set current tier earnings self.tc_earn = self.tiles[self.tile].earnings #Set current tier maintenance self.tc_main = self.tiles[self.tile].maintenance #Sets the structure area of effect info, if it is a structure self.upgtxt_list = [] for structure in self.structures: if self.tile == structure: tiercrnt = int(self.tile[-1]) lowest_tier = 1 name = list(self.tile[0:-1]) name.append(str(lowest_tier)) tier_structure_name = ''.join(name) tier1 = [] t1upgtxt_list = [] t1upgtxt_rect_list = [] t1upgtxt_list.append(self.font_body2.render('Tier 1: '+self.structure_tierupgtxt[tier_structure_name][0], True, self.c_black)) t1upgtxt_list.append(self.font_body2.render(self.structure_tierupgtxt[tier_structure_name][1], True, self.c_black)) t1upgtxt_list.append(self.font_body2.render(self.structure_tierupgtxt[tier_structure_name][2], True, self.c_black)) t1upgtxt_rect_list.append(t1upgtxt_list[0].get_rect(topleft=(self.titleflavor_x, self.flavor_y+5*self.ydist/4))) t1upgtxt_rect_list.append(t1upgtxt_list[1].get_rect(topleft=(self.titleflavor_x, self.flavor_y+3*self.ydist/2))) t1upgtxt_rect_list.append(t1upgtxt_list[2].get_rect(topleft=(self.titleflavor_x, self.flavor_y+7*self.ydist/4))) 
tier1.append(t1upgtxt_list) tier1.append(t1upgtxt_rect_list) self.upgtxt_list.append(tier1) lowest_tier += 1 if self.buytile is None and tiercrnt+1 >= lowest_tier: name = list(self.tile[0:-1]) name.append(str(lowest_tier)) tier_structure_name = ''.join(name) tier2 = [] t2upgtxt_list = [] t2upgtxt_rect_list = [] t2upgtxt_list.append(self.font_body2.render('Tier 2: '+self.structure_tierupgtxt[tier_structure_name], True, self.c_black)) t2upgtxt_rect_list.append(t2upgtxt_list[0].get_rect(topleft=(self.titleflavor_x, self.flavor_y+2*self.ydist))) tier2.append(t2upgtxt_list) tier2.append(t2upgtxt_rect_list) self.upgtxt_list.append(tier2) lowest_tier += 1 if lowest_tier <= tiercrnt+1 <= 3+1: name = list(self.tile[0:-1]) name.append(str(lowest_tier)) tier_structure_name = ''.join(name) tier3 = [] t3upgtxt_list = [] t3upgtxt_rect_list = [] t3upgtxt_list.append(self.font_body2.render('Tier 3: '+self.structure_tierupgtxt[tier_structure_name][0], True, self.c_black)) t3upgtxt_list.append(self.font_body2.render(self.structure_tierupgtxt[tier_structure_name][1], True, self.c_black)) t3upgtxt_list.append(self.font_body2.render(self.structure_tierupgtxt[tier_structure_name][2], True, self.c_black)) t3upgtxt_rect_list.append(t3upgtxt_list[0].get_rect(topleft=(self.titleflavor_x, self.flavor_y+9*self.ydist/4))) t3upgtxt_rect_list.append(t3upgtxt_list[1].get_rect(topleft=(self.titleflavor_x, self.flavor_y+5*self.ydist/2))) t3upgtxt_rect_list.append(t3upgtxt_list[2].get_rect(topleft=(self.titleflavor_x, self.flavor_y+11*self.ydist/4))) tier3.append(t3upgtxt_list) tier3.append(t3upgtxt_rect_list) self.upgtxt_list.append(tier3) break else: break else: break self.tile = self.tiles[self.tile] #Tile Info and Upgrade title if len(self.title) > 9: self.title = self.font_title2.render(self.title, True, self.c_black) else: self.title = self.font_title.render(self.title, True, self.c_black) self.title_rect = self.title.get_rect(midleft=(self.titleflavor_x, self.title_rect_centery)) #Img 
self.img = pg.transform.scale(self.img, (self.img_s, self.img_s)) #Flavor Text self.flavortxta = self.font_ibody.render(self.flavortxta, True, self.c_black) self.flavortxtb = self.font_ibody.render(self.flavortxtb, True, self.c_black) self.flavortxta_rect = self.flavortxta.get_rect(topleft=(self.titleflavor_x, self.flavor_y)) self.flavortxtb_rect = self.flavortxtb.get_rect(topleft=(self.titleflavor_x, self.flavor_y+self.ydist/4)) #Season Text self.seasontxt = self.font_body.render('Season: '+self.seasontxt, True, self.c_black) self.seasontxt_rect = self.seasontxt.get_rect(topleft=(self.titleflavor_x, self.flavor_y+3*self.ydist/4)) if self.tile != self.tiles['Grass0']: self.tiercrnt = int(self.tile.name[-1]) #Tier Current Heading self.tctxt = self.font_header.render('Tier '+str(self.tiercrnt), True, self.c_black) self.tctxt_rect = self.tctxt.get_rect(midtop=(self.tc_mdtopx, self.tier_mdtopy)) #TC Earnings self.earntxta = self.font_body.render('Earnings', True, self.c_black) self.tc_earntxta_rect = self.earntxta.get_rect(topleft=(self.xalign1, self.earntxt_y)) self.tc_earntxtb = self.font_body2.render('+$'+str(self.tc_earn)+'/10 days', True, self.c_black) self.tc_earntxtb_rect = self.tc_earntxtb.get_rect(topright=(self.xalign2, self.earntxt_y)) #TC Maintenance self.maintxta = self.font_body.render('Maintenance', True, self.c_black) self.tc_maintxta_rect = self.maintxta.get_rect(topleft=(self.xalign1, self.maintxt_y)) self.tc_maintxtb = self.font_body2.render('-$'+str(self.tc_main)+'/day', True, self.c_black) self.tc_maintxtb_rect = self.tc_maintxtb.get_rect(topright=(self.xalign2, self.maintxt_y)) #Sell if self.buytile is None: if not(self.grid[self.select_row][self.select_col] == 'Dirt0' or self.grid[self.select_row][self.select_col] == 'Field0' or self.grid[self.select_row][self.select_col] == 'Construct0'): self.selltxt = self.font_body.render('Sell', True, self.c_black) self.selltxt_rect = self.selltxt.get_rect(center=(self.sellremovebuy_centerx, 
self.sellremoveupgbuy_centery)) self.sellptxt = self.font_body2.render('+$'+str(self.sellp), True, self.c_black) self.sellptxt_rect = self.sellptxt.get_rect(midright=(self.xalign2, self.sellremoveupgbuy_centery)) #Remove else: self.removetxt = self.font_body.render('Remove', True, self.c_black) self.removetxt_rect = self.removetxt.get_rect(center=(self.sellremovebuy_centerx, self.sellremoveupgbuy_centery)) self.removeptxt = self.font_body2.render('-$'+str(self.removep), True, self.c_black) self.removeptxt_rect = self.removeptxt.get_rect(midright=(self.xalign2, self.sellremoveupgbuy_centery)) #Buy else: self.buytxt = self.font_body.render('Buy', True, self.c_black) self.buytxt_rect = self.buytxt.get_rect(center=(self.sellremovebuy_centerx, self.sellremoveupgbuy_centery)) self.buyptxt = self.font_body2.render('-$'+str(self.buyp), True, self.c_black) self.buyptxt_rect = self.buyptxt.get_rect(midright=(self.xalign2, self.sellremoveupgbuy_centery)) if self.tiercrnt < 3: self.tiernxt = str(self.tiercrnt + 1) upg_tile_name = list(self.name) upg_tile_name.append(self.tiernxt) self.upg_tile_name = ''.join(upg_tile_name) self.upg_tile = self.tiles[self.upg_tile_name] self.upgp = self.upg_tile.buyupgprice self.tn_earn = self.upg_tile.earnings self.tn_main = self.upg_tile.maintenance self.tn_mdtopx = self.screen_centerx + self.box_w/4 self.upg_centerx = (self.tn_mdtopx + self.xalign3)/2 self.upgbtn_topleftx = self.upg_centerx - self.btn_w/2 #Tier Next Heading self.tntxt = self.font_header.render('Tier '+self.tiernxt, True, self.c_black) self.tntxt_rect = self.tntxt.get_rect(midtop=(self.tn_mdtopx, self.tier_mdtopy)) #TN Earnings self.tn_earntxta_rect = self.earntxta.get_rect(topleft=(self.xalign3, self.earntxt_y)) self.tn_earntxtb = self.font_body2.render('+$'+str(self.tn_earn)+'/10 days', True, self.c_black) self.tn_earntxtb_rect = self.tn_earntxtb.get_rect(topright=(self.xalign4, self.earntxt_y)) #TN Maintenance self.tn_maintxta_rect = 
self.maintxta.get_rect(topleft=(self.xalign3, self.maintxt_y)) self.tn_maintxtb = self.font_body2.render('-$'+str(self.tn_main)+'/day', True, self.c_black) self.tn_maintxtb_rect = self.tn_maintxtb.get_rect(topright=(self.xalign4, self.maintxt_y)) #Upgrade if self.buytile is None: if not(self.grid[self.select_row][self.select_col] == 'Dirt0' or self.grid[self.select_row][self.select_col] == 'Field0' or self.grid[self.select_row][self.select_col] == 'Construct0'): self.upgtxt = self.font_body.render('Upgrade', True, self.c_black) self.upgtxt_rect = self.upgtxt.get_rect(center=(self.upg_centerx, self.sellremoveupgbuy_centery)) self.upgptxt = self.font_body2.render('-$'+str(self.upgp), True, self.c_black) self.upgptxt_rect = self.upgptxt.get_rect(midright=(self.xalign4, self.sellremoveupgbuy_centery)) def sell_tile(self): ''' Sells the selected tile ''' self.plowingtiles_dict[str(self.plowingtile_counter)] = ['Grass0', self.select_row, self.select_col, self.plowingtimer] self.grid[self.select_row][self.select_col] = 'Dirt0' self.money += self.sellp self.plowingtile_counter += 1 self.grid_tilecycle[self.select_row][self.select_col] = 0 self.persist['grid'] = self.grid self.persist['money'] = self.money self.persist['total_money_earned'] += self.sellp self.persist['plowingtiles_dict'] = self.plowingtiles_dict self.persist['plowingtile_counter'] = self.plowingtile_counter self.persist['grid_tilecycle'] = self.grid_tilecycle self.set_total_money_highest_lowest() self.next_state = 'Farm' self.done = True self.play_sfx(self.sfx_money) def upgrade_tile(self): ''' Upgrades the selected tile ''' self.grid[self.select_row][self.select_col] = self.upg_tile_name self.money -= self.upgp self.persist['grid'] = self.grid self.persist['total_money_spent'] += self.upgp self.persist['money'] = self.money self.set_total_money_highest_lowest() self.next_state = 'Farm' self.done = True self.play_sfx(self.sfx_money) def remove_tile(self): ''' Removes the selected tile ''' del 
self.plowingtiles_dict[self.plowingtile] self.plowingtiles_dict[str(self.plowingtile_counter)] = ['Grass0', self.select_row, self.select_col, self.plowingtimer] self.grid[self.select_row][self.select_col] = 'Dirt0' self.money -= self.removep self.plowingtile_counter += 1 self.grid_tilecycle[self.select_row][self.select_col] = 0 self.persist['grid'] = self.grid self.persist['total_money_spent'] += self.removep self.persist['money'] = self.money self.persist['plowingtiles_dict'] = self.plowingtiles_dict self.persist['plowingtile_counter'] = self.plowingtile_counter self.persist['grid_tilecycle'] = self.grid_tilecycle self.set_total_money_highest_lowest() self.next_state = 'Farm' self.done = True self.play_sfx(self.sfx_money) def buy_tile_or_not(self): ''' Determines if the player can buy the selected tile ''' if self.tile.tiletype == 'Crop': if self.tile.season == self.current_season or self.tile.season == 'All': self.next_state = 'Farm' self.done = True self.play_sfx(self.sfx_clicked) else: if self.tile_exists('Greenhouse'): self.next_state = 'Farm' self.done = True self.play_sfx(self.sfx_clicked) else: self.greenhouse_required = True self.play_sfx(self.sfx_denied) elif self.tile.tiletype == 'Livestock': if self.tile_exists('Silo'): self.next_state = 'Farm' self.done = True self.play_sfx(self.sfx_clicked) else: self.silo_required = True self.play_sfx(self.sfx_denied) else: self.next_state = 'Farm' self.done = True self.play_sfx(self.sfx_clicked) def get_event(self, event): if event.type == QUIT: self.quit = True elif event.type == KEYDOWN and event.key == K_ESCAPE: if self.can_upgrade or self.sellremoveupg_msg or self.insufficent_money or self.greenhouse_required or self.silo_required: self.can_upgrade = False self.sellremoveupg_msg = False self.insufficent_money = False self.greenhouse_required = False self.silo_required = False self.play_sfx(self.sfx_clicked) else: self.persist['buytile'] = None self.next_state = 'Farm' self.done = True 
self.play_sfx(self.sfx_clicked) elif event.type == MOUSEBUTTONDOWN and event.button == 1: if self.insufficent_money or self.greenhouse_required or self.silo_required or self.sellremoveupg_msg: if self.screen_centerx-self.msgbox.get_width()/2 < event.pos[0] < self.screen_centerx+self.msgbox.get_width()/2 and self.screen_centery-self.msgbox.get_height()/2+self.msgbox_bordershift/2 < event.pos[1] < self.screen_centery+self.msgbox.get_height()/2+self.msgbox_bordershift/2 and (self.insufficent_money or self.greenhouse_required or self.silo_required): self.insufficent_money = False self.greenhouse_required = False self.silo_required = False self.play_sfx(self.sfx_clicked) elif self.sellremoveupg_msg_yesno_y-self.sellremoveupg_msgtxt_yes.get_height()/2 < event.pos[1] < self.sellremoveupg_msg_yesno_y+self.sellremoveupg_msgtxt_yes.get_height()/2 and self.sellremoveupg_msg: #Press Yes if self.sellremoveupg_msgtxt_yes_x-self.sellremoveupg_msgtxt_yes.get_width()/2-self.btn_borderthick < event.pos[0] < self.sellremoveupg_msgtxt_yes_x+self.sellremoveupg_msgtxt_yes.get_width()/2+self.btn_borderthick: if self.buytile is None: if self.can_upgrade: self.upgrade_tile() self.can_upgrade = False self.sellremoveupg_msg = False self.play_sfx(self.sfx_money) else: if not(self.grid[self.select_row][self.select_col] == 'Dirt0' or self.grid[self.select_row][self.select_col] == 'Field0' or self.grid[self.select_row][self.select_col] == 'Construct0'): self.sell_tile() self.sellremoveupg_msg = False self.play_sfx(self.sfx_money) else: self.remove_tile() self.sellremoveupg_msg = False self.play_sfx(self.sfx_money) #Press No elif self.sellremoveupg_msgtxt_no_x-self.sellremoveupg_msgtxt_yes.get_width()/2-self.btn_borderthick < event.pos[0] < self.sellremoveupg_msgtxt_no_x+self.sellremoveupg_msgtxt_yes.get_width()/2+self.btn_borderthick: self.can_upgrade = False self.sellremoveupg_msg = False self.play_sfx(self.sfx_clicked) else: self.close_btn_farm(event) self.back_btn_farm(event) if self.btn_topy 
< event.pos[1] < self.btn_topy + self.btn_h: if self.tile != self.tiles['Grass0']: if self.sellbuybtn_topleftx < event.pos[0] < self.sellbuybtn_topleftx + self.btn_w: if self.buytile is None: self.sellremoveupg_msg = True self.play_sfx(self.sfx_clicked) else: if self.money > 0: self.buy_tile_or_not() else: self.insufficent_money = True self.play_sfx(self.sfx_denied) elif self.upgbtn_topleftx < event.pos[0] < self.upgbtn_topleftx + self.btn_w and self.tiercrnt < 3 and self.buytile is None and not(self.grid[self.select_row][self.select_col] == 'Dirt0' or self.grid[self.select_row][self.select_col] == 'Field0' or self.grid[self.select_row][self.select_col] == 'Construct0'): if self.money > 0: self.can_upgrade = True self.sellremoveupg_msg = True self.play_sfx(self.sfx_clicked) else: self.insufficent_money = True self.play_sfx(self.sfx_clicked) elif event.type == USEREVENT+2: self.play_next_music() def draw(self, surface): #Draw background image of player's farm surface.blit(self.background_img, (0, 0)) surface.blit(self.background_img_blackoverlay, (0, 0)) #Draw containing box pg.draw.rect(surface, self.c_white, (self.box_topleftx, self.box_toplefty, self.box_w, self.box_h)) pg.draw.rect(surface, self.c_black, (self.box_topleftx, self.box_toplefty, self.box_w, self.box_h), self.box_borderthick) #Draw text, close and back buttons surface.blit(self.title, self.title_rect) surface.blit(self.close_btntxt, self.close_btntxt_rect) surface.blit(self.back_btntxt, self.back_btntxt_rect) surface.blit(self.img, (self.img_x, self.img_y)) #Draw text for tiles in development if self.subtitle is not None: surface.blit(self.subtitle, self.subtitle_rect) #Draw flavor text surface.blit(self.flavortxta, self.flavortxta_rect) surface.blit(self.flavortxtb, self.flavortxtb_rect) #Draw season text surface.blit(self.seasontxt, self.seasontxt_rect) #Draw structure area of effect info text if bool(self.upgtxt_list): for tier in self.upgtxt_list: txt = tier[0] txt_rect = tier[1] for nth_txt in 
range(len(txt)): surface.blit(txt[nth_txt], txt_rect[nth_txt]) #Draw info about current tier and next tier (if not tier 3) if self.tile != self.tiles['Grass0']: pg.draw.rect(surface, self.c_black, (self.sellbuybtn_topleftx, self.btn_topy, self.btn_w, self.btn_h), self.btn_borderthick) surface.blit(self.tctxt, self.tctxt_rect) surface.blit(self.earntxta, self.tc_earntxta_rect) surface.blit(self.tc_earntxtb, self.tc_earntxtb_rect) surface.blit(self.maintxta, self.tc_maintxta_rect) surface.blit(self.tc_maintxtb, self.tc_maintxtb_rect) if self.buytile is None: if not(self.grid[self.select_row][self.select_col] == 'Dirt0' or self.grid[self.select_row][self.select_col] == 'Field0' or self.grid[self.select_row][self.select_col] == 'Construct0'): surface.blit(self.selltxt, self.selltxt_rect) surface.blit(self.sellptxt, self.sellptxt_rect) else: surface.blit(self.removetxt, self.removetxt_rect) surface.blit(self.removeptxt, self.removeptxt_rect) else: surface.blit(self.buytxt, self.buytxt_rect) surface.blit(self.buyptxt, self.buyptxt_rect) if self.tiercrnt < 3: surface.blit(self.tntxt, self.tntxt_rect) surface.blit(self.earntxta, self.tn_earntxta_rect) surface.blit(self.tn_earntxtb, self.tn_earntxtb_rect) surface.blit(self.maintxta, self.tn_maintxta_rect) surface.blit(self.tn_maintxtb, self.tn_maintxtb_rect) if self.buytile is None: if not(self.grid[self.select_row][self.select_col] == 'Dirt0' or self.grid[self.select_row][self.select_col] == 'Field0' or self.grid[self.select_row][self.select_col] == 'Construct0'): pg.draw.rect(surface, self.c_black, (self.upgbtn_topleftx, self.btn_topy, self.btn_w, self.btn_h), self.btn_borderthick) surface.blit(self.upgtxt, self.upgtxt_rect) surface.blit(self.upgptxt, self.upgptxt_rect) #Draw popup for insufficient money if self.insufficent_money: surface.blit(self.msgbox, (self.screen_centerx-self.msgbox.get_width()/2, self.screen_centery-self.msgbox.get_height()/2+self.msgbox_bordershift/2)) pg.draw.rect(surface, self.c_black, 
(self.screen_centerx-self.msgbox.get_width()/2, self.screen_centery-self.msgbox.get_height()/2+self.msgbox_bordershift/2, self.msgbox.get_width(), self.msgbox.get_height()), self.box_borderthick) surface.blit(self.buyupg_msgtxta, self.buyupg_msgtxta_rect) surface.blit(self.buyupg_msgtxtb, self.buyupg_msgtxtb_rect) surface.blit(self.buyupg_msgtxtc, self.buyupg_msgtxtc_rect) #Draw popup for greenhouse required if self.greenhouse_required: surface.blit(self.msgbox, (self.screen_centerx-self.msgbox.get_width()/2, self.screen_centery-self.msgbox.get_height()/2+self.msgbox_bordershift/2)) pg.draw.rect(surface, self.c_black, (self.screen_centerx-self.msgbox.get_width()/2, self.screen_centery-self.msgbox.get_height()/2+self.msgbox_bordershift/2, self.msgbox.get_width(), self.msgbox.get_height()), self.box_borderthick) surface.blit(self.greenhouse_required_msgtxta, self.greenhouse_required_msgtxta_rect) surface.blit(self.greenhouse_required_msgtxtb, self.greenhouse_required_msgtxtb_rect) surface.blit(self.greenhouse_required_msgtxtc, self.greenhouse_required_msgtxtc_rect) #Draw popup for silo required if self.silo_required: surface.blit(self.msgbox, (self.screen_centerx-self.msgbox.get_width()/2, self.screen_centery-self.msgbox.get_height()/2+self.msgbox_bordershift/2)) pg.draw.rect(surface, self.c_black, (self.screen_centerx-self.msgbox.get_width()/2, self.screen_centery-self.msgbox.get_height()/2+self.msgbox_bordershift/2, self.msgbox.get_width(), self.msgbox.get_height()), self.box_borderthick) surface.blit(self.silo_required_msgtxta, self.silo_required_msgtxta_rect) surface.blit(self.silo_required_msgtxtb, self.silo_required_msgtxtb_rect) surface.blit(self.silo_required_msgtxtc, self.silo_required_msgtxtc_rect) #Draw popup for selling, removing or upgrading a tile if self.sellremoveupg_msg: surface.blit(self.msgbox, (self.screen_centerx-self.msgbox.get_width()/2, self.screen_centery-self.msgbox.get_height()/2+self.msgbox_bordershift/2)) pg.draw.rect(surface, 
self.c_black, (self.screen_centerx-self.msgbox.get_width()/2, self.screen_centery-self.msgbox.get_height()/2+self.msgbox_bordershift/2, self.msgbox.get_width(), self.msgbox.get_height()), self.box_borderthick) pg.draw.rect(surface, self.c_lightgray, (self.sellremoveupg_msgtxt_yes_x-self.sellremoveupg_msgtxt_yes.get_width()/2-self.btn_borderthick, self.sellremoveupg_msg_yesno_y-self.sellremoveupg_msgtxt_yes.get_height()/2, self.sellremoveupg_msgtxt_yes.get_width()+2*self.btn_borderthick, self.sellremoveupg_msgtxt_yes.get_height()), self.btn_borderthick) pg.draw.rect(surface, self.c_lightgray, (self.sellremoveupg_msgtxt_no_x-self.sellremoveupg_msgtxt_yes.get_width()/2-self.btn_borderthick, self.sellremoveupg_msg_yesno_y-self.sellremoveupg_msgtxt_yes.get_height()/2, self.sellremoveupg_msgtxt_yes.get_width()+2*self.btn_borderthick, self.sellremoveupg_msgtxt_yes.get_height()), self.btn_borderthick) surface.blit(self.sellremoveupg_msgtxt, self.sellremoveupg_msgtxt_rect) surface.blit(self.sellremoveupg_msgtxt_yes, self.sellremoveupg_msgtxt_yes_rect) surface.blit(self.sellremoveupg_msgtxt_no, self.sellremoveupg_msgtxt_no_rect)
The Student Government Association hosted a candlelight vigil Thursday night hoping to raise awareness of suicide in light of a student’s death by suicide on campus Wednesday.

At the vigil, which was just outside the Counseling and Psychological Services office, SGA President Winni Zhang said she is concerned with awareness on campus and explained the importance of the candlelight vigil. Volunteers set up paper bags with sand and candles around the fountains located behind Student Service Center 1. A lot of planning went into the event, Zhang said.

Many students attended the event and several spoke to representatives from CAPS about resources. CAPS clinicians set up a booth at the vigil with brochures and pamphlets which provided information to students seeking help with any mental health issue, whether in a group or private setting. The counseling office can be contacted in many ways, including through its website, by phone at 713-743-5454 and on a walk-in basis, CAPS Director Norma Ngo said.

According to the Texas Tribune, the International Association of Counseling Services recommends a ratio of mental health staff to students of one to 1,000-1,500. The University of Houston falls in last place among Texas schools in terms of the ratio of mental health staff to students, Zhang said. Since UH falls so low in meeting these standards, SGA is concerned with improving the University’s resources to help deal with mental health issues, Zhang said.

CORRECTION: This article has been updated to include the correct phone number for UH’s Counseling and Psychological Services: 713-743-5454.
//
//  ObjectPool.h
//  pool_object_pattern
//
//  Created by <NAME> on 24.08.16.
//  Copyright © 2016 <NAME>. All rights reserved.
//

#import <Foundation/Foundation.h>

@class ExampleObject;

@interface ObjectPool : NSObject

+ (instancetype) sharedInstance;

- (ExampleObject *)acquire;
- (void) releaseObject:(ExampleObject *)obj;

@end
import { IconDefinition } from '../types';

declare const LinkedinFilled: IconDefinition;
export default LinkedinFilled;
package services

import (
	"context"

	"github.com/Edilberto-Vazquez/personal-API/libs"
	"github.com/Edilberto-Vazquez/personal-API/utils"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
)

func PortafolioFind() (results []bson.M, err error) {
	var cursor *mongo.Cursor
	coll := libs.Client.Database("my-api").Collection("portafolio")
	cursor, err = coll.Find(context.TODO(), bson.D{{}})
	err = utils.CursorDecode(&results, cursor, err)
	return results, err
}
Identification of novel maize miRNAs by measuring the precision of precursor processing

Background

miRNAs are known to play important regulatory roles throughout plant development. Until recently, nearly all the miRNAs in maize were identified by comparative analysis to miRNA sequences of other plant species, such as rice and Arabidopsis.

Results

To find new miRNAs in this important crop, small RNAs from mixed tissues were sequenced, resulting in over 15 million unique sequences. Our sequencing effort validated 23 of the 28 known maize miRNA families, including 49 unique miRNAs. Using a newly established criterion, based on the precision of miRNA processing from precursors, we identified 66 novel miRNAs in maize. These miRNAs can be grouped into 58 families, 54 of which have not been identified in any other species. Five new miRNAs were validated by northern blot. Moreover, we found targets for 23 of the 66 new miRNAs. The targets of two of these newly identified miRNAs were confirmed by 5'RACE.

Conclusion

We have implemented a novel method of identifying miRNAs by measuring the precision of miRNA processing from precursors. Using this method, 66 novel miRNAs and 50 potential miRNAs have been identified in maize.

Background

MiRNAs are known to play crucial roles in the regulation of gene expression in plants, including functions in leaf polarity, auxin response, floral identity, flowering time, and stress response. MiRNAs are typically ~21 nucleotides in length. In plants, miRNA genes are transcribed by RNA polymerase II into primary miRNA transcripts (pri-miRNAs), which can form an imperfect stem-loop secondary structure. The pri-miRNAs are then trimmed and spliced into a miRNA/miRNA* duplex by Dicer-like1 (DCL1) with the help of the dsRNA-binding protein HYL1 and the dsRNA methylase HEN1. The length of plant pre-miRNAs ranges from about 80-nt to 300-nt, and is more variable than in animals.
After being transported to the cytoplasm, the mature miRNAs are matched to their corresponding target mRNAs through the RNA-induced silencing complex (RISC), while the miRNA* strands are thought to be degraded. MiRNAs regulate their target mRNAs either by cleavage in the middle of their binding sites or by translational repression. Plant miRNAs are highly complementary to their targets, with about 0~4 nucleotide mismatches.

The majority of miRNAs were originally discovered through traditional Sanger sequencing of small RNA pools. With the advent of second (next) generation sequencing technology, the rate of miRNA discovery increased dramatically. However, due to the complexity of small RNA populations, identification of miRNAs from the sequenced small RNA pools was not trivial. Typically, genomic sequences matched by small RNAs of 19~22-nt were extended upstream and downstream to obtain a collection of candidate precursors. Their secondary structures were then checked against a number of criteria, with Minimum Free Energy (MFE) as the most important one. The presence of the miRNA* has been regarded as a gold standard for reliably annotating a novel miRNA. Nevertheless, miRNA* sequences have been reported to show up together with the mature miRNA only around 10% of the time.

As miRNAs can be enriched in certain genomic regions, a clustering algorithm was sometimes used for miRNA identification from large-scale small RNA sequencing data. In these studies, hotspots of small RNA generation were identified if they matched multiple known miRNAs; individual hairpin sequences within these hotspots were subsequently checked to see whether some of them qualified as miRNAs.

As many miRNAs are conserved among different organisms, sequences of miRNAs found in one species can be used to identify corresponding miRNAs in other species through comparative analysis. However, not all miRNAs are conserved across different organisms.
Direct prediction of potential miRNAs, based on the characteristics of miRNA precursors, has been shown to be a useful approach to identify miRNAs in any organism, provided that a large amount of genomic sequence is available. However, as millions, even billions, of inverted repeat sequences exist in complex genomes, candidate miRNAs identified purely by computational prediction often show a high false-positive rate.

Maize is an important crop as well as a model for plant genetics. A number of miRNAs with specific functions have been reported in maize. miR172 was reported to target the APETALA2 floral homeotic transcription factor, which is required for spikelet meristem determination. miR172 also functions in promoting the vegetative phase transition by regulating the APETALA2-like gene glossy15. The expression of teosinte glume architecture1 (tga1), which plays an important role in maize domestication, is regulated by miR156. miR166 has been found to target a class III homeodomain leucine zipper (HD-ZIPIII) protein that acts in the asymmetric development of leaves in maize.

There are a total of 84 unique maize mature miRNAs belonging to 28 miRNA families in the current version of miRBase (release 17). These 84 miRNAs are the products of 167 precursors. All of these miRNAs were originally identified by searching with known miRNAs from other plant species, such as Arabidopsis and rice. Recently, 150 mature miRNAs from 26 families were validated by Illumina sequencing. For de novo identification of new miRNAs in maize, we have sequenced small RNAs from mixed tissues and from endosperm and embryo tissues using a next-generation sequencing system. Moreover, a new method of identifying novel miRNAs, by measuring the precision of miRNA processing from their precursors, was employed. This method, conceptually proposed by Meyer et al., holds that precise processing from the precursor is both a necessary and sufficient criterion for miRNA annotation.
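The processing-precision criterion just described reduces to a simple read-count computation: map the sequenced small RNAs to a candidate hairpin, group them by processing variant (5' start position and length), and ask what fraction of all reads from that locus the dominant variant accounts for. The Python sketch below is only an illustration of the idea; the input representation and the 90% cutoff are assumptions for the example, not values taken from this study.

```python
from collections import Counter

def processing_precision(read_variants):
    """Fraction of reads at a candidate hairpin that share the dominant
    processing variant, where each read is keyed by (start, length)."""
    counts = Counter(read_variants)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return counts.most_common(1)[0][1] / total

def looks_precisely_processed(read_variants, cutoff=0.9):
    """A candidate passes if one variant dominates; the cutoff is an
    illustrative assumption, not a threshold from the paper."""
    return processing_precision(read_variants) >= cutoff
```

Under this sketch, a hairpin where 95 of 100 reads start at the same position and have the same length would pass, while a locus producing a diffuse, siRNA-like population of heterogeneous reads would not.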
We report here the establishment of such a method of identifying miRNAs by measuring the precision of miRNA processing from precursors. This method has resulted in 66 newly identified miRNAs and 50 potential miRNAs in maize. Of the 66 newly identified miRNAs, 62 belong to 54 families that have not been identified before in any other organism. Sequencing of maize small RNAs In order to identify novel miRNAs from maize, four different small RNA samples (two from mixed tissues, one from embryo and one from endosperm) of the B73 inbred line were sequenced. The sequencing effort resulted in over 43 million signatures with a length of 18~30-nt, representing over 15 million unique sequences (Table 1). The overall size distributions of the sequenced reads from all four sequencing efforts were very similar, with the 24-nt class being the most abundant, followed by the 22-nt and 21-nt classes (Figure 1). Such a size distribution is consistent with a recent report that 22-nt siRNAs are specifically enriched in maize compared with other plants. Although over 43 million sequences were generated, a large number of signatures were sequenced only once, suggesting that maize has a very complex small RNA composition. The percentages of small RNAs sequenced only once were 81.8% and 77.9% in the two mixed-tissue samples, 77.5% in endosperm and 78.6% in embryo, respectively. As in other small RNA sequencing efforts, a small portion of distinct signatures matched the mitochondrial or chloroplast genomes. In the four independently sequenced samples, 4.7%, 5.9%, 7.2% and 19% of total signatures, representing 0.26%, 0.50%, 0.49% and 1.2% of unique reads respectively, matched non-coding RNAs including tRNA, rRNA, snRNA and snoRNA (Table 2). Validation of known maize miRNAs in miRBase There are a total of 84 unique mature miRNA sequences belonging to 28 miRNA families in the current miRBase for maize. 
All these miRNAs were identified by computational methods based on sequence conservation, using sequences of known miRNAs of other species. Of the 84 unique miRNA sequences, 49 could be confirmed by our sequencing. Except for zma-miR393, zma-miR1432, zma-miR408, zma-miR482 and zma-miR395, 23 of the 28 known maize miRNA families had members detected in at least one of the four sequenced libraries. Some of the conserved miRNAs showed very high abundances in our sequenced libraries; for example, zma-miR156a, b, c, d, e, f, g, h and i had more than 20,000 reads in our four samples (Table 3). Sequencing of the four libraries showed that some miRNAs in the current miRNA database may have been mis-annotated. For example, there are two variants of miR166 in the current miRBase: zma-miR166b, c, d, e, h and i are annotated as 22-nt (UCGGACCAGGCUUCAUUCCCC), while zma-miR166a is annotated as 21-nt (UCGGACCAGGCUUCAUUCCC). The 21-nt form was sequenced 15,432, 10,857, 19,833 and 37,037 times respectively in the four libraries, while the 22-nt form was sequenced only 240, 260, 711 and 476 times. The 21-nt form is thus dozens of times more abundant than the 22-nt form, so we concluded that zma-miR166b, c, d, e, h and i should have the same 21-nt mature miRNA as zma-miR166a. Consistent with the general view that the miRNA* degrades soon after the biogenesis of the mature miRNA, miRNA* were much less abundant than their corresponding miRNAs in the sequencing dataset. Of the 167 miRNA precursors of maize in the current miRBase, 143 had a miRNA* annotated. Among the annotated miRNA*, 62 could be found in our small RNA sequencing libraries. We also found 10 miRNA* among the remaining 25 precursors that had not been annotated before. The total sequencing abundance of miRNA* in our four libraries was about 0.7% of that of mature miRNAs. However, there were two exceptions where the miRNA* had more reads than its corresponding miRNA, as reported before. 
The abundance of the originally annotated miRNA* of zma-miR396a and zma-miR396b was much higher (31, 120, 199 and 59 reads in the four sequenced libraries) than that of the annotated miRNA (only 16, 9, 38 and 20 reads in the same libraries). The same was true of zma-miR408, whose miRNA was sequenced less often than its miRNA*. Both miRNAs are strongly conserved among plant species and have validated target genes. This may suggest that a small fraction of miRNA* do not degrade as fast as others. Novel miRNA identification and target prediction During miRNA biogenesis, the pri-miRNA transcribed by RNA polymerase II is trimmed and cleaved into the miRNA/miRNA* duplex by Dicer-like1 (DCL1). The precise enzymatic cleavage of miRNA/miRNA* from the precursor is a key criterion that distinguishes miRNAs from diverse siRNAs. We observed that, for most miRNA precursors, few small RNA reads other than the miRNA and miRNA* mapped to the precursors. To gain an overall picture of small RNA distribution along the miRNA precursors, we computed the percentage of small RNA reads mapped to the positions of mature miRNAs versus reads mapped to other regions of the same miRNA precursors for all known maize miRNAs. The result showed that of the 120 known miRNA precursors with mature miRNA expressed in our four small RNA libraries, 104 (86.7%) had over 75% of the small RNA reads mapped to the exact mature miRNA/miRNA* sites or within 4 nt of them. Having 75% of reads mapped to the miRNA/miRNA* and its close vicinity has recently been proposed as a primary criterion for valid miRNA annotation. Our result further demonstrated that such a precise-processing criterion can be used as a straightforward and reliable method to identify miRNAs from diverse small RNA data. To identify novel miRNAs using the method described above, maize genome sequences (downloaded from http://www.maizesequence.org) with known transposons masked were used to generate inverted repeat sequences. 
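The precise-processing criterion (at least 75% of the reads falling on the miRNA/miRNA* sites or within 4 nt of them) can be sketched as a simple check. This is a hypothetical helper, not the authors' code; the data layout (read counts keyed by mapped start/end position on the precursor) is an assumption:

```python
def is_precisely_processed(read_counts, mir_start, mir_end,
                           star_start=None, star_end=None,
                           window=4, threshold=0.75):
    """read_counts: dict mapping (start, end) mapped positions on the
    precursor to the number of identical reads at that position.
    Returns True if at least `threshold` of all reads fall on the
    miRNA (or miRNA*, if given) site or within `window` nt of it."""
    def near(pos, lo, hi):
        s, e = pos
        return lo - window <= s and e <= hi + window

    total = sum(read_counts.values())
    if total == 0:
        return False
    hit = 0
    for pos, n in read_counts.items():
        if near(pos, mir_start, mir_end) or (
                star_start is not None and near(pos, star_start, star_end)):
            hit += n
    return hit / total >= threshold
```

A precursor with, say, 80% of its reads stacked on the mature-miRNA position passes, while one with reads scattered along the hairpin (typical of siRNA-producing loci) fails.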
A total of 330,048 inverted repeat sequences with a copy number of no more than 10 in the maize genome were obtained. These inverted repeat sequences were then folded by RNAfold, in both sense and antisense directions, which effectively narrowed down the candidate precursors. Candidate single-loop precursors with an overall length of 80-300 bp were kept in this study. We then attempted to identify novel miRNAs from our four sequenced RNA samples separately, using the precise-processing criterion as described in Methods (Figure 2). There were 314 sense and 313 antisense RNAs that qualified as miRNA precursor candidates based on the primary criterion. Finally, the secondary structures of these candidates were carefully checked for their validity as miRNA precursors, along with their corresponding mature miRNAs (Figure 3). Thirteen new miRNAs were identified from mixed tissues I, 22 from mixed tissues II, 30 from embryo and 38 from endosperm (Table 4). Altogether we obtained a total of 66 unique new miRNAs. These new miRNAs could be grouped into 58 families (Table 4), given that two miRNAs with fewer than 4 nucleotide mismatches were grouped into one family. Sixty-two of the 66 newly identified miRNAs, belonging to 54 families, have not been identified before in any other organism. Since some of the miRNAs are derived from multiple precursors, the 66 newly identified miRNAs correspond to 70 miRNA precursors. The full information and secondary structures are shown in Additional file 1 and Additional file 2. Of the 66 new miRNAs, 16 were sequenced in all four libraries, 17 in three, 15 in two and 18 in one library. The expression of 5 of the newly identified miRNAs was validated by Northern blot using RNAs from kernels of mixed stages (Figure 4). As additional evidence to support the annotation of some of these miRNAs, 22 of the 70 new miRNA precursors were found to have a miRNA* in our sequencing data (Additional file 1). 
The 54 miRNA families identified for the first time in maize from our sequencing effort provided an opportunity to identify conserved miRNAs that have not yet been discovered in other plant species. After searching the genomes of sorghum, rice and Arabidopsis, we found 17 conserved in sorghum, 14 in rice and 2 in Arabidopsis (Table 5). As most miRNAs are nearly perfectly complementary to their corresponding target mRNAs, we performed target prediction by allowing no more than 3 mismatches between a miRNA and its corresponding mRNA sequence. After searching the annotated maize filtered gene set, we found 41 target genes for 23 new miRNAs, 2 of which were validated by 5' RACE. GRMZM2G416426 and GRMZM2G037792 were targeted by miRNA3 and miRNA65, respectively (Figure 5). GRMZM2G416426 was predicted to be an alcohol dehydrogenase 1 (adh1) and GRMZM2G037792 a GRAS transcription factor. MiRNA65 is identical to miR171a, b and c in Arabidopsis, which are reported to target GRAS transcription factors in Arabidopsis, suggesting that this miRNA-target pair is conserved between dicot and monocot plants. A complete list of our predicted miRNAs and their predicted targets is shown in Additional file 3. The target gene of new miRNA4, GRMZM2G401869, was annotated as a ribosomal protein, which has been reported to be regulated by miR-10a in mouse. MiRNA38 was predicted to target a plant-specific abscisic acid (ABA) stress-induced protein (GRMZM2G027241). Identification of new miRNAs according to the precision of excision from the stem-loop precursor MiRNAs are known to play very important post-transcriptional regulatory roles throughout plant development. Identifying new miRNAs is therefore a critical step toward understanding biological regulation. However, small RNA populations in all organisms are extremely complex, and accurate miRNA identification is not straightforward. 
Thus far, the majority of reported miRNAs have been identified by the "extending method" [17,. The short reads resulting from sequencing were mapped to the known reference genome, and candidate precursors were then obtained by extending upstream and downstream of the mapped sites. The secondary structures of these extended sequences were then carefully checked for consideration as miRNA precursors. This method typically costs significant computation time, as the millions or billions of small RNA sequences generated by sequencing need to be mapped to the genome and extended individually. For many miRNA precursors, there are other small RNA sequences mapped within 4 nt of the mature miRNA, which often confounds the miRNA annotation. Lacking other supportive information, the appearance of miRNA* is regarded as an essential condition for valid miRNA annotation. However, being degraded after miRNA release, miRNA* has a much lower probability of being sequenced than the mature miRNA. Annotating miRNAs based on the appearance of miRNA* would therefore often miss many true miRNAs. As sequencing becomes more readily available with the development of new sequencing technology, a robust miRNA identification system has become increasingly important. In this study, we adopted the primary criterion suggested recently by a large group of scientists in the field of plant miRNA research. Our method is based on the assumption that if a sequence with a stem-loop secondary structure has 75% of all small RNAs mapped onto the stem-loop falling at one distinct position (where the miRNA/miRNA* is located), then this hairpin sequence should be annotated as a miRNA precursor. The advantages of our new method are apparent: it saves significant computation time, and the exact sequences of the mature miRNAs for all the precursors are easy to determine. 
However, finding new miRNAs using this method is highly dependent on the depth of small RNA sequencing, which is practical only on a next generation sequencing platform. Additionally, since our method starts with the prediction of potential miRNA precursors using a very relaxed criterion, it is still possible that some precursors have been missed, particularly those with multi-loop secondary structures. Although our method relied on the precision of excision from the stem-loop precursors, as demonstrated by the small RNA sequencing data, other cleavage patterns of miRNA precursors, such as those revealed by the extensive degradome sequencing in rice, can also be used to verify miRNA predictions. The elegant degradome sequencing results showed that most conserved miRNA precursors are cleaved precisely at the beginning or end of the miRNA/miRNA* duplex. Additional miRNA candidates Using this new method, we have identified 66 new miRNAs, 62 of which have not been identified before in any other organism. The discovery of these miRNAs and their target genes is a critical step in understanding the complex miRNA regulation network of this important crop. Our method requires a relatively high sequencing depth for new miRNA identification. In our four libraries, unique small RNAs were sequenced an average of 2.6 times; we therefore took 5 as the minimal abundance in the new miRNA prediction. However, some real miRNAs were not sequenced at high enough coverage and were missed. There were 50 small RNAs with a sequencing coverage lower than 5 but higher than 2 whose corresponding genomic regions nonetheless fulfill all the criteria for typical miRNA precursors; these 50 small RNAs are therefore potential miRNA candidates (Additional file 4). 
Some miRNA precursors overlap with protein-coding genes Based on the maize genome annotation release-5b downloaded from http://www.maizesequence.org/, the genome locations of the 167 known and 70 new miRNA precursors were determined. About 18% of the precursors were located within annotated protein-coding genes (Figure 6): 10% of the precursors overlapped with exons (sense and antisense) and 7% were located in intron regions. This result is consistent with the result reported in P. patens, where more than half of the miRNA precursors overlapped with protein-coding regions. The small RNA population in maize is highly complex To identify novel maize miRNAs, we conducted four next generation sequencing runs for small RNAs: two mixed-tissue samples, embryo and endosperm. Although we generated over 40 million signatures, the sequences from the four libraries have limited overlap, with only 233,132 unique sequences appearing in all four libraries and a small fraction overlapping between any two libraries (Figure 7). This limited overlap indicates that a very large number of small RNAs exist in maize. We noticed that some known miRNAs had very different abundances in the four libraries, especially between embryo and endosperm: 30 new miRNAs were sequenced in either embryo or endosperm only. For example, zma-miR168a, b and zma-miR166a had a very high abundance in the two mixed tissues and the endosperm but could not be detected in the embryo library, which indicates that they may be endosperm specific. Although their true tissue specificity needs to be further validated experimentally, their relatively high level of expression in embryo or endosperm suggests that they could have important regulatory roles throughout embryo/endosperm development. Conclusion We have implemented a novel process for identifying miRNAs from small RNA sequencing data by measuring the precision of miRNA processing from precursors. 
Using this method, 66 novel miRNAs have been identified in maize. These newly identified miRNAs can be grouped into 58 families, of which 54 have not been identified in any other species. Plant materials and sequencing The B73 inbred line was used in our study. Four separate RNA samples were sequenced. Two samples were mixed tissues of root, stem, leaf, tassel, ear, shoot, pollen and silk. The other two samples were endosperm and embryo tissues. The embryo and endosperm were collected 12, 16, 20 and 24 days after pollination. For the mixed-tissue samples, RNAs were extracted from the 8 tissues separately using TRIzol reagent (Invitrogen) and then mixed in equal amounts for sequencing. Small RNAs of 18-28-nt in length were purified by polyacrylamide gel electrophoresis (PAGE). 3' and 5' adaptors were added for RT-PCR amplification, and the PCR products were subjected to sequencing. Low-quality reads and adaptor sequences were removed before further analysis. Novel miRNA identification and target prediction The maize genome sequences masked with the MIPs repeats were downloaded from the B73 genome project (release-3b.50, http://www.maizesequence.org/). Inverted repeat sequences were extracted by EMBOSS-einverted from the masked genome sequences using the parameters "-gap = 20, -threshold = 60, -match = 5, -mismatch = -4, -maxrepeat = 400". As we noticed that a sequence can have different secondary structures in the sense and antisense directions when calculated by RNAfold, the inverted repeat sequences were folded in both directions to retrieve stem-loop sequences with single loops as candidate miRNA precursors. We then attempted to identify novel miRNAs from the four sequenced RNA samples separately. For each sequenced RNA sample, short reads were first mapped to all the candidate precursors. 
Precursors that fulfilled the following conditions were selected for further analysis: 1) the precursor had at least one unique small RNA of 20~22-nt mapped to it; 2) the unique small RNA had at least five identical reads in a sequencing library; 3) the genomic sequence of the unique small RNA occurs in fewer than 20 copies in the B73 genome. For each selected precursor, the position with the highest number of identical 20-22-nt small RNAs mapped to it was regarded as the potential mature miRNA. Finally, the read distributions of all mapped small RNAs for the selected precursors were checked. If the number of small RNAs mapped around the potential mature miRNA (including 4-nt upstream and downstream) accounted for 75% of all the reads mapped to the precursor, the candidate precursor was regarded as a true miRNA precursor, and the most abundant small RNA mapped to it was regarded as the mature miRNA. After screening by the primary criteria, the secondary structures of the precursors were predicted again by RNAfold using additional parameters. The secondary structures of the inverted repeats had to satisfy the following: the MFEI (minimum free energy calculated by RNAfold divided by the sequence length) should be ≤ -0.15; the miRNA candidate should lie on the stem of the stem-loop sequence; and the candidate miRNA and miRNA* should have no more than 5 mismatches. Inverted repeat sequences that passed all the filters were regarded as our new miRNA precursors, and their corresponding mature miRNAs were the small RNAs with the largest abundance among those mapped to them. To find additional evidence for the newly identified miRNAs, the expression of the precursors was tested by BLAST analysis against existing expressed sequence tag (EST) databases. We also searched for the miRNA* in our four small RNA libraries as additional evidence for the annotation of new miRNAs. 
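The secondary-structure screen just described (MFEI ≤ -0.15 and no more than 5 miRNA/miRNA* mismatches) could be expressed roughly like this. It is a simplified sketch, not the authors' code: it assumes the miRNA and miRNA* are the same length with no bulges, and it does not count G:U wobble pairs as matches, which a real duplex scoring scheme might allow.

```python
def _complement(base):
    """Watson-Crick complement for RNA; unknown bases map to 'N'."""
    return {'A': 'U', 'U': 'A', 'G': 'C', 'C': 'G'}.get(base, 'N')


def passes_structure_filter(mfe, length, mir, mir_star,
                            mfei_cutoff=-0.15, max_mismatch=5):
    """Check the secondary-structure criteria applied after the
    primary read-distribution screen.

    mfe      -- minimum free energy of the hairpin (kcal/mol, e.g. from RNAfold)
    length   -- precursor length in nt
    mir, mir_star -- mature miRNA and miRNA* sequences, both 5'->3'
                     (assumed equal length, no bulges)
    """
    mfei = mfe / length
    if mfei > mfei_cutoff:
        return False
    # The duplex pairs antiparallel: mir[i] against reversed mir_star.
    mismatches = sum(a != _complement(b)
                     for a, b in zip(mir, reversed(mir_star)))
    return mismatches <= max_mismatch
```

In practice this check would run on every candidate that survived the 75% read-distribution criterion, with the MFE taken from the RNAfold output for that hairpin.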
To find conserved counterparts of the newly identified maize miRNAs in Arabidopsis, rice and sorghum, the genome sequences of Arabidopsis, rice and sorghum (http://www.arabidopsis.org/, http://www.tigr.org, http://www.phytozome.net/) were downloaded. If a new miRNA had a conserved sequence with no more than 4 mismatches in a genome, we extended the corresponding sequence for further analysis. Two extensions were made as putative precursors, one 30-nt upstream and 300-nt downstream, the other 300-nt upstream and 30-nt downstream, because the mature miRNA lies on either the 5' or 3' arm of its precursor. The secondary structures of the putative miRNA precursors were then predicted by RNAfold. If the secondary structure fulfilled the criteria for miRNA precursors, we considered the miRNA conserved in that genome. The maize annotated coding sequences and GO annotations were downloaded from the B73 genome project (http://www.maizesequence.org/, release-5b). Because most miRNAs are near-perfect matches to their corresponding target mRNAs, we identified miRNA targets using BLAST with no more than 3 mismatches allowed between the miRNA and target sequences. New miRNA validation by northern blot RNA gel blot hybridization was performed as described previously. Total RNA was extracted using the RNA pure plant kit (TIANGEN, Beijing, DP437). The low-molecular-weight (LMW) RNA was separated on 15% polyacrylamide gels, blotted onto Hybond N+ membrane (Amersham BioSciences) using a Mini Trans-Blot transfer apparatus (Bio-Rad Laboratories) for 1.5 hours at 200 mA, and UV crosslinked. Probes for specific small RNAs were labeled with γ-32P-dATP by T4 polynucleotide kinase (NEB, M0201). Blots were prehybridized and hybridized at 37°C for 7 and 24 hours respectively using PerfectHyb Plus hybridization buffer (Sigma, H7033). Blots were washed at 50°C, first with 2× SSC/0.2% SDS for 15 minutes once; the second wash was carried out with 1× SSC/0.1% SDS for 10 minutes and then repeated. 
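The mismatch-limited target search can be illustrated with a naive sliding-window scan (a toy sketch only; the study used BLAST, and this version ignores gaps and any seed-region weighting):

```python
def _revcomp(rna):
    """Reverse complement of an RNA sequence (5'->3' in, 5'->3' out)."""
    comp = {'A': 'U', 'U': 'A', 'G': 'C', 'C': 'G'}
    return ''.join(comp.get(b, 'N') for b in reversed(rna))


def find_targets(mirna, transcripts, max_mismatch=3):
    """transcripts: dict of gene_id -> mRNA sequence (RNA alphabet).
    Slides the reverse complement of the miRNA along each transcript
    and reports (gene_id, position, mismatches) for every window with
    no more than max_mismatch mismatches."""
    site = _revcomp(mirna)          # the expected binding site on the mRNA
    k = len(site)
    hits = []
    for gene_id, seq in transcripts.items():
        for i in range(len(seq) - k + 1):
            mm = sum(a != b for a, b in zip(site, seq[i:i + k]))
            if mm <= max_mismatch:
                hits.append((gene_id, i, mm))
    return hits
```

With real 21-nt miRNAs a 3-mismatch cutoff is restrictive; with the short toy sequences used for illustration it would match almost anywhere, so any test should use a stricter cutoff or longer sequences.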
miRNA target validation by 5' RACE Total RNA was extracted from 14-day-old seedlings of maize (B73 inbred) and then treated with RQ1 DNase I
from django.test import TestCase

from .models import *
from users.models import Profile

# Create your tests here.


class ImageTestCase(TestCase):
    def setUp(self):
        """Image creation."""
        user = User.objects.create(
            username='nashlil',
            first_name='lilian',
            last_name='kanana')
        Image.objects.create(
            name="me",
            caption="ooops",
            profile_id=user.id,
            user_id=user.id
        )

    def test_image_name(self):
        """Tests image name."""
        image = Image.objects.get(name="me")
        self.assertEqual(image.name, "me")


class LikeTestCase(TestCase):
    def setUp(self):
        user = User.objects.create(
            username='Lucy',
            first_name='Mutanu',
            last_name='Kioko')
        Profile.objects.create(
            bio='me',
            profile_photo='media/profile_pics/pizzacart10-removebg-preview_edhHkdQ.png',
            user_id=user.id
        )
        Image.objects.create(
            name="me",
            caption="ooops",
            profile_id=user.id,
            user_id=user.id
        )

    def test_image_id(self):
        user = User.objects.create(
            username='nashlil',
            first_name='lilian',
            last_name='kanana')
        Image.objects.create(
            name="me",
            caption="ooops",
            profile_id=user.id,
            user_id=user.id
        )
import java.time.Instant;

/**
 * @author Adam Dziedzic
 *
 * Stores information in znodes in ZooKeeper (this is for production
 * mode; in dev mode, please use bytes from the String object).
 */
public class NodeInfo implements ZooKeeperData {

    private static final long serialVersionUID = 1L;

    /* General information about the znode. */
    private String information;

    /*
     * The number of milliseconds since January 1, 1970, 00:00:00 GMT.
     *
     * The creation time is useful only in the creator (there can be time skew
     * between separate physical machines).
     */
    private Instant creationTime;

    public NodeInfo() {
        this.creationTime = Instant.now();
    }

    public NodeInfo(String information) {
        this();
        this.information = information;
    }

    /**
     * @return the creation time of this info
     */
    public Instant getCreationTime() {
        return creationTime;
    }

    /**
     * @return the information
     */
    public String getInformation() {
        return information;
    }
}
/**
 * Loads a SQL script from a file and executes it.
 */
public static void loadFile(File file, Connection conn) {
    try {
        log.info("Reading SQL script from " + file);
        new SQLScriptLoader(new InputStreamReader(
                new FileInputStream(file), "utf-8"), conn).execute();
    } catch (UnsupportedEncodingException ex) {
        // Cannot happen: UTF-8 is guaranteed to be supported by the JVM.
    } catch (IOException ex) {
        throw new D2RQException(
                "Error accessing SQL startup script " + file + ": " + ex.getMessage(),
                D2RQException.STARTUP_SQL_SCRIPT_ACCESS);
    }
}
package io.cattle.platform.systemstack.model;

import java.util.List;

public class TemplateCollection {

    List<Template> data;

    public List<Template> getData() {
        return data;
    }

    public void setData(List<Template> data) {
        this.data = data;
    }
}
Model of capacitive micromechanical accelerometer including effect of squeezed gas film An electrical component model for a micromechanical accelerometer is presented. In addition to the varying capacitances, the motion of the seismic mass and the damping gas films are described by means of an electrical equivalent circuit. The resulting model can be analyzed together with the interfacing electronics utilizing all analysis modes available in a circuit simulator. The model has been implemented in the general purpose circuit simulation tool APLAC. The model simulations show an excellent match with the measured frequency responses.
Project Management Certification Benchmarking Research: 2020 Update This is the 4th (and probably final?) Update to this 10-year research program benchmarking some 100+ Global Project Management Related Credentials against Gladwell's 10,000 hour rule, the level of effort it takes to earn a Professional Engineer (PE) license in the USA and the process and level of effort to become a Licensed Commercial pilot.
BACKGROUND AND OBJECTIVE In numerous experimental and epidemiologic studies, pentachlorophenol (PCP) and hexachlorocyclohexane (lindane) have been shown to pose a potential carcinogenic risk to human epithelial cells. In the past, these two substances were used for both military and non-military purposes, e.g. for the impregnation of textiles and uniforms. In this study we investigated the genotoxic effect of PCP and lindane on human mucosal tissue from the middle and lower nasal turbinates. METHODS In biopsy samples obtained from nasal epithelia during surgery, cell vitality was evaluated by trypan blue staining. The specimens were incubated for 60 minutes with PCP (0.3, 0.75 and 1.2 µmol/l) and lindane (0.5, 0.75 and 1.0 µmol/ml). The induction of DNA damage (single-strand breaks and double-strand breaks) caused by PCP and lindane was measured using single-cell microgel electrophoresis. Evaluation was performed by fluorescence microscopy. RESULTS Especially in mucosa cells from the middle turbinate, severe DNA damage was observed after exposure to PCP and lindane, suggesting a strong genotoxic effect. In cells from the lower turbinate, DNA changes caused by PCP and lindane were significantly lower; however, a considerable genotoxic effect was still present. CONCLUSION This study shows for the first time that there are clear indications of mutagenic effects of PCP and lindane on nasal epithelia. Furthermore, this is the first study showing different susceptibility of two anatomic subsites in the nose to different pesticides. Concerning biological plausibility, this study offers important arguments for evaluating the role of PCP and lindane in the induction of upper aerodigestive tract cancer.
/*
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package clustering.link_back.pre;

import clustering.Utils.MapReduceUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/**
 * WorkflowDriver class for pre step.
 *
 * @author edwardlol
 * Created by edwardlol on 17-4-28.
 */
public class Driver extends Configured implements Tool {

    //~ Methods ---------------------------------------------------------------

    @Override
    public int run(String[] args) throws Exception {
        Job job = configJob(args);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public Job configJob(String[] args) throws Exception {
        if (args.length < 2) {
            System.err.printf("usage: %s input_dir output_dir\n",
                    getClass().getSimpleName());
            System.exit(1);
        }
        Configuration conf = getConf();
        conf = MapReduceUtils.initConf(conf);

        Job job = Job.getInstance(conf, "linkback pre step");
        job.setJarByClass(Driver.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        job.setInputFormatClass(KeyValueTextInputFormat.class);

        job.setMapperClass(AttachMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);

        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        return job;
    }

    //~ Entrance --------------------------------------------------------------

    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        System.exit(ToolRunner.run(configuration, new Driver(), args));
    }
}
// End WorkflowDriver.java
The Rev. Doris Green says by the time her husband was diagnosed with prostate cancer it was Stage 4. She says if she’d known he had prostate cancer she would have advocated for him earlier. When Doris Green married an inmate in prison, she knew it was kind of weird, and yet for her it was also normal. As a prison chaplain the Rev. Green says she performed more than 20 weddings between inmates and women on the outside. Green says she eventually fell in love with and decided to marry Michael Smith, inmate N40598. “It’s my time, and I’m gonna do this and here’s your volunteer ID. Take it back,” Green said. She says she knows it was a little scandalous. When Green gave up her work as a prison chaplain she stayed involved in prison issues. She’s currently the director of correctional health and community affairs for the Aids Foundation of Chicago. She helps connect inmates leaving prison with health care on the outside. Because of her job she knows health care workers in the Department of Corrections. But that didn’t make much difference when her husband got sick. The date is December of 1997. Fourteen years before he died of prostate cancer a prison medical record shows he had a high PSA, which is an indicator of prostate cancer. The record says “needs follow up,” but Green says 14 years later her husband died from prostate cancer that hadn’t been treated. Between 80 and 100 people die each year inside Illinois prisons. WBEZ has sought information about those deaths, but the Department of Corrections under Gov. Pat Quinn is taking a “trust us, nothing to see here” attitude. However, persistent and disturbing complaints from inmates and their families make it hard to just move along. Green says in 2011 her husband was getting up to urinate five times a night and was in extreme pain. That followed a decade of complaints of back pain, noted in the medical record. Green pushed the prison system to get him to a doctor at an outside hospital. 
“So when the urologist tested him, really gave him the biopsy, it was Stage 4 prostate cancer and bone cancer in his back,” she said. The treating physician says at that point the PSA level had risen from 7.6 in 1997 to 250.6. He says he then prescribed an anti-hormonal injection, but that the Department of Corrections must never have given Smith that injection because the next time he saw Smith the PSA level was 892. He says the cancer should have been diagnosed much earlier. Green says she didn’t find out about that 1997 test with the high PSA level until after her husband’s death. While pushing for medical care for her husband Green says she’d also been asking the governor’s office for compassionate release so her husband could die at home, but that didn’t happen. She says he died in his cell. “And that same day he died I got a call from the governor’s office asking to meet with me about Michael Smith," Green said. "And the receptionist that called me was so, I can feel it in her voice. I felt that I wanted to comfort her in some way. I told her, I said, he just died. And she said, I’m so sorry. C’mon. Too much. Too late. Too much. It’s too late but it’s not too late for those that are in there." The Illinois Department of Corrections strongly disputes Green’s version of events. IDOC spokesman Tom Shaer says privacy laws prevent him from defending the department’s track record in the case of Green’s husband, but, “I can tell you that the claims made by the third party in this case, Ms. Green, are filled with false statements covering the time from inmate Smith’s diagnosis in 1997 and his death 14 years later, after I believe, I’m not sure, she married him while he was in prison. There are many false statements covering that time. I wish I could get into further specifics but I can’t do that. She evidently can. We legally cannot,” he said. 
The medical director for the Department of Corrections refused to discuss medical care, even in general terms, with WBEZ because of pending litigation. But there are always lawsuits pending: according to Shaer, there are 4,600 lawsuits against the Department of Corrections right now.

Nonetheless, Shaer says citizens should be confident in the health care inside prisons. According to Bureau of Justice Statistics data, Illinois has one of the lowest inmate death rates in the country, and Shaer says that’s proof that Illinois is providing good care. “The total number of deaths, the overall issue with people dying in Illinois prisons is absolutely a non-story,” said Shaer.

Reporting on deaths in Illinois prisons will continue throughout the week.
A Lithic Industry at Ain Wif, Tripolitania Ain Wif is well known to students of Roman Tripolitania as the settlement and military road-station of Thenadassa, studied and published by Goodchild and Ward-Perkins. The site lies on the summit of the eastern bank of the Wadi Wif prior to its confluence with the larger Wadi Hammam, and is some fifteen kilometres west of Sidi as Sid (Tazzoli) village. The ain (spring) is marked by a small oasis of palms and, as Goodchild and Ward-Perkins pointed out, the assured water-supply was presumably the determining factor in the establishment of the Roman settlement, sometime in the first century A.D. During a visit to the site in December 1978, as Research Assistant to Olwen Brogan for the Society of Libyan Studies, I discovered a dense concentration of struck flints immediately to the north-east of the oasis. (Plate ?. Map ref. UR 463 686 on sheet 1989. II, Qarat al Bayda, series P761 of the U.S. Army Map Service). With the help of Miss Tina Watson, 124 struck flints, mostly waste flakes, and a number of fire-crazed flints were picked up from the surface within the space of half an hour. The main concentration lies within an area of about five metres square on sand and gravel and, no doubt, represents a prehistoric knapping-floor. It is doubtful whether much, if any, stratigraphy will be found on the site.
import anyTest from 'ava';
import { selfCheckMacro } from './selfCheckMacro';
import { ServiceProvider } from '../../../ServiceProvider';
import { HighDurationCheck } from './HighDurationCheck';

const { test, range } = selfCheckMacro(anyTest, ServiceProvider, HighDurationCheck);

test('max', range, { driver_duration: 432000 }, 1, 1);
test('min', range, { driver_duration: 4200 }, 0, 0);
test('between', range, { driver_duration: 10200 }, 0, 1);
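The ava file above drives `HighDurationCheck` through a `range` macro: each case feeds a payload and asserts that the check's score lands inside an expected [lo, hi] band. A minimal Python sketch of the same idea, with a hypothetical `high_duration_check` scoring function (the thresholds and names are illustrative, not the real service's API):

```python
def high_duration_check(sample: dict) -> float:
    """Toy stand-in: map driver_duration (seconds) onto a 0..1 score."""
    duration = sample["driver_duration"]
    # Assumed thresholds: saturate at 5 days (432000 s), zero below 2 h.
    lo, hi = 7200, 432000
    return min(1.0, max(0.0, (duration - lo) / (hi - lo)))

def assert_range(check, sample, lo, hi):
    """Range-style assertion: the score must fall inside [lo, hi]."""
    score = check(sample)
    assert lo <= score <= hi, f"score {score} outside [{lo}, {hi}]"
    return score

# Mirrors the three cases in the test file above.
assert_range(high_duration_check, {"driver_duration": 432000}, 1, 1)
assert_range(high_duration_check, {"driver_duration": 4200}, 0, 0)
assert_range(high_duration_check, {"driver_duration": 10200}, 0, 1)
```

The `(sample, lo, hi)` shape of each call is what the macro's tuple arguments encode in the ava version.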
# coding=utf-8 # Copyright 2020-present Google Brain and Carnegie Mellon University Authors and the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ TF 2.0 Funnel model. """ import warnings from dataclasses import dataclass from typing import Optional, Tuple import tensorflow as tf from ...activations_tf import get_tf_activation from ...file_utils import ( MULTIPLE_CHOICE_DUMMY_INPUTS, ModelOutput, add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward, replace_return_docstrings, ) from ...modeling_tf_outputs import ( TFBaseModelOutput, TFMaskedLMOutput, TFMultipleChoiceModelOutput, TFQuestionAnsweringModelOutput, TFSequenceClassifierOutput, TFTokenClassifierOutput, ) from ...modeling_tf_utils import ( TFMaskedLanguageModelingLoss, TFMultipleChoiceLoss, TFPreTrainedModel, TFQuestionAnsweringLoss, TFSequenceClassificationLoss, TFTokenClassificationLoss, get_initializer, keras_serializable, shape_list, ) from ...tokenization_utils import BatchEncoding from ...utils import logging from .configuration_funnel import FunnelConfig logger = logging.get_logger(__name__) _CONFIG_FOR_DOC = "FunnelConfig" _TOKENIZER_FOR_DOC = "FunnelTokenizer" TF_FUNNEL_PRETRAINED_MODEL_ARCHIVE_LIST = [ "funnel-transformer/small", # B4-4-4H768 "funnel-transformer/small-base", # B4-4-4H768, no decoder "funnel-transformer/medium", # B6-3x2-3x2H768 "funnel-transformer/medium-base", # B6-3x2-3x2H768, no decoder "funnel-transformer/intermediate", # 
B6-6-6H768 "funnel-transformer/intermediate-base", # B6-6-6H768, no decoder "funnel-transformer/large", # B8-8-8H1024 "funnel-transformer/large-base", # B8-8-8H1024, no decoder "funnel-transformer/xlarge-base", # B10-10-10H1024, no decoder "funnel-transformer/xlarge", # B10-10-10H1024 ] INF = 1e6 class TFFunnelEmbeddings(tf.keras.layers.Layer): """Construct the embeddings from word embeddings.""" def __init__(self, config, **kwargs): super().__init__(**kwargs) self.vocab_size = config.vocab_size self.hidden_size = config.hidden_size self.initializer_range = config.initializer_range self.layer_norm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="layer_norm") self.dropout = tf.keras.layers.Dropout(config.hidden_dropout) def build(self, input_shape): """Build shared word embedding layer.""" with tf.name_scope("word_embeddings"): # Create and initialize weights. The random normal initializer was chosen # arbitrarily, and works well. self.word_embeddings = self.add_weight( "weight", shape=[self.vocab_size, self.hidden_size], initializer=get_initializer(self.initializer_range), ) super().build(input_shape) def call( self, input_ids=None, inputs_embeds=None, mode="embedding", training=False, ): """ Get token embeddings of inputs Args: inputs: list of three int64 tensors with shape [batch_size, length]: (input_ids, position_ids, token_type_ids) mode: string, a valid value is one of "embedding" and "linear" Returns: outputs: (1) If mode == "embedding", output embedding tensor, float32 with shape [batch_size, length, embedding_size]; (2) mode == "linear", output linear tensor, float32 with shape [batch_size, length, vocab_size] Raises: ValueError: if mode is not valid. 
Shared weights logic adapted from https://github.com/tensorflow/models/blob/a009f4fb9d2fc4949e32192a944688925ef78659/official/transformer/v2/embedding_layer.py#L24 """ if mode == "embedding": return self._embedding(input_ids, inputs_embeds, training=training) elif mode == "linear": return self._linear(input_ids) else: raise ValueError("mode {} is not valid.".format(mode)) def _embedding(self, input_ids, inputs_embeds, training=False): """Applies embedding based on inputs tensor.""" assert not (input_ids is None and inputs_embeds is None) if inputs_embeds is None: inputs_embeds = tf.gather(self.word_embeddings, input_ids) embeddings = self.layer_norm(inputs_embeds) embeddings = self.dropout(embeddings, training=training) return embeddings def _linear(self, inputs): """ Computes logits by running inputs through a linear layer Args: inputs: A float32 tensor with shape [batch_size, length, hidden_size] Returns: float32 tensor with shape [batch_size, length, vocab_size]. """ batch_size = shape_list(inputs)[0] length = shape_list(inputs)[1] x = tf.reshape(inputs, [-1, self.hidden_size]) logits = tf.matmul(x, self.word_embeddings, transpose_b=True) return tf.reshape(logits, [batch_size, length, self.vocab_size]) class TFFunnelAttentionStructure: """ Contains helpers for `TFFunnelRelMultiheadAttention`. """ cls_token_type_id: int = 2 def __init__(self, config): self.d_model = config.d_model self.attention_type = config.attention_type self.num_blocks = config.num_blocks self.separate_cls = config.separate_cls self.truncate_seq = config.truncate_seq self.pool_q_only = config.pool_q_only self.pooling_type = config.pooling_type self.sin_dropout = tf.keras.layers.Dropout(config.hidden_dropout) self.cos_dropout = tf.keras.layers.Dropout(config.hidden_dropout) # Track where we are at in terms of pooling from the original input, e.g., by how much the sequence length was # divided. 
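The `pooling_mult` attribute tracked here records how much the sequence has been shortened so far. A small numpy sketch of the bookkeeping (illustrative only; it mirrors the stride-2 slicing variant of Funnel's pooling rather than the exact TF ops used in the model):

```python
import numpy as np

# Toy stand-in for Funnel's stride-2 pooling: each pooled block halves the
# sequence axis, and pooling_mult records the cumulative reduction factor.
hidden = np.arange(8 * 4, dtype=np.float32).reshape(1, 8, 4)  # batch x seq x d_model

pooling_mult = 1
for _ in range(2):           # two pooled blocks
    hidden = hidden[:, ::2]  # stride-2 slice along the sequence axis
    pooling_mult *= 2

# seq_len went 8 -> 4 -> 2, so pooling_mult ends up at 4.
```

This is the quantity the decoder later needs to undo, which is why the attention structure keeps it around rather than recomputing it.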
self.pooling_mult = None def init_attention_inputs(self, inputs_embeds, attention_mask=None, token_type_ids=None, training=False): """ Returns the attention inputs associated to the inputs of the model. """ # inputs_embeds has shape batch_size x seq_len x d_model # attention_mask and token_type_ids have shape batch_size x seq_len self.pooling_mult = 1 self.seq_len = seq_len = inputs_embeds.shape[1] position_embeds = self.get_position_embeds(seq_len, dtype=inputs_embeds.dtype, training=training) token_type_mat = self.token_type_ids_to_mat(token_type_ids) if token_type_ids is not None else None cls_mask = ( tf.pad(tf.ones([seq_len - 1, seq_len - 1], dtype=inputs_embeds.dtype), [[1, 0], [1, 0]]) if self.separate_cls else None ) return (position_embeds, token_type_mat, attention_mask, cls_mask) def token_type_ids_to_mat(self, token_type_ids): """Convert `token_type_ids` to `token_type_mat`.""" token_type_mat = tf.equal(tf.expand_dims(token_type_ids, -1), tf.expand_dims(token_type_ids, -2)) # Treat <cls> as in the same segment as both A & B cls_ids = tf.equal(token_type_ids, tf.constant([self.cls_token_type_id], dtype=token_type_ids.dtype)) cls_mat = tf.logical_or(tf.expand_dims(cls_ids, -1), tf.expand_dims(cls_ids, -2)) return tf.logical_or(cls_mat, token_type_mat) def get_position_embeds(self, seq_len, dtype=tf.float32, training=False): """ Create and cache inputs related to relative position encoding. Those are very different depending on whether we are using the factorized or the relative shift attention: For the factorized attention, it returns the matrices (phi, pi, psi, omega) used in the paper, appendix A.2.2, final formula. For the relative shift attention, it returns all possible vectors R used in the paper, appendix A.2.1, final formula. Paper link: https://arxiv.org/abs/2006.03236 """ if self.attention_type == "factorized": # Notations from the paper, appendix A.2.2, final formula. # We need to create and return the matrices phi, psi, pi and omega. 
pos_seq = tf.range(0, seq_len, 1.0, dtype=dtype) freq_seq = tf.range(0, self.d_model // 2, 1.0, dtype=dtype) inv_freq = 1 / (10000 ** (freq_seq / (self.d_model // 2))) sinusoid = tf.einsum("i,d->id", pos_seq, inv_freq) sin_embed = tf.sin(sinusoid) sin_embed_d = self.sin_dropout(sin_embed, training=training) cos_embed = tf.cos(sinusoid) cos_embed_d = self.cos_dropout(cos_embed, training=training) # This is different from the formula on the paper... phi = tf.concat([sin_embed_d, sin_embed_d], axis=-1) psi = tf.concat([cos_embed, sin_embed], axis=-1) pi = tf.concat([cos_embed_d, cos_embed_d], axis=-1) omega = tf.concat([-sin_embed, cos_embed], axis=-1) return (phi, pi, psi, omega) else: # Notations from the paper, appendix A.2.1, final formula. # We need to create and return all the possible vectors R for all blocks and shifts. freq_seq = tf.range(0, self.d_model // 2, 1.0, dtype=dtype) inv_freq = 1 / (10000 ** (freq_seq / (self.d_model // 2))) # Maximum relative positions for the first input rel_pos_id = tf.range(-seq_len * 2, seq_len * 2, 1.0, dtype=dtype) zero_offset = seq_len * 2 sinusoid = tf.einsum("i,d->id", rel_pos_id, inv_freq) sin_embed = self.sin_dropout(tf.sin(sinusoid), training=training) cos_embed = self.cos_dropout(tf.cos(sinusoid), training=training) pos_embed = tf.concat([sin_embed, cos_embed], axis=-1) pos = tf.range(0, seq_len, dtype=dtype) pooled_pos = pos position_embeds_list = [] for block_index in range(0, self.num_blocks): # For each block with block_index > 0, we need two types of position embeddings: # - Attention(pooled-q, unpooled-kv) # - Attention(pooled-q, pooled-kv) # For block_index = 0 we only need the second one and leave the first one as None. 
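Before the per-block loop continues, it helps to see the sinusoid table this method builds. A small numpy sketch of the same construction (illustrative, mirroring the `tf.einsum("i,d->id", ...)` step above; `psi`'s layout follows the factorized branch):

```python
import numpy as np

# Illustrative reconstruction of the sinusoid table used above: position i
# paired with inverse frequencies 1 / 10000 ** (d / (d_model // 2)).
d_model, seq_len = 8, 6
freq_seq = np.arange(d_model // 2, dtype=np.float64)
inv_freq = 1.0 / (10000 ** (freq_seq / (d_model // 2)))
pos_seq = np.arange(seq_len, dtype=np.float64)
sinusoid = np.einsum("i,d->id", pos_seq, inv_freq)       # seq_len x d_model//2
sin_embed, cos_embed = np.sin(sinusoid), np.cos(sinusoid)
# The factorized branch concatenates the halves, e.g. psi = [cos, sin]:
psi = np.concatenate([cos_embed, sin_embed], axis=-1)    # seq_len x d_model
```

Position 0 always yields cos = 1 and sin = 0, so the first row of `psi` is four ones followed by four zeros; higher positions rotate each frequency at its own rate.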
# First type if block_index == 0: position_embeds_pooling = None else: pooled_pos = self.stride_pool_pos(pos, block_index) # construct rel_pos_id stride = 2 ** (block_index - 1) rel_pos = self.relative_pos(pos, stride, pooled_pos, shift=2) # rel_pos = tf.expand_dims(rel_pos,1) + zero_offset # rel_pos = tf.broadcast_to(rel_pos, (rel_pos.shape[0], self.d_model)) rel_pos = rel_pos + zero_offset position_embeds_pooling = tf.gather(pos_embed, rel_pos, axis=0) # Second type pos = pooled_pos stride = 2 ** block_index rel_pos = self.relative_pos(pos, stride) # rel_pos = tf.expand_dims(rel_pos,1) + zero_offset # rel_pos = tf.broadcast_to(rel_pos, (rel_pos.shape[0], self.d_model)) rel_pos = rel_pos + zero_offset position_embeds_no_pooling = tf.gather(pos_embed, rel_pos, axis=0) position_embeds_list.append([position_embeds_no_pooling, position_embeds_pooling]) return position_embeds_list def stride_pool_pos(self, pos_id, block_index): """ Pool `pos_id` while keeping the cls token separate (if `self.separate_cls=True`). """ if self.separate_cls: # Under separate <cls>, we treat the <cls> as the first token in # the previous block of the 1st real block. Since the 1st real # block always has position 1, the position of the previous block # will be at `1 - 2 ** block_index`. cls_pos = tf.constant([-(2 ** block_index) + 1], dtype=pos_id.dtype) pooled_pos_id = pos_id[1:-1] if self.truncate_seq else pos_id[1:] return tf.concat([cls_pos, pooled_pos_id[::2]], 0) else: return pos_id[::2] def relative_pos(self, pos, stride, pooled_pos=None, shift=1): """ Build the relative positional vector between `pos` and `pooled_pos`. """ if pooled_pos is None: pooled_pos = pos ref_point = pooled_pos[0] - pos[0] num_remove = shift * pooled_pos.shape[0] max_dist = ref_point + num_remove * stride min_dist = pooled_pos[0] - pos[-1] return tf.range(max_dist, min_dist - 1, -stride, dtype=tf.int64) def stride_pool(self, tensor, axis): """ Perform pooling by stride slicing the tensor along the given axis. 
""" if tensor is None: return None # Do the stride pool recursively if axis is a list or a tuple of ints. if isinstance(axis, (list, tuple)): for ax in axis: tensor = self.stride_pool(tensor, ax) return tensor # Do the stride pool recursively if tensor is a list or tuple of tensors. if isinstance(tensor, (tuple, list)): return type(tensor)(self.stride_pool(x, axis) for x in tensor) # Deal with negative axis axis %= tensor.shape.ndims axis_slice = slice(None, -1, 2) if self.separate_cls and self.truncate_seq else slice(None, None, 2) enc_slice = [slice(None)] * axis + [axis_slice] if self.separate_cls: cls_slice = [slice(None)] * axis + [slice(None, 1)] tensor = tf.concat([tensor[cls_slice], tensor], axis) return tensor[enc_slice] def pool_tensor(self, tensor, mode="mean", stride=2): """Apply 1D pooling to a tensor of size [B x T (x H)].""" if tensor is None: return None # Do the pool recursively if tensor is a list or tuple of tensors. if isinstance(tensor, (tuple, list)): return type(tensor)(self.pool_tensor(x, mode=mode, stride=stride) for x in tensor) if self.separate_cls: suffix = tensor[:, :-1] if self.truncate_seq else tensor tensor = tf.concat([tensor[:, :1], suffix], axis=1) ndim = tensor.shape.ndims if ndim == 2: tensor = tensor[:, :, None] if mode == "mean": tensor = tf.nn.avg_pool1d(tensor, stride, strides=stride, data_format="NWC", padding="SAME") elif mode == "max": tensor = tf.nn.max_pool1d(tensor, stride, strides=stride, data_format="NWC", padding="SAME") elif mode == "min": tensor = -tf.nn.max_pool1d(-tensor, stride, strides=stride, data_format="NWC", padding="SAME") else: raise NotImplementedError("The supported modes are 'mean', 'max' and 'min'.") return tf.squeeze(tensor, 2) if ndim == 2 else tensor def pre_attention_pooling(self, output, attention_inputs): """ Pool `output` and the proper parts of `attention_inputs` before the attention layer. 
""" position_embeds, token_type_mat, attention_mask, cls_mask = attention_inputs if self.pool_q_only: if self.attention_type == "factorized": position_embeds = self.stride_pool(position_embeds[:2], 0) + position_embeds[2:] token_type_mat = self.stride_pool(token_type_mat, 1) cls_mask = self.stride_pool(cls_mask, 0) output = self.pool_tensor(output, mode=self.pooling_type) else: self.pooling_mult *= 2 if self.attention_type == "factorized": position_embeds = self.stride_pool(position_embeds, 0) token_type_mat = self.stride_pool(token_type_mat, [1, 2]) cls_mask = self.stride_pool(cls_mask, [1, 2]) attention_mask = self.pool_tensor(attention_mask, mode="min") output = self.pool_tensor(output, mode=self.pooling_type) attention_inputs = (position_embeds, token_type_mat, attention_mask, cls_mask) return output, attention_inputs def post_attention_pooling(self, attention_inputs): """ Pool the proper parts of `attention_inputs` after the attention layer. """ position_embeds, token_type_mat, attention_mask, cls_mask = attention_inputs if self.pool_q_only: self.pooling_mult *= 2 if self.attention_type == "factorized": position_embeds = position_embeds[:2] + self.stride_pool(position_embeds[2:], 0) token_type_mat = self.stride_pool(token_type_mat, 2) cls_mask = self.stride_pool(cls_mask, 1) attention_mask = self.pool_tensor(attention_mask, mode="min") attention_inputs = (position_embeds, token_type_mat, attention_mask, cls_mask) return attention_inputs def _relative_shift_gather(positional_attn, context_len, shift): batch_size, n_head, seq_len, max_rel_len = shape_list(positional_attn) # max_rel_len = 2 * context_len + shift -1 is the numbers of possible relative positions i-j # What's next is the same as doing the following gather in PyTorch, which might be clearer code but less efficient. 
# idxs = context_len + torch.arange(0, context_len).unsqueeze(0) - torch.arange(0, seq_len).unsqueeze(1) # # matrix of context_len + i-j # return positional_attn.gather(3, idxs.expand([batch_size, n_head, context_len, context_len])) positional_attn = tf.reshape(positional_attn, [batch_size, n_head, max_rel_len, seq_len]) positional_attn = positional_attn[:, :, shift:, :] positional_attn = tf.reshape(positional_attn, [batch_size, n_head, seq_len, max_rel_len - shift]) positional_attn = positional_attn[..., :context_len] return positional_attn class TFFunnelRelMultiheadAttention(tf.keras.layers.Layer): def __init__(self, config, block_index, **kwargs): super().__init__(**kwargs) self.attention_type = config.attention_type self.n_head = n_head = config.n_head self.d_head = d_head = config.d_head self.d_model = d_model = config.d_model self.initializer_range = config.initializer_range self.block_index = block_index self.hidden_dropout = tf.keras.layers.Dropout(config.hidden_dropout) self.attention_dropout = tf.keras.layers.Dropout(config.attention_dropout) initializer = get_initializer(config.initializer_range) self.q_head = tf.keras.layers.Dense( n_head * d_head, use_bias=False, kernel_initializer=initializer, name="q_head" ) self.k_head = tf.keras.layers.Dense(n_head * d_head, kernel_initializer=initializer, name="k_head") self.v_head = tf.keras.layers.Dense(n_head * d_head, kernel_initializer=initializer, name="v_head") self.post_proj = tf.keras.layers.Dense(d_model, kernel_initializer=initializer, name="post_proj") self.layer_norm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="layer_norm") self.scale = 1.0 / (d_head ** 0.5) def build(self, input_shape): n_head, d_head, d_model = self.n_head, self.d_head, self.d_model initializer = get_initializer(self.initializer_range) self.r_w_bias = self.add_weight( shape=(n_head, d_head), initializer=initializer, trainable=True, name="r_w_bias" ) self.r_r_bias = self.add_weight( shape=(n_head, 
d_head), initializer=initializer, trainable=True, name="r_r_bias" ) self.r_kernel = self.add_weight( shape=(d_model, n_head, d_head), initializer=initializer, trainable=True, name="r_kernel" ) self.r_s_bias = self.add_weight( shape=(n_head, d_head), initializer=initializer, trainable=True, name="r_s_bias" ) self.seg_embed = self.add_weight( shape=(2, n_head, d_head), initializer=initializer, trainable=True, name="seg_embed" ) super().build(input_shape) def relative_positional_attention(self, position_embeds, q_head, context_len, cls_mask=None): """ Relative attention score for the positional encodings """ # q_head has shape batch_size x seq_len x n_head x d_head if self.attention_type == "factorized": # Notations from the paper, appendix A.2.2, final formula (https://arxiv.org/abs/2006.03236) # phi and pi have shape seq_len x d_model, psi and omega have shape context_len x d_model phi, pi, psi, omega = position_embeds # Shape n_head x d_head u = self.r_r_bias * self.scale # Shape d_model x n_head x d_head w_r = self.r_kernel # Shape batch_size x seq_len x n_head x d_model q_r_attention = tf.einsum("binh,dnh->bind", q_head + u, w_r) q_r_attention_1 = q_r_attention * phi[:, None] q_r_attention_2 = q_r_attention * pi[:, None] # Shape batch_size x n_head x seq_len x context_len positional_attn = tf.einsum("bind,jd->bnij", q_r_attention_1, psi) + tf.einsum( "bind,jd->bnij", q_r_attention_2, omega ) else: shift = 2 if q_head.shape[1] != context_len else 1 # Notations from the paper, appendix A.2.1, final formula (https://arxiv.org/abs/2006.03236) # Grab the proper positional encoding, shape max_rel_len x d_model r = position_embeds[self.block_index][shift - 1] # Shape n_head x d_head v = self.r_r_bias * self.scale # Shape d_model x n_head x d_head w_r = self.r_kernel # Shape max_rel_len x n_head x d_head r_head = tf.einsum("td,dnh->tnh", r, w_r) # Shape batch_size x n_head x seq_len x max_rel_len positional_attn = tf.einsum("binh,tnh->bnit", q_head + v, r_head) # 
Shape batch_size x n_head x seq_len x context_len positional_attn = _relative_shift_gather(positional_attn, context_len, shift) if cls_mask is not None: positional_attn *= cls_mask return positional_attn def relative_token_type_attention(self, token_type_mat, q_head, cls_mask=None): """ Relative attention score for the token_type_ids """ if token_type_mat is None: return 0 batch_size, seq_len, context_len = shape_list(token_type_mat) # q_head has shape batch_size x seq_len x n_head x d_head # Shape n_head x d_head r_s_bias = self.r_s_bias * self.scale # Shape batch_size x n_head x seq_len x 2 token_type_bias = tf.einsum("bind,snd->bnis", q_head + r_s_bias, self.seg_embed) # Shape batch_size x n_head x seq_len x context_len new_shape = [batch_size, q_head.shape[2], seq_len, context_len] token_type_mat = tf.broadcast_to(token_type_mat[:, None], new_shape) # Shapes batch_size x n_head x seq_len diff_token_type, same_token_type = tf.split(token_type_bias, 2, axis=-1) # Shape batch_size x n_head x seq_len x context_len token_type_attn = tf.where( token_type_mat, tf.broadcast_to(same_token_type, new_shape), tf.broadcast_to(diff_token_type, new_shape) ) if cls_mask is not None: token_type_attn *= cls_mask return token_type_attn def call(self, query, key, value, attention_inputs, output_attentions=False, training=False): # query has shape batch_size x seq_len x d_model # key and value have shapes batch_size x context_len x d_model position_embeds, token_type_mat, attention_mask, cls_mask = attention_inputs batch_size, seq_len, _ = shape_list(query) context_len = key.shape[1] n_head, d_head = self.n_head, self.d_head # Shape batch_size x seq_len x n_head x d_head q_head = tf.reshape(self.q_head(query), [batch_size, seq_len, n_head, d_head]) # Shapes batch_size x context_len x n_head x d_head k_head = tf.reshape(self.k_head(key), [batch_size, context_len, n_head, d_head]) v_head = tf.reshape(self.v_head(value), [batch_size, context_len, n_head, d_head]) q_head = q_head * 
self.scale # Shape n_head x d_head r_w_bias = self.r_w_bias * self.scale # Shapes batch_size x n_head x seq_len x context_len content_score = tf.einsum("bind,bjnd->bnij", q_head + r_w_bias, k_head) positional_attn = self.relative_positional_attention(position_embeds, q_head, context_len, cls_mask) token_type_attn = self.relative_token_type_attention(token_type_mat, q_head, cls_mask) # merge attention scores attn_score = content_score + positional_attn + token_type_attn # precision safe in case of mixed precision training dtype = attn_score.dtype if dtype != tf.float32: attn_score = tf.cast(attn_score, tf.float32) # perform masking if attention_mask is not None: attn_score = attn_score - INF * (1 - tf.cast(attention_mask[:, None, None], tf.float32)) # attention probability attn_prob = tf.nn.softmax(attn_score, axis=-1) if dtype != tf.float32: attn_prob = tf.cast(attn_prob, dtype) attn_prob = self.attention_dropout(attn_prob, training=training) # attention output, shape batch_size x seq_len x n_head x d_head attn_vec = tf.einsum("bnij,bjnd->bind", attn_prob, v_head) # Shape shape batch_size x seq_len x d_model attn_out = self.post_proj(tf.reshape(attn_vec, [batch_size, seq_len, n_head * d_head])) attn_out = self.hidden_dropout(attn_out, training=training) output = self.layer_norm(query + attn_out) return (output, attn_prob) if output_attentions else (output,) class TFFunnelPositionwiseFFN(tf.keras.layers.Layer): def __init__(self, config, **kwargs): super().__init__(**kwargs) initializer = get_initializer(config.initializer_range) self.linear_1 = tf.keras.layers.Dense(config.d_inner, kernel_initializer=initializer, name="linear_1") self.activation_function = get_tf_activation(config.hidden_act) self.activation_dropout = tf.keras.layers.Dropout(config.activation_dropout) self.linear_2 = tf.keras.layers.Dense(config.d_model, kernel_initializer=initializer, name="linear_2") self.dropout = tf.keras.layers.Dropout(config.hidden_dropout) self.layer_norm = 
tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="layer_norm") def call(self, hidden, training=False): h = self.linear_1(hidden) h = self.activation_function(h) h = self.activation_dropout(h, training=training) h = self.linear_2(h) h = self.dropout(h, training=training) return self.layer_norm(hidden + h) class TFFunnelLayer(tf.keras.layers.Layer): def __init__(self, config, block_index, **kwargs): super().__init__(**kwargs) self.attention = TFFunnelRelMultiheadAttention(config, block_index, name="attention") self.ffn = TFFunnelPositionwiseFFN(config, name="ffn") def call(self, query, key, value, attention_inputs, output_attentions=False, training=False): attn = self.attention( query, key, value, attention_inputs, output_attentions=output_attentions, training=training ) output = self.ffn(attn[0], training=training) return (output, attn[1]) if output_attentions else (output,) class TFFunnelEncoder(tf.keras.layers.Layer): def __init__(self, config, **kwargs): super().__init__(**kwargs) self.separate_cls = config.separate_cls self.pool_q_only = config.pool_q_only self.block_repeats = config.block_repeats self.attention_structure = TFFunnelAttentionStructure(config) self.blocks = [ [TFFunnelLayer(config, block_index, name=f"blocks_._{block_index}_._{i}") for i in range(block_size)] for block_index, block_size in enumerate(config.block_sizes) ] def call( self, inputs_embeds, attention_mask=None, token_type_ids=None, output_attentions=False, output_hidden_states=False, return_dict=True, training=False, ): # The pooling is not implemented on long tensors, so we convert this mask. 
# attention_mask = tf.cast(attention_mask, inputs_embeds.dtype) attention_inputs = self.attention_structure.init_attention_inputs( inputs_embeds, attention_mask=attention_mask, token_type_ids=token_type_ids, training=training, ) hidden = inputs_embeds all_hidden_states = (inputs_embeds,) if output_hidden_states else None all_attentions = () if output_attentions else None for block_index, block in enumerate(self.blocks): pooling_flag = shape_list(hidden)[1] > (2 if self.separate_cls else 1) pooling_flag = pooling_flag and block_index > 0 if pooling_flag: pooled_hidden, attention_inputs = self.attention_structure.pre_attention_pooling( hidden, attention_inputs ) for (layer_index, layer) in enumerate(block): for repeat_index in range(self.block_repeats[block_index]): do_pooling = (repeat_index == 0) and (layer_index == 0) and pooling_flag if do_pooling: query = pooled_hidden key = value = hidden if self.pool_q_only else pooled_hidden else: query = key = value = hidden layer_output = layer( query, key, value, attention_inputs, output_attentions=output_attentions, training=training ) hidden = layer_output[0] if do_pooling: attention_inputs = self.attention_structure.post_attention_pooling(attention_inputs) if output_attentions: all_attentions = all_attentions + layer_output[1:] if output_hidden_states: all_hidden_states = all_hidden_states + (hidden,) if not return_dict: return tuple(v for v in [hidden, all_hidden_states, all_attentions] if v is not None) return TFBaseModelOutput(last_hidden_state=hidden, hidden_states=all_hidden_states, attentions=all_attentions) def upsample(x, stride, target_len, separate_cls=True, truncate_seq=False): """ Upsample tensor `x` to match `target_len` by repeating the tokens `stride` time on the sequence length dimension. 
""" if stride == 1: return x if separate_cls: cls = x[:, :1] x = x[:, 1:] output = tf.repeat(x, repeats=stride, axis=1) if separate_cls: if truncate_seq: output = tf.pad(output, [[0, 0], [0, stride - 1], [0, 0]]) output = output[:, : target_len - 1] output = tf.concat([cls, output], axis=1) else: output = output[:, :target_len] return output class TFFunnelDecoder(tf.keras.layers.Layer): def __init__(self, config, **kwargs): super().__init__(**kwargs) self.separate_cls = config.separate_cls self.truncate_seq = config.truncate_seq self.stride = 2 ** (len(config.block_sizes) - 1) self.attention_structure = TFFunnelAttentionStructure(config) self.layers = [TFFunnelLayer(config, 0, name=f"layers_._{i}") for i in range(config.num_decoder_layers)] def call( self, final_hidden, first_block_hidden, attention_mask=None, token_type_ids=None, output_attentions=False, output_hidden_states=False, return_dict=True, training=False, ): upsampled_hidden = upsample( final_hidden, stride=self.stride, target_len=first_block_hidden.shape[1], separate_cls=self.separate_cls, truncate_seq=self.truncate_seq, ) hidden = upsampled_hidden + first_block_hidden all_hidden_states = (hidden,) if output_hidden_states else None all_attentions = () if output_attentions else None attention_inputs = self.attention_structure.init_attention_inputs( hidden, attention_mask=attention_mask, token_type_ids=token_type_ids, training=training, ) for layer in self.layers: layer_output = layer( hidden, hidden, hidden, attention_inputs, output_attentions=output_attentions, training=training ) hidden = layer_output[0] if output_attentions: all_attentions = all_attentions + layer_output[1:] if output_hidden_states: all_hidden_states = all_hidden_states + (hidden,) if not return_dict: return tuple(v for v in [hidden, all_hidden_states, all_attentions] if v is not None) return TFBaseModelOutput(last_hidden_state=hidden, hidden_states=all_hidden_states, attentions=all_attentions) @keras_serializable class 
TFFunnelBaseLayer(tf.keras.layers.Layer): """ Base model without decoder """ config_class = FunnelConfig def __init__(self, config, **kwargs): super().__init__(**kwargs) self.output_attentions = config.output_attentions self.output_hidden_states = config.output_hidden_states self.return_dict = config.use_return_dict self.embeddings = TFFunnelEmbeddings(config, name="embeddings") self.encoder = TFFunnelEncoder(config, name="encoder") def get_input_embeddings(self): return self.embeddings def set_input_embeddings(self, value): self.embeddings.word_embeddings = value self.embeddings.vocab_size = value.shape[0] def _prune_heads(self, heads_to_prune): raise NotImplementedError # Not implemented yet in the library fr TF 2.0 models def call( self, inputs, attention_mask=None, token_type_ids=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=False, ): if isinstance(inputs, (tuple, list)): input_ids = inputs[0] attention_mask = inputs[1] if len(inputs) > 1 else attention_mask token_type_ids = inputs[2] if len(inputs) > 2 else token_type_ids inputs_embeds = inputs[3] if len(inputs) > 3 else inputs_embeds output_attentions = inputs[4] if len(inputs) > 4 else output_attentions output_hidden_states = inputs[5] if len(inputs) > 5 else output_hidden_states return_dict = inputs[6] if len(inputs) > 6 else return_dict assert len(inputs) <= 7, "Too many inputs." elif isinstance(inputs, (dict, BatchEncoding)): input_ids = inputs.get("input_ids") attention_mask = inputs.get("attention_mask", attention_mask) token_type_ids = inputs.get("token_type_ids", token_type_ids) inputs_embeds = inputs.get("inputs_embeds", inputs_embeds) output_attentions = inputs.get("output_attentions", output_attentions) output_hidden_states = inputs.get("output_hidden_states", output_hidden_states) return_dict = inputs.get("return_dict", return_dict) assert len(inputs) <= 7, "Too many inputs." 
else: input_ids = inputs output_attentions = output_attentions if output_attentions is not None else self.output_attentions output_hidden_states = output_hidden_states if output_hidden_states is not None else self.output_hidden_states return_dict = return_dict if return_dict is not None else self.return_dict if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif input_ids is not None: input_shape = shape_list(input_ids) elif inputs_embeds is not None: input_shape = shape_list(inputs_embeds)[:-1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") if attention_mask is None: attention_mask = tf.fill(input_shape, 1) if token_type_ids is None: token_type_ids = tf.fill(input_shape, 0) if inputs_embeds is None: inputs_embeds = self.embeddings(input_ids, training=training) encoder_outputs = self.encoder( inputs_embeds, attention_mask=attention_mask, token_type_ids=token_type_ids, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) return encoder_outputs @keras_serializable class TFFunnelMainLayer(tf.keras.layers.Layer): """ Base model with decoder """ config_class = FunnelConfig def __init__(self, config, **kwargs): super().__init__(**kwargs) self.block_sizes = config.block_sizes self.output_attentions = config.output_attentions self.output_hidden_states = config.output_hidden_states self.return_dict = config.use_return_dict self.embeddings = TFFunnelEmbeddings(config, name="embeddings") self.encoder = TFFunnelEncoder(config, name="encoder") self.decoder = TFFunnelDecoder(config, name="decoder") def get_input_embeddings(self): return self.embeddings def set_input_embeddings(self, value): self.embeddings.word_embeddings = value self.embeddings.vocab_size = value.shape[0] def _prune_heads(self, heads_to_prune): raise NotImplementedError # Not implemented yet in the library fr TF 
2.0 models def call( self, inputs, attention_mask=None, token_type_ids=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=False, ): if isinstance(inputs, (tuple, list)): input_ids = inputs[0] attention_mask = inputs[1] if len(inputs) > 1 else attention_mask token_type_ids = inputs[2] if len(inputs) > 2 else token_type_ids inputs_embeds = inputs[3] if len(inputs) > 3 else inputs_embeds output_attentions = inputs[4] if len(inputs) > 4 else output_attentions output_hidden_states = inputs[5] if len(inputs) > 5 else output_hidden_states return_dict = inputs[6] if len(inputs) > 6 else return_dict assert len(inputs) <= 7, "Too many inputs." elif isinstance(inputs, (dict, BatchEncoding)): input_ids = inputs.get("input_ids") attention_mask = inputs.get("attention_mask", attention_mask) token_type_ids = inputs.get("token_type_ids", token_type_ids) inputs_embeds = inputs.get("inputs_embeds", inputs_embeds) output_attentions = inputs.get("output_attentions", output_attentions) output_hidden_states = inputs.get("output_hidden_states", output_hidden_states) return_dict = inputs.get("return_dict", return_dict) assert len(inputs) <= 7, "Too many inputs." 
else: input_ids = inputs output_attentions = output_attentions if output_attentions is not None else self.output_attentions output_hidden_states = output_hidden_states if output_hidden_states is not None else self.output_hidden_states return_dict = return_dict if return_dict is not None else self.return_dict if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif input_ids is not None: input_shape = shape_list(input_ids) elif inputs_embeds is not None: input_shape = shape_list(inputs_embeds)[:-1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") if attention_mask is None: attention_mask = tf.fill(input_shape, 1) if token_type_ids is None: token_type_ids = tf.fill(input_shape, 0) if inputs_embeds is None: inputs_embeds = self.embeddings(input_ids, training=training) encoder_outputs = self.encoder( inputs_embeds, attention_mask=attention_mask, token_type_ids=token_type_ids, output_attentions=output_attentions, output_hidden_states=True, return_dict=return_dict, training=training, ) decoder_outputs = self.decoder( final_hidden=encoder_outputs[0], first_block_hidden=encoder_outputs[1][self.block_sizes[0]], attention_mask=attention_mask, token_type_ids=token_type_ids, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) if not return_dict: idx = 0 outputs = (decoder_outputs[0],) if output_hidden_states: idx += 1 outputs = outputs + (encoder_outputs[1] + decoder_outputs[idx],) if output_attentions: idx += 1 outputs = outputs + (encoder_outputs[2] + decoder_outputs[idx],) return outputs return TFBaseModelOutput( last_hidden_state=decoder_outputs[0], hidden_states=(encoder_outputs.hidden_states + decoder_outputs.hidden_states) if output_hidden_states else None, attentions=(encoder_outputs.attentions + decoder_outputs.attentions) if output_attentions else None, ) class 
TFFunnelDiscriminatorPredictions(tf.keras.layers.Layer): """Prediction module for the discriminator, made up of two dense layers.""" def __init__(self, config, **kwargs): super().__init__(**kwargs) initializer = get_initializer(config.initializer_range) self.dense = tf.keras.layers.Dense(config.d_model, kernel_initializer=initializer, name="dense") self.activation_function = get_tf_activation(config.hidden_act) self.dense_prediction = tf.keras.layers.Dense(1, kernel_initializer=initializer, name="dense_prediction") def call(self, discriminator_hidden_states): hidden_states = self.dense(discriminator_hidden_states) hidden_states = self.activation_function(hidden_states) logits = tf.squeeze(self.dense_prediction(hidden_states)) return logits class TFFunnelMaskedLMHead(tf.keras.layers.Layer): def __init__(self, config, input_embeddings, **kwargs): super().__init__(**kwargs) self.vocab_size = config.vocab_size self.input_embeddings = input_embeddings def build(self, input_shape): self.bias = self.add_weight(shape=(self.vocab_size,), initializer="zeros", trainable=True, name="bias") super().build(input_shape) def call(self, hidden_states, training=False): hidden_states = self.input_embeddings(hidden_states, mode="linear") hidden_states = hidden_states + self.bias return hidden_states class TFFunnelClassificationHead(tf.keras.layers.Layer): def __init__(self, config, n_labels, **kwargs): super().__init__(**kwargs) initializer = get_initializer(config.initializer_range) self.linear_hidden = tf.keras.layers.Dense( config.d_model, kernel_initializer=initializer, name="linear_hidden" ) self.dropout = tf.keras.layers.Dropout(config.hidden_dropout) self.linear_out = tf.keras.layers.Dense(n_labels, kernel_initializer=initializer, name="linear_out") def call(self, hidden, training=False): hidden = self.linear_hidden(hidden) hidden = tf.keras.activations.tanh(hidden) hidden = self.dropout(hidden, training=training) return self.linear_out(hidden) class 
TFFunnelPreTrainedModel(TFPreTrainedModel):
    """
    An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
    models.
    """

    config_class = FunnelConfig
    base_model_prefix = "funnel"


@dataclass
class TFFunnelForPreTrainingOutput(ModelOutput):
    """
    Output type of :class:`~transformers.FunnelForPreTraining`.

    Args:
        logits (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`):
            Prediction scores of the head (scores for each token before SoftMax).
        hidden_states (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
            Tuple of :obj:`tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of
            shape :obj:`(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
            Tuple of :obj:`tf.Tensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    logits: tf.Tensor = None
    hidden_states: Optional[Tuple[tf.Tensor]] = None
    attentions: Optional[Tuple[tf.Tensor]] = None


FUNNEL_START_DOCSTRING = r"""

    The Funnel Transformer model was proposed in `Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
    Language Processing <https://arxiv.org/abs/2006.03236>`__ by <NAME>, <NAME>, <NAME>, <NAME>.

    This model inherits from :class:`~transformers.TFPreTrainedModel`. Check the superclass documentation for the
    generic methods the library implements for all its models (such as downloading or saving, resizing the input
    embeddings, pruning heads etc.)
This model is also a `tf.keras.Model <https://www.tensorflow.org/api_docs/python/tf/keras/Model>`__ subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. .. note:: TF 2.0 models accepts two formats as inputs: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional arguments. This second option is useful when using :meth:`tf.keras.Model.fit` method which currently requires having all the tensors in the first argument of the model call function: :obj:`model(inputs)`. If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument : - a single Tensor with :obj:`input_ids` only and nothing else: :obj:`model(inputs_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: :obj:`model([input_ids, attention_mask])` or :obj:`model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: :obj:`model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Parameters: config (:class:`~transformers.XxxConfig`): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model weights. """ FUNNEL_INPUTS_DOCSTRING = r""" Args: input_ids (:obj:`Numpy array` or :obj:`tf.Tensor` of shape :obj:`({0})`): Indices of input sequence tokens in the vocabulary. Indices can be obtained using :class:`~transformers.FunnelTokenizer`. See :func:`transformers.PreTrainedTokenizer.__call__` and :func:`transformers.PreTrainedTokenizer.encode` for details. `What are input IDs? 
<../glossary.html#input-ids>`__ attention_mask (:obj:`Numpy array` or :obj:`tf.Tensor` of shape :obj:`({0})`, `optional`): Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. `What are attention masks? <../glossary.html#attention-mask>`__ token_type_ids (:obj:`Numpy array` or :obj:`tf.Tensor` of shape :obj:`({0})`, `optional`): Segment token indices to indicate first and second portions of the inputs. Indices are selected in ``[0, 1]``: - 0 corresponds to a `sentence A` token, - 1 corresponds to a `sentence B` token. `What are token type IDs? <../glossary.html#token-type-ids>`__ inputs_embeds (:obj:`tf.Tensor` of shape :obj:`({0}, hidden_size)`, `optional`): Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert :obj:`input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_attentions (:obj:`bool`, `optional`): Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned tensors for more detail. output_hidden_states (:obj:`bool`, `optional`): Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for more detail. return_dict (:obj:`bool`, `optional`): Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple. training (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). """ @add_start_docstrings( """ The base Funnel Transformer Model transformer outputting raw hidden-states without upsampling head (also called decoder) or any task-specific head on top. 
""", FUNNEL_START_DOCSTRING, ) class TFFunnelBaseModel(TFFunnelPreTrainedModel): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.funnel = TFFunnelBaseLayer(config, name="funnel") @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="funnel-transformer/small-base", output_type=TFBaseModelOutput, config_class=_CONFIG_FOR_DOC, ) def call(self, inputs, **kwargs): return self.funnel(inputs, **kwargs) @add_start_docstrings( "The bare Funnel Transformer Model transformer outputting raw hidden-states without any specific head on top.", FUNNEL_START_DOCSTRING, ) class TFFunnelModel(TFFunnelPreTrainedModel): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.funnel = TFFunnelMainLayer(config, name="funnel") @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="funnel-transformer/small", output_type=TFBaseModelOutput, config_class=_CONFIG_FOR_DOC, ) def call(self, inputs, **kwargs): return self.funnel(inputs, **kwargs) @add_start_docstrings( """ Funnel model with a binary classification head on top as used during pre-training for identifying generated tokens. 
""", FUNNEL_START_DOCSTRING, ) class TFFunnelForPreTraining(TFFunnelPreTrainedModel): def __init__(self, config, **kwargs): super().__init__(config, **kwargs) self.funnel = TFFunnelMainLayer(config, name="funnel") self.discriminator_predictions = TFFunnelDiscriminatorPredictions(config, name="discriminator_predictions") @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @replace_return_docstrings(output_type=TFFunnelForPreTrainingOutput, config_class=_CONFIG_FOR_DOC) def call( self, inputs, attention_mask=None, token_type_ids=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=False, **kwargs ): r""" Returns: Examples:: >>> from transformers import FunnelTokenizer, TFFunnelForPreTraining >>> import torch >>> tokenizer = TFFunnelTokenizer.from_pretrained('funnel-transformer/small') >>> model = TFFunnelForPreTraining.from_pretrained('funnel-transformer/small') >>> inputs = tokenizer("Hello, my dog is cute", return_tensors= "tf") >>> logits = model(inputs).logits """ return_dict = return_dict if return_dict is not None else self.funnel.return_dict if inputs is None and "input_ids" in kwargs and isinstance(kwargs["input_ids"], (dict, BatchEncoding)): warnings.warn( "Using `input_ids` as a dictionary keyword argument is deprecated. Please use `inputs` instead." 
) inputs = kwargs["input_ids"] discriminator_hidden_states = self.funnel( inputs, attention_mask, token_type_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict=return_dict, training=training, ) discriminator_sequence_output = discriminator_hidden_states[0] logits = self.discriminator_predictions(discriminator_sequence_output) if not return_dict: return (logits,) + discriminator_hidden_states[1:] return TFFunnelForPreTrainingOutput( logits=logits, hidden_states=discriminator_hidden_states.hidden_states, attentions=discriminator_hidden_states.attentions, ) @add_start_docstrings("""Funnel Model with a `language modeling` head on top. """, FUNNEL_START_DOCSTRING) class TFFunnelForMaskedLM(TFFunnelPreTrainedModel, TFMaskedLanguageModelingLoss): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.funnel = TFFunnelMainLayer(config, name="funnel") self.lm_head = TFFunnelMaskedLMHead(config, self.funnel.embeddings, name="lm_head") @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="funnel-transformer/small", output_type=TFMaskedLMOutput, config_class=_CONFIG_FOR_DOC, ) def call( self, inputs=None, attention_mask=None, token_type_ids=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, ): r""" labels (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Labels for computing the masked language modeling loss. 
Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]`` """ return_dict = return_dict if return_dict is not None else self.funnel.return_dict if isinstance(inputs, (tuple, list)): labels = inputs[7] if len(inputs) > 7 else labels if len(inputs) > 7: inputs = inputs[:7] elif isinstance(inputs, (dict, BatchEncoding)): labels = inputs.pop("labels", labels) outputs = self.funnel( inputs, attention_mask=attention_mask, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) sequence_output = outputs[0] prediction_scores = self.lm_head(sequence_output, training=training) loss = None if labels is None else self.compute_loss(labels, prediction_scores) if not return_dict: output = (prediction_scores,) + outputs[1:] return ((loss,) + output) if loss is not None else output return TFMaskedLMOutput( loss=loss, logits=prediction_scores, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @add_start_docstrings( """ Funnel Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. 
""", FUNNEL_START_DOCSTRING, ) class TFFunnelForSequenceClassification(TFFunnelPreTrainedModel, TFSequenceClassificationLoss): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.num_labels = config.num_labels self.funnel = TFFunnelBaseLayer(config, name="funnel") self.classifier = TFFunnelClassificationHead(config, config.num_labels, name="classifier") @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="funnel-transformer/small-base", output_type=TFSequenceClassifierOutput, config_class=_CONFIG_FOR_DOC, ) def call( self, inputs=None, attention_mask=None, token_type_ids=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, ): r""" labels (:obj:`tf.Tensor` of shape :obj:`(batch_size,)`, `optional`): Labels for computing the sequence classification/regression loss. Indices should be in :obj:`[0, ..., config.num_labels - 1]`. If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss), If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
""" return_dict = return_dict if return_dict is not None else self.funnel.return_dict if isinstance(inputs, (tuple, list)): labels = inputs[7] if len(inputs) > 7 else labels if len(inputs) > 7: inputs = inputs[:7] elif isinstance(inputs, (dict, BatchEncoding)): labels = inputs.pop("labels", labels) outputs = self.funnel( inputs, attention_mask=attention_mask, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) last_hidden_state = outputs[0] pooled_output = last_hidden_state[:, 0] logits = self.classifier(pooled_output, training=training) loss = None if labels is None else self.compute_loss(labels, logits) if not return_dict: output = (logits,) + outputs[1:] return ((loss,) + output) if loss is not None else output return TFSequenceClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @add_start_docstrings( """ Funnel Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. """, FUNNEL_START_DOCSTRING, ) class TFFunnelForMultipleChoice(TFFunnelPreTrainedModel, TFMultipleChoiceLoss): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.funnel = TFFunnelBaseLayer(config, name="funnel") self.classifier = TFFunnelClassificationHead(config, 1, name="classifier") @property def dummy_inputs(self): """ Dummy inputs to build the network. 
Returns: tf.Tensor with dummy inputs """ return {"input_ids": tf.constant(MULTIPLE_CHOICE_DUMMY_INPUTS)} @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="funnel-transformer/small-base", output_type=TFMultipleChoiceModelOutput, config_class=_CONFIG_FOR_DOC, ) def call( self, inputs, attention_mask=None, token_type_ids=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, ): r""" labels (:obj:`tf.Tensor` of shape :obj:`(batch_size,)`, `optional`): Labels for computing the multiple choice classification loss. Indices should be in ``[0, ..., num_choices]`` where :obj:`num_choices` is the size of the second dimension of the input tensors. (See :obj:`input_ids` above) """ if isinstance(inputs, (tuple, list)): input_ids = inputs[0] attention_mask = inputs[1] if len(inputs) > 1 else attention_mask token_type_ids = inputs[2] if len(inputs) > 2 else token_type_ids inputs_embeds = inputs[3] if len(inputs) > 3 else inputs_embeds output_attentions = inputs[4] if len(inputs) > 4 else output_attentions output_hidden_states = inputs[5] if len(inputs) > 5 else output_hidden_states return_dict = inputs[6] if len(inputs) > 6 else return_dict labels = inputs[7] if len(inputs) > 7 else labels assert len(inputs) <= 8, "Too many inputs." 
elif isinstance(inputs, (dict, BatchEncoding)): input_ids = inputs.get("input_ids") attention_mask = inputs.get("attention_mask", attention_mask) token_type_ids = inputs.get("token_type_ids", token_type_ids) inputs_embeds = inputs.get("inputs_embeds", inputs_embeds) output_attentions = inputs.get("output_attentions", output_attentions) output_hidden_states = inputs.get("output_hidden_states", output_hidden_states) return_dict = inputs.get("return_dict", return_dict) labels = inputs.get("labels", labels) assert len(inputs) <= 8, "Too many inputs." else: input_ids = inputs return_dict = return_dict if return_dict is not None else self.funnel.return_dict if input_ids is not None: num_choices = shape_list(input_ids)[1] seq_length = shape_list(input_ids)[2] else: num_choices = shape_list(inputs_embeds)[1] seq_length = shape_list(inputs_embeds)[2] flat_input_ids = tf.reshape(input_ids, (-1, seq_length)) if input_ids is not None else None flat_attention_mask = tf.reshape(attention_mask, (-1, seq_length)) if attention_mask is not None else None flat_token_type_ids = tf.reshape(token_type_ids, (-1, seq_length)) if token_type_ids is not None else None flat_inputs_embeds = ( tf.reshape(inputs_embeds, (-1, seq_length, shape_list(inputs_embeds)[3])) if inputs_embeds is not None else None ) outputs = self.funnel( flat_input_ids, attention_mask=flat_attention_mask, token_type_ids=flat_token_type_ids, inputs_embeds=flat_inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) last_hidden_state = outputs[0] pooled_output = last_hidden_state[:, 0] logits = self.classifier(pooled_output, training=training) reshaped_logits = tf.reshape(logits, (-1, num_choices)) loss = None if labels is None else self.compute_loss(labels, reshaped_logits) if not return_dict: output = (reshaped_logits,) + outputs[1:] return ((loss,) + output) if loss is not None else output return TFMultipleChoiceModelOutput( loss=loss, 
logits=reshaped_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @add_start_docstrings( """ Funnel Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. """, FUNNEL_START_DOCSTRING, ) class TFFunnelForTokenClassification(TFFunnelPreTrainedModel, TFTokenClassificationLoss): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.num_labels = config.num_labels self.funnel = TFFunnelMainLayer(config, name="funnel") self.dropout = tf.keras.layers.Dropout(config.hidden_dropout) self.classifier = tf.keras.layers.Dense( config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier" ) @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="funnel-transformer/small", output_type=TFTokenClassifierOutput, config_class=_CONFIG_FOR_DOC, ) def call( self, inputs=None, attention_mask=None, token_type_ids=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, ): r""" labels (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Labels for computing the token classification loss. Indices should be in ``[0, ..., config.num_labels - 1]``. 
""" return_dict = return_dict if return_dict is not None else self.funnel.return_dict if isinstance(inputs, (tuple, list)): labels = inputs[7] if len(inputs) > 7 else labels if len(inputs) > 7: inputs = inputs[:7] elif isinstance(inputs, (dict, BatchEncoding)): labels = inputs.pop("labels", labels) outputs = self.funnel( inputs, attention_mask=attention_mask, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) sequence_output = outputs[0] sequence_output = self.dropout(sequence_output, training=training) logits = self.classifier(sequence_output) loss = None if labels is None else self.compute_loss(labels, logits) if not return_dict: output = (logits,) + outputs[1:] return ((loss,) + output) if loss is not None else output return TFTokenClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @add_start_docstrings( """ Funnel Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). 
""", FUNNEL_START_DOCSTRING, ) class TFFunnelForQuestionAnswering(TFFunnelPreTrainedModel, TFQuestionAnsweringLoss): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.num_labels = config.num_labels self.funnel = TFFunnelMainLayer(config, name="funnel") self.qa_outputs = tf.keras.layers.Dense( config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="qa_outputs" ) @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="funnel-transformer/small", output_type=TFQuestionAnsweringModelOutput, config_class=_CONFIG_FOR_DOC, ) def call( self, inputs=None, attention_mask=None, token_type_ids=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, start_positions=None, end_positions=None, training=False, ): r""" start_positions (:obj:`tf.Tensor` of shape :obj:`(batch_size,)`, `optional`): Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (:obj:`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. end_positions (:obj:`tf.Tensor` of shape :obj:`(batch_size,)`, `optional`): Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (:obj:`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. 
""" return_dict = return_dict if return_dict is not None else self.funnel.return_dict if isinstance(inputs, (tuple, list)): start_positions = inputs[7] if len(inputs) > 7 else start_positions end_positions = inputs[8] if len(inputs) > 8 else end_positions if len(inputs) > 7: inputs = inputs[:7] elif isinstance(inputs, (dict, BatchEncoding)): start_positions = inputs.pop("start_positions", start_positions) end_positions = inputs.pop("end_positions", start_positions) outputs = self.funnel( inputs, attention_mask=attention_mask, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) sequence_output = outputs[0] logits = self.qa_outputs(sequence_output) start_logits, end_logits = tf.split(logits, 2, axis=-1) start_logits = tf.squeeze(start_logits, axis=-1) end_logits = tf.squeeze(end_logits, axis=-1) loss = None if start_positions is not None and end_positions is not None: labels = {"start_position": start_positions, "end_position": end_positions} loss = self.compute_loss(labels, (start_logits, end_logits)) if not return_dict: output = (start_logits, end_logits) + outputs[1:] return ((loss,) + output) if loss is not None else output return TFQuestionAnsweringModelOutput( loss=loss, start_logits=start_logits, end_logits=end_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, )
def move_stage(self, values):
    if self.capture_thread is None:
        e, v = self.popup("Move stage", [
            [
                sg.Text("Target position"),
                sg.Input(key="target", size=(6, 1)),
                sg.Text("mm")
            ],
            [sg.Ok(), sg.Cancel()]
        ])
        if e == "Ok":
            target = self.parse_numeric(v["target"], float, 0, 196, None)
            if target is None:
                logging.error("Invalid input - should be float between 0.0 and 196.0")
            else:
                self.capture_thread = InterruptableThread(
                    target=self.move_stage_thread, args=(values, target))
                self.capture_thread.start()
    else:
        logging.debug("Can't move stage because capture_thread is already in use")
Survivors of the Manchester Arena suicide bombing and the families of those who died have attended a remembrance service at the city’s cathedral on the anniversary of the attack. They were joined by Prince William and Theresa May at the service, which took place as a national minute’s silence was observed to remember the 22 victims of the atrocity.

The Rt Rev David Walker, the bishop of Manchester, told the service that the city would never forget those who died on May 22nd last year. He also pledged lifelong support for the 800 people who were injured physically or psychologically in the attack. “Part of the horror . . . is that [the Arena] appeared to have been deliberately chosen as a venue full of young people,” he said. “Today they are one year into living with those life-changing injuries, with many decades of continuing to do so lying ahead of them.”

Ariana Grande, who performed at Manchester Arena that night, sent a message to those hurt in the attack: “Thinking of you all today and every day. I love you with all of me and am sending you all of the light and warmth I have to offer on this challenging day,” she tweeted.

As well as the prime minister and the duke, who read from I Corinthians 13:4 – “Love is patient; love is kind; love is not envious or boastful or arrogant or rude” – the service was attended by first responders to the scene and civic leaders. Other national figures included the Labour leader Jeremy Corbyn, SNP leader Nicola Sturgeon and Sir Vince Cable, the leader of the Liberal Democrats.

The service was relayed to a big screen outside in Cathedral Gardens, where several thousand people were gathered. Among them was Jean Osborne (69), who was clutching a laminated photograph of her daughter, Caroline Davis, and their friend Wendy Fawell. All three women had worked together at a school in Guiseley, serving dinners and running the after-school club. Fawell died in the attack; Davis was seriously injured.
They had gone to pick up their daughters from the concert and were waiting in the foyer when Salman Abedi, a 22-year-old Mancunian of Libyan heritage, detonated a bomb in his rucksack. They welcomed the bishop’s pledge to remember the injured, saying Davis had been forced to go back to work as a dinner lady in spite of persistent health issues. Shrapnel from the bomb sliced her heel, a blast burn destroyed part of her skin and she had to have one arm reconstructed. Also in the crowd were many teenagers wearing Ariana Grande T-shirts they bought on the night of the attack. Lorraine Ness (19) and her cousin Leigh Tilley (10) had travelled from Fife in Scotland. “We wanted to pay our respects and get closure after what happened here that night,” said Lorraine, who has been receiving counselling for the psychological trauma she suffered. Thousands of messages of support on cardboard tags have been attached by members of the public to 28 Japanese maple trees, which form the Trees of Hope trail from the square to Victoria railway and tram station. More than 7,000 hand-stitched hearts have also been dotted around the city centre, with people encouraged to smile as they pass them for a social media campaign, #aheart4mcr. The minute’s silence was marked at government buildings, and at the Grenfell Tower fire inquiry in central London. Inside the cathedral, 22 candles – made using wax from the thousands of candles left in St Ann’s Square a year ago – were lit. Photographs of the 22 victims chosen by their families were shown on the screen: 28-year-old John Atkinson, a support worker for people with autism, was sticking his tongue out at the camera; Polish couple Angelika and Marcin Klis, 39 and 42, were photographed around the corner in Exchange Square, hours before they went to pick their daughters up from the concert. Inseparable teenage sweethearts Liam Curry, 19, and Chloe Rutherford, 17, were shown together by the Tyne Bridge near their native South Shields.
Automated Optical Inspection Method for Light-Emitting Diode Defect Detection Using Unsupervised Generative Adversarial Neural Network

Many automated optical inspection (AOI) companies use supervised object detection networks to inspect items, a technique which expends tremendous time and energy on marking defective samples. Therefore, we propose an AOI system which uses an unsupervised learning network as the base algorithm to simultaneously generate anomaly alerts and reduce labeling costs. This AOI system works by deploying the GANomaly neural network together with a supervised network in the manufacturing system. To improve the ability to distinguish anomalous items from normal items in industry and to enhance the overall performance of the manufacturing process, the system uses the structural similarity index (SSIM) as part of the loss function as well as in the scoring parameters. The proposed system thus addresses the requirements of the smart factories of the future (Industry 4.0).

Introduction
After data input and data preprocessing, an ordinary AOI system will generally use a basic algorithm or a supervised model as the inspection method. These methods require customization or huge amounts of training data and are only suitable for processing a predefined type of data. Unfortunately, with such a wide variety of products and production rates in a general factory setting, it is difficult to obtain sufficient anomaly data to train supervised models. This in turn makes it excessively laborious and time-consuming to customize each algorithm or to train each set of weights of a supervised model for a specific anomaly type. Unsupervised generative neural networks have developed rapidly in recent years. Among them, anomaly detection models offer inspection abilities for processing various kinds of data.
Since unsupervised generative neural networks only require a small amount of normal data for training, it is more suitable to deploy unsupervised rather than supervised networks in factories for future AOI applications. In 2015, a variational autoencoder-based anomaly detection method, a reconstruction-based model, achieved image generation by adding the Gaussian distribution of the original data as an output restriction on the latent vector of the autoencoder. A Generative Adversarial Network (GAN) used for anomaly detection consists of a generative network and a discriminator. The generative network learns to create fake images which are similar to known sample images. The discriminator learns to distinguish real images from these synthetic fake images, and identifies abnormal pixels. Therefore, GANs can recognize unknown defects and new types of samples via unsupervised learning. GAN-based anomaly detection models are divided into three types: AnoGAN, Efficient-GAN, and GANomaly. In 2017, Thomas Schlegl, Philipp Seeböck, Sebastian M. Waldstein, Ursula Schmidt-Erfurth, and Georg Langs published a GAN-based unsupervised anomaly detection model named AnoGAN. It is a reconstruction-based model with a score function constructed using a mapping from the image space to the latent space, and it was applied to retinal optical coherence tomography (OCT) image detection. Efficient GAN-based anomaly detection (EGBAD) models, proposed in 2018, learn encoders which map inputs to latent representations during training, so EGBAD models do not have to recover latent representations at test time; they achieve better performance on image and network-intrusion datasets than other published methods and significantly shorten the test time. With the development of industrial intelligence, manufacturing technology has evolved toward high-variety, low-volume production.
Highly adaptable deep learning algorithms have gradually been adopted in AOI systems. The attributes of industrial scene data are quite different from those of data in public databases: most data in industrial scenes are unblemished and look alike, and defect samples are increasingly scarce. This lack of defect data makes it hard to detect objects precisely in object detection or to create labels correctly for segmentation. To resolve this problem, we adopt the GANomaly technique, which finds abnormal pixel characteristics by learning from normal samples only. Furthermore, we use modified loss functions to improve the performance of the whole GANomaly model.

Experiment and Architecture of the System
In this section, we introduce our AOI system, covering the dataset, the algorithm, the scoring method, and the settings.

Introduction of the AOI System
In this paper, we propose a human-machine interaction AOI system which can respond rapidly to different data inputs. Before applying an algorithm or a supervised defect detection model in the assembly line, we deploy a GANomaly anomaly detection model, as seen in Figure 1. Since the GANomaly model is highly effective at identifying anomalies across a variety of different samples, and only requires a small amount of normal training data, it is fast in processing data and easily achieves a fairly good detection effect. We employ GANomaly to distinguish newly encountered defects from existing data: the model can quickly notify the developer that a new abnormal data analysis is needed when a new abnormal data distribution occurs. After confirming that a new type of abnormal data has been identified, the developer can use the pre-labeled method proposed in this paper for GANomaly to generate labeled data automatically. These labeled data are then provided to the supervised neural network for training and to adjust the weights.
Therefore, our AOI system not only improves the overall yield rate significantly by reducing the impact caused by new incoming defects, but also cuts down the labor and time costs for labeling new anomaly data.

Dataset
Our dataset is a collection of packaged LED surface images, where the LED size is 1.6 mm × 1.6 mm, and the region of interest is 2.3 mm × 2.3 mm. There are 5 different packaging and wiring methods, as well as three categories of defects: plate rift, unclean, and glue overflow defects. These defects are individually distributed on the surfaces of different types of LEDs. Moreover, not all types of LEDs exhibit all kinds of defects. As shown in Figure 2, plate rift defects do not occur in Type 3, Type 4, and Type 5, while unclean defects do not occur in Type 4 and Type 5.

Coaxial lights were used to evenly illuminate the packaged LEDs, and a CMOS industrial camera was used to acquire images with a resolution of 1280 × 1024 pixels. To distinguish the insides from the outsides of the LEDs, two image preprocessing steps must be completed before training. First, the image is gray-scaled, the LED profile is obtained with a specific threshold, and the full-page LED surface image map is produced by rotating and cropping the image based on the LED profile. Second, color normalization is performed using gamma correction on the inside and the outside of the LED surface separately. Without color normalization, it is hard to distinguish the insides from the outsides of the LEDs, and the system wastes time detecting defects on irrelevant parts. After color normalization, we obtain an image map, as shown in Figure 2, which helps feature extraction in the neural network.
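The gamma-correction normalization step described above can be sketched in a few lines of numpy. This is an illustrative sketch only: the function names, the gamma values, and the binary `inside_mask` selecting the interior of the LED are assumptions, not the paper's actual implementation.

```python
import numpy as np

def gamma_correct(gray, gamma=0.8):
    """Normalize the brightness of a uint8 grayscale image via gamma correction.

    out = 255 * (in / 255) ** gamma; gamma < 1 brightens dark regions,
    gamma > 1 darkens bright ones. The value 0.8 is an illustrative choice.
    """
    norm = gray.astype(np.float32) / 255.0
    return np.clip(np.rint(255.0 * norm ** gamma), 0, 255).astype(np.uint8)

def normalize_led(gray, inside_mask, gamma_in=0.8, gamma_out=1.2):
    # Apply separate gamma corrections to the inside and outside of the LED
    # surface, selected by a hypothetical binary mask (True = inside).
    return np.where(inside_mask,
                    gamma_correct(gray, gamma_in),
                    gamma_correct(gray, gamma_out))
```

Applying distinct gamma values per region mirrors the paper's point that the inside and outside of the LED surface are normalized separately before training.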
The Contextual Loss Function and the Score Function
The backbone of our algorithm, as previously discussed, is GANomaly, an unsupervised generative adversarial anomaly detection model based on an autoencoder embedded in an adversarial network. Using a conditional generative adversarial network, the algorithm converts high-dimensional training images into latent-space representations and uses the encoding/decoding model to reconstruct images which contain the features of the learned data. The reconstructed images are then graded by a scoring function, and abnormal data are identified by delineating the normal zones. To guide GANomaly toward reconstructing data as normal, three loss functions standardize its training: the contextual loss, the encoder loss, and the adversarial loss. As shown in Figure 3 (drawn based on ), z is the latent space, and the contextual loss function compares the differences between the fake image generated by the autoencoder and the original image using the L1 function (Option 1 in Figure 3). The encoder loss function compares the differences between two latent representations: one generated by the encoder during auto-encoding, and the other generated by the same encoder with the fake image as input. The adversarial loss function compares the intermediate feature representations that the discriminator produces for the real image and for the fake image, driving the min-max game between generator and discriminator by penalizing the distance between these two representations. With the three loss functions mentioned above, the model can learn the features of normal data by training with only a few normal images.
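The three losses can be sketched in numpy as follows. This is a hedged illustration, not the paper's code: the feature-matching form of the adversarial term follows the original GANomaly formulation (discriminator features rather than raw outputs), and the weights `w_con`, `w_enc`, `w_adv` are placeholders for the values tuned in the parameter-optimization step.

```python
import numpy as np

def contextual_loss(x, x_fake):
    # L1 distance between the real image and its reconstruction.
    return np.mean(np.abs(x - x_fake))

def encoder_loss(z, z_fake):
    # L2 distance between the latent code of the real image and the
    # latent code re-encoded from the fake image.
    return np.mean((z - z_fake) ** 2)

def adversarial_loss(f_real, f_fake):
    # Feature-matching form: L2 distance between the discriminator's
    # intermediate representations of the real and the fake image.
    return np.mean((f_real - f_fake) ** 2)

def total_loss(x, x_fake, z, z_fake, f_real, f_fake,
               w_con=50.0, w_enc=1.0, w_adv=1.0):
    # Weighted sum; the weights are the quantities swept in the
    # "Parameter Optimizations" section (illustrative defaults here).
    return (w_con * contextual_loss(x, x_fake)
            + w_enc * encoder_loss(z, z_fake)
            + w_adv * adversarial_loss(f_real, f_fake))
```

A perfect reconstruction drives all three terms to zero, which is exactly what training on normal-only data encourages; defective inputs cannot be reconstructed well and so score high on the contextual and encoder terms.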
When a model generates a fake image, a score function is defined to compare the differences between the fake and the real images:

A(x_i) = λ R(x_i) + (1 − λ) L(x_i)

where x_i represents the ith input, A(x_i) is the anomaly score, R(x_i) is the L1 score produced from the comparison between the fake and the real images, L(x_i) is the L2 score based on the latent representations of the real and fake images, and λ is the weight between the contextual and encoder scores. The obtained score is then normalized, and the normalized result is the final score we are interested in:

A'(x_i) = (A(x_i) − min(A)) / (max(A) − min(A))

where A is the anomaly score vector and A'(x_i) is the normalized score of the ith sample.

After that, we can use all the obtained images to calculate the area under the curve (AUC) value, which represents the overall performance of the model. The AUC value is the most important index of an anomaly detection model, where the underlying curve is the Receiver Operating Characteristic (ROC) curve. The ROC curve is a graphical representation of how well a classification model performs: it is plotted on the TPR-FPR (True Positive Rate vs. False Positive Rate) plane and consists of points obtained by sweeping a threshold over the sample scores. The AUC represents the ability to detect anomalies: AUC = 0.5 implies no anomaly detection ability, and AUC = 1 implies perfect anomaly detection. To apply an AOI system in industry, the required AUC is at least 0.85 or even 0.9.

There are two comparison functions in a GANomaly model: one is the contextual loss function used in training, and the other is the score function used to compare real sample images with the generated fake normal sample images. For convenience, the models are named with the format GANomaly-<contextual loss>-<score function>. Our experiments contain three architectures. The first is GANomaly-L1-L1, which provides the outputs of the original GANomaly model on the LED data set. The second is GANomaly-SSIM-L1, which employs the mSSIM loss function instead of the L1 loss function as the contextual loss function in GANomaly and tests the model with the L1 score function to improve performance. The third is GANomaly-SSIM-SSIM, which additionally replaces the L1 score with the SSIM score in the score function. It is worth noting that the GANomaly architectures of GANomaly-SSIM-L1 and GANomaly-SSIM-SSIM are the same, so the same parameters will be used in the parameter optimization process mentioned in the next section.
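The scoring and evaluation pipeline (weighted anomaly score, min-max normalization, AUC) can be sketched as follows. The function names and the default λ are illustrative; the AUC here uses the rank-based formulation, which is mathematically equivalent to the area under the ROC curve.

```python
import numpy as np

def anomaly_score(R, L, lam=0.9):
    # A(x_i) = lam * R(x_i) + (1 - lam) * L(x_i); lam is the contextual/encoder
    # weight swept over [0, 1] in steps of 0.1 during optimization.
    return lam * np.asarray(R) + (1.0 - lam) * np.asarray(L)

def normalize_scores(A):
    # Min-max normalization of the anomaly-score vector to [0, 1].
    A = np.asarray(A, dtype=float)
    return (A - A.min()) / (A.max() - A.min())

def auc(scores, labels):
    # Rank-based AUC: the probability that a randomly chosen abnormal
    # sample scores higher than a randomly chosen normal one (ties count 0.5).
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))
```

With this formulation, a model whose abnormal samples all outrank its normal samples reaches AUC = 1, and a model that scores them indistinguishably sits near AUC = 0.5, matching the interpretation given above.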
As shown in Figure 4, two different comparison methods, the L1 and mSSIM scores, are used to compare the unknown sample images with the generated fake normal sample images.

This study proposes the use of a pre-labeled method to extract the profile of the abnormal data, which indicates the defect positions on LEDs detected by GANomaly, and converts the detection results into labeled data. The unsupervised feature of GANomaly is then employed to automatically generate labeled data, which also has a positive effect on the supervised models in AOI systems. This experiment compares the effectiveness of the system with and without the pre-labeled method, based on the quality of the labels and the time it took to complete the labeling.

Parameter Optimizations
The three algorithm architectures mentioned in the previous section go through a series of parameter optimizations for our data set before the performance is tested. The adjustments for GANomaly are described below. The weights of the three loss functions are tested separately at intervals of 10, over the range from 1 to 100. Each of the functions therefore has 11 candidate values, giving a total of 11^3 = 1331 combinations under the interactive test. Since it is inefficient to evaluate these one by one, we use a 3-stage random sampling approximation method to obtain the best solution. The parameter adjustments for the score function are described below.
Optimizing the scoring weight: to find the best ratio in the score function between the contextual and encoder terms, we set the interval to 0.1 over the range from 0 to 1. For architectures 2 and 3, the SSIM score in the score function is optimized with respect to the kernel size, with the interval set to 1 over the range from 8 to 14. After the parameter optimizations, the optimized parameters of GANomaly-L1-L1, GANomaly-SSIM-L1, and GANomaly-SSIM-SSIM are listed in Table 1. After sufficient convergence, which is discussed in the next section, the system uses the optimized parameters and then selects the best result from the outputs.

Hardware and Software Configurations
We used an Intel® Core™ i7-5820K CPU and an Nvidia GTX 1080 Titan GPU to train the model and to test the performance of our algorithms, and used PyTorch to implement the data pre-processing, GANomaly, SSIM, and the pre-labeled method. The training parameters are listed in Table 2. Since the amount of data from the factory is limited, we use 500 normal images to train the model, 100 training images per LED type. The number of testing (validation) images is 600, comprising 300 normal and 300 abnormal samples: 60 normal images for each LED type and 100 abnormal images for each defect type. Due to the limited amount of data, the number of defect categories under the different LED types does not follow any specific distribution.

Performance of GANomaly-L1-L1
After the parameter optimizations, the adversarial loss function dominates the data generation flow in GANomaly-L1-L1. Unnecessary details may cause the scoring function to over-grade, resulting in excessive overlap between the score distributions. Therefore, after the image is generated, the image resolution is resized to 64 × 64 pixels to reduce the number of unnecessary details.
Under the framework of this algorithm, the best solution obtained by our calculation is anomaly detection with AUC = 0.8395, as shown in Figure 5a. As seen in Figure 5b, the overlap of the normal and abnormal score distributions is quite large. The reasons are described below. First, the ability of the model to regenerate the original data is limited: the generated image cannot sufficiently retain the normal features of the original real image, and the fake image restoration is excessively uniform, so the generated images are not good enough to identify abnormalities in the subsequent scoring step. Second, even assuming the first problem does not exist, the scoring function is too coarse to distinguish images, which is the main problem of the L1 score function. The data distribution jitters in Figure 5a,b are caused by the limited amount of data; this does not affect the overall performance.

Performance of GANomaly-SSIM-L1
Under GANomaly-SSIM-L1, mSSIM is used as the contextual loss function of the model. To sharpen the requirements for image similarity through SSIM, which is based on differences in image brightness, contrast, and structure, we resized the image resolution to 128 × 128 pixels and used the mSSIM loss function to compare images after the parameter optimizations. After sufficient convergence, we obtained a better detection ability of AUC = 0.8801, as shown in Figure 6a. As seen in Figure 6b, the normal and anomalous score distributions are narrower than those of GANomaly-L1-L1, and the overlapping scores are lower. From these results, the impact of Problem 1, mentioned in Section 3.1, is reduced when the score function remains unchanged but operates under a different model. The narrower score distribution is due to the better preservation of the individual normal features of the real image by the model, so that the scores concentrate toward the lower end during subsequent re-scoring. Compared with GANomaly-L1-L1, it is easier to identify abnormal blocks and fewer misjudgment events are generated. Therefore, the overall performance is improved.

Performance of GANomaly-SSIM-SSIM
Improvements were made in GANomaly-SSIM-SSIM by using the comparison ability of SSIM in the scoring function. This was achieved by retaining the overall optimized parameters and only replacing the L1 score in the scoring function with the mSSIM score. We obtained a better solution, AUC = 0.9524, as shown in Figure 7a. It can be seen from Figure 7b that, compared with GANomaly-SSIM-L1, the overall score distribution becomes wider, and the overlapped part drops significantly. This is because SSIM compares all three factors, so the score judgment is more finely distinguished. The score distribution widens toward the high scores, making normal and abnormal data more clearly separated, and the recognition ability is enhanced.
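The SSIM score used above compares luminance, contrast, and structure. A minimal single-window (global) SSIM can be sketched as follows; note that the mSSIM actually used in the paper averages SSIM over local windows, so this global version only illustrates the formula.

```python
import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two grayscale images.

    SSIM = ((2*mu_x*mu_y + C1) * (2*cov_xy + C2)) /
           ((mu_x^2 + mu_y^2 + C1) * (sigma_x^2 + sigma_y^2 + C2))

    The paper's mSSIM averages this quantity over sliding windows
    (and the anomaly score is typically 1 - mSSIM, so higher = more
    anomalous); this global version is for illustration only.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give SSIM = 1; structural differences (such as a defect region) pull the covariance term down and lower the score, which is why SSIM separates normal from defective samples more sharply than a plain L1 difference.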
Comparisons of Algorithms
As analyzed above, the restoration ability of GANomaly-L1-L1 is limited, and the restored L1 fake images are too uniform to allow accurate judgments during the comparison. In contrast, GANomaly-SSIM-L1 and GANomaly-SSIM-SSIM not only restore the normal images but also retain the individual normal image features, which improves the accuracy. To illustrate the performance directly with generated images, Figure 8 lists the real images, L1 fake images, L1 comparison results, SSIM fake images, and SSIM comparison results for the three defects.

After training the GANomaly model, pre-processing the sample image acquired from the CMOS camera, and inputting the image into the generative network, the generative network generates fake normal sample images depending on the loss function (L1 or SSIM) used. The real sample images of the three defect types (plate rift, unclean, and glue overflow) are used to generate fake normal images, labeled "Fake Normal Image (L1)" and "Fake Normal Image (SSIM)" in Figure 8, corresponding to the contextual loss function (L1 or SSIM) adopted by the generative network. The discriminators then use the score functions to compare the fake normal sample images with the real sample images and identify the abnormal pixels, producing the comparison results labeled Inference (L1) and Inference (SSIM) in Figure 8.

For the plate rift and unclean defects, it can be observed that there is less noise in Inference (SSIM) than in Inference (L1). Although the L1 comparison function can catch the defect positions for plate rift and unclean defects, it also grabs many non-defect positions. Worse, the pixel-value differences in sample images with glue overflow defects are very small, which makes them difficult for L1 to detect. The SSIM comparison function, however, grasps the defect locations well for plate rift, unclean, and glue overflow defects. That is, SSIM can effectively judge abnormal information during scoring.

In addition to the ability to identify anomalies, the calculation speed is an important indicator of an AOI system. General industrial inspections require an image identification speed faster than 0.03 s per frame. All of the algorithms mentioned above satisfy this requirement and, as shown in Table 3, the best performance is 0.0076 s per frame.

Performance of Pre-Labeled Method
We propose the pre-labeled method to convert the comparison images generated by GANomaly into labeled files suitable for the supervised model.
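A conversion of this kind might be sketched as follows. This is a hypothetical simplification: the function name and threshold are assumptions, and a real pre-labeling pipeline would separate connected components into multiple boxes, whereas this sketch emits a single bounding box for the thresholded anomalous region.

```python
import numpy as np

def prelabel_bbox(diff_map, thresh=0.5):
    """Convert a GANomaly difference map into a coarse bounding-box label.

    diff_map: per-pixel |real - fake| (or 1 - SSIM) map, normalized to [0, 1].
    Returns (x_min, y_min, x_max, y_max) of the thresholded anomalous region,
    or None if no pixel exceeds the threshold.
    """
    ys, xs = np.nonzero(diff_map > thresh)
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

Boxes produced this way can be written into whatever annotation format the downstream supervised detector expects, leaving the human labeler only quick verification and minor repairs.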
In this experiment, we provided the same unlabeled images and pre-labeled files to seven testers and recorded the time they took to label the images after receiving labeling training. On average, a tester can mark 1.06 unlabeled images per minute but 2.33 pre-labeled images per minute. As shown in Table 4, the overall labeling speed can be increased by 21 times via the pre-labeled method.

Table 4. The labeling speed under labor labeling and GANomaly-SSIM-SSIM.

Method               Number of Labels per Minute   Enhancement
Labor                3                             -
GANomaly-SSIM-SSIM   63                            21

According to Figure 9, different markers produce different marking quality for the same image. For instance, Tester 1 marked quickly and quite completely, Tester 2 marked an overly large area, Tester 3 marked an incorrect defect location, and Tester 4 applied the wrong mark. This unstable marking quality directly affects the inspection ability of the supervised model. With the pre-labeled method, the labeling personnel only have to make a quick judgment and minor repairs based on the labeling data generated by GANomaly, and then feed the result to model training, thereby reducing labeling labor costs. Moreover, the labeled data generated by GANomaly are consistent, so unstable marking quality does not occur, improving the ability of the subsequent supervised model to identify defects.

Crystals 2021, 11, x FOR PEER REVIEW 11 of 12

Figure 9. The labeling condition for four different testers; the red arrows indicate poor manual labeling.

Conclusions

In this paper, we proposed an AOI defect detection system with a supervised model assisted by an unsupervised model. The model adopts GANomaly and imports structural similarity indicators to strengthen the original model. The detection rate is increased by 13.4%, and the AUC reaches 0.9524. We also proposed the pre-labeled method for GANomaly to automatically generate labeled data files to assist in supervised model training.
Compared with direct labeling, the generation speed is increased by 21 times. Moreover, the stability and the completeness of the data are guaranteed, which enhances the training effectiveness of the supervised model. To achieve the purpose of smart factories, we integrated the algorithms and the pre-labeled method into a complete AOI system, which can handle the detection of LED surface defects and can also provide abnormal alarm prompts and rapid responses.
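The pre-labeled conversion proposed above — turning a GANomaly comparison image into label records that a human only needs to confirm or touch up — can be sketched as below. This is an illustrative sketch under our own assumptions: the JSON record layout and the function name are hypothetical, not the paper's actual file format.

```python
# Hedged sketch of the pre-labeled method: group anomalous pixels in a binary
# comparison mask into connected regions and emit one bounding-box record each.
import json
import numpy as np
from scipy.ndimage import label, find_objects

def mask_to_labels(mask, class_name="defect"):
    """Convert a binary anomaly mask into bounding-box label records."""
    regions, n = label(mask > 0)          # connected-component labeling
    boxes = []
    for sl in find_objects(regions):      # one (row-slice, col-slice) per region
        ys, xs = sl
        boxes.append({
            "class": class_name,
            "xmin": int(xs.start), "ymin": int(ys.start),
            "xmax": int(xs.stop), "ymax": int(ys.stop),
        })
    return boxes

mask = np.zeros((64, 64), dtype=np.uint8)
mask[5:9, 10:20] = 1    # one simulated defect region
mask[40:44, 40:44] = 1  # another, disjoint region
print(json.dumps(mask_to_labels(mask)))
```

A human reviewer then only verifies or nudges these machine-generated boxes, which is consistent with the reported speed-up over drawing every box from scratch.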
import React from 'react'
import { View, Text, TouchableOpacity, Dimensions, TextStyle } from 'react-native'

import { SlideModal, SlideModalProps } from '../SlideModal'
import styleUtils from '../../common/styles/utils'
import bottomModalStyles from './styles'

const screen = Dimensions.get('window')

export interface BottomModalProps extends SlideModalProps {
  testID?: string
  titleContainer?: any
  title?: string
  titleStyle?: TextStyle
  rightLabel?: any
  rightLabelText?: string
  rightLabelTextStyle?: TextStyle
  rightCallback?: Function
  leftLabel?: any
  leftLabelText?: string
  leftLabelTextStyle?: TextStyle
  leftCallback?: Function
}

export class BottomModal extends SlideModal<BottomModalProps> {
  static defaultProps = {
    ...SlideModal.defaultProps,
    cancelable: true,
    screenWidth: screen.width,
    titleContainer: null,
    title: '标题',
    titleStyle: {},
    rightLabel: null,
    rightLabelText: '完成',
    rightLabelTextStyle: {},
    rightCallback: null,
    leftLabel: null,
    leftLabelText: '取消',
    leftLabelTextStyle: {},
    leftCallback: null
  }

  constructor (props) {
    super(props)
  }

  getHeader () {
    const styles = bottomModalStyles
    const {
      titleContainer, title, titleStyle,
      rightLabel, rightLabelText, rightLabelTextStyle, rightCallback,
      leftLabel, leftLabelText, leftLabelTextStyle, leftCallback
    } = this.props

    // Right-hand header action: a custom element if provided, else a text button.
    let rightEl = null
    if (rightLabel || rightLabelText) {
      rightEl = (
        <TouchableOpacity
          testID='right'
          activeOpacity={1}
          onPress={() => {
            this.close().then(() => {
              rightCallback && rightCallback()
            })
          }}>
          {
            React.isValidElement(rightLabel) ? rightLabel : (
              <Text
                style={[
                  styles.operator,
                  styleUtils.textRight,
                  styleUtils.textPrimaryDark,
                  styleUtils.textBold,
                  rightLabelTextStyle
                ]}
                numberOfLines={1}>
                {rightLabelText}
              </Text>
            )
          }
        </TouchableOpacity>
      )
    }

    // Left-hand header action, mirroring the right one.
    let leftEl = null
    if (leftLabel || leftLabelText) {
      leftEl = (
        <TouchableOpacity
          testID='left'
          activeOpacity={1}
          onPress={() => {
            this.close().then(() => {
              leftCallback && leftCallback()
            })
          }}>
          {
            React.isValidElement(leftLabel) ? leftLabel : (
              <Text
                style={[styles.operator, styleUtils.textLeft, leftLabelTextStyle]}
                numberOfLines={1}>
                {leftLabelText}
              </Text>
            )
          }
        </TouchableOpacity>
      )
    }

    let titleContainerEl = null
    if (titleContainer || title) {
      titleContainerEl = React.isValidElement(titleContainer) ? titleContainer : (
        <Text style={[styles.title, titleStyle]}>{title}</Text>
      )
    }

    return (
      <View style={styles.header}>
        <View style={styles.colSide}>{leftEl}</View>
        <View style={styles.colMiddle}>{titleContainerEl}</View>
        <View style={styles.colSide}>{rightEl}</View>
      </View>
    )
  }

  getBody () {
    return this.props.children
  }

  getContent () {
    const styles = bottomModalStyles
    const inner = (
      <View
        testID={this.props.testID}
        style={[styles.container, { width: this.props.screenWidth }, this.props.style]}>
        {this.getHeader()}
        {/* onPress misbehaves when the TouchableOpacity has no explicit height */}
        {this.getBody()}
      </View>
    )
    return SlideModal.prototype.getContent.call(this, inner)
  }
}
The present invention is directed toward a device for protecting pilings and similar structures mounted in water from damage due to ice uplift. As is well known in the art, severe damage can be caused to docks and similar structures as a result of the uplifting of the support piles in the winter months. This results from the ice which forms near the upper surface of the water, surrounds the pilings, and becomes tightly affixed thereto. As the tide rises, the ice mass that has formed around the piling rises, carrying the piling upwardly therewith. It is not uncommon for pilings to be moved several feet, and a piling can actually be pulled free of the hole in which it had been supported. This movement and dislodgment of the pilings has the even more significant consequence of seriously damaging the docks and other structures being supported, causing expensive and time-consuming annual repairs. Attempts have been made in the past to prevent such uplifting of pilings. For example, U.S. Pat. Nos. 4,114,388 and 4,252,471 show frusto-conically shaped guard members which are intended to be securely fastened to the pilings at the water level. Should ice form around the guard member and then be moved upwardly by the rising tide, the ice should simply slide upwardly across the guard member in view of the reduced diameter thereof. Such devices should, theoretically, provide some protection. However, since they must be permanently secured to the pilings, they are relatively expensive to install. Furthermore, in the event that the relatively thin outer wall of the guard member ever becomes deformed, or if a hole should ever be formed therein, ice could securely fasten itself thereto and fail to slide off of the guard member as intended. Such deformation of the guard member is not unlikely, since it can easily be struck by boats or the like. Other devices for preventing uplifting of pilings have also been proposed. For example, the devices shown in U.S. Pat. Nos.
3,170,299 and 3,370,432 are sleeves which have an inner diameter slightly greater than the outer diameter of the pilings and which are intended to surround the same. The sleeves are buoyant and move up and down with the water line as the tide rises and falls. Each of these two patents includes means for preventing ice from forming in the annular gap between the sleeve and the piling. There is, however, no assurance that such a result can be obtained. Furthermore, the device shown in U.S. Pat. No. 3,370,432 must be checked frequently to ensure that the proper amount of antifreeze is present in the annular gap. Another prior art device which is structurally somewhat similar to the device shown in U.S. Pat. No. 3,370,432 is illustrated in U.S. Pat. No. 934,176. This patent is also directed toward a sleeve which surrounds a piling and includes an annular gap. The device is not specifically intended to prevent movement due to ice uplifting but, rather, holds a preservative solution in the annular gap so that, as the tide rises and falls, the solution is constantly applied to the piling.