Rabbis Reject Attempts at Dividing Communities Around the Georgia Runoff Election

The following letter, which was circulated by the Jewish Democratic Council of America (JDCA), asks Rabbis, Cantors, and other Jewish faith leaders to reject attempts to divide the Black and Jewish communities, and to divide the Jewish community, by spreading falsehoods about candidate for U.S. Senate, Rev. Raphael Warnock.
This is not an endorsement letter but rather a rejection of the attempts at division. Those who sign represent themselves and not the organizations they are affiliated with.
If you are a Rabbi or Jewish faith leader and would like to add your name to this letter, please visit https://forms.gle/e1KoR4aBariMFgiu6.
As rabbis and religious leaders, we recognize and respect the devotion to his Christian faith that underlies Rev. Raphael Warnock’s strong support for Israel and his partnership with the Jewish people. As the Senior Pastor of Ebenezer Baptist Church, Dr. Martin Luther King, Jr.’s church, Rev. Warnock is deeply involved in the interfaith community and works with rabbis and religious leaders of all faiths in recognition of our shared values. We stand with Rev. Warnock and reject the baseless claims and attacks targeting him amid this Senate election. We are also deeply concerned about the possibility that racial bias is driving these false accusations, which we reject in the strongest possible terms.
Rev. Warnock has stated that the histories of oppression shared by the Black and Jewish communities, which includes Black Jews living at the intersection of both communities, guide him in his embrace of Dr. Martin Luther King, Jr.’s declaration that “Israel’s right to exist as a state in security is incontestable.”
Rev. Warnock recognizes that being a true friend also means being a truth-teller who does not shy away from hard conversations, and he has made no secret of his strong reservations and concerns over Israeli settlement expansion, which may impede prospects for a two-state solution to the Israeli-Palestinian conflict.
At the same time, Rev. Warnock is strongly opposed to the global boycott, divestment and sanctions (BDS) movement “and its anti-Semitic underpinnings, including its supporters’ refusal to acknowledge Israel’s right to exist.” He supports the unprecedented, ten-year, $38 billion aid package finalized with Israel by the Obama-Biden administration. He supports a two-state solution rooted in the protection and security of Israel’s borders, its identity as a Jewish, democratic state, and a commitment to improving the quality of life for both the Jewish and Palestinian people.
Rev. Warnock has made clear that claims that he believes Israel is an apartheid state are “patently false.” In his own words, “I do not believe that.” Rev. Warnock has made his views on the Israeli-Palestinian conflict clear. He has stated, “while I am deeply concerned about injustices impacting the Palestinian people…I know that no singular country’s actions are perfect, including Israel’s — and yet the country is a true friend and our strong, democratic ally in the region.”
We sign this letter not as an endorsement of a particular candidate, but as a rejection of false and divisive slander entering our community.
All Americans, especially Jewish Americans, should reject attempts to divide the Black and Jewish communities, and to divide the Jewish community, by spreading falsehoods about Rev. Warnock. We abhor the politics of dividing traditionally marginalized communities in order to consolidate political power. The friendship between Jewish clergy and Black Christian clergy has stood strong for justice for generations, and cannot be so easily broken by bad faith actors who care little for our communities.
We stand with Rev. Raphael Warnock against these attacks, and implore other Jewish leaders to stand with us.
Sincerely,
Rabbi Ruth Abusch-Magder, Atlanta, GA
Rabbi Peter Berg, Atlanta, GA
Rabbi Joab Eichenberg-Eilon, Marietta, GA
Rabbi Brian Glusman, Atlanta, GA
Rabbi Ari Kaiman, Atlanta, GA
Rabbi Lauren Henderson, Atlanta, GA
Rabbi Steven Lebow, Rome, GA
Rabbi Joshua Lesser, Atlanta, GA
Rabbi Ellen Nemhauser, Atlanta, GA
Rabbi Laurence Rosenthal, Atlanta, GA
Rabbi Neil Sandler, Atlanta, GA
Rabba Melissa Scholten-Gutierrez, Atlanta, GA
Rabbi Larry Sernovitz, Marietta, GA
Rabbi Scott Sperling, Atlanta, GA
Darshanit Miriam Udel, Atlanta, GA
Rabbi Harvey J Winokur, Roswell, GA
Rabbi Michael S. Beals, Wilmington, DE
Rabbi Sharon Brous, Los Angeles, CA
Rabbi Denise L. Eger, West Hollywood, CA
Rabbi Amy Eilberg, Los Altos, CA
Rabbi Lauren Holtzblatt, Washington, DC
Rabbi Jill Jacobs, New York, NY
Rabbi Rachel Kahn-Troster, Teaneck, NJ
Rabbi Sharon Kleinbaum, New York, NY
Rabbi Michael Adam Latz, Minneapolis, MN
Rabbi Sandra Lawson, Burlington, NC
Rabbi Ellen Lippmann, Brooklyn, NY
Rabbi Sally J. Priesand, Tinton Falls, NJ
Rabbi Dr. Bradley Shavit Artson, Los Angeles, CA
Rabbi Joel Simonds, Los Angeles, CA
Rabbi Toba Spitzer, Newton, MA
Rabbi Michael Strassfeld, New York, NY
Rabbi David A Teutsch, Philadelphia, PA
Rabbi Rachel Timoner, Brooklyn, NY
Rabbi Aaron Alexander, Washington, DC
Rabbi Adina Allen, Berkeley, CA
Rabbi Nelly Altenburger, Middletown, CT
Rabbi Renni S. Altman, Poughkeepsie, NY
Rabbi Guy Austrian, New York, NY
Rabbi Justus Baird, Princeton, NJ
Rabbi Henry Bamberger, Utica, NY
Rabbi Rachel Barenblat, North Adams, MA
Rabbi Ruven Barkan, Tucson, AZ
Cantor Chanin Becker, Tarrytown, NY
Rabbi Phyllis Berman, Philadelphia, PA
Rabbi Edward Bernstein, FL
Rabbi Linda Bertenthal, Davenport, IA
Rabbi Ana Bonnheim, Charlotte, NC
Rabbi Erin Boxt, Knoxville, TN
Rabbi Julie Bressler, Needham, MA
Rabbi Joshua Brown, Akron, OH
Rabbi Geoffrey Claussen, Elon, NC
Rabbi Aryeh Cohen, Los Angeles, CA
Rabbi Ayelet S. Cohen, New York, NY
Rabbi Elli Cohn, Zurich, Switzerland
Rabbi Kimberly Cohen, Dallas, TX
Rabbi Howard A. Cohen, Marshfield, MA
Rabbi Deborah Anne Cohen, Philadelphia, PA
Rabbi Tamara Cohen, Philadelphia, PA
Cantor Richard Cohn, New York, NY
Rabbi Faith Joy Dantowitz, Morgan Hill, CA
Rabbi Beth D. Davidson, Manchester, NH
Cantor Susan Lisa Deutsch, CA
Rabbi Doris Dyen, Pittsburgh, PA
Rabbi David Eber, Evanston, IL
Rabbi Ruth Ehrenstein Smith, Baltimore, MD
Rabbi Rebecca Einstein Schorr, Emmaus, PA
Rabbi Barat Ellman, Brooklyn, NY
Rabbi Susan Falk, Brooklyn, NY
Rabbi Charles Feinberg, Washington, DC
Rabbi Nora Feinstein, Princeton, NJ
Rabbi Jen Feldman, Chapel Hill, NC
Rabbi Michael Fessler, Poughkeepsie, NY
Rabbi Zev-Hayyim Feyer, Pomona, CA
Rabbi Avi Fine, Seattle, WA
Rabbi Betsy Forester, Madison, WI
Rabbi Jeff Foust, Waltham, MA
Rabbi Laurie Franklin, Missoula, MT
Rabbi Sarah Freidson, Mahopac, NY
Rabbi Michael Friedland, South Bend, IN
Cantor Kath Fromson, Akron, OH
Rabbi Gordon Fuller, Columbia, MD
Rabbi Laura Geller, Beverly Hills, CA
Rabbi Jonah Geffen, New York, NY
Rabbi Kim S. Geringer, Hoboken, NJ
Cantor Vicky Glikin, Dallas, TX
Rabbi Edwin Goldberg, Spring, TX
Rabbi Rachel Goldenberg, Jackson Heights, NY
Rabbi Samuel Gordon, Wilmette, IL
Rabbi Arthur Green, Newton, MA
Rabbi David Greenstein, Montclair, NJ
Rabbi Reuven Greenvald, New York, NY
Rabbi Suzanne Griffel, Michigan City, IN and Chicago, IL
Rabbi Susan Grossman, Columbia, MD
Rabbi David Gruber, Frisco, TX
Rabbi Moshe Raphael Halfon, Long Beach, CA
Rabbi Arielle Hanien, Los Angeles, CA
Rabbi Lizzi Heydemann, Chicago, IL
Rabbi Mark Hurvitz, New York, NY
Rabbi Yitzhak Husbands-Hankin, Eugene, OR
Rabbi David Ingber, New York, NY
Ginger Jacobs, Los Angeles, CA
Rabbi Margie Jacobs, Berkeley, CA
Rabbi Steven B. Jacobs, Sacramento, CA
Rabbi Daria Jacobs-Velde, Laurel, MD
Rabbi David Jaffe, Sharon, MA
Rabbi Ellen Jaffe-Gill, Virginia Beach, VA
Rabbi Marisa James, New York, NY
Rabbi Janine Jankovitz, Philadelphia, PA
Rabbi Dr. Rebecca Joseph, Berkeley, CA
Rabbi Debra Newman Kamin, Northfield, IL
Rabbi Nancy Kasten, Dallas, TX
Rabbi Rory Katz, Baltimore, MD
Rabbi Emma Kippley-Ogman, St. Paul, MN
Rabbi Michael Klayman, Great Neck, NY
Rabbi Myriam Klotz, New York, NY
Rabbi Michael Knopf, Richmond, VA
Rabbi Jacqueline Koch Ellenson, New York, NY
Rabbi Cherie Koller-Fox, Newton, MA
Rabbi Debra Kolodny, Portland, OR
Rabbi Michael L. Kramer, Hockessin, DE
Rabbi Alex Kress, Los Angeles, CA
Rabbi Gabriel Kretzmer-Seed, Bronx, NY
Rabbi Karen Kriger Bogard, St. Louis, MO
Rabbi Gail Labovitz, Los Angeles, CA
Rabbi Howard Laibson, Seal Beach, CA
Cantor Allen Leider, Falls Church, VA
Rabbi Darby Leigh, Concord, MA
Rabbi Benjamin Levy, Monroe Twp, NJ
Cantor Abbe Lyons, Ithaca, NY
Rabbi Marc Margolius, New York, NY
Rabbi Jeffrey Marker, Brooklyn, NY
Rabbi Dr. Susan Marks, Sarasota, FL
Rabbi Lev Meirowitz Nelson, Brooklyn, NY
Rabbi Jessica Kate Meyer, San Francisco, CA
Rabbi Abby Michaleski, Vineland, NJ
Rabbi Heather Miller, Aliso Viejo, CA
Rabbi Diana Miller, Lambertville, NJ
Rabbi Joel Mosbacher, New York, NY
Rabbi Linda Motzkin, Saratoga Springs, NY
Cantor Sarah Myerson, Brooklyn, NY
Rabbi Elana Nemitoff-Bresler, Westport, CT
Rabbi David Novak, Palm Desert, CA
Rabbi Ephraim Pelcovits, Los Angeles, CA
Rabbi Scott Perlo, Brooklyn, NY
Rabbi Hara Person, Brooklyn, NY
Rabbi Daniel Polish, Poughkeepsie, NY
Rabbi Amber Powers, Abington, PA
Rabbi Jonah Gabriel Rank, New Hope, PA
Rabbi Ruti Regan, Somerville, MA
Rabbi Debra Robbins, Dallas, TX
Rabbi Francine Roston, Whitefish, MT
Rabbi Michael Rothbaum, Acton, MA
Rabbi Ruhi Sophia Rubenstein, Eugene, OR
Rabbi Jonathan L Rubenstein, Saratoga Springs, NY
Rabbi Regina Sandler-Phillips, New York
Rabbi Peter Schaktman, Utica, NY
Rabbi Jeremy Schwartz, Willimantic, CT
Rabbi David Seidenberg, Northampton, MA
Rabbi Dr. Elyse Seidner-Joseph, West Chester, PA
Rabbi Gerry Serotta, Chevy Chase, MD
Rabbi Drorah Setel, Rochester, NY
Rabbi Randy Sheinberg, New Hyde Park, NY
Rabbi Jeremy Sher, Oakland, CA
Rabbi Becky Ann Silverstein, Boston, MA
Rabbi Mia Simring, New York City, NY
Rabbi Jonathan Slater, New York, NY
Rabbi Eric Solomon, Raleigh, NC
Rabbi Abby Stein, New York, NY
Rabbi Oren Steinitz, Elmira, NY
Rabbi Eleanor Steinman, Los Angeles, CA
Rabbi Alana Suskin, Rockville, MD
Rabbi Michael Swarttz, Westborough, MA
Rabbi Andrew H. Terkel, Dallas, TX
Rabbi Josh Warshawsky, Columbus, OH
Rabbi Pamela Wax, Bronx, NY
Rabbi Elyse Wechterman, Abington, PA
Rabbi Marley Weiner, Carlisle, PA
Rabbi Daniel A Weiner, Seattle, WA
Rabbi Rachel Weiss, Evanston, IL
Rabbi Alex Weissman, Providence, RI
Rabbi Nancy H. Wiener, New York, NY
Rabbi Joseph Wolf, Portland, OR
Rabbi Greg Wolfe, Davis, CA
Rabbi Michal Woll, Milwaukee, WI
Rabbi Lina Zerbarini, Huntington Station, NY
Rabbi Simcha Zevit, Narberth, PA
Rabbi Jill Zimmerman, Laguna Woods, CA
Rabbi Benjamin Zober, Reno, NV

Source: https://medium.com/@jewishdems/hundreds-of-rabbis-reject-attempts-to-use-the-senate-runoff-in-ga-to-divide-communities-f93c5a46dc6b (Jewish Democratic Council of America, 2020-12-04)
10 Great Films Overlooked on Netflix!! (translated from the Turkish “Netflix’de Gözden Kaçan 10 Müthiş Film!!”)

Founded in 2020, Kisafilms.com is the leading voice of online movie curating to discover and promote the new wave of filmmakers creating innovative stories for …

Source: https://medium.com/@kisafilmsonline/netflixde-g%C3%B6zden-ka%C3%A7an-10-m%C3%BCthi%C5%9F-film-256b95d9cf3c (2020-12-18)
Venom: Let There Be Carnage — Official Trailer (2021)

Plot synopsis:
Reportedly, Sony has put a Venom sequel on its schedule. On a $100 million budget, Venom has so far taken in $677 million at the global box office and may ultimately climb to $800 million, which would make it more profitable than all of Sony’s previous Spider-Man films. Although the film’s press reviews were middling, audiences seem to have embraced it. Beyond introducing Woody Harrelson’s Carnage, the Venom sequel will have to work out how to fold in Tom Holland’s Spider-Man. In the current version, Peter Parker is still a high school student — a product of the Sony–MCU collaboration. Does he share a universe and timeline with Tom Hardy’s Eddie?
In 1889, on November 1, Anna Therese Johanne Hoch, who would later be known as Hannah Hoch, was born in Gotha, Germany. The eldest of five children, she was brought up in the comfortable and quiet environment of the small town. Her parents, a supervisor in an insurance company and an amateur painter, sent her to a girls’ high school. However, at the age of 15 Hannah had to quit studying for six long years to take care of her newborn sister. Only in 1912 did she continue her education, studying glass design with Harold Bengen at the School of Applied Arts. When World War I broke out, Hannah returned to her native town to work for the Red Cross.
In the first years after the war the young woman resumed her studies, turning to the graphic arts. 1915 was highlighted by an acquaintance with the Austrian artist Raoul Hausmann, which grew into a long-lasting romantic relationship and involvement in the Berlin Dada movement. For ten years, until 1926, Hoch worked for Berlin’s major publisher of newspapers and magazines. Her task was to design embroidery, knitting and crochet patterns for its booklets.
While on vacation with her beloved in 1918, Hannah discovered ‘the principle of photomontage in cut-and-paste images that soldiers sent to their families’ (National Gallery of Art). This find greatly affected her artistic production, and she created mass-media photographs combining elements of photomontage and handwork patterns, thus joining traditional and modern culture. Her principal preoccupation was to represent the ‘new woman’ of the Weimar Republic, with her new social role and newly given freedoms.
Hoch was the only woman in Berlin Dada, and she took part in all kinds of events and exhibitions showcasing her socially critical works of art. She participated in exhibitions until 1931, but with the rise of the National Socialist regime she was forbidden to present her creative work. Until her last breath in 1978, Hannah Hoch lived and worked on the outskirts of Berlin-Heiligensee.
The piece of art to be analyzed in this research is ‘The Beautiful Girl’, made in 1919–1920. It combines elements of technology and the female figure. In the middle of the picture one can clearly see a woman dressed in a modern bathing suit, with a light bulb on her head that seemingly serves as a sun umbrella. In the background a large advertisement with a woman’s hair-do on top is presented. Maud Lavin describes the strange figure as ‘part human, part machine, part commodity’ (Lavin). The woman is surrounded by images of industrialization: tires, gears, signals and BMW logos. A woman’s profile with cat eyes, untrusting and skeptical, in the upper right corner is eye-catching as well. This unusually large eye symbolizes the DADA movement — a monocle, which is present in almost every Hoch work. The colour scheme does not offer a rich palette of tints, consisting mostly of black, white, orange and red. The photo is surrounded by the BMW circles, which add spots of blue.
An apt description of the piece is given in the book ‘Cut with the Kitchen Knife’, which states that it is ‘a portrait of a modern woman defined by signs of femininity, technology, media and advertising’ (Lavin). In other words, Hannah Hoch focused on the woman of the new age, free and keeping up with a fast-moving world. The artist promoted feminist ideas, and from her point of view urbanization and modern technologies were meant to give women hope of gaining equality between the genders. With this photomontage she commented on how the woman was expected to combine the role of wife and mother with the role of worker in the industrialized world. The light bulb in place of a face suggests that women were perceived as unthinking machines which do not question their position and can be turned on or off at any time at a man’s will. At the same time they were to remain attractive to satisfy men’s needs. The watch is read as a representation of how quickly women had to adapt to change.
In a nutshell, Hoch concentrated on two opposite visions of the modern woman: the one from the television screens — smoking, working, wearing sexy clothes, voting — and the real one, who remained a housewife.
‘The Beautiful Girl’ is an example of art within the DADA movement, an artistic and literary current that began in 1916 as a reaction to World War I and spread throughout North America and Europe. Every single convention was challenged and bourgeois society was scandalized. The Dadaists argued that the over-valuing of conformity, classism and nationalism in modern cultures had led to the horrors of World War I. In other words, they rejected logic and reason and turned to irrationality, chaos and nonsense. The first DADA International Fair was organized in Berlin in 1920, displaying a shocking discontent with militarism and German nationalism (Dada. A five minute history).
Hannah Hoch was introduced to the world of DADA by Raoul Hausmann, who together with Kurt Schwitters, Piet Mondrian and Hans Richter was one of the movement’s influential artists. Hoch became the only German woman associated with DADA. She managed to follow the general Dadaist aesthetic, but at the same time she surely and steadily incorporated a feminist philosophy. Her aim was to advance female equality within the canvas of DADA’s other conceptions.
Though Hannah Hoch was officially a member of the movement, she never became a full one, because the men saw her only as ‘a charming and gifted amateur artist’ (Lavin). Hans Richter, an unofficial spokesperson, shared his opinion of the only woman in their community in the following words: ‘the girl who produced sandwiches, beer and coffee on a limited budget’ — forgetting that she was among the few members with a stable income.
In spite of this gender oppression, Hannah’s desire to convey her idea never weakened. Difficulties only strengthened her and made her an outstanding artist. A note with these return words was found among her possessions: ‘None of these men were satisfied with just an ordinary woman. But neither were they inclined to abandon the (conventional) male/masculine morality toward the woman. Enlightened by Freud, in protest against the older generation. . . they all desired this ‘New Woman’ and her groundbreaking will to freedom. But — they more or less brutally rejected the notion that they, too, had to adopt new attitudes. . . This led to these truly Strindbergian dramas that typified the private lives of these men’ (Maloney).
Hoch’s technique was characterized by fusing male and female parts of the body, or bodies of females from different epochs — a ‘traditional’ woman and a ‘modern’ one, liberated and free of sexual stereotypes. What’s more, when combining male and female parts, the female ones were always more distinctive and vibrant, while the male ones receded into the background. Hannah created unique works of art, experimenting with painting, collage, graphics and photography. Her women were assembled from bits and pieces of dolls and mannequins of brides or children, as these members of society were not considered valuable.
Today Hannah Hoch is most associated with her famous photomontage ‘Cut with the Kitchen Knife DADA through the Last Weimar Beer-Belly Cultural Epoch of Germany’ (1919–1920). This piece of art highlights the social confusion of the Weimar Republic era, its oppositionists and government radicals (Grabner). In spite of never being truly accepted by the rest of her society, this woman with a quiet voice managed to speak her feminist message out loud.
Looking at Hannah Hoch’s art for the first time, I found it confusing, because I couldn’t comprehend the meaning. It was quite obvious that every single piece and structure is a symbol of the era, its ideas and beliefs. However, after having learned about her life and constant endeavors to declare women’s rights, little by little I started to realize what’s what. …

Source: https://medium.com/@jjeje5526/%E5%AF%BC%E6%BC%94-%E5%AE%89%E8%BF%AA-%E7%91%9F%E9%87%91%E6%96%AF-%E7%BC%96%E5%89%A7-%E5%87%AF%E8%8E%89-%E9%A9%AC%E5%A1%9E%E5%B0%94-%E4%B8%BB%E6%BC%94-%E6%B1%A4%E5%A7%86-%E5%93%88%E8%BF%AA-%E7%B1%B3%E6%AD%87%E5%B0%94-%E5%A8%81%E5%BB%89%E5%A7%86%E6%96%AF-%E4%BC%8D%E8%BF%AA-%E5%93%88%E9%87%8C%E6%A3%AE-%E5%A8%9C%E5%A5%A5%E7%B1%B3-%E5%93%88%E9%87%8C%E6%96%AF-%E6%96%AF%E8%92%82%E8%8A%AC-%E6%A0%BC%E6%8B%89%E6%B1%89%E5%A7%86-%E6%9B%B4%E5%A4%9A-%E7%B1%BB%E5%9E%8B-%E5%8A%A8%E4%BD%9C-%E7%A7%91%E5%B9%BB-%E6%83%8A%E6%82%9A-ee5593180883 (Jaja Jeje, 2021-03-22)
2020 Conference: Unlocking Financial Product Innovation Through Customer-centered Design

The International Financial Inclusion Conference is a platform for ecosystem stakeholders to congregate and share knowledge, ideas and unique experiences, as well as discuss the latest research frontiers and findings.
Past conferences have focused on diverse themes including:
market-enabling policies to facilitate last-mile innovations that enhance DFS delivery,
financial inclusion as a catalyst for sustainable economic growth and national development,
market-creating innovations as a solution to non-consumption, among several others.
This year’s conference addresses financial product innovation and customer centricity.
2020 is a milestone year, and the COVID-19 pandemic has introduced new unforeseen factors, distorting Nigeria’s financial inclusion journey and further exposing the fragility of the financial services system. The lockdown and its aftermath emphasised the need to innovate and re-imagine financial products and services to meet the evolving needs and expectations of customers, improve product-market fit and move the financial inclusion needle.
This can only be possible via a shift towards customer-centricity.
Customer-centricity places customers (and not the product) at the center of interest. This unlocks diverse possibilities for providers to be innovative and create better products and services that win in the market.
The 2020 International Financial Inclusion Conference will explore customer-centered design principles and how they unlock diverse possibilities for providers to be innovative and create better products and services that can win in the unbanked and underserved market.

Source: https://medium.com/@sustainabledfs/unlocking-financial-product-innovation-through-customer-centered-design-9e7a8098f062 (Sustainable and Inclusive DFS, 2020-12-08)
A Tale of Two Gorgeknots, Part II

“Orri-har ham-de sloog.” The primary said, then sucked in a work glove which floated four feet away. The secondary followed immediately by slurpin’ in a dust ball enshrouded empty food package that hovered under the kick plate of the forward console and bobbed back into its tub.
“Sure you do, dung breath,” Nixon said. “Youse probably want we should be your next pizza.”
“What?” I asked.
“It said it had a solution.”
“Well, let’s hear it.”
Whereupon the primary went into a semi-lengthy discourse. Nixon nodded now and then and sneezed occasionally. I went over to the corner and retched. The secondary, after a polite interval, slurped up the emesis from afar.
Eventually, the primary signaled the end of its monologue by emitting a juicy frapping sound from somewhere on its being. Nixon turned to me to explain.
“Youse ain’t gonna believe dis, but deez mugs say they can survive in deep space indefinitely. They goes into dis sorta, uh, whaddaya callit, a sleepin’ thing, it’s a, oh, you know…”
“Dormancy? Hibernation?”
“Yeah, dat’s it! Well, anyways, they goes doormat and can stay dat way forever.”
“But what about us?”
“Ve die in zee tub, boobala.”
“Nom-bi, Nom-beee! Tool-har imba duub rhesh!” thundered the primary.
“Meegle meegle meeeeegle!” appended the secondary with falsetto emphasis.
“What’re they so excited about, Nixon? Is there more?”
“Nyet,” she said.
“Yoosh-ba! Yoosh!” said the primary.
“Nix, we ain’t got time for this. Gimme the rest!”
“Oh, man, it ain’t no thing. The puke ball it say we can slide into its cheeks. You be in the primary, me in the secondary. It say it got a pouch or sumpin in the side its mouth. He say it be used for storin’ extra food. I say thas booshee, man. It’s a muhfuggin trick, man. All them smelly things want t’do is eat our ass, man. Ain’t no kinda storage pouch, sheee.”
“You mean to tell me they want us to climb inside… geck… inside their mouths and get punched out the lock chute?”
“Damn straight, Jack. All them muhfugs think about be eatin’. They just want a good last supper before they go into the big black. Y’know what m’sayin? We’re talkin’ a bite fo’ da night, home. A chomp fo’ da chumps.”
“Nom-bi, nom-bi! Tharsh-har mnob frrrue!”
“Noodle, meek, meek!”
“Now hold on a minute. The whole reason these things sail with us is because they provide everything we need to survive in space.” I tried to recall my schoolboy science.
“Seems they manufacture things like oxygen, nitrogen, hydrox, lithuanians, magnesium manipulate, potassium propagate, sulfuric crowbait, monosodium glutius, dioxy-rhino-newclamore acid, carbon tetrazzini, amaretto, and many more essentials for life.”
“They stink!”
“Well, of course they do. But you can’t let a little thing like stink get in the way. We’re talking survival here, Nix. The bottom line is, if we don’t trust ’em and go with ’em as they say, we’re quasar meat. What’ve we got to lose?”
A two-toned bleat issued from the console and the message changed and rolled across the screen:
DESTRUCT SEQuE..SEQUIN..SEqUEL..SEquENc DoInG NIcELY
please ReMAIN sEEtED while THE FLIght PRETENDERS Prepare for an EaRLY ARRIVAL…DEPARTURE…ARRIVAL…
“Yoosh,” the primary added with finality.
Nixon looked at me with an expression of pure loathing and revulsion.
“Nix, we’re going in. There ain’t no way out. At least this way there’s maybe a chance.” I grabbed her by the scruff of her short hairy neck and tossed her into the waiting maw of the secondary Gorgeknot. I stood there looking at Nixon, wondering if maybe being vaporized wasn’t better than being turned into Gorgeknot excrement after all, and was about to say so when I saw Nixon’s eyes grow wide with terror.
“Great horny toads!” I heard her say just before the Gorgeknot’s slimy lips snapped shut around her.
I didn’t have time to reflect. I moved the Gorgeknot tubs quickly through the inner doors of the lock chute, set the timer on the outer doors to a minute, and stepped into the waiting jaws of the primary.
The last thing I saw was the changing message on the red view screen:
50 sEconDs to DESTRuct
PlEase fASTEN Your SeETBells…bARBELTS…SeatBars
and DistINGUISH ALL SMOgIng…smOKING…MAAMMALS
5 SECONDS TO DETROIT
rETURN YOUR seetbarks to an…
…5 mINUtes to DENEB…
FORTHRIGHT pOSItion…
PLEASE standby FOR TOTAL ANNIHILATION
THANK YOU…
HaVE aN ice TRAy

Source: https://medium.com/illumination/a-tale-of-two-gorgeknots-part-ii-4cc6a6a7be3d (Phil Truman, 2020-12-24)
[The American Conservative] The Problem with "European Values"
Ploys like conditioning Covid-19 relief on “rule of law” in Hungary and Poland won’t make the EU stronger. They may end up breaking the bloc apart.

By Jorge González-Gallarza · Dec 5, 2020
A timeless axiom of statesmanship is catching up with the European Union (EU), one the bloc hoped it would never confront. If advancing lofty ideals requires foul plots against your own, then the real aim may be different — ulterior, baser, more unsavory — than initially advertised.
Continue reading the entire piece at The American Conservative (TAC) here.

Source: https://medium.com/@jorgeggallarza/the-american-conservative-the-problem-with-european-values-cb50af9c8dfb (Jorge González-Gallarza, 2020-12-30)
Lessons learned when upgrading to Terraform 0.12

Terraform 0.12 has recently been released and is a major update providing a significant number of improvements and features.
While you may want to rush and use the new features, I’ll walk you through some of the lessons we learnt while upgrading the terraform-oci-oke project.
This is by no means an exhaustive list. I’ll be updating this as we understand and use more and more new 0.12 features.
Ready? Let’s go!
Read the blog series
As a preview to the new features and improvement in Terraform 0.12, Hashicorp published a series of blog posts with code examples. Read and read them again until lambs become lions. I cannot emphasize this enough. You should also read the upgrade to 0.12 guide.
Fix breaking changes first
The most frequent code breaking changes you will likely encounter are probably the attributes vs blocks code syntax. In Terraform 0.11, they could be used interchangeably without any problem e.g. with oci_core_security_list, defining the egress security rules block as below with the ‘=’ was acceptable:
egress_security_rules = [
{
protocol = "${local.all_protocols}"
destination = "${local.anywhere}"
},
]
With 0.12, the syntax is more strict. Attributes are specified with an ‘=’ and blocks without the ‘=’. As a result, blocks such as the above need to be redefined as follows:
egress_security_rules {
protocol = local.all_protocols
destination = local.anywhere
}
Another common code breaking change you will likely encounter is when you define resources with count. The new rules as per the documentation work as follows:
If count is not set, using the resource_type.name will return a single object and its attributes can be accessed as resource_type.name.id e.g. oci_core_vcn.vcn.id
If count is set when defining the resource, then a list is returned and needs to be accessed using the list syntax e.g. the service_gateway is created conditionally and count is used as a condition to determine whether to create it or not:
resource "oci_core_service_gateway" "service_gateway" {
...
count = var.create_service_gateway == true ? 1 : 0
}
Since count is used, in order to obtain the service gateway id, you need to use the following syntax:
network_entity_id = oci_core_service_gateway.service_gateway[0].id
Since we are creating only 1 service gateway in this case, we know the list will have only 1 element and we can set the list index to 0. If you are creating more than 1 instance of the resource, then you need to use [count.index].
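To make the list-indexing rule concrete, here is a minimal sketch — the resource, counts, and variable names are illustrative only, not taken from the project:

resource "oci_core_subnet" "worker" {
  # count turns this resource into a list of 3 subnets
  count          = 3
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.vcn.id
  cidr_block     = cidrsubnet(var.vcn_cidr, 8, count.index)
  display_name   = "worker-${count.index}"
}

Each instance is then addressed by its index, e.g. oci_core_subnet.worker[0].id, or all at once with the 0.12 splat syntax oci_core_subnet.worker[*].id.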
Start using first-class expressions
Terraform 0.12 introduced support for first-class expressions. In 0.11, every expression had to be part of the interpolation string e.g.
resource "oci_core_vcn" "vcn" {
cidr_block = "${var.vcn_cidr}"
compartment_id = "${var.compartment_ocid}"
display_name = "${var.label_prefix}-${var.vcn_name}"
dns_label = "${var.vcn_dns_name}"
}
In 0.12, the syntax is much simpler when using variables and functions:
resource "oci_core_vcn" "vcn" {
cidr_block = var.vcn_cidr
compartment_id = var.compartment_ocid
display_name = "${var.label_prefix}-${var.vcn_name}"
dns_label = var.vcn_dns_name
}
The impact of using first-class expressions can also be seen below. With 0.11, the security rules for the worker nodes would be specified like this:
resource "oci_core_security_list" "workers_seclist" {
...
egress_security_rules = [
{ # intra-vcn
protocol = "${local.all_protocols}"
destination = "${var.vcn_cidr}"
stateless = true },
{ # outbound
protocol = "${local.all_protocols}"
destination = "${local.anywhere}"
stateless = false
},
]
ingress_security_rules = [
{ # intra-vcn
protocol = "all"
source = "${var.vcn_cidr}"
stateless = true
},
{ # icmp
protocol = "${local.icmp_protocol}"
source = "${local.anywhere}"
stateless = false
},
....
]
}
With 0.12, this is how the security rules initially looked. It's 0.11-compatible and consists of multiple blocks:
resource "oci_core_security_list" "workers_seclist" {
...
egress_security_rules {
# intra-vcn
protocol = local.all_protocols
destination = var.vcn_cidr
stateless = true
}
egress_security_rules {
# outbound
protocol = local.all_protocols
destination = local.anywhere
stateless = false
}
ingress_security_rules {
# intra-vcn
protocol = "all"
source = var.vcn_cidr
stateless = true
}
ingress_security_rules {
# icmp
protocol = local.icmp_protocol
source = local.anywhere
stateless = false
}
...
}
We’ll revisit this again when talking about dynamic blocks.
Keep interpolation syntax for string concatenation
For string concatenation, you still need — or at least it's easier — to use the interpolation syntax, e.g.
resource "oci_core_vcn" "vcn" {
cidr_block = var.vcn_cidr
compartment_id = var.compartment_ocid
display_name = "${var.label_prefix}-${var.vcn_name}"
dns_label = var.vcn_dns_name
}
Likewise, if you need to combine a named variable and string as an argument to a function:
data "template_file" "bastion_template" {
template = file("${path.module}/scripts/bastion.template.sh")
...
}
Use improved conditionals
In Terraform 0.11, there were 2 major limitations when using conditionals. The first one is that both value expressions were evaluated even though only 1 is returned. As an example, this impacted how the code for defining a cluster had to be written for single-Availability Domain (AD) and multiple-AD regions.
To illustrate: in single-AD regions, the API expects 1 subnet id for the Load Balancer subnets, and 2 in multiple-AD regions. As we look the number of ADs up at runtime and use it as the only condition for passing either 1 or 2 subnet ids, there was no way to know this a priori and pass 1 or 2 subnet ids dynamically — unless we made a map of the number of ADs for each region and used the map to determine whether to pass 1 or 2 parameters. But that means writing extra and unnecessary code.
As a result, when defining the OKE Cluster, we had to manually toggle the code when choosing to deploy between single-AD and multiple-AD regions:
# Toggle between the 2 according to whether your region has 1 or 3 availability domains.
# Verify here how many domains your region has:
# https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm

# single ad regions
#service_lb_subnet_ids = ["${var.cluster_subnets["lb_ad1"]}"]

# multi ad regions
service_lb_subnet_ids = ["${var.cluster_subnets["lb_ad1"]}", "${var.cluster_subnets["lb_ad2"]}"]
In 0.12, this restriction is lifted and the code is simplified: we can look up the number of Availability Domains at runtime from the selected region and use it to decide whether to pass 1 or 2 parameters.
service_lb_subnet_ids = length(var.ad_names) == 1 ? [var.cluster_subnets[element(var.preferred_lb_ads,0)]] : [var.cluster_subnets[element(var.preferred_lb_ads,0)], var.cluster_subnets[element(var.preferred_lb_ads,1)]]
Now, there’s no need to manually toggle the code.
The 2nd major limitation with conditionals in 0.11 is that maps and lists could not be used as returned values. This has also been lifted and we have used this in a minimal way. See the conditional block below.
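As a minimal sketch of what this lifts (the variable names here are hypothetical, not from the project), both branches of a conditional can now be whole lists or maps:

locals {
  # In 0.11 this failed because conditionals could only return primitives;
  # in 0.12 each branch can be a full list (or map) of a consistent type.
  lb_subnet_ids = length(var.ad_names) == 1 ? [var.lb_subnet_1_id] : [var.lb_subnet_1_id, var.lb_subnet_2_id]
}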
Introduce dynamic blocks to reduce code repetition
As part of using first-class expressions, I mentioned the security list initially looked like this:
resource "oci_core_security_list" "workers_seclist" {
...
ingress_security_rules {
# rule 5
protocol = local.tcp_protocol
source = "130.35.0.0/16"
stateless = false
tcp_options {
max = local.ssh_port
min = local.ssh_port
}
}
ingress_security_rules {
# rule 6
protocol = local.tcp_protocol
source = "134.70.0.0/17"
stateless = false
tcp_options {
max = local.ssh_port
min = local.ssh_port
}
}
ingress_security_rules {
# rule 7
protocol = local.tcp_protocol
source = "138.1.0.0/17"
stateless = false
tcp_options {
max = local.ssh_port
min = local.ssh_port
}
}
...
}
I’m not showing the full gory list here, but there are 6 such repetitive ingress security rules in the 0.11 version, covering rules 5–11 of the OKE documentation for 6 different CIDR blocks. With dynamic blocks, these rules can be collapsed into a single dynamic block instead of a separate ingress rules block for each CIDR.
First, we define the source cidr blocks in a local list:
oke_cidr_blocks = ["130.35.0.0/16", "134.70.0.0/17", "138.1.0.0/16", "140.91.0.0/17", "147.154.0.0/16", "192.29.0.0/16"]
Then, we use a dynamic block and an iterator to create the ingress rules repeatedly:
resource "oci_core_security_list" "workers_seclist" {
...
dynamic "ingress_security_rules" {
# rules 5-11
iterator = cidr_iterator
for_each = local.oke_cidr_blocks
content {
protocol = local.tcp_protocol
source = cidr_iterator.value
stateless = false
tcp_options {
max = local.ssh_port
min = local.ssh_port
}
}
...
}
Dynamic blocks behave as if a separate block is written for each element in a list or map. In the code above, we iterate over a list of CIDR blocks.
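The same pattern works when iterating over a map: if no explicit iterator is given, the block label itself becomes the iterator name, exposing .key and .value. A hedged sketch — the port map is made up, not from the project, and the dynamic block would sit inside the security list resource:

locals {
  service_ports = {
    ssh   = 22
    https = 443
  }
}

# inside the oci_core_security_list resource:
dynamic "ingress_security_rules" {
  for_each = local.service_ports
  content {
    protocol  = local.tcp_protocol
    source    = local.anywhere
    stateless = false
    tcp_options {
      # ingress_security_rules.value is the port number of each map entry
      max = ingress_security_rules.value
      min = ingress_security_rules.value
    }
  }
}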
You can also combine a dynamic block with a conditional. In this case, we only need a list with one item. The item itself doesn't matter; we only need the for_each to iterate once if the condition is true. If the condition is false, the list is empty and the egress_security_rules block below is not created. Effectively, this becomes a conditional block.
dynamic "egress_security_rules" {
# for oracle services
for_each = var.is_service_gateway_enabled == true ? list(1) : []
content {
destination = lookup(data.oci_core_services.all_oci_services[0].services[0], "cidr_block")
destination_type = "SERVICE_CIDR_BLOCK"
protocol = local.all_protocols
stateless = false
}
}
With Terraform 0.11, it was not possible to do this conditionally. This impacted us particularly on the service gateway for which we allow its conditional creation. Thus, we either had to manually edit the egress rules configuration in the OCI Console or force the user to use the service gateway. Likewise, we had to manually update the routing rules for either the Internet Gateway or NAT Gateway depending on whether the worker nodes are created in public or private mode and add the routing rules for the service gateway in either the Internet Gateway or the NAT route table.
The first option is done after Terraform has run and leaves the Terraform state divergent from what’s actually configured in the cloud. As for the 2nd option, well, I don’t like forcing people down a particular path of using something they won’t need.
Using the conditional dynamic blocks allows us to do this depending on whether the Service Gateway was created e.g. we add this to the NAT Route table:
dynamic "route_rules" {
for_each = var.create_service_gateway == true ? list(1) : []
content {
destination = lookup(data.oci_core_services.all_oci_services[0].services[0], "cidr_block")
destination_type = "SERVICE_CIDR_BLOCK"
network_entity_id = oci_core_service_gateway.service_gateway[0].id
}
}
Similarly, we add these routing rules to the Internet Gateway routing table if the NAT gateway was not created. Notice the additional condition:
dynamic "route_rules" {
for_each = (var.create_service_gateway == true && var.create_nat_gateway == false)? list(1) : []
content {
destination = lookup(data.oci_core_services.all_oci_services[0].services[0], "cidr_block")
destination_type = "SERVICE_CIDR_BLOCK"
network_entity_id = oci_core_service_gateway.service_gateway[0].id
}
}
Specifically, for the okenetwork module's security lists, using dynamic blocks with an iterator helped us reduce the security list code by roughly 25% while still allowing us to add previously missing functionality.
As your code, especially your security rules become cleaner, take the opportunity to review and perhaps redefine them.
Note: Someone has shared the conditional block as a solution on a github issue which I adapted slightly. Unfortunately, I forgot to bookmark the issue and cannot find it anymore. If you’re reading this good sir/lady, send me a note and claim thy prize.
Upgrade self-contained modules
The terraform-oci-oke project has 4 high-level modules:
auth
base which itself has 2 sub-modules (bastion and vcn)
okenetwork
oke
The dependency graph of the modules is shown below (optional modules and dependencies with dashes):
Terraform module dependency
As you can see from the above, there are a few dependencies between the modules. The oke module depends on the okenetwork module which itself depends on the base module and its sub-modules.
As part of the process of upgrading the project to 0.12, we started with upgrading the base module and then moved up the chain with okenetwork module and then finally the oke module itself.
Remaining new features to explore
We have yet to fully explore the following new features (briefly sketched after this list):
For and for_each (although we have already dabbled with for_each a little bit)
Use of list and maps as return type values from conditionals
Rich types
The new template syntax
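For a flavor of what those look like, here are two hedged, illustrative snippets — not yet code in the project:

# A for expression deriving a list of subnet CIDRs:
locals {
  subnet_cidrs = [for i in range(3) : cidrsubnet(var.vcn_cidr, 8, i)]
}

# A rich object type constraining an input variable:
variable "bastion" {
  type = object({
    shape   = string
    enabled = bool
  })
}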
And as I mentioned above, as we move along with the upgrade, I’ll be updating this post.
Summary
Use the following process/principles to upgrade your project to Terraform 0.12:
Fix code-breaking changes first
Experiment with ‘easier’ new features first such as removing unnecessary interpolation where possible
Remember that using interpolation for string concatenation is easier and still valid code. Don’t get rid of them just for the sake of it
Gradually introduce more 0.12 improvement and features
Use dynamic blocks to reduce/remove repetitive code
Identify your module dependencies and upgrade them in order
Don’t rush into creating complex object types; a simpler solution may be possible even if it means writing a little bit of repetitive code
As your code gets cleaner, take the opportunity to review your code and improve it
If you’re interested, you can follow our progress on github.
Useful 0.12 code examples: https://github.com/hashicorp/terraform-guides/tree/master/infrastructure-as-code/terraform-0.12-examples
Update: Read part 2.

Source: https://medium.com/oracledevs/lessons-learned-when-upgrading-to-terraform-0-12-6d894d3ab20e (Ali Mukadam, 2019-10-31)
Security Bug Bounty Programme for seed-node staking contract kicks off today; expert inputs needed!
Our non-custodial seed node staking programme has entered the public beta testing phase. At this time, we’re calling all smart contract security experts to aid us in our quest for identifying and destroying bugs in the staking contract. Your contribution will be invaluable as we work together to build a safe and reliable staking experience for users.
We invite you to test out and help secure our seed node staking programme — with emphasis on our staking portal Zillion and its smart contract implementation. This bounty campaign, hosted by Bugcrowd, will accept feedback starting today and continue until after the mainnet staking contract has been launched.
We will be paying out bounties of up to US$6,000 per security bug found and demonstrated. Rewards will correspond to the severity of the bug found, as classified by the Bugcrowd Vulnerability Rating Taxonomy, and will be given out on a first-come-first-served basis (i.e., we will not reward feedback outlining previously discovered issues). Additionally, we will not accept any theoretical submissions or anything that is out of scope of the security bug bounty programme.
To submit a security bug, head to https://bugcrowd.com/zilliqa, sign up for an account, and post your submission. We’ll take it forward from there with the Bugcrowd security engineers. We will triage the bug and issue rewards for eligible submissions.
IMPORTANT: Please read and follow the bounty brief, rules and regulations on Bugcrowd (https://bugcrowd.com/zilliqa) before you start your assessment and submit feedback for this bounty campaign.
Documentation Types
Targets

Source: https://blog.zilliqa.com/security-bug-bounty-programme-for-seed-node-staking-contract-kicks-off-today-expert-inputs-needed-27c81d96934b (Ashley Wang, 2020-09-22)
Tips On Filling Out Your March Madness Bracket For Whose Time Is Up Next

If you want any chance of winning your office pool, heed this handy advice!
Filling out brackets can be daunting. Especially when, in industries varying from entertainment to journalism to government, so many people brought their A game to disgusting, inappropriate behavior in 2017!
Some of these individuals have experienced the spotlight of accusation in prior seasons, but the lack of results have kept on disappointing us year after year. Other entrants are fresh faces appearing in the tournament for the first time.
There are a million different combinations for the 68 individuals in the bracket who have not yet suffered appropriate consequences for their despicable actions. If you want any chance of winning your office pool, here are some tips on picking the Final Four creeps.
Never pick your winners based on history and prestige. Last year’s runaway victor, Harvey Weinstein, isn’t set to make a comeback anytime soon. It would be unimaginable if he won again this year.
A 16th ranked seed will never beat a number one seed in the first round. Sure, the physics TA who would uncomfortably stare at you in class seems like a promising choice, but it will bust your bracket!
If we’ve learned anything, it’s that surprises happen all the time. We were looking forward to placing Seacrest as our big upset to make it to the Elite 8, but then he didn’t even make it into the tournament. He was so close, it’s like WHAT HAPPENED?!
Teams with blue as their color have won the past 14 championships, making that Best Buy employee who followed you around the store for too long a contender to win it all.
Mel Gibson making it far is always an interesting choice. Expect him to either lose to a lower seed or make it to the Final Four.
A number 12 seed will always beat a number five seed in the first round. So make sure to pick that no-name comedian who crudely hits on his female colleagues, as he’ll make it further than you think!
Don’t forget about players like Kobe Bryant and Gary Oldman that made jaw-dropping tournament appearances in the early 2000s and have recently made rousing comebacks. They shouldn’t be counted out just yet.
Two number one seeds have never advanced to the finals, but this year, I can really see top picks Donald Trump and Woody Allen making it to the big dance. Both have been strong contenders in the tournament over the past 25 years, and it would be amazing to see one of them make it through to the end.
In fact, I really wish both of them could “win” this year. We’ve been waiting so long!

Source: https://medium.com/the-establishment/tips-on-filling-out-your-march-madness-bracket-for-whose-time-is-up-next-f59303baee2b (Kristin Nalivaika, 2018-03-19)
Founder’s Lessons: Robert Devlin, CEO of Metalenz

Meet Rob Devlin
Interview edited for clarity and length.
“Focus on the technology, the vision, and the people. The rest will work itself out.”
Robert introduces himself
I’m Rob Devlin, founder and CEO of Metalenz. I did my PhD at Harvard in Applied Physics. I worked in nano-photonics, so making really, really small structures that can manipulate light. Specifically, I worked on metasurfaces with Professor Federico Capasso. Metasurfaces are flat optics that allow you to manipulate light without having any curvature to the device. Normal lenses have curvature that focus light and form an image. These metasurfaces do it with a completely planar form factor. We spun out Metalenz at the end of my PhD.
His background
My grandfather was a radio operator in World War II in the Navy. He didn’t have any kind of formal training. He joined the Navy at 18, fresh out of high school. After World War II, he never went to college, but he was an electrical engineer by training on the job. My grandparents watched me while my parents were working. Growing up, I spent a lot of time in the basement with my grandfather. We would make shortwave radios together.
I remember putting together random, disparate components that didn’t seem like they would generate a voice or allow you to hear anything. But we turn it on, and all of a sudden we hear this completely foreign voice coming from seemingly nowhere. It was a realization that by studying and knowing each little component in the radio and what it did, you could hear things from across the world. You could bounce the radio signals off the ionosphere and hear radio signals from China or Russia.
That was a really cool experience: to see something very technical connect to the human side of things. You hear someone from across the world talking to you. It imprinted that science and engineering could immediately connect to a person on the other end.
You hear someone from across the world talking to you. It imprinted that science and engineering side could immediately connect to a person on the other end.
The other entrepreneurial experience was with my dad. He worked at a company called Flexitallic, which made gaskets. My dad didn’t go to college, but he worked his way up to be a global sales manager. I have a very vivid memory of walking through the factory floor and seeing these machines running, seeing people working on the machines, and knowing that this would be sold to someone across the world. My dad’s role was flying to Germany and Europe and trying to sell this product to people. I started seeing the interconnections: all the thoughts and different people contributing to this broader goal of selling something and making it successful.
Object that’s meaningful to him
I have on my desk the first set of lenses that we made with one of our high volume manufacturing partners. We make our lenses on a glass wafer rather than molding the lenses out of plastic. There’s basically a 12-inch diameter piece of glass and 10,000 of our lenses on that piece of glass.
In a PhD, you’re very focused on your aspect of research. You have some collaboration with other people, but it’s not clear how that collaboration comes together to make something much bigger than your research. But this lens took the work of a ton of different people at Metalenz interacting with a ton of people from other companies.
This is why I especially love being at a startup. Very quickly, the seed of an idea grows way beyond yourself. You get to work with a bunch of amazing people to make something real. In my entire PhD, I made probably 100 different devices. But in one single shot, with the help of all these other people, we made 10,000 of these lenses.
Very quickly, the seed of an idea grows way beyond yourself. You get to work with a bunch of amazing people to make something real.
On choosing people to work with
I look for people who are very open to new ideas and discussion. Their ego doesn’t get in the way. Especially when you’re at a startup, it’s important that you can bring up new ideas without the worry that someone will push down that idea.
I also look for people who aren’t afraid to be wrong. When you’re doing something for the first time, you’re going to get it wrong a whole bunch of times. So it’s important to have a set of people around you who are willing to work in this open environment and not be scared of bringing up new ideas. They don’t want to be constantly the smartest person in the room, but they’re just open to debate and discussion without their ego or feelings getting in the way. They need to be able to step back. They don’t need to be right, they just want to get the problem right. In order to have something grow beyond yourself, that’s really critical.
His journey to Metalenz
My education was in a bit of everything. I did my bachelor’s in electrical engineering and a master’s in material science. Neither were focused on optics or optical materials. The thread that tied everything together was that I was very interested in nanofabrication, the science of making really small things. In my PhD, I decided that I wanted to do something different from what I had done previously. I liked the idea of having a broader base and background.
Flat optics requires you to know the materials and nanofabrication really well. It would also give me the opportunity to learn something new. So it wasn’t until my PhD that I hit on flat optics. It fit so well with my experience, and I thought it was really cool, so I ran with it.
Key applications of Metalenz
Flat optics allows you to take forms of sensing normally thought of as too big, too complicated, or too expensive — mainly focused in scientific or medical labs — and shrink them down to a price point or form factor that can fit into every mobile phone in the world. Metasurfaces allow you to collect information that would normally take an entire table of optical devices from one single device.
Metalens Science Breakthrough Runner-up in 2016
For example, a spectrometer is something you’ll find in a lot of scientific, chemical, and medical labs. It tells you what chemicals are in medicine, a liquid, or an atmosphere. But it takes up a whole table. Metasurfaces allow you to take three or four different spectrometer optical functions and condense them in one single layer and one single device.
You can take forms of sensing which are very expensive and not broadly distributed throughout the world, and make them smaller, simpler, and cheaper. You can now put them into cell phones, so everyone is walking around with devices that were previously confined to a scientific or medical lab. That changes the set of information being accessed by everyone in the world.
You can now put them into cell phones, so everyone is walking around with devices that were previously confined to a scientific or medical lab.
For example, is the medicine you bought still going to work because it’s not expired? Are the chemicals correct? Or you can look at blood assays just with your cell phone. Everyone has a cell phone today, but not everyone has access to a medical lab. That’s what metasurfaces and flat optics are really enabling: a whole new form of sensing that you can distribute to the masses through cell phones.
His vision for Metalenz
Our vision is to give everyone access to entirely new forms of sensing. We want to proliferate forms of sensing that are normally esoteric or scientific, and allow everyone to have them in their pockets. We’re starting out with 3D sensing, which is pretty new to cell phones today. Apple’s Face ID is an instance of taking a new form of sensing and bringing it to small form factors. I see Metalenz putting new forms of information and sensing at everyone’s fingertips.
With the existing manufacturing infrastructure, you can manufacture metasurfaces at scale and it can be very cheap and pervasive. To me, it’s a really powerful concept of making sensing accessible and performing multiple functions in a single layer, especially when it comes to health.
This can also provide a new set of information for scientists to access as data. It gives a whole new view of the world. When cameras were first put in cell phones, you could suddenly see throughout the world and see different cultures and people. It’s the same idea with sensing.
His main challenge as CEO
The main challenge is “crossing the chasm.” You have this new technology, and people and investors have bought into your idea. But it’s a big challenge to go from having the idea to finding the best path to make sure the idea gets adopted. You need to look at specific markets and applications and develop relationships throughout the whole supply chain. You have to find the right path to get the idea launched and then also have the ability to grow to your full vision.
In research and academia, often what you’re trying to do is publish a paper. You have an idea, you do some research and collect results, they prove or disprove your hypothesis, you write a paper, and that’s the end. But trying to get something into an actual product is much more difficult. It requires you to interact with a whole lot more people, and find a set of people with very specific talents and roles. It’s not like a paper, where once you publish it you can move on. You need to keep growing, continuously improving, and realize the full vision.
Best advice he received
The best advice is to focus on the technology, the vision, and the people. The rest will work itself out. There’s a lot of pressure when you’re at a startup. You want to be successful commercially. You want to launch this product and potentially have an IPO or be acquired. Your investors want these things. They’re necessary and important to have long-term success and to get the product on the market. But if you’re focusing on that, it’s really easy to burn yourself out and get overly stressed. You probably won’t be successful if you’re focusing on “the ends.”
The technology, vision, and people are the most important things. If you’re not excited about what you’re doing and the technology, and instead you’re focusing on other things, it’s very easy to get burnt out.
His advice for founders
Don’t be afraid to ask questions. You hear this all the time in school, “there are no stupid questions.” But essentially, when you’re starting a company, it’s a very new thing for you. You often feel a sense that because you’re the founder, you should know everything.
But the reality is that it’s very new for you and there are a lot of people who have done this already and are willing to help. They’re willing to give you guidance and answer your questions. So just ask as many questions as possible. It will help you learn quicker. No one is expecting you to know everything. You should know everything about your technology and your vision, but you don’t have to know everything about starting a company or what it takes to be successful in a startup.
On hobbies and personal time
Sometimes when you’re the founder of a startup, the startup is your entire life. It’s very much all-encompassing. But it’s really important for people at startups to have other hobbies that they can do to recharge.
One of my favorites is going hiking and getting out in nature, especially with family. It’s very easy to get wrapped up and not see the bigger picture, so this helps contextualize everything else. The other thing that I really like to do is cycling. I go for really long bike rides to disconnect for a while.
The startup story gets spun back to people in a way that you only ever see the big successes, the Elon Musks and people like that. And with them, you see just one aspect of the person, which is the technology. But I think it’s really important to have other aspects of your life that help to support you and that are lasting.
3 media recommendations
Book : God Created the Integers by Stephen Hawking . It’s one of my favorite books. It’s a collection of what Hawking sees as the most important mathematical advancements, from Euclid and Archimedes to Alan Turing. It’s really interesting to see how these very specific, pure mathematical pursuits often turned into things that influence everyone’s life. You read some random mathematical theorem and wonder how it ever connects to real life. But it ends up being one of the most important concepts in information theory, and our cell phones wouldn’t work without it. This book takes things that seem very disconnected and shows their impact on the real world .
: . It’s one of my favorite books. It’s a collection of what Hawking sees as the most important mathematical advancements, from Euclid and Archimedes to Alan Turing. It’s really interesting to see how these very specific, pure mathematical pursuits often turned into things that influence everyone’s life. You read some random mathematical theorem and wonder how it ever connects to real life. But it ends up being one of the most important concepts in information theory, and our cell phones wouldn’t work without it. This book . Documentary: The Inventor: Out For Blood In Silicon Valley dir. by Alex Gibney. It’s a very deep look at Theranos. I really liked this documentary, because it helps ground you in doing something real and meaningful and not just getting caught up in the hype cycle. It helped inform me about what not to do. There were a lot of people saying “we have a problem that we need to address,” but their voices weren’t heard or were getting quashed. The founders just moved forward and didn’t listen.
When you’re at a startup, there are going to be a lot of problems that you need to solve. You need to be brutally honest with yourself in order to solve them and have a real impact in the end. This documentary showed me the importance of critically looking at everything to make sure you have an end product with real impact. | https://medium.com/tsingyuan-ventures/founders-lessons-robert-devlin-ceo-of-metalenz-6338f5c3a5d0 | ['Taylor Fang'] | 2021-02-22 16:44:17.215000+00:00 | ['Founders', 'Cameras', 'Startup Lessons', 'VC', 'Startup'] |
A fake empire | A fake empire
How to fake it until you’re found out to be a complete fraud
Photo by Aidan Bartos
One of the biggest podcast hits of the year is The Dropout, a documentary about Elizabeth Holmes’ sham billion-dollar company Theranos, which falsely promised to revolutionize health care.
The reporting talks about how Holmes — a college drop-out like her idol Steve Jobs — affected a fake low-pitched voice to sound authoritative. She also fabricated major components of their business including a partnership with the military to use their product in war zones.
Two of the most popular documentaries of the year — carried separately by both Netflix and Hulu — are about how the organizers of the Fyre Festival duped people into buying tickets to a remote island. They created a fake fantasy on “the boundaries of the impossible” with images of glamorous models and Instagram stars.
And now, the №1 story of the day by far is the “largest college admission scam ever,” which involves scheming celebrities, business leaders and wealthy parents to cheat their kids’ way into elite universities. It has 5 million+ searches on Google and counting.
According to The New York Times story on the admissions scandal:
A teenage girl who did not play soccer magically became a star soccer recruit at Yale. Cost to her parents: $1.2 million. A high school boy eager to enroll at the University of Southern California was falsely deemed to have a learning disability so he could take his standardized test with a complicit proctor who would make sure he got the right score. Cost to his parents: at least $50,000. A student with no experience rowing won a spot on the U.S.C. crew team after a photograph of another person in a boat was submitted as evidence of her prowess. Her parents wired $200,000 into a special account.
Who said cheaters never prosper? Being fake was the secret ingredient to success in each of these instances. And it worked! Until it didn’t.
When they are found out, these stories inspire massive outpourings of schadenfreude. We love seeing frauds publically fail and get exactly what they deserve. All of these storylines involve privileged and entitled people who act like they are better than us.
They faked it until they were found to be phony.
We cheer when the system works — as it should. We take satisfaction in the fake festival crashing down, the fake unicorn company going bankrupt and the indictment of fake students.
But this also provides a glimpse of why gaming the system works.
I wonder if we’d be paying as much attention to this story if celebrities weren’t involved. The top Google searches for this story relate to the famous people — Felicity Huffman, William H. Macy, Olivia Jade, Lori Loughlin— who were only a small part of a much larger scheme involving 30 parents, coaches, test proctors and a college prep business. That shows the power of celebrity.
top google searches related to this scandal relate to celebrity
It’s not just a problem involving steroids in baseball, doping in the Olympics, illicit campaign contributions in elections, or the powerful getting their kids into elite universities. Those are just where recent scandals have been exposed.
This is going to keep happening again and again. And all of us are to blame, to some degree.
So you didn’t conspire to hand over $15,000 to help your daughter cheat on the SATs. Good for you.
But we’re all part of a system that reveres, rewards and fetishizes status. We’re a culture that thinks success is indicated by test scores and job titles. We’re a society that measures happiness by followers and zeroes in your paycheck.
We’ve built a system that narrowly defines success. That incentivizes zero-sum competition. Happiness becomes a social performance.
Education reporter Erin Richards wrote a must-read article about how this scandal was a long time in coming. She wrote that parents are responding to the idea of “narrowly defined success” when they exploit loopholes to get into top colleges:
College has become such a status symbol that even celebrity parents were allegedly willing to break the rules to get their child a slot in an elite school, according to the federal complaint. “I don’t think we should be super surprised,” said Bari Norman, the co-founder and director of counseling at Expert Admissions, a Manhattan-based firm that helps teens around the world prepare for the exams and college applications that will determine the next four years of their lives. “It speaks to the desperation of parents and just how high stakes college admissions have become,” Norman added. “And unfortunately, it speaks to a lot of the messages we’re sending to kids, which is the most concerning part of this story.”
Success, in other words, becomes a popularity contest.
A year ago a report by The New York Times found that celebrities, athletes, pundits and politicians have millions of fake followers in an effort to demonstrate popularity.
One of the companies the Times exposed was Devumi, an “influencer marketing company.” Before their downfall, Devumi wrote tips about how to create “social proof signals” that “indicate the credibility of a business or lack thereof.”
For example, having a large following on Twitter, Facebook, and other similar social sites can make your business appear trustworthy. In fact, when other people follow or like your business’s social media pages, they’re actually endorsing your brand in a way. In the same way, people who share or like your content and products on social media sites become an ambassador for you. So, when new visitors stop by, they think: “Hey, this site looks pretty popular. More than 10,000 people have already subscribed. It must be worth following so I’ll do the same.” This crowd effect can continue to compel new visitors to observe what the fuss is all about when they check out your online profiles.
Devumi then sold fake followers so anyone could boost their “social proof signals,” aka fake popularity. That continues to be an effective strategy.
At the time, I wrote that it’s easy to condemn the shadowy identity thieves who create these fake follower mills. But they’re also only responding to our demand — the need for popularity, validation and more more more.
You can suspend Devumi, delete fake followers and indict cheating parents. But this will happen again as long as we crave popularity contests.
Today, that proves to still be true. | https://cigelske.medium.com/a-fake-empire-d74881bb87a4 | [] | 2019-03-13 19:20:24.726000+00:00 | ['Personal Growth', 'Social Media', 'Really Necessary', 'Culture', 'Personal Development'] |
Shake Milton Can Unlock the 76ers’ Offense | The Philadelphia 76ers have not lived up to expectations this season. They were a popular pick to contend for the Eastern Conference title, but they currently languish in sixth place. The reason for their lack of success is a poor offense. Currently, Philadelphia ranks 17th in the NBA in offensive efficiency, averaging 110.4 points per 100 possessions, according to Basketball Reference. This is not going to cut it for a contender, and is a severely disappointing number given the talent on the roster. The main culprit is a lack of three-point shooting, which has cramped their spacing.
Philadelphia ranks 21st in the NBA in 3-point field goals per 100 possessions, with a figure of 31.8. They have been average at making three-pointers, with a team field goal percentage of 36.2%. But since the team does not take three-pointers at a high rate, opposing defenses pack the paint. This makes it hard for their interior threats to score.
The situation is even worse when Philadelphia plays Al Horford, Joel Embiid, and Ben Simmons together. This is a huge problem since they are the team’s three biggest stars and are all included in the usual starting lineup.
All three players thrive on the inside. Embiid and Simmons are poor outside shooters, and Simmons does not even take threes at all. These lineups feature strangled spacing, and have struggled. According to stats.nba.com, these lineups have scored a putrid 98.8 points per 100 possessions in 429 minutes.
Philadelphia has turned Horford into a three-point shooting specialist. This season, Horford’s three-point rate (the percentage of his field goal attempts that are from three-point range) is 40.4%, a career high. But this means that he is not getting the ball inside, where he is most effective. Horford is not getting fouled, demonstrated by his 11.2% free throw rate, a career low. Just 18.0% of his shots have come from within three feet, another career low. Horford has had a poor season by his standards. He is scoring, rebounding, and assisting less than usual. It is largely because of his transformed role.
The solution to the 76ers’ problems could already be present on their roster in Shake Milton. Milton is a second-year pro who the was drafted in the second round out of SMU in 2018. He has spent much of his career bouncing between the G-League and NBA. But he has played more this season, and Philadelphia has been experimenting with starting him at point guard in exhibition games prior to the restart.
Milton brings a real outside threat to the 76ers. 48.8% of Milton’s shots have come from behind the arc this season. Defenders have to stick to him outside. If they duck under screens, he will shoot and usually make it. This season, Milton has a three-point field-goal percentage of 45.3%. He has always hit a high rate of threes, having shot 42.3% from three-point range in three college seasons. Milton is one of the best and most prolific shooters on the 76ers’ roster.
On top of his shooting, Milton also contributes ball-handling and playmaking. He can run pick and rolls and make things happen with the ball. This separates him from Furkan Korkmaz and Matisse Thybulle, two of the other options to move into the starting lineup.
In Milton’s 611 minutes, the 76ers have scored 112.7 points per 100 possessions, a rate that would be tied for 6th in the NBA. This is a small sample size, and on/off court numbers for individual numbers can get funky, but it is very promising.
Moving Milton into the starting lineup in place of Al Horford can make both the bench lineup and starting lineup better. Defenses will have to stay attached to him outside, opening up spacing for the rest of the offense. He can handle the ball some of the time, taking pressure off Ben Simmons. When he does handle the ball, he can run pick and rolls with Embiid and Simmons, where defenders can not duck under picks. Simmons should excel as a roll man, which is not something he has had a lot of opportunities to do. Secondary options like Tobias Harris and Josh Richardson will have more room to operate.
Milton is not going to be a player who dominates the ball, but he will make good decisions with it. Kevin O’Connor of The Ringer compared him to George Hill in a video that was the inspiration for this post. He should take some of the ball-handling slack, but Simmons should still be the primary ball-handler. Simmons is an excellent playmaker, and is better at playmaking than Milton. Milton can take some of the stress off Simmons.
Meanwhile, Horford can get back to what he is best at, playing inside, without Embiid and Simmons clogging the paint. He will not have to hang out at the three-point arc so much. He can have long stretches where he is the offensive focal point from the elbow. He should still play a lot of minutes, as he is a very valuable player. But the 76ers will be better served by splitting up Embiid, Horford, and Simmons so only two of them play at once.
On March 1, Shake Milton gave the NBA a taste of what he could provide for the Sixers. He scored 39 points against the Clippers, going seven of nine from three-point range and contributing five assists. The team scored 130 points without Joel Embiid or Ben Simmons in the lineup. Milton will not always be that hot, but the threat of his shooting should open up their offense. Moving him into the starting lineup can help Philadelphia reach their potential. | https://medium.com/the-sports-scientist/shake-milton-can-unlock-the-76ers-offense-3108d9e24776 | ['Andrew Lawlor'] | 2020-07-27 20:56:16.656000+00:00 | ['Sports', 'Basketball', 'Sports Analytics', 'Data Visualization', 'NBA'] |
Trends and innovations are taking shape in e-commerce, small business, content consumption, digital currencies and more. | With 2020 coming to an end, there are opportunities for advertisers and brands alike heading into the new year.
1. Standing up for small businesses
Small businesses are fighting two Goliaths, lockdown restrictions and e-commerce giants, and brands have an opportunity to give them a fighting chance.
THE FACTS:
If you search the headlines for the impact of COVID-19 on small business, the results are disheartening. According to Yelp data, nearly 100,000 small businesses in the US have permanently shut down during the pandemic.¹ In Canada, March and April had the highest number of business closures since Stats Canada began measuring in 2015.² In particular, micro-businesses with fewer than 10 employees have experienced the largest decrease in sales.³ With cities across North America currently experiencing lockdowns due to the second wave of the virus, the headlines are only going to get worse. In Canada, it’s expected that half of small businesses will experience further decreases in sales.⁴ Add to all this the fact that small businesses employ over 66% of the Canadian workforce⁵ and 33% in the United States,⁶ it’s no wonder this has been headline news.
THE OPPORTUNITY:
Small businesses are the backbone of commerce as well as community. People are rallying to support the businesses they love, such as customers at a Toronto bar who bought out their inventory to help the business stay afloat,⁷ and support for a Philadelphia bar’s GoFundMe page far surpassing its $10,000 goal.⁸ On a larger scale, the LCBO paused its partnership with Skip the Dishes in response to community backlash on how it impacts small businesses.⁹ According to a poll of UK citizens, 51% are making an effort to do their holiday shopping from small businesses, with most of them hoping to avoid large corporations entirely.¹⁰ This movement to save local businesses presents an opportunity for brands to connect with community.
OUR RECO:
At first thought one might wonder how a big brand could get involved in a movement that supports local businesses, but for the right brand with the right intention it can. Brands can use their social media channels to highlight local businesses that sell their product and use Facebook and Instagram’s in-built shopping capabilities to drive traffic to their stores. To encourage traffic to these stores, brands can also create rebate programs or discounts for purchases made at independent retailers. Similarly, financial institutions can offer guidance on how to manage their cash flow in these trying times through podcasts or short YouTube videos, and can show acts of solidarity like reducing POS fees or assistance with setting up loyalty programs. Lastly, brands can use tools like Google Maps to help consumers find the closest local retailers that carry their product.
2. The not so distant Bitcoin wave
While cryptocurrency is still in its infant stages, there are forces that are driving its exponential growth.
THE FACTS:
In 2019, cryptocurrency in the global marketplace represented a cumulative market cap of over $230 Billion, up 75% vs previous year. According to the very same report, more than 100,000 retailers now accept Bitcoin worldwide.¹¹ In the US, people can use their cryptocurrency to buy cards through Bakkt, a digital asset manager owned by the NYSE, for a variety of brands such as Starbucks, Home Depot, Best Buy, AirBnB, Nike, WholeFoods and Domino’s.¹² In Canada, luxury retailer Maison Birks has partnered with Bitty to accept cryptocurrencies.¹³ In Ontario, residents of Innisfil and Richmond Hill will soon be able to pay property taxes with cryptocurrency.¹⁴ And most recently, Visa announced a new partnership with cryptocurrency startup BlockFi to offer a credit card that rewards purchases with Bitcoin.¹⁵
THE OPPORTUNITY:
The announcement of a major financial player promoting cryptocurrency will accelerate the adoption of digital currencies. While we have seen some traction in Canadian business, we don’t expect everyday shopping with cryptocurrency to be a reality anytime soon. Not-withstanding, cryptocurrencies offer brands unchartered opportunities that will take time to explore because of the complexity of digital currencies. Brands should use this window of time between early adopters and the early majority to prepare for the opportunities that cryptocurrencies can offer.
OUR RECO:
Engage with startups who act as fund managers and brokers of cryptocurrencies as a way to get a lay of the land. Consider engaging Shopify for insights as they have already integrated Bitcoin as an accepted currency on their platform.¹⁵ In addition to integrating cryptocurrency as a form of payment, brands should explore ways to integrate digital currency into all of their marketing activities. Millenials and Gen Z are the most open-minded when it comes to cryptocurrencies; if you are looking to target these audiences, do so through sales or discounts with Bitcoin payment or incentivize with ‘cash back’ with Bitcoin. Similarly, brands could work with fin-tech startups to integrate digital currency into their reward programs.
3. Product placements set skyrocketing sales
Recently a Netflix show about a chess prodigy sent chess set sales soaring. Could product placements in content on streaming services eventually take over traditional advertising?
THE FACTS:
Netflix’s latest craze, “The Queen’s Gambit”, a story of a chess master excelling in a male-dominated world was watched by 62 million households around the world in its first 28 days of release.¹⁶ The show’s massive success catapulted inquiries for chess sets on eBay by 215%, and sales increased 125% in the U.S. alone¹⁶. The Queen’s Gambit is just one of many success stories of product placement on streaming platforms. As of 2020, Netflix, Hulu and Youtube have accumulated 191.5 million subscribers.¹⁷ And earlier this month, Warner Bros. Films announced that it will distribute its’ 2021 new releases on HBO Max in the US.¹⁸
THE OPPORTUNITY:
Streaming platforms open up many new product placement possibilities. Netflix, Crave, Amazon Prime, and other platforms gather a massive amount of data from users under a vast number of criteria. By collecting data on browsing and scrolling behaviour, they can better understand consumer interests, offering brands better planning capabilities. They track when people pause, rewind, or fast forward through a show, gaining insight into key moments of attention from a user perspective. Brands can leverage these insights to add more value to their advertising and product placement opportunities. These are few of the many possibilities streaming platforms can offer.
OUR RECO:
Product placement through streaming services is a relatively new concept. We recommend engaging platforms like Netflix, Amazon and Hulu to get further insight into how these platforms work and what potential product placement opportunities they can open up for your brand. Also, engage with social media teams and community managers to keep an ear on the ground for the shows and movies that your target audiences are talking about to better inform your product placement strategy. Lastly, if your brand chooses to do a product placement deal with a show, leverage the data to inform your promotion strategy to further enhance the effectiveness of this opportunity.
4. Social media is the new shopping mall
2020 has brought the surge of in-app purchases and a focus on social commerce from brands, influencers and social media scrollers.
THE FACTS:
This year, 42% of all online shopping was done from a smartphone.¹⁹ Estimations are putting e-commerce sales at over $735 billion by 2023.²⁰ As this year provided us with an immense amount of free time, 51% of adults have increased time spent on social media.²¹ Seeing this opportunity to grow revenue, social platforms and brands alike looked to insert themselves in the spaces consumers know and love. In fact, social commerce has sparked a 24% increase in shoppable ads on Facebook and a staggering 43% increase in adoption on Instagram.²² With a massive pool of shoppers already spending time on social media, nearly three out of every ten brands plan to leverage instant social commerce in the next twelve months.²³
THE OPPORTUNITY:
Shoppable ads, product and brand tags, product guides and creative assets that allow brands to display and promote their products in a captivating and personalized way offer brands the opportunity to use social shopping. Some brands have already created new and remarkable shopping experiences. For instance, Air Jordan partnered with Snapchat to create an AR filter of Michael Jordan’s legendary dunk from the free-throw line, while also giving them the exclusive opportunity of buying the AJ III Tinker sneaker through the platform itself.
OUR RECO:
As we enter 2021, brands must prioritize social commerce integration into their marketing strategies by developing social shopping experiences that move beyond social content creation. Brands will benefit from translating physical retail experiences like flagship stores and pop-ups into the digital realm, driving e-comm sales while creating surprise and delight moments. Brands can and should take advantage of test and learn functionalities offered by these platforms to optimize these new shopping experiences. Those brands who are imaginative and push possibilities will be more successful than those who think of social commerce only a place to post links to shop on their website.
5. The cure for the winter blues
As the coldest months of the year arrive, brands have the opportunity to leverage the power of nature to uplift consumer spirits.
THE FACTS:
Urbanization has created a disconnect between people and nature, which has had a negative impact on our mental well-being. Research shows that city dwellers have a 20% higher risk of anxiety disorders and a 40% higher risk of mood disorders.²⁴ COVID-19 restrictions in addition to the winter blues, will make the first 3 months of the year more difficult for people who experience anxiety and depression. However, there’s plenty of research that shows that spending time in nature, bringing nature in, or even watching nature content can positively affect people’s emotional states.²⁵ ²⁶ ²⁷
THE OPPORTUNITY:
Brands and employers have an opportunity to connect people to any type of nature in creative ways at a time when people will need it the most. Whether it is spending time in nature, incorporating nature into your home or experiencing it through digital platforms, research shows that it is all beneficial. With a significant part of the population working and studying at home now, ideas that bring the outdoors inside will help connect the brand on a psychological and emotional level and will contribute to the overall health of the population. This is especially important during the holidays, as people will be spending time away from loved ones this year and COVID-19 can creates even more uncertainty.
OUR RECO:
Flower and garden centres can lean into sales of indoor plants or reposition the use of flower delivery based on the need to connect with nature and also uplift mood and air quality. Even artificial plants can have a positive impact. Brands with big marketing dollars can include more nature in TV, video and audio spots as a way to bring more exposure of nature to the masses. Tech brands could partner with travel brands and lean into virtual nature experiences, as VR shows to be a very strong replacement to authentic nature. We have seen brands create a 24 hour chicken roasting channel; what if an office supplies brand created a 24 hour of nature channel for office workers? Home and bath shops should lean into nature scented candles, soaps, or bath products. Soup brands could use garden seeds or herbs as promotional giveaways. Health and wellness is already a major trend amongst urban dwellers. Health related marketing will be especially poignant and attention grabbing as we continue to navigate through COVID-19 quarantine this winter. | https://medium.com/bbomn-take5/trends-and-innovations-are-taking-shape-in-e-commerce-small-business-content-consumption-a6ffb9970b67 | ['Bensimon Byrne'] | 2020-12-18 17:06:47.142000+00:00 | ['Digital Marketing', 'Ecommerce', 'Marketing', 'Advertising', 'Brand Strategy'] |
Reflections 2020 | Reflecting is a way I check on my ‘why’, or my purpose. Why am I doing this? Whats important to me? What do I need to do to make my experience of life better? 2020 pivoted humans radically, and navigating the global and environmental impact continues, but personally it was a year of conquering an equal dose of internal and external challenges.
In a nutshell here’s what 2020 looked like for me.
NEW HOME
Moving to our new home in the Waitakere ranges gave us much needed space and tranquility. We were also closer to one of my favourite beaches, Bethells Beach. The right space gives you confidence and clarity. | https://medium.com/@design_46188/reflections-2020-416dca78ac9a | ['Ali Davies'] | 2020-12-26 22:04:04.071000+00:00 | ['New Year Resolution', '2020', 'Reflections', 'Perspective', 'Achieving Goals'] |
How Many Cats Makes You the Crazy Cat Lady? | Oliver The Majestic Cat
How many cats do you own? Are you ashamed to tell people the actual number of cats in your home? I worked with a woman who was a cat lady. You could say she was the crazy cat lady. She never would tell us how many cats she owned. At one point, we started a secret list to try and figure out how many she really had. Every time she told us a story, we would try and memorize the cat names she spoke about and add them to the list. It’s interesting when you ask a person how many cats do you own, and they say, “Oh, too many to tell you!” My mind spiraled into what does that mean. Does she have 5, 10, 20, or even 30 cats??? I know the joke at my house has always been I will be the crazy cat lady if I am single for many years and live on the corner block with too many cats. Let’s just say I’m on my way to being the crazy cat lady! I definitely have a ways to go but, it’s always possible!
The phrase ‘cat lady’ indicates you have a large number of cats, perhaps twenty or more. The lady is often involved in cat rescue or is a cat hoarder with a mental health issue. Forty-two percent of Americans own cats. The average number of cats in a household is two cats. So here are the signs you are turning into the crazy cat lady.
You buy lint rollers in bulk.
You stop wearing specific colors of clothing.
You will alter your sleeping position to cater to your cat.
You evaluate your relationships based on if the person likes cats.
Your camera roll on your phone is filled with cat photos.
You know you don’t need another cat, but you check the adoption sites just in case!
Scoring for Crazy Cat Lady is:
0–5 Cats: You’re a crazy cat lady in the making!
6–10 Cats: Family and co-workers are questioning your sanity!
11–15 Cats: You have now become one with the cats!
16–20 You have definitely acquired the crown of Crazy Cat Lady!
Where do you find yourself on the Crazy Cat Lady scoring? Right now, I’m a crazy cat lady in the making. | https://medium.com/@ashleycopywriting/how-many-cats-makes-you-the-crazy-cat-lady-7e9a554e3720 | ['Ashley Thompson'] | 2021-07-31 20:59:55.021000+00:00 | ['Cats', 'Animals', 'Blogger', 'Pets', 'Copywriter'] |
Fake News By Omission — The Mass Media’s Cowardly Distortion Tool | Western mass media have continued their conspiracy of silence on the OPCW scandal, making no significant mention yet this month of the leaks which have been emerging from the Organisation for the Prohibition of Chemical Weapons indicating that the US, France and UK bombed Syria last year in retaliation for a chemical weapons attack which probably never occurred, and that OPCW leadership helped cover it up.
If you haven’t been following the still-unfolding OPCW scandal, you can catch up quickly by watching this seven-minute video.
Mass media’s silence on this hugely important story is noteworthy not just because it has far-reaching consequences for the future of the Syrian regime change agenda and for public trust in US-led war narratives, not just because the leaks have already been independently authenticated by the mainstream press, and not just because scandalous revelations about powerful entities are normally the sort of incendiary click fodder that makes a mainstream news editor hasten behind his desk to hide his arousal. It is also noteworthy because we’re being told by people in the know that even more leaks are coming.
“Much more to come about the censoring of the facts on Douma at the Poison Gas Watchdog OPCW,” tweeted journalist Peter Hitchens earlier today. “It really is time that the Grand Unpopular Press and the BBC, realised this is a major story. Are they too proud to admit they might have been wrong?”
Hitchens, who was among the very first to publish the leaked internal OPCW email last month revealing multiple glaring plot holes in the official narrative about the alleged chlorine gas attack in Douma, Syria, has been saying this for a while now.
“More is known by the whistleblowers of the OPCW than has yet been released, but verification procedures have slowed down its release,” Hitchens wrote in his blog last week. “More documents will, I expect, shortly come to light.”
So this is still an unfolding story that is only going to get more scandalous in the coming days. Yet rather than reporting on an important news story (which it may surprise you to learn is actually supposed to be the literal job description of the news media), the mainstream press has been silent. The only times the mass media have commented on this major story has been to spin it as Russian disinformation, and Tucker Carlson’s segment on it last week which was also falsely spun as disinfo by establishment narrative managers like David Brock’s Media Matters for America.
The blog Left I on the News uses the term “fake news by omission” to describe this obnoxious yet ubiquitous propaganda tactic, where imperial media outlets deliberately distort people’s understanding of what’s going on in the world by simply declining to cover news stories which are inconvenient to the establishment narrative.
Refusing to tell people about things that did happen distorts their worldview just as much as telling them things that did not happen, yet mass media will never be held accountable for engaging in the former, while engaging in the latter forces them to print embarrassing retractions and lose credibility with the public. For this reason, fake news by omission is their preferred tactic of deceit.
People sometimes think of the mainstream media as always straight-up lying all the time, just fabricating stories whole cloth about what’s going on, but that isn’t generally how it works. A good liar doesn’t lie all the time, and they don’t even tell full lies unless absolutely necessary. What they do is far more cowardly and far more effective: they spin, they distort, they tell half-truths, they emphasise insignificant details and marginalize significant ones, they uncritically report what government officials are telling them, and they lie by omission.
However, the total disappearance of the OPCW leaks is unusual for the legacy media. The imperial press have had their ways of hiding inconvenient stories since newspapers began. Running stories late on a Friday, running them on the “graveyard page” of page 2 (so-called because stories go there to die), holding on to them until another big story breaks so they can be published relatively unnoticed, waiting for them to be broken in a disreputable publication so people will be skeptical of it (sometimes known as “fixing” a story), or running an oppositional op-ed at the same time to spin the uncomfortable facts in a more salubrious way. But in the end, they normally run the story, in one way or another, so they can be seen not to be censoring.
Not this time though. The exceptional silence on the OPCW scandal from imperial news media discredits them completely, but people won’t know about it unless they are told. Spread the word.
________________________
Thanks for reading! The best way to get around the internet censors and make sure you see the stuff I publish is to subscribe to the mailing list for my website, which will get you an email notification for everything I publish. My work is entirely reader-supported, so if you enjoyed this piece please consider sharing it around, liking me on Facebook, following my antics on Twitter, checking out my podcast on either Youtube, soundcloud, Apple podcasts or Spotify, following me on Steemit, throwing some money into my hat on Patreon or Paypal, purchasing some of my sweet merchandise, buying my new book Rogue Nation: Psychonautical Adventures With Caitlin Johnstone, or my previous book Woke: A Field Guide for Utopia Preppers. For more info on who I am, where I stand, and what I’m trying to do with this platform, click here. Everyone, racist platforms excluded, has my permission to republish or use any part of this work (or anything else I’ve written) in any way they like free of charge.
Bitcoin donations:1Ac7PCQXoQoLA9Sh8fhAgiU3PHA2EX5Zm2 | https://medium.com/@caityjohnstone/fake-news-by-omission-the-mass-medias-cowardly-distortion-tool-b80f3d0fe7e4 | ['Caitlin Johnstone'] | 2019-12-06 02:36:59.904000+00:00 | ['Journalism', 'Syria', 'War', 'Media Criticism', 'Politics'] |
How Data and Civic Engagement are Transforming Indiana County | A mySidewalk Case Study and Data Discovery Story
Of the tasks required of planners, one of the most challenging undertakings is to successfully include the voices of people whose lives are directly influenced by change — namely, the community members.
For example, what would you do if you realized university enrollment was changing the make-up of the neighborhoods in your community — a slow, but significant adjustment few had planned for? How can you connect with these students, families, and the older residents in your city about planning issues? Most importantly, how do we use good data to make stronger decisions, and how do we connect community members to these decisions?
mySidewalk’s Jill Applegate interviewed Jeff Raykes, Deputy Director of the planning section at Indiana County, Pennsylvania about how he is creating more-informed engagement efforts, using mySidewalk along with other outreach strategies to connect with a wide variety of generations and interests in local communities.
……………………………………………………………………………………….
When working with a diverse demographic, it can be difficult to make the concept of data-driven planning accessible and appealing. Raykes has experienced this challenge firsthand through multiple projects he’s worked on during his 10 years in the county offices.
“One of the big challenges,” Raykes said, “is making planning relevant. So that’s been one of our goals or core values, is that we’re trying to connect people to planning.”
Indiana County is a primarily rural county situated in western Pennsylvania, made up of 38 municipalities. The Indiana County Office of Planning and Development has five sections, including planning, housing, infrastructure, uniform construction code, and economic development. The planning section, where Raykes works, specifically oversees the transportation, land use, and long-range planning for the county.
“In our little section, we do a lot of the community planning,” Raykes said. “We’re working on comprehensive plans quite often — we’re either working on a comprehensive plan or a comp plan update almost constantly in our office.”
The county office’s main focus is community development at the local level. According to Raykes, they’ll suggest to a local government that they start a project, and then staff the project with their county-level planners to guide the process.
“If [a municipality doesn’t] have a planning department, we’ll go out there and point out that the comprehensive plan needs to be updated,” Raykes said. “That’s what we spend the majority of our time on, is doing projects like that. It helps them, and they don’t have to hire a consultant.”
When working on these projects, it can often be difficult to engage community members of varying ages in the planning process. With the advent of social media and other useful engagement strategies, Raykes is glad to be relying less on public meetings as the sole source of community input. However, the struggle to find the best way to include community voices in the decision-making process is ongoing.
“When I started here, social media wasn’t as popular as it is now and outreach efforts primarily were public meetings that nobody came to,” Raykes said. “One of the obligations of good planning is finding ways to make it relevant across your entire demographics, which is extremely challenging.”
The Indiana Community University District Master Plan is a recent example of that challenge. The county seat of Indiana County is the borough of Indiana, which is home to about 14,000 people, the majority of which attend the Indiana University of Pennsylvania. As the college has grown, the town has struggled to adapt to changing land use and housing patterns.
“Obviously there’s the town-gown divide… so you get these areas where you have very high concentrations of students, and essentially the character of the neighborhood changes,” Raykes said. “That’s a pretty massive change, slowly happening with each enrollment cycle, but no one’s thinking about it or talking about it, and so the cumulative effect is that you have very significant changes in terms of these neighborhoods.”
After seeing the lack of preparation for these ongoing changes, the county planning department decided a master plan with suggestions for land use would help the borough and the surrounding township continue to grow responsibly.
The team had to grapple with a few key issues throughout the duration of the project. First, the transient nature of the student population made data collection difficult.
“The data for this community shows that 80 percent are college students. But what does that mean? Does that mean they’re commuting? The University of Pittsburgh is an hour away, and there’s other colleges around here, so you almost have to back into that number,” Raykes said. “With this particular project, it’s unique because it takes more work to get something you can use. But if you take an average community… data there is awesome.”
Raykes also pointed out the current obstructions for obtaining data from the planning office.
“The problem is, it’s just a new thing,” Raykes said. “If you wanted to [look at data] now, you’d have to come into the planning office, schedule some time, fill out a map request. It’s just onerous. Essentially, why are we making it so hard? People need data to make good decisions. At least, that’s what I would argue.”
The second key challenge was the broad age range and interests of the constituents, which forced the team to get creative with their outreach.
“We knew we needed a mix [of engagement strategies] and we knew we were starting from scratch, so if we were to just run an ad in the paper to try and get people to a public meeting, no college students are ever going to see that,” Raykes said. “We did intentional face to face things. I went to SGA (Student Governing Association) like three times. We would do presentations, we’d ask questions, we hit Twitter. With the senior citizens, we would go to their community meetings. I’ve never been to so many meetings.”
With the master plan recently completed, the borough can now use it to help guide land use decisions as the population continues to grow and change. Raykes is excited to see the way the community develops.
“We went into this master planning process and we talked a lot about placemaking. When you think about it, it’s so exciting and cool to think that public spaces should do something to be memorable, and to generate a feeling,” Raykes said.
“That was really fun for me, to ask ‘How should the areas around the university campus make you feel?’ Great places generate a certain feeling.”
Raykes sees placemaking as a potential solution to the issues communities are facing in his county.
“Every community, especially in southwestern Pennsylvania, the population is declining. People are leaving,” Raykes said. “If we’re going to be successful and, at the very least, retain our population, we’ve got to do something different.”
Trying something new can be especially challenging when not everyone can agree on the best steps to take. A new project to construct a multimodal corridor in the Indiana borough, connecting a heavily used trail to the downtown and IUP campus, has stirred up some push back from the community.
“The problem is, if I can do a complete street, that’s what’s going to keep people here. Then, the response would be, ‘No, it’s getting a job,’” Raykes said. “We would say, ‘Well, sure. It’s not either-or. It’s and-both.”
Helping community members understand the purpose of planning and get on board with new projects and ideas may be a good step forward.
“We’ve got to think about what we’re doing right, what we’re doing wrong, and then make some really tough decisions about that,” Raykes said. “That’s where I think mySidewalk becomes part of the mix, because people can start to interact with the same stuff that I’m interacting with.”
Overall, Raykes sees the need for good data and community engagement as a tough, but exciting challenge.
“When I look at the future of planning here in Indiana County, we’re in some form of decline in most communities. How do we use good data, how do we make good decisions, and how do we connect people to making decisions about the future?” Raykes said. “I guess the challenge wrapped up in all that is: How do we get communities to think about themselves differently?”
……………………………………………………………………………………….
If you’re wanting to help community members engage with your projects, you’re in luck. mySidewalk just released a new tool that allows you to share customizable, interactive maps and gather feedback with the click of a button. Learn more about Sharable Maps here, or test them out yourself by requesting a free 7-day mySidewalk trial. | https://medium.com/community-pulse/how-data-and-civic-engagement-are-transforming-indiana-county-864fbafb1a30 | [] | 2016-12-29 19:50:39.273000+00:00 | ['Civic Engagement', 'Planning', 'Urban Planning', 'Data', 'Data Visualization'] |
I Used a Menstrual Cup for the First Time — Here’s What Happened | “I don’t want to go to the hospital!” I sobbed to my girlfriend from the bathroom.
I’m naked, sitting on the toilet. I’ve been trying to remove my new menstrual cup for two hours. It’s not a pretty sight.
“Can you try pinching the base and pulling it?” my girlfriend asks, standing in the doorway. She’s folding her arms and biting her lip.
“I can’t reach…” *Sob* “…the f*cking thing!” Snot and tears drip down my face.
Let’s hit rewind for a hot minute.
Here’s why you should buy a cup
“Did you know 4.3 billion pads and tampons go into landfill each year in the UK?” I told my girlfriend. I was reading an article on my iPhone.
Lots of women are using menstrual cups now. My friends who use cups swear by them. So I did some more research (to convince myself buying one was a good idea.)
Here’s what I found:
You can use a cup for up to 12 hours before you need to empty it.
You can swim with a cup in (without worrying water and bacteria will absorb into it like a tampon can.)
They last for up to 10 years.
You’ll save money. We all know how pricey menstrual products can be over time.
Less risk of getting Toxic Shock Syndrom (TSS.)
Cups are non-toxic and aren’t made with BPA, chemicals, or latex.
What’s not to love, right? I ordered a Saalt cup without hesitation. I was thrilled at the thought of doing my part to stop suffocating the earth with pads and tampons.
“F*ck the environment!”
I cried out to my girlfriend, still trying to pull the cup out.
She was hovering around outside our ensuite like an anxious soon-to-be dad at a hospital.
“You can do it; I know you can!” She said, chewing her nails as she peeked her head into the bathroom.
And I did.
After trying multiple positions (maybe Ariana Grande was singing about removing a menstrual cup), I got it out.
“It’s out!” I screamed, flinging the cup into the shower. My girlfriend ran in. I watched as the cup bounced off the shower wall and began rolling towards me.
“No, please!” I sobbed, grabbing onto my girlfriend's arm.
It was like the shower scene from Psycho, and I looked like Carrie.
Yes, I will try using the cup again. Inserting it was a breeze, it was super comfortable — I couldn’t feel it — and it worked for the entire day. Zero leakage.
Use these handy resources before buying a cup
Don’t let my story put you off. Lots of people who get their periods have had similar experiences. Heck, Kristen Bell fainted trying to remove hers.
I have a friend who struggled to get hers in and others who don’t have any issues at all. Remember, the cup won’t disappear up there or get lost — it can only come out.
Here are some menstrual cup resources:
Try taking this quiz to see which cup might be a good fit for you
How to measure your cervix
How to insert the cup
How to remove the cup
After some research, I discovered my cup probably didn’t get a good suction when I inserted it, which meant it moved higher up and more challenging to reach — this can also happen if you have a higher cervix. Measure your cervix before you buy one (something I didn’t do.)
I think the cup is a much better option than pads and tampons, just because it’s more convenient. Plus, it’s better for the earth. It works well and does the job.
Girl power.
*Update: Saalt have kindly offered to give me a replacement cup. | https://medium.com/write-like-a-girl/i-used-a-menstrual-cup-for-the-first-time-heres-what-happened-164a8c879ee | ['Kathrine Meraki'] | 2020-12-15 07:31:39.482000+00:00 | ['Life Lessons', 'Culture', 'Women', 'Society', 'This Happened To Me'] |
Interested in Regeneration? — Part 3 | Let´s set us for a meaningful and deep change process… This calls us to engage in a developmental path: to evolve our capacity to contribute to vitality, viability and evolution of a Living Systems.
When you allow change at the level of personal experience, you become a better instrument for Change in the systems in which you’re embedded. (As my friend Ana Gabriela Robles and I stated in a playful, but meaningful way: It is not about you, but it is always personal (doing personal inner-work). You allow yourself to embrace change and uncertainty as a characteristic part of your way of working, enabling you to increase your ability to respond to what´s required of you to make real transformative contributions to Life on the Planet.
It is all about how you choose to put in practice a new way of living, rather than intellectualizing techniques, methods, or models.
Deep change requires more than knowledge. It takes ongoing inner work to build the capability to evolve oneself and engage in and with life differently.
Are you still interested in this new way of living?
If you are interested in shifting to living from this paradigm, I would recommend that you read Carol´s book The Regenerative Life[1]. However, deep change does not happen while reading and absorbing “ideas”, nor by trying to understand others’ experiences.
If you are serious about working at the Regenerative level, it is my strong advice that you engage in one of our developmental communities — english & spanish — (change agents, CEO´s thinking development, regenerative entrepreneurs communities, regenerative business development, shifting to regenerative effect level of investment, and more). Want to know more about these communities? Please feel free to reach out HERE and ask about them.
I thank my dearest friends: Carol Sanford, Ben Haggard, and Tim Murphy for everything I continue to learn in this Regenerative way of living I have chosen and the potential effects of transformation we are able to experience while working on manifesting an Economy for Life. Also, my deep appreciation to Raul De Villafranca and Delfin Montañana for inviting me to work this way in the first place.
[1] 2020. Sanford. The regenerative life. https://carolsanford.com/the-regenerative-life/ | https://medium.com/@sidney-canom/interested-in-regeneration-part-3-eef2683ef3fc | ['Sidney Cano'] | 2020-11-23 21:53:57.197000+00:00 | ['Regenerative Economy', 'Change Management', 'Regeneration', 'Regenerative Business', 'Business Development'] |
Building a Chat application using Flexbox | Setting the size of the side bar properly
So another feature of Flexbox is being able to set the size of a flex item by using the flex-basis property. The flex-basis property allows you to specify an initial size of a flex item, before any growing or shrinking takes place. We’ll understand more about this in an upcoming article.
For now I just want you to understand one important thing. And that is using width to specify the size of the sidebar is not a good idea. Let’s see why.
Say that potentially, if the screen is mobile we want the side bar to now appear across the top of the chat shell, acting like a top bar instead. We can do this by changing the direction flex items can flex inside a flex container. For example, add the following CSS to the #chat-container selector. Then reload the page.
flex-direction: column;
Sidebar gone — It’s 0 pixels high when using column direction
So as you can see we are back to a blank shell. So firstly let’s understand what we actually did here. By setting the flex-direction property to column, we changed the direction of how the flex items flex. By default flex items will flex from left to right. However when we set flex-direction to column, it changes this behaviour forcing flex items to flex from top to bottom instead. On top of this, when the direction of flex changes, the sizing and alignment of flex items changes as well.
When flexing from left to right, we get a height of 100% for free as already mentioned, and then we made sure the side bar was set to be 275 pixels wide, by setting the width property.
However now that we a flexing from top to bottom, the width of the flex item by default would be 100% wide, and you would need to specify the height instead. So try this. Add the following property to the #side-bar selector to set the height of the side bar. Then reload the page.
height: 275px;
Sidebar with a fixed width and height
Now we are seeing the side bar again, as we gave it a fixed height too. But we still have that fixed width. That’s not what we wanted. We want the side bar (ie our new top bar) here to now be 100% wide. Comment out the width for a moment and reload the page again.
Sidebar rotated to now be on top
So now we were able to move our side bar so it appears on top instead, acting like a top bar. Which as previously mentioned might be suited for mobile device widths. But to do this we had to swap the value of width to be the value of height. Wouldn’t it be great if this size was preserved regardless of which direction our items are flexing.
Try this, remove all widths and height properties from the #side-bar selector and write the following instead. Then reload the page.
flex-basis: 275px;
As you can see we get the same result. Now remove the flex-direction property from the #chat-container selector. Then once again reload the page.
Final output using flex-basis property
Once again we are back to our final output. But now we also have the flexibility to easily change the side bar to be a top bar if we need to, by just changing the direction items can flow. Regardless of the direction of flex, the size of our side bar / top bar is preserved.
Conclusion
Ok so once again we didn’t build much, but we did cover a lot of concepts about Flexbox around sizing. We will cover more concepts in the next article where we layout out the elements in our side bar. | https://medium.com/quick-code/building-a-chat-application-using-flexbox-e506ca2bd9ff | ['Daryl Duckmanton'] | 2019-04-19 05:32:40.643000+00:00 | ['Web Design', 'User Experience', 'Software Development', 'Web Development', 'Development'] |
Modern Finance Chain FAQs
You have questions, we have answers. Everything you need to know about Modern Finance Chain in one source. For more information visit our website and review our One Pager. You can read our whitepaper here.
When is the ICO / token sale?
May 5 — June 25, 2018
What is the max supply of MFX?
521,000,000
How many MFX tokens will be available during the ICO?
301,000,000 MFX tokens will be sold during all ICO stages. This is our maximum circulating supply.
How many MFX tokens will be kept for team and organization?
Only 8% of total MFX Tokens are being kept for Devs and Advisors. 73% of all MFX tokens are being allocated to ICO and MFX Rewards pool. MF Chain strongly believes in a responsible, transparent token allocation that places the community first.
Is there a presale, and does it include a discount?
MF Chain will be holding a presale beginning on May 5th, 2018. Conversion of tokens is 1 ETH = 10,150 MFX tokens. Minimum contribution is 5 ETH with a maximum contribution of 200 ETH.
What is the price of tokens during the ICO?
MF Chain’s main ICO will begin May 25th and end June 25th. The conversion rate is 1 ETH = 8,500 MFX tokens with a minimum contribution of 0.1 ETH; the maximum remains the same.
Will there be discounts offered during the main crowdsale?
MF Chain is offering bonuses for large contributors. We feel it only fair that everyone have the same opportunity to own an equal amount of tokens regardless of when you join the ICO. We also want to reward those that believe in our project enough to send large contributions. Bonus will apply for presale and ICO participants.
Bonus Offer Schedule
10 or more Ether: 10% bonus
20 or more Ether: 15% bonus
100 or more Ether: 20% bonus
Is there a whitelisting?
Yes, you can register for the whitelist beginning April 4th. MF Chain’s live token generation event is run entirely by smart contract with predetermined exchange rates and whitelisted wallet addresses. Only wallet addresses on the whitelist can participate. Transactions from wallet addresses not on the whitelist will fail and will cost you gas in the form of a transaction fee. In order to be placed on the whitelist, all participants must pass our KYC process.
Is there a Know Your Customer (KYC) or Anti-Money Laundering (AML) check for the token sale?
Yes, MF Chain has partnered with Identity Mind for our KYC/AML process. All submitted information will be reviewed on OFAC, PEP, PSFI, EU and freeze lists to name a few.
Are there any restrictions on who can participate in the token sale?
US residents are barred from participating in the MF Chain ICO. While MFX is, by common definition, a utility token, our legal team has advised we follow this standard to protect both the project and our investors. A large number of other countries are barred from participating as well. You can view a partial list at https://mfchain.com/#faq-item
What forms of payment are accepted in the ICO?
You can buy MFX tokens with ETH only.
Can I send you ETH from my wallet address from an exchange?
No. Doing so will result in loss of your ETH. When you send ETH from an exchange wallet, the wallet address provided is not owned by you, as a result you will not have access to the MFX tokens sent from MF Chain. You can download a free ETH wallet from https://www.myetherwallet.com/ if you do not already have one.
If I am approved for the whitelist before the ICO, am I guaranteed a token allocation?
No. The contribution amount listed on your KYC submission does not reserve a token allocation for you. Tokens are sold on a first come, first served basis. You may send any amount between 0.1 and 200ETH as long as there are enough tokens left in the ICO round you are participating in.
What happens if the hard cap amount is not achieved?
Any unsold tokens will be burnt at the end of the ICO with no change to the conversion of ETH to MFX tokens.
What happens if the soft cap amount is not achieved?
In the unlikely event MF Chain does not achieve its soft cap, ETH sent to the smart contract will remain locked until users request a refund transaction.
Is MFX ERC20 compliant?
Yes, MFX token is on the Ethereum network and ERC20 compliant.
How do I confirm my contribution was received?
You can check your transaction on any ethereum blockchain scanner such as: https://etherscan.io/ Simply enter your wallet address in the search bar to view your transactions and token allocation.
How will I be notified when tokens are distributed?
MF Chain is holding a live token generation event run completely through a smart contract. That means that shortly after contributing, you will receive your MFX tokens in the ETH wallet address you provided to us. No waiting until the end of the ICO to receive your tokens!
I only have fiat. What’s the best way to purchase MFX?
You can use your fiat to purchase ETH on a number of exchanges. Reminder, do NOT send ETH to MF Chain directly from the exchange. You must transfer the ETH from the exchange into a wallet such as Ledger, Trezor or one from https://www.myetherwallet.com/
Is there an airdrop, and how many MFX tokens will be distributed?
Yes, the airdrop ended on March 31st, 2018. 1 million MFX tokens are being given away to the community. MF Chain is also holding a referral contest adding an additional 8500 MFX tokens to the air drop! Read our airdrop blog at https://medium.com/@mfchain/modern-finance-chain-airdrop-5b95af03362d
Is there a bounty campaign?
Yes! We will update this blog with a link to the official ANN thread once it is launched!
Is the MF Chain Smart Contract Open Source?
Yes, MF Chain is an open-source project and our code can be found at https://github.com/mfchain.
What exchanges will the MFX tokens be listed on?
We have a clear roadmap and plan for exchange listing. Any finalized partnerships will be announced. We cannot and will not disclose which exchanges these may or may not be.
What decimal will the MFX token go to?
The decimal will go to 18 places.
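As a quick worked example: with 18 decimal places, token amounts are stored on-chain as integers of the smallest unit, so a stored balance of 1,000,000,000,000,000,000 (10^18) base units displays as exactly 1 MFX, the same convention ETH uses with wei.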
MF Mainnet
What is the MF Mainnet?
The MF Mainnet is the greater vision of MF Chain. With a suite of solutions, the MF Mainnet is designed to connect users, merchants and the entire crypto-community on a blockchain that embraces all technology. The MF Mainnet will be a hybrid BFT/POS consensus method. More can be learned by reading our whitepaper at https://mfchain.com/wp/.
When will the MF Mainnet launch?
Currently scheduled for launch in Q2 2019, the MF Mainnet will be a unique public blockchain platform. Coinciding with the launch of the MF Mainnet, there will be a live token exchange event with MFX tokens converting to MFM tokens on a 1:1 ratio. MFF tokens will also be introduced to the market. The MFF token (Modern Finance Fuel) will be the token for all transactions and replace the MFX tokens in the MF Chain rewards program.
Monitoring Drones Over Cellular Network — Part 1
1. Prerequisites
It is assumed that you have a good knowledge of how Ardupilot works and basic knowledge on how to build a drone using Ardupilot compatible hardware.
Ardupilot Wiki: https://ardupilot.org/ardupilot/
1.1 Things We Need
UAV with Mavlink-enabled flight controller (Ardupilot running on a Pixhawk flight controller is used in our case)
RPi with a micro SD card — any model works (RPi 3 or later recommended for prototyping, Pi Zero recommended for deployment)
RPi camera
USB modem (3G/4G)
Server (to bridge between RPi and GCS; we are using AWS)
Ground Control Station (Mission Planner, QGroundControl, etc.)
2. Initial Setup
The Raspberry Pi will be your drone's companion computer. At this point, you have to set up your Raspberry Pi. A complete guide can be found here.
Summary of how to set up the RPi
1. Format and flash Raspberry Pi OS to a MicroSD card using the Raspberry Pi Imager.
2. Connect the RPi to the internet via wifi or LAN to install the needed packages. Alternatively, you can set up RealVNC and SSH to access your RPi remotely.
* Important: If you are using a headless system like the RPi Zero (no monitor, USB keyboard, etc.), enabling SSH is required.
Make sure your RPi is updated to have the latest packages
$ sudo apt update
$ sudo apt upgrade
* Important: It is assumed that you are using the default user for the raspberry pi for the setup and beyond. If you are using another user, it is necessary to change the codes in relevant locations.
3. Setting Up the Internet Connectivity
When the drone is up in the air, it needs internet connectivity to transmit the data to the GCS, which can be provided by a USB modem (cellular dongle). Using a USB modem is both reliable and compact.
There are two important factors to consider when setting up the internet connection to the drone.
1. The modem needs to be detected and connected automatically after booting.
2. The RPi should monitor internet connectivity and auto-reconnect if the connection fails.
A cellular modem is not supported by the raspberry pi out of the box. So it is necessary to install the relevant packages and set up the internet connectivity. The common and mostly used package for network connections in Linux is the network-manager package.
Alternatively, you can also use sakis3g. The concept is the same.
Setting Up Linux Network Manager
First, we have to install the network manager and its GUI. As the packages openresolv and dhcpcd5 conflict with the network manager, we have to remove them and reboot.
$ sudo apt install network-manager network-manager-gnome
$ sudo apt purge openresolv dhcpcd5 -y
$ sudo reboot
Add a connection profile as below.
* Replace YourConName — with a connection profile name (ex. TrueMoveInternet, VodafoneInternet), and YourNetworkAPN — with the APN of mobile broadband connection.
$ sudo nmcli con add type gsm ifname "*" con-name YourConName apn YourNetworkAPN
Check the connection by running the following command
$ nmcli connection up YourConName
To make sure the internet connection is always alive, you need a service running in the background that monitors the connection and reconnects if it drops.
To make it happen, write an executable script that runs at system boot to connect to the internet via the mobile broadband connection and runs always in the background to keep the connection alive.
Create an executable script named connect.sh
$ sudo nano connect.sh
Include the following commands inside the connect.sh file and save it. This will continuously check the connection and if the connection is down, it will try reconnecting.
#!/bin/sh

while true; do
    # Check whether the GSM device is currently disconnected
    LC_ALL=C nmcli -t -f TYPE,STATE dev | grep -q "^gsm:disconnected$"
    if [ $? -eq 0 ]; then
        # Bring the mobile broadband connection back up
        nmcli con up id YourConName
        sleep 10
    fi
    sleep 5
done
After saving the file, make it executable:
sudo chmod +x connect.sh
After running the following command, you should have a connection to the internet via the USB modem.
sudo ./connect.sh
Now we need a way to make the script execute automatically when the RPi boots up. We can use crontab for this task. Crontab package comes pre-installed and all you need to do is add the task to run at reboot,
sudo crontab -e
The crontab config file will open. At the end of the file, add the following line (adjust the path if you saved connect.sh elsewhere), then save and exit.

@reboot sh /home/pi/connect.sh &
Restart and check whether everything runs automatically as expected after boot.
sudo reboot
4. Connecting Flight Controller to RPi
There is excellent documentation in Ardupilot wiki on how to connect your flight controller to RPi.
In a nutshell, you need to prepare a cable that connects the flight controller's telemetry port to the RPi's serial port. It is not recommended to power your RPi through the serial port; use a separate connection from your power distribution board to power the RPi.
In our setup, we are using a pixhawk mini as the flight controller. Refer to your flight controller's pinout diagram.
4.1 Configure Serial Port of the Flight Controller
Identify the serial port number of the flight controller you used to connect to the RPi.
In our case, we connected the one and only telemetry port of the Pixhawk Mini to the RPi. It is defined as serial1 in Ardupilot. Below is the serial port numbering of common flight controllers.
Connect to your flight controller using mission planner
Go to the full parameter list and change the values corresponding to the serial port number you used to connect to RPi
# to enable MAVLink 2 on the serial port 1 (the default).
SERIAL1_PROTOCOL = 2
# communication baud rate of Serial 1 with the RPi: set at 921600 baud
SERIAL1_BAUD = 921
4.2 Configure Serial Port of Raspberry Pi
Go to the raspberry pi configuration
sudo raspi-config
Go to Interfacing Options → Serial →
when prompted “Would you like a login shell to be accessible over serial?”, select No →
When prompted, “Would you like the serial port hardware to be enabled?”, select Yes.
Reboot the Raspberry Pi when you are done. The Raspberry Pi’s serial port will now be usable on /dev/serial0
4.3 Install mavproxy on Raspberry Pi
Mavproxy is a command-line lightweight ground control station (GCS) that supports the Mavlink protocol used in Ardupilot. We are harnessing its port forwarding functions to use mavproxy as an Air Control Station (ACS).
More about Mavproxy
Run the following commands to install mavproxy and its dependencies. This will take a while to install, depending on your internet speed
$ sudo apt-get install python3-dev python3-opencv python3-wxgtk4.0 python3-pip python3-matplotlib python3-lxml python3-pygame
$ pip3 install PyYAML mavproxy --user
$ echo "export PATH=$PATH:$HOME/.local/bin" >> ~/.bashrc
4.4 Check the Communication between Flight Controller and RPi
Power on both the flight controller and RPi, then check the connection by running the following command
mavproxy.py --master=/dev/serial0 --baudrate 921600 --aircraft testDrone
It will start mavproxy and if the connection is successful, you will see something like below.
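Once the serial link is verified, the same command can also push the telemetry onward over the cellular connection using mavproxy's output options. The following is only a sketch: YOUR_SERVER_IP and the port are placeholders for the bridge server mentioned in the prerequisites.

mavproxy.py --master=/dev/serial0 --baudrate 921600 --out udpout:YOUR_SERVER_IP:14550 --daemon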
5. Setting up the Video Stream
It provides additional assurance if we can monitor the drone by using a video stream. A low-resolution low-bit rate video is sufficient for this task. We can easily do this by integrating an RPi camera.
Connect the RPi camera to the RPi using the ribbon cable. More about setting up the raspberry pi camera.
Go to raspberry pi configuration to enable the RPi Camera.
sudo raspi-config
Go to Interfacing Options → Camera → Enable.
Run the following command in your home folder to check the camera is properly working. After running the command, the camera window will open and it will take a test image and save it in your home folder.
raspistill -v -o test.jpg
For streaming the video, we are using a package named MJPG-Streamer. To install MJPG-Streamer, execute the commands in the following order. More about MJPG-Streamer
cd ~
git clone https://github.com/jacksonliam/mjpg-streamer.git
sudo apt-get install cmake libjpeg8-dev
sudo apt-get install gcc g++
cd mjpg-streamer/mjpg-streamer-experimental
make
sudo make install
To Test the mjpeg streamer, execute the following commands. This will create a local web server that streams the video to the default port 8080
cd /home/pi/mjpg-streamer/mjpg-streamer-experimental/
export LD_LIBRARY_PATH=.
./mjpg_streamer -o "output_http.so -w ./www" -i "input_raspicam.so -fps 15"
Open the web browser of RPi and type in the address 127.0.0.1:8080. This will direct to a demo web page of mjpeg streamer. Go to the stream section and check the video stream.
Tip: If the image is upside down or mirrored, add the -vf and/or -hf flags at the end, which stand for vertical flip and horizontal flip.
Ex: ./mjpg_streamer -o "output_http.so -w ./www" -i "input_raspicam.so -fps 15 -vf -hf"
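If you want the stream to start automatically at boot, the same crontab approach used earlier for the connection script can be reused. This is a minimal sketch, assuming the default pi user and the install path above:

@reboot cd /home/pi/mjpg-streamer/mjpg-streamer-experimental && export LD_LIBRARY_PATH=. && ./mjpg_streamer -o "output_http.so -w ./www" -i "input_raspicam.so -fps 15" &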
Is There a Connection Between Types of Bacteria and the State of Gum Disease? — Savanna Dental
Recent research has found that a “bacterial signature” exists, which enables scientists to be able to mark a shift from healthy teeth and gums to gum disease. In practice, this will be able to help dentists in knowing the stage of gum disease that a patient is at. Therefore, they will be able to provide improved advice and more accurate treatment.
Japanese researchers from the Tokyo Medical and Dental University (TMDU) have found a “bacterial signature” exists. This refers to the researchers finding that they can mark the point that a patient can shift from having healthy teeth and gums to gum disease.
Gum disease continues to affect so many people. As a result, any research that can help prevent, or at least slow the onset of, gum disease, is highly desirable.
In practice, if this research is confirmed as accurate, then dentists and their patients could benefit from increased insight into gum disease and better treatment plans.
What is gum disease?
Gum disease is a very common condition. For example, statistics show that around 90% of the world’s population has a form of gum disease [1].
There are different stages of gum disease. Gingivitis is seen as a rather mild form of gum disease. Periodontal disease is a severe form of gum disease.
Gum disease involves inflammation of the gums. This is caused by a build-up of plaque and bacteria. Plaque develops over a period of time, eventually wearing down tooth enamel. Consequently, tooth decay will develop, hastening gum disease.
An eventual consequence of gum disease is tooth loss, due to the bones around the teeth being too weak. Studies have shown that many bacteria types contribute to bone loss in gum disease [2].
Bacteria are another area of concern for the health of the mouth. Some bacteria can be good for breaking down food. However, the majority are bad and will contribute to gum disease.
Research
Researchers from the TMDU carried out the research, which was published in the journal mSystems [3]. Their study involved 21 patients.
The researchers took plaque samples from three sites within the mouths of the 21 patients. All 21 participants had been diagnosed with periodontal disease [3].
The three sites included one area that was “healthy”, one area that showed signs of gingivitis, and another area that clearly had periodontitis [3].
Researchers then interpreted the results by using a special technique called “metatranscriptomic” analysis [3]. This determined the different types of bacteria at each site.
The results showed that there were indeed clear differences in the level of bacterial composition between the three different sites [3].
Certain types of bacteria that are known to be associated with gum disease were, as predicted, higher in the areas with periodontal disease [3]. For instance, these included Eubacterium nodatum and Filifactor alocis [3].
What this means
By reviewing these three different sites, researchers were able to identify the key differences between them. Moreover, they were also able to review how different bacteria types affected each area [3].
This enabled the researchers to come up with a “bacterial signature” that should help dentists to identify early signs of gum disease, and provide more accurate recommendations for those who already show signs of gum disease. Finally, it could also help to slow the onset of gum disease.
Lead researcher Takashi Nemoto commented on the findings [4]. He suggested that the changes they observed between each site showed the “shift from health to periodontitis is accompanied by changes in both the structure and the complexity of the bacterial network” [4].
Nemoto said that previous studies had “explored differences in bacterial communities in healthy mouths or those affected by gingivitis or periodontitis” [4]. However, the research from the TMDU was innovative as it analysed how such “communities changed during progression from health to periodontitis” [4].
Researchers continue to find ways to tackle gum disease. Because of how common it is, there is no shortage of effort being put in to trying to improve treatment outcomes.
Thinking points…
1) A bad habit that many people have is nail-biting or chewing on a pen lid. Did you know that these habits involve bacteria coming into the mouth? Anything that goes into our mouths will include bacteria. What habits can you cut out?
2) As this study shows, it is important for gum disease to be picked up on early. At a check-up, a dentist can help to identify if you have signs of gum disease and give advice. Consider making an appointment!
3) If you are particularly concerned about gum disease, or know that you have gum disease, are you aware of scaling and root planing? These treatments involve a deep clean of the gums, and is known to be an effective way of improving gum health.
What we offer at Savanna Dental
Savanna Dental is a dental clinic based in Calgary, Alberta, Canada. We provide our patients with a warm welcome, a comfortable experience and advice whenever needed.
We recommend that our patients attend our Calgary-based dental clinic twice a year for a regular dental check-up. When problems are detected, we have many treatments available. For instance, these include cavity fillings and root canals. We also have some cosmetic treatments too!
Importantly, we recommend brushing your teeth at least twice a day and flossing regularly. Moreover, eating healthily and trying to avoid sugary foods and drink is helpful.
In addition, all of our services at our Calgary dental clinic Savanna Dental are in line with the Alberta Dental Fee Guide.
We would love you to visit our Savanna Dental dental clinic in Calgary! You can find out more about us by visiting our website https://savannadentalclinic.ca.
References
Leviton Decora Smart Zigbee 3.0 Plug-in Dimmer review: Not enough features for the price
Zigbee adherents don’t have a dramatic number of products with which to outfit their smart home, so it’s always good news to see some innovation in the category. With this expansion of its Decora Smart product line, Leviton now has a collection of Zigbee 3.0-certified smart lighting products offering a variety of security and performance updates.
Leviton’s model DG3HL plug-in dimmer looks identical to the company’s other plug-in dimmers, including its Wi-Fi and Z-Wave versions. The compact device features a single two-prong outlet, positioned downward—so you’ll need to install it in the lower outlet to avoid blocking the second outlet in a standard receptacle.
Leviton The downward-facing socket on Leviton’s new Zigbee 3.0-certified smart plug is designed to keep cords tidy, but it makes it impossible to use in the upper socket.
A small button on the face of the device is illuminated in green when the switch is turned off (serving as a nightlight, since it can’t be disabled), and the button can be used to manually turn the switch on or off. The DG3HL supports a maximum load of 300 watts, and again, it’s designed with dimmable devices in mind (e.g., a lamp). If you have non-dimming needs, consider instead the Leviton Decora Smart DG15A, which offers a grounded outlet that can support small appliances, such as a fan with up to a 0.5-horsepower motor.
Christopher Null / IDG The dimmer works well with Alexa, but Samsung SmartThings support is still pending.
I’ll start with the plusses, though there aren’t many. I attempted to pair the DG3HL with an Amazon Echo Plus and it quickly discovered the device and added it to my Alexa smart home network. (The plug also supports the Xfinity Home Touch Screen, but SmartThings certification is still in process.) The plug works well enough with Alexa, but at a range of about 50 feet from my Echo Plus Zigbee hub, performance was erratic, with the plug frequently showing a “Device is unresponsive” error. When I moved it closer, the errors went away. (Zigbee devices form a mesh network, so deploying any Zigbee device—including another smart plug—at closer range to the hub should help with longer-range deployments, because device acts as a repeater.)
And now for the bad news. First, the range of dimming is decidedly weak. The difference between 100-percent brightness and 20-percent brightness on a standard LED bulb was marginal at best, and I really had to watch closely to see much of a difference in the lighting quality at all. This is compounded by the fact that the hardware button on the plug does not support dimming at all; you can only dim and brighten via voice or the Alexa app. Typically, one would hold the hardware button down to dim or brighten the light, but with the DG3HL, this doesn’t work. The button is strictly an on/off switch.
Lastly there’s the price. At $40, this device is priced the same as Jasco’s Enbrighten Zigbee 3.0 smart dimmer plug, which—while larger in size—can control two lamps, and its hardware button will dim those lamps.
Unless you’re taken with the relatively small size of Leviton’s product—or Xfinity Home certification is uber important to you—Leviton just doesn’t give buyers a lot of reasons to choose its DG3HL over the competition.
Monolith to Microservices with Kafka Streaming Data Connector
If you’re a team developing and maintaining a software monolith, there’s a good chance you’re considering or planning a move to an architecture based around microservices. I won’t go into the various trade-offs involved in that decision in this article; rather, I will focus on one specific technique that might help you make the transition: Change Data Capture (CDC).
It’s relatively straightforward to build a system around microservices if you’re starting from scratch. However, it can be difficult to plan and manage a transition from an existing monolith. The kinds of changes involved can be substantial, and it’s hard to keep a live system running smoothly while you fundamentally change how it works.
It’s a big shift from an ACID-compliant database to a distributed architecture based on eventual consistency, and keeping data consistent during a long migration, when different information is held in different parts of your system can be particularly challenging.
Change Data Capture (CDC) enables you to make minimal changes (if any at all) to your production system at first. Rather, you set up a system to observe your database, and create events whenever key data is changed, with your “new architecture” systems responding to these events.
Change Data Capture
Let’s look at an example. Say you want to add an onboarding email flow to your application, so that new users receive helpful emails over the course of several days after they create an account. Using CDC, you can create this new flow as a microservice. Whenever a new user record is added to your main users table, a new event is created. Then, your new microservice would consume that event and manage the onboarding process, without requiring any further changes to your main legacy application. Another example would be to send users an exit survey after they deleted their account, to capture data on why your service no longer meets their requirements.
I’m going to walk through one technique for achieving this, which requires literally no changes whatsoever to the “main” application: Heroku’s recently launched Streaming Data Connectors Beta.
The way this works is that you add a managed Kafka and a “data connector” to your Heroku application, defining the tables and columns where changes should generate events. Then, you can set up your new microservices to consume events from Kafka topics.
In the rest of this article, I’m going to walk you through how to set this up. We’ll be using a trivial database-backed web application to represent our monolith, and a separate application subscribed to a Kafka topic, which will consume the events we generate by making changes to our database.
The Streaming Data Connectors Beta is only available to Heroku Enterprise users at the moment, because it only works in a Heroku Private Space (which is an enterprise feature).
Let’s look at some code.
I’m working on a Mac laptop, but these commands should work fine in any posix-compliant terminal environment.
Cleanup
Some of the commands we’ll be using create resources in your Heroku account which incur charges. Please don’t forget to remove your applications when you’re finished, or you could end up being billed for these services.
You can do this by running make clean in both the sinatra-postgres-demo directory, and the kafka-consumer directory. You can also delete the applications using the Heroku web interface.
To verify that everything has been successfully removed, you can run:
heroku apps --space ${HEROKU_PRIVATE_SPACE}
Please use the name of your Heroku private space in the command above.
The Users Application
We’re going to use a trivial web application that manages “user” records in a Postgres database. I’ve written this one in Sinatra, which is a ruby library for lightweight web applications.
The application has a few HTTP endpoints:
get "/" do redirect "/users" end get "/users" do
erb :users, locals: { users: get_users }
end post "/users" do
add_user(params) redirect "/users"
end post "/delete_user" do
delete_user(params["id"]) redirect "/users"
end
An HTTP GET to “/users” renders a list of the users in the database, a POST to “/users” adds a new user, and a POST to “/delete_user” will delete a user.
This is the implementation of the database code:
def connection
  PG.connect(ENV.fetch("DATABASE_URL"))
end

def get_users
  connection.exec("SELECT * FROM users")
end

def add_user(u)
  addsql = %[
    INSERT INTO users (first_name, last_name, password, email)
    VALUES ($1, $2, $3, $4)
  ]
  connection.exec_params(addsql, [u["first_name"], u["last_name"], u["password"], u["email"]])
end

def delete_user(id)
  connection.exec_params("DELETE FROM users WHERE id=$1", [id])
end
The full application is available here. Let’s get it running.
I’m using a private space called devspotlight-private . Please substitute the name of your private space in the code that follows:
git clone https://github.com/digitalronin/sinatra-postgres-demo
cd sinatra-postgres-demo
export HEROKU_PRIVATE_SPACE=devspotlight-private
heroku apps:create --space ${HEROKU_PRIVATE_SPACE}
This will create an app with a random name. To keep the code samples consistent, I’m going to read the name and store it in an environment variable APP .
export APP=$(heroku apps:info | head -1 | sed 's/=== //')
echo ${APP}
We need a database for our app, and in order to use the Streaming Data Connectors Beta you need to use a specific version of the Heroku Postgres add-on:
heroku addons:create heroku-postgresql:private-7 --as DATABASE --app ${APP}
Please note that running this command will incur charges on your Heroku account.
heroku addons:wait
It can take a few minutes to create the database, so the wait command above will let you know when you can move on to the next step:
git push heroku master
heroku run make setup-db
This deploys our application, and sets up the database with the users table and a few sample records.
Once this process has completed, you should be able to run heroku open and see a web page that looks like this:
Now we have an example web application, backed by a Postgres database, where we can add and remove records from the users table. This represents our monolith application. Now let's add the Streaming Data Connectors Beta to see how we could use CDC to add microservices without changing our application.
Adding Kafka
We need Kafka to act as the messaging backbone of our connected applications, so we need the Kafka add-on. Again, you need to use a specific version:
heroku addons:create heroku-kafka:private-extended-2 --as KAFKA --app ${APP}
Please note that running this command will incur charges on your Heroku account.
heroku kafka:wait
Again, this can take some time.
Adding the Database Connector
Once we have our Kafka add-on, we can set up the connector to generate Kafka events whenever a table in our Postgres database changes.
We need to install a plugin to be able to add the database connector:
heroku plugins:install @heroku-cli/plugin-data-connectors
Once you’ve done that, the syntax to create our database connector looks like this:
heroku data:connectors:create \
  --source [postgres identifier] \
  --store [kafka identifier] \
  --table [table name]...
To get the Postgres identifier, run this command:
heroku addons:info heroku-postgresql
You should see output that looks like this (your values will be different):
=== postgresql-tapered-49814
Attachments:  lit-bastion-67140::DATABASE
Installed at: Sun Jul 19 2020 10:26:20 GMT+0100 (British Summer Time)
Owning app:   lit-bastion-67140
Plan:         heroku-postgresql:private-7
Price:        $7000/month
State:        created
The identifier we need is on the first line. In this case, postgresql-tapered-49814
The process for getting the Kafka identifier is similar, with the identifier appearing on the first line of output:
heroku addons:info heroku-kafka
Now that we have identifiers for both the Postgres database and the Kafka instance, we can create the database connector. I’m using the identifiers from my application, so you’ll need to substitute the appropriate values from yours when you run this command:
heroku data:connectors:create \
  --source postgresql-tapered-49814 \
  --store kafka-octagonal-83137 \
  --table public.users \
  --exclude public.users.password
I’ve specified the table as public.users . I used the default public schema of my Postgres database when I created my users table. If you used a different schema, you'll need to specify that instead.
Notice also that I’ve used --exclude public.users.password. This means there won't be any information about the value of the password field included in the Kafka events which are generated. This is a very useful feature to ensure you don't accidentally send sensitive user information from your main application to a microservice which doesn't need it.
The database connector can take a while to create, and the output of the create command will tell you the command you can use to wait for your database connector to be provisioned.
heroku data:connectors:wait [connector name]
Consuming the Kafka events
Now we have our original database-backed application, and we’ve added the Streaming Data Connectors Beta, so we should see an event on the Kafka service whenever we make a change to our users table.
The next step is to set up another application to consume these events. In a real-world scenario, you would want to do something useful with these events. However, for this article, all we’re going to do is display the events in a very simple web interface.
Creating the Web Application
I’ve written a very simple “kafka-consumer” application, also using Ruby and Sinatra, which you can see here. In creating this, I ripped off a bunch of code from (ahem, was inspired by) this heroku-kafka-demo-ruby application.
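The consuming side boils down to a loop over the topic’s messages. The following is only a sketch, not the actual app code: it assumes the ruby-kafka gem, and it omits the SSL configuration that the KAFKA_TRUSTED_CERT / KAFKA_CLIENT_CERT / KAFKA_CLIENT_CERT_KEY config vars exist to support.

require "kafka"

# Minimal sketch: connect to the brokers listed in KAFKA_URL and print
# each change event from the topic we will configure further below.
kafka = Kafka.new(ENV.fetch("KAFKA_URL").split(","))
consumer = kafka.consumer(group_id: "cdc-demo")
consumer.subscribe(ENV.fetch("KAFKA_TOPIC"))

consumer.each_message do |message|
  puts message.value # the JSON change event produced by the connector
end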
Let’s get this deployed, and connect it to our Kafka instance. Fire up a new terminal session and run these commands.
export HEROKU_PRIVATE_SPACE=devspotlight-private
Substitute the name of your own Heroku private space.
git clone https://github.com/digitalronin/kafka-consumer.git
cd kafka-consumer
heroku apps:create --space ${HEROKU_PRIVATE_SPACE}
export APP=$(heroku apps:info | head -1 | sed 's/=== //')
Before we deploy our application, we need to do some setup to enable this application to read from the Kafka topic that was created when we set up the database connector.
To give your new application access to the Kafka instance, we need to run a command like this:
heroku addons:attach [app with kafka]::KAFKA -a [app that wants to access kafka]
The [app with kafka] is the name of your instance of the sinatra-postgres-demo application, which you'll see if you run heroku apps:info in your other terminal session.
The [app that wants to access kafka] is the instance of kafka-consumer , the application we're creating now.
We used the KAFKA label when we originally created the Kafka instance.
In my case, the command I need looks like this (substitute the values for your applications):
heroku addons:attach lit-bastion-67140::KAFKA -a boiling-sierra-18761
Be careful to put two colons before KAFKA , or you'll get Couldn't find that add-on.
The output should look something like this:
Attaching kafka-octagonal-83137 to ⬢ boiling-sierra-18761... done
Setting KAFKA config vars and restarting ⬢ boiling-sierra-18761... done, v3
If you run heroku config you'll see that our new application now has several KAFKA* environment variables set, which will enable us to connect to the Kafka instance.
There is still one more thing we need though: We need to know the Kafka topic on which our events are going to be published. The topic was automatically created when we added the database connector. To find out what it is, go back to your sinatra-postgres-demo directory and run this command:
heroku kafka:topics
The output should look something like this:
=== Kafka Topics on KAFKA_URL
Name                                                   Messages  Traffic
─────────────────────────────────────────────────────  ────────  ────────────
connect-configs-311cea8b-0d94-4b02-baca-026dc3e345e0   0/sec     0 bytes/sec
connect-offsets-311cea8b-0d94-4b02-baca-026dc3e345e0   0/sec     7 bytes/sec
connect-status-311cea8b-0d94-4b02-baca-026dc3e345e0    0/sec     0 bytes/sec
heartbeat.witty_connector_44833                        0/sec     12 bytes/sec
witty_connector_44833.public.users                     0/sec     0 bytes/sec
We want the topic ending with public.users . In my case, that's witty_connector_44833.public.users . If you specified multiple tables when you created the data connector, you'll see a topic for each of them.
Our demo kafka-consumer application just uses a single topic, which it gets from the KAFKA_TOPIC environment variable. So, we can set that now.
Back in your kafka-consumer terminal session, run this command (substituting your own topic name):
heroku config:set KAFKA_TOPIC=witty_connector_44833.public.users
Now we can deploy our application:
git push heroku master
As with the sinatra-postgres-demo application, you may have to wait several minutes for the DNS changes to complete.
CDC in Action
Now, we have all the pieces in place:
User List — our database-backed pretend monolith, sinatra-postgres-demo
The Streaming Data Connectors Beta, which publishes events to a Kafka topic whenever our users table changes
Message List — the kafka-consumer application that lets us see the Kafka events
In your browser, use the form to add a new user. A few seconds later, you should see a JSON message appear in the Message List application.
Message structure
The JSON you can see is the “value” of the Kafka event. There is other metadata in the event which you can see by tweaking the kafka-consumer application, but for now let’s just look at the JSON data.
You can use a tool such as jq to inspect the JSON, or paste it into an online JSON tool like this one.
Collapsed down to just the top level, you can see that the message has a “schema” and a “payload”:
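As a rough, illustrative sketch of the shape (the field values here are invented, and any field names beyond "schema", "payload", "before", and "after" are assumptions based on the Kafka Connect tooling behind the connector):

{
  "schema": { "...": "type metadata for the payload fields, trimmed" },
  "payload": {
    "before": null,
    "after": {
      "id": 5,
      "first_name": "Ada",
      "last_name": "Lovelace",
      "email": "ada@example.com"
    },
    "op": "c"
  }
}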
There is a lot of metadata in the “schema” part, but most of the time you’ll probably be more interested in the “payload” which has a “before” and “after” section. This should show you the values of the database record before and after the reported change(s). There are some important caveats about “before” in the best practices document about the Streaming Data Connectors Beta.
Notice how the “after” section does not include the “password” field of the user record. This is because we excluded it when we created the data connector.
Conclusion
Let’s recap what we’ve covered.
We started with a database-backed web application, managing a users table.
We added Kafka, and the Streaming Data Connectors Beta, to publish changes to the users table as Kafka events.
We created a separate application and connected it to the Kafka topic, and saw messages generated by changes to the database.
It’s worth emphasizing that we didn’t have to make any changes at all to our “monolith” application code to make this happen.
Cleanup
Don’t forget to remove your applications when you’re finished, or you could end up being billed for these services.
You can do this by running make clean in both the sinatra-postgres-demo directory, and the kafka-consumer directory. You can also delete the applications using the Heroku web interface.
To verify that everything has been successfully removed, you can run:

heroku apps --space ${HEROKU_PRIVATE_SPACE}
The Need of Philosophy to refine Scientific thinking and understanding
Scientific Inquiry is an art of interrogatives, reasoning, and logic
While doing research, it was advised to follow a methodological approach that another author had used for their investigation. There are laboratory standards and protocols one has to follow in doing science. Working for weeks in the Philippine Textile Research Institute with my colleagues, investigating the plausibility of Green Chemistry for developing nanotechnology, we spent a great amount of time, money, and effort on doing something wrong. Apparently, some research upon which we laid the foundations of our methods did not agree with the actual result we had developed in the lab. My team and I, as young and curious researchers, mulled over how and why we were wrong; like someone who lost in a long and tedious engagement, we demanded justification.
We were on a deadline, it’s too expensive, forget it.
We discussed the matter with some experts in that field and gathered their insights; it amounted to something like: “I’m not sure why exactly, but perhaps step X is done wrong (or poorly); start with another approach”. To think of it further, one is faced with so many variables at play. To dare to seek a satisfactory justification means to unravel the intricacies of those chemical interactions. In other words, it would have taken a lot of effort and more resources to investigate such matters on a level that would scientifically satisfy a simple question; we were on a deadline, it’s too expensive, forget it. Reformulate a new conceptual framework, recalibrate, and revise. It was a decision made considering the constraints we had, as high school students nerding out in national research laboratories. It was fun though. Thanks to our research adviser, our journey to science began — we were exposed to the culture of researchers and interesting people working on something at such a young age. But our question remained unanswered. After all, what does it mean to disagree with mere experiments? Other than mistakes, we are also faced with a deep question, which is not very likely to be the case: what if the methodology was wrong? On what grounds should we settle the case of our investigation?
What is the justification of a scientific method?
A rational scientist cannot deny the possibility of falsifying a methodology, yet the overwhelming consensus of the scientific community seems to agree with such a method. What forms of justification could one use to attempt to falsify a method? Certainly, experimental results alone cannot suffice to establish counterarguments. By the same token, one could also ask: what is the justification of a scientific method?
First Decentralized Elections in Africa
Election illegitimacy has been a huge issue across many countries. Photo by Roya Ann Miller
On Sunday, 15th April, 2018, African Blockchain Initiative made a significant stride in decentralizing African voting systems. For the first time in the continent, elections were run on the blockchain during the African Leadership Academy student government elections. This marked the beginning of what should be a wave of blockchain adoption in the voting/elections industry in the continent.
Public hashes used for elections. You could personally track your vote through your personal hash
Why Blockchain technology in elections?
The biggest promise of blockchain technology is trust, transparency and accountability. Unfortunately, Africa suffers most from deficiency of these three pillars of Blockchain. Most African countries have ended up in shambles and nullification of results after general elections. This is because of a lack of trust between the public and the electoral bodies, which have more often than not been blamed for liaising with certain political parties to fix elections. While vote counting accounts for a huge percentage of elections nullification, online transmission of votes from polling stations to tallying centres has had a significant share of election fraudulence accusations.
One huge example is Kenya during the 2017 general elections: the Supreme Court had to nullify PRESIDENTIAL elections after accusations of hacking of the electoral commission servers (IEBC) during transmission of polling station results. This not only brought the country to a standstill for another two months, but also caused tensions between opposition supporters and government supporters which at times led to property destruction or loss of lives. In fact, till date, Kenyans are still pushing for the resignation of IEBC officials as there were accusations that some were involved in election fixing.
How does Blockchain solve election fraudulence?
In the decentralized ledger system (blockchain), there is no single point of failure. Data saved on the blockchain is stored in multiple ledgers meaning, if one was to hack the system, one would have to hack through the thousands of ledgers in the decentralized ecosystem — something that is virtually impossible. On top of that, blockchain’s cryptographic security further encrypts data such that only computers can run validations. This takes control off human beings who are more vulnerable to corruption and leaves validation and self-execution to computers which are less corruptible and run purely by the set code/instructions. It is this dual-layer of security, decentralization and cryptography that makes blockchain the perfect system for enhanced trust, transparency and accountability. Therefore, if the Kenyan elections (as cited above) were run on the blockchain, then hacking won’t have been an option for any party and computers rather than an untrustworthy electoral commission would self-execute the entire transmission and tallying processes.
Now, onto African Leadership Academy (ALA) Student Government elections
African Leadership Academy was the perfect launching point for African Blockchain Initiative’s dream of inspiring transparent voting systems. Since elections are done online in the school, African Blockchain Initiative focussed on changing the central servers into decentralized ledgers and even gave students the option to track their votes, which were encrypted into unique hashes (see image above of the voter tracking page and screen that was available to all). The decentralized system ran on an Ethereum smart contract and even eliminated the administration rights that were given to the Electoral Council of the school. Initially, this council could know who voted for who, but through the blockchain-based smart contract, all this data is encrypted and only results were made available after the elections. Indeed, we are talking about extreme transparency, equal rights throughout the voting ecosystem, and an unhackable election run through self-executing lines of code.
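To make the “personal hash” idea concrete, here is an illustrative sketch of how a per-voter tracking hash can be derived. This is not ALA’s actual implementation; the function name and the salting scheme are assumptions.

import hashlib
import secrets

# Illustrative only. A random salt keeps identical votes from producing
# identical hashes; the voter keeps the salt, so only they can recognise
# their own entry in the published list of hashes.
def tracking_hash(ballot_id: str, choice: str) -> tuple[str, str]:
    salt = secrets.token_hex(16)
    digest = hashlib.sha256(f"{salt}:{ballot_id}:{choice}".encode()).hexdigest()
    return digest, salt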
How will this be adopted in African national elections?
Adoption in national elections will be quite similar, only that voters might vote from the convenience of their phones or laptops, or through computer systems in polling stations. Alternatively, paper ballots could be used, but after vote counting and approval by all observers, the tallies could be added to decentralized ledgers and a smart contract could self-execute the election results, leaving no room for fraudulence during transmission of numbers from polling stations to tallying centres. African Blockchain Initiative hopes to inspire adoption of this decentralization particularly in countries still facing lack of electoral commission trust during elections — and there’s definitely a huge lot of them in the continent.
Wait, didn’t Sierra Leone run decentralized elections a month or two ago?
If Sierra Leone had run their presidential elections on the blockchain, it would have been the first decentralized presidential election worldwide. Unfortunately, despite the number of sources citing blockchain involvement in their elections, the Sierra Leone government issued a statement denying the use of blockchain in the elections. “The NEC [National Electoral Commission] has not used and is not using blockchain technology in any part of the electoral process,” said Mohamed Conteh, the head of the National Electoral Commission (John Biggs — TechCrunch).
NEC in Sierra Leone denies using blockchain in their national elections
This makes ALA elections the first of its kind in the African continent and definitely a huge stride to blockchain adoption in election systems in the continent. If a school in Johannesburg can do it, then a country with a million times the resources should be doing it too.
Learn more about Blockchain technology through the ABI blog: Stellar, Most Revolutionary Crypto 2018; Ripple, Revolutionizing Corporate Payments; What is Blockchain; Tron, Shaping Entertainment and Global Interactions
African Blockchain Initiative is now offering blockchain corporate workshops for corporations; read these articles to understand how ABI is shaping blockchain adoption in Africa through corporate workshops: Getting African Corporates on the Blockchain. Leave a comment if you’d love ABI to run a workshop at your company; we will contact you.
Images are courtesy of TechCrunch, African Blockchain Initiative and Cryptotelegrafi
Goodbye from PS I Love You | Photo by Maksym Tymchyk on Unsplash
A few weeks ago, we were informed by Medium that they would be pulling their funding for PS I Love You. This was not all too much of a surprise, given the larger changes that have been happening around editorial at Medium. Still, it sucks. And after some difficult conversations among our team about how to proceed, we’ve decided we have no real choice but to shut down the pub. Our last day publishing new stories will be June 30.
A few notes before we go. We loved running this publication; we loved the talented and inspiring writers we worked with; we loved our readers. We’re going to miss all y’all deeply.
At the same time, we’re also immensely proud of what PS I Love You has become, the niche it carved out at Medium, and the stories we published. We’ll be re-sharing a bunch of those stories over the next month. We invite you to read them with us over some whiskey and perhaps through a few happy tears.
Thank you, thank you, thank you, to everyone who contributed to the pub over the years. From day one our goal was to support writers, who’ve always been most responsible for making this publication what it is. Thank you, too, to everyone who’s ever read a PS I Love You story. We hope they brightened your days, and perhaps compelled you to go hug your mom.
The editorial team is still figuring out our next steps, but we’ll continue running new pieces and working with writers through the end of our contract; we still have a few more weeks on it. We also will not be deleting the pub, just closing it to new subs; the stories we published will still live here, unless writers decide to pull them and put them elsewhere, which they are of course totally able to do. We are likewise still figuring out what the future holds long-term, for example, whether PS will continue on under a new editorial team one day, or not.
Feel free to reach out to us at any time if you have questions or just want to shoot us a line. We’d love to hear from you.
Thank you again. For everything. We had a great run.
Much, much love,
Dan, Kay, Tre, and Scott | https://psiloveyou.xyz/goodbye-from-ps-i-love-you-5f7302e419b8 | ['P.S. I Love You'] | 2021-05-21 15:22:44.978000+00:00 | ['Relationships', 'Poetry', 'Fiction', 'Ps I Love You Newsletter', 'Writing'] |
A Dark & Insidious Taste | When I walked into the room
I knew that I would never know again
the peace and understanding
I’ve felt in times past
For I discovered
a dark presence, lurking
in the shadows and the corners
of the dank and musty room
That’s when it hit me
— the smell of rancor and filth.
Anguish filled my being
as it permeated my nostrils,
my clothes, my very soul
And I knew I would never be the same
Blankets and papers,
bits and bobs of string and knick-knacks
lay strewn around the room
in a random and haphazard fashion
Newspapers and books, old bottles of beer,
an upright grand piano stood stoically against the wall
keys, dusty with neglect, unplayed and untouched,
perhaps for years
A four-post bed was against the opposite wall
with gauze and linens draped down
from the bars connecting the posts
giving the light in the room an eerie, ethereal look
An overstuffed chair sat beside the bed
holding odds and ends and something
I couldn’t quite make out
As I approached the chair
the smell crescendoed like a symphony
Pungent, putrid smell, as if from the bowels of hell,
rose up as I walked towards it.
There, on an intricately painted plate
sat the reason for my disdain
The source of the smell and anguish
that filled the room and made me want to vomit
A green, football-sized pod
lay sliced open on its side
gutted like a fish, stench oozing from its shell
shimmering like heat on the highway’s horizon
Crouched and grunting, shirtless in the corner,
a decrepit man devoured a piece of the pod
as if it was his last meal, consumed by its consumption,
the fruit sliding as he slurped it,
while I looked in disgust at his mouthful of
the bright orange fruit
— Durian
Latest news on next-gen tech trends in Korea (June 11, 2021)
Ministry of Science & IT to invest over KRW 113 billion in blockchain technology
The Ministry of Science and ICT (MSIT), one of whose missions is to develop and promote blockchain technology and related applications, announced on June 3rd that it has completed the selection of operators for the ‘blockchain technology development project for the data economy’ and initiated the development of related technologies, investing a total of 113.3 billion won over the next five years.
Among 27 consortiums (formed of universities, research institutes, companies, etc.) that have applied for the project, 9 consortiums (representing a total of 53 companies) have been selected and will receive support from the MSIT on R&D over the next five years. It is expected that the technologies resulting from this project will be used in the Korean industry to enhance technological competitiveness.
Through this initiative, the Ministry of Science and ICT plans to support 9 technologies divided in four main areas:
(1) Fully decentralized high-performance consensus technology
(2) Smart contract security technology
(3) Personal information processing and identity management blockchain
(4) Data sovereignty guaranteed blockchain and data management technology.
(1) To solve the scalability problem and allow the blockchain to scale in a linear fashion as the number of participants in the network grows, four tasks will be developed around technology that can secure stable service performance even as the number of participants increases, while keeping decentralization effective.
(2) To prevent vulnerabilities linked to the execution of smart contracts that can lead to critical damage to end-users, the MSIT will support technologies that automatically detect and defend against security vulnerabilities in advance while simulating them in virtual environments.
(3) To protect personal information on the blockchain, the MSIT will promote two tasks: one linked to DID management technology, and the other emphasizing privacy protection in the process of data utilization, based on blockchain data encryption.
(4) As for the data sovereignty, a Korean government-funded research institution will develop a technology that uses a blockchain platform to manage large amounts of data and analyze them to strengthen the utilization of blockchain services.
The Ministry of Science and ICT will strive to improve the competitiveness of blockchain technology in Korea with the launch of this ‘blockchain technology development project for the data economy’.
Our view: With the launch of this project, the Ministry of Science and ICT wants to lay the groundwork for a secure blockchain ecosystem in Korea. Looking at the projects (or tasks) that the government is targeting, it seems that the government wants to enhance (and create?) blockchain technology that can ensure not only a high level of scalability, so that blockchain applications can run smoothly even with millions of users, but also a high level of security, so that smart-contract-enabled blockchains can be leveraged to improve everyday life. In this regard, the CBDC project could be a reason for the government to enhance security, as the adoption of such services will require high levels of scalability and security. It is also worth noting the importance the MSIT is giving to data protection and DID research, which reflects the willingness of Korea to be at the forefront of technologies providing a high level of protection for users’ data.
Source: https://www.edaily.co.kr/news/read?newsId=02358326629078112
DID-based technology to be actively supported by the Korean Government
The 2nd Vice Minister of Ministry of Science and ICT (MSIT), Kyung Shik Cho, met with major blockchain companies on June 3rd and reiterated the willingness of the Korean government to look for representative blockchain applications that the public can experience. This declaration reflects the policy of the Korean government that is planning to invest, through the Ministry of Science and ICT, over 113 billion won in security and authentication technology using blockchain (including decentralized identity-related technology) within the next five years, as mentioned in the previous topic.
Vice Minister Cho recently visited the headquarters of RaonSecure in Seoul where he discussed blockchain technology development and industry nurturing plans with leading companies in the blockchain and DID (Decentralized Identifier) space such as RaonSecure.
RaonSecure is a pioneer in terms of DID deployment, as the company is providing its DID technology OmniOne to a variety of government pilot projects ranging from DID-based simple authentication to blockchain-based digital credentials while collaborating with leading institutions. Thus, the leading IT security and authentication company in Korea is collaborating with the Ministry of Security and Public Administration (MOSPA) on the development of a nationwide mobile drivers’ license. It also collaborates with the Military Manpower Administration on a blockchain- and DID-based digital wallet project promoted by the MSIT and the Korea Internet & Security Agency (KISA) under the ‘2021 blockchain pilot project’ program. The Mobile ID Card service for public officials introduced earlier this year was also built by RaonSecure. Once all the projects mentioned above are operational, related services using the underlying DID technology developed by RaonSecure could be used by over 30 million drivers and people who have served or will serve in the military in Korea.
More recently, RaonSecure has partnered with SK Telecom to participate in the vaccine passport development project hosted by KISA. While discussions on the Korea Disease Control and Prevention Agency (KDCA)’s vaccine passport project and interoperability with it are still ongoing, it is highly likely that RaonSecure technology will be deployed to vaccine passports that will be used by the public in the future.
The underlying blockchain and DID-based authentication platform used for such public services is an authentication technology that binds Personally Identifiable Information (PII) to a public key that is stored in a DID Document (a set of data describing the DID subject). Here is where DIDs come in, as they resolve to DID Documents containing public keys and service endpoints. In this scheme, only DIDs and DID Documents are stored on-chain, after being encrypted; the PII never is. In plain English, this new authentication technology enables users to authenticate themselves without losing control over their PII, which stays on their devices. The authentication platform also enhances security; in particular, it protects against hacking threats, given that the PII is stored only on the secure element of devices controlled solely by end-users, who own and control the private key (unlocked, for example, with a password) associated with it.
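To picture what such a DID Document holds, here is a minimal example in the spirit of the W3C DID data model; the article does not disclose OmniOne’s exact schema, so every field and value below is purely illustrative:

# Illustrative DID Document: note what is absent -- no name, no birthdate,
# no PII of any kind. Only an identifier and key material are anchored on-chain.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",
    "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#key-1",
        "type": "Ed25519VerificationKey2018",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyBase58": "H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV",
    }],
    "authentication": ["did:example:123456789abcdefghi#key-1"],
}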
During the meeting with the Ministry of Science and ICT, RaonSecure asked the vice minister to encourage the government to more actively support the discovery of businesses that use blockchain authentication for the development of the blockchain industry. Although the scale of the current pilot projects led by the government is not small, a broader adoption of the technology can be further fostered by a constant increase of the number of deployments.
“Continued support is needed so that the pilot project promoted by the government can be activated as an actual service that the public can experience, and not just remain a PoC,” claimed Jung A Lee, the COO of RaonSecure. She added that she “hope[s] that the government will provide benefits when companies apply for new authentication technology.”
The Vice Minister Cho assured that the Korean government will look for representative applications that can be applied to real life while proactively adapt regulations to help the industry flourish.
Our view: The visit of Vice Minister Cho to the headquarters of RaonSecure is a great sign of the government’s involvement in supporting innovative Korean SMEs. This meeting shed some light on the ambition of the government, which will keep encouraging companies to enhance the security and convenience of authentication technologies. It also provides some hints about the stance of the government vis-à-vis the blockchain industry, which has recently been criticized due to the high volatility impacting digital-asset-based blockchain projects. Even in such a context, regulations should stay favorable to the adoption of blockchain. In this regard, RaonSecure will continue providing innovative DID solutions and services closely connected to our daily lives while collaborating with public institutions.
Source: https://news.mt.co.kr/mtview.php?no=2021060314552510750
[Bonus] Overview of NFT market in Korea
As global auction companies such as Sotheby’s and Christie’s have entered the Non-Fungible Token (NFT) market, with highly mediatized sales such as those of CryptoPunk NFTs and of Beeple’s collage, Everydays: The First 5000 Days, NFT has become a familiar word to the public.
Source: Beeple’s collage, Everydays: The First 5000 Days — the most expensive NFT up to now
According to NonFungible.com, an NFT market analysis company, NFT transactions worth $2 billion were made in the first quarter of this year. This is a 22-fold increase from $93 million in the previous quarter.
NFTs are blockchain-based tokens with intrinsic and unique value that are not mutually interchangeable. While a Bitcoin is a means of payment that is interchangeable with another bitcoin, like a dollar, NFTs are mainly used to prove intrinsic value and ownership. In particular, NFTs streamline proof of ownership and transfers of ownership, which can be permanently recorded on the blockchain. Based on standards such as ERC-721 on Ethereum, NFTs enable the digitization of any kind of asset, such as game items, limited-edition souvenirs, autographs of famous people, physical or digital paintings, sports cards, etc. They can also be applied to large assets such as buildings.
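As a toy sketch of what non-fungibility means at the data level (our illustration, not any particular contract’s code): each token id maps to exactly one owner, and a transfer is simply an owner change that the chain records permanently.

class TinyNFTLedger:
    """Toy ERC-721-style ledger: one unique owner per token id."""

    def __init__(self):
        self.owner_of = {}  # token_id -> owner address

    def mint(self, token_id, owner):
        assert token_id not in self.owner_of, "token ids must be unique"
        self.owner_of[token_id] = owner

    def transfer(self, token_id, sender, recipient):
        assert self.owner_of[token_id] == sender, "only the owner can transfer"
        self.owner_of[token_id] = recipient  # this ownership change is what gets recorded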
Given that NFTs can be applied to a variety of use cases, Korean blockchain companies are also entering the NFT market by developing services targeting various industries such as games, sports, real estate and art. In particular, companies considered the first generation of blockchain companies in Korea are starting to develop NFT-related services.
Among the NFT-related applications developed in Korea, there is a collectible project based on NFT card storage, released last year, that can be bound to a digital asset wallet, enabling users of the wallet to manage NFTs issued by services running on an Ethereum-based blockchain network. The company that developed this service also launched a new service that allows anyone to easily mint (create) NFTs on the blockchain, helping the Korean NFT ecosystem grow. Based on the same blockchain, an NFT project related to sports has also been released. The latter digitizes collectibles of active football players (such as autographs or items used by the players) that can be purchased in a global NFT marketplace.
In addition, the growth of the Korean NFT market will be supported by another blockchain project’s launch, within this month, of a platform for NFT issuance and peer-to-peer NFT trading. NFT trading is also starting to be supported by some Korean virtual asset exchanges, which are backing the industry’s growth by enabling the purchase of NFTs on their platforms.
Various initiatives are also underway in Korea to apply NFTs to entertainment and art. In this regard, a Korean company is developing a project enabling artists to communicate with their fans through NFTs. Another is building a project to develop NFT products for artists, with the aim of allowing fans to form a new kind of fellowship by owning NFTs related to their favorite artists online.
Our view: While NFT is not a new technology, the first quarter of this year put a spotlight on it through its applications in digital art. That said, NFTs are not only about the digitization of art or paintings but about the digitization of any kind of asset with intrinsic and unique value. In other words, the scope of NFTs is huge, and they can be applied to a variety of industries. Since the Korean government is promoting applications of blockchain while supporting the discovery of new business models related to the technology, it’s odds-on that related public institutions will support the development of an NFT-based use case in the future.
Source: https://www.fnnews.com/news/202106071356158309 | https://medium.com/raonsecure/latest-news-on-next-gen-tech-trends-in-korea-june-11-2021-5f1450e36a53 | [] | 2021-06-11 06:53:08.500000+00:00 | ['Nft', 'Blockchain Application', 'Omnione', 'Blockchain', 'Techinkorea'] |
With Strings Attached — The case for welfare with expectations | What are the right strings and how can we workably attach them?
Tax is not love[7], but we can make the end result of tax more loving if we make governments love like real people do. Real people love unconditionally but do not enable. Real people love reciprocally and with mutual respect. Real people love by making choices that maximise mutual happiness, not individual happiness. Real love is a mutual binding. It is simultaneously unconditional but bound by voluntarily accepted “strings attached” that keep it strong and unbroken. A loving government simply must apply the same expectations of reciprocity, respect, and mutual benefit that individual people do.
As noted above, this is not simply drug testing or job seeking requirements or sanctions on undesirable behaviours. These types of expectations have a place but (a) they tend to be perceived as punitive and so have an out-sized negative impact on recipients’ feelings of social obligation and respect[8], and (b) they are prone to being tested and broken. A requirement to return to job seeking once a dependent child reaches a certain age will be defeated by continuing to have children who’re under that age. A sanction applied when the father of a child is not named will be decried as hurting the child rather than the parents seeking to maximise the taxpayer contribution to that child.
So, we must also have strings of the type that tug at the heart, not just the head. We must have strings that acknowledge the basic human desires to avoid shame or disapproval and to seek acceptance, approval, or praise. Public welfare can be delivered in ways that better tap into this human element.
Firstly, the welfare system can behave more like a human would. The Social Investment policy[9] of the previous National government is an example. It intended to better combine government data about individuals and families to provide preventative assistance and interventions rather than enabling assistance. This was a government mimicking loving interpersonal relationships in the same way you might say to your daughter “I love you, and I don’t want you to have another fatherless child” or your close friend “I love you, and I don’t want you to choose cigarettes over decent school lunches for your kids”.
Secondly, the welfare system can facilitate genuine human connections e.g. by adopting proven social marketing techniques. Adopt-a-child charities make a connection between recipient and donor so that the donor may feel more responsible and the recipient more cared for. This creates a more rewarding and sustainable relationship for the benefit of both. Though social welfare via government is not voluntary, the concept can still be applied. Households could be matched (voluntarily) such that recipient households are connected to the household(s) supporting them. In this way, recipients can put faces to what would otherwise be a faceless bureaucracy and can experience strings of expectation, support, and praise from them. Both parties can share life experiences for the benefit of the other.
Third, the welfare system can devolve the application of strings to community-based agencies that can mimic the methods that worked in the past — a personal look-them-in-the-eyes relationship between provider and recipient, values-based expectations tailored to the individual, and so on — without the downsides[10]. There have been steps in this direction, from the regionalisation of decision-making and community involvement of Labour’s ‘New Opportunities’ programme in the early 2000s to National’s transfer of social housing to registered community providers in their 2014–2017 term.
Finally, the welfare system could incorporate greater tax deductibility of contributions made directly to agencies delivering devolved public welfare, thus enabling the taxpaying public both to directly reward success and to have a more direct stake in it.
Our welfare system must enable unconditional love without being an enabler of self-destructive behaviours. With strings attached, we can enable positive behaviours and empower people to make their own better choices.
Camryn Brown is an Associate Partner in the management consulting practice of a large professional services firm and the Northern Regional Policy Chair for the New Zealand National Party. The views, thoughts, and opinions expressed in the text belong solely to the author, and not necessarily to the author’s employer, political party, or other group or individual. | https://medium.com/@camrynbrown/with-strings-attached-the-case-for-welfare-with-expectations-2fac3a046fd5 | ['Camryn Brown'] | 2019-06-09 02:00:36.091000+00:00 | ['Economics', 'Values', 'Politics', 'Social Welfare', 'New Zealand'] |
Lorn — Acid Rain | Lorn — Acid Rain
The ancients believed that when Death comes, you have a chance to dance your final dance. Death has no choice but to watch.
“If a dying warrior has limited power, his dance is short. If his power is grandiose, his dance is magnificent. But regardless of whether his power is small or magnificent, death must stop to witness his last stand on earth. Death cannot overtake the warrior who is recounting the toil of his life for the last time until he has finished his dance.”
— Journey to Ixtlan — Carlos Castaneda | https://medium.com/htmv/lorn-acid-rain-bf6688973b0 | ['Hogan Torah'] | 2020-10-20 17:54:42.367000+00:00 | ['Dance', 'Music', 'Death', 'Electronic Music', 'Music Video'] |
Adding Hybrid PoS-Rollup Sidechain to Celer’s Coherent Layer-2 Platform on Ethereum | The Celer Network team is excited to announce that we are adding a hybrid PoS-Rollup sidechain to the Celer’s coherent layer-2 scaling platform with EVM-compatibility on Ethereum. This new addition complements Celer’s existing state channel network technology and allows Celer to offer dApp developers and users a wide range of tunable tradeoffs in terms of trust level, supported features, transaction throughput, latency and cost, all under a single unified framework. Celer’s State Guardian Network (SGN) is shared among different layer-2 technology pieces as a cornerstone to serve different crypto economics functionalities: watchtower for state channel network component, block validator for PoS side-chain component and block producer for optimistic rollup component.
Specifically for Celer’s Optimistic Rollup component, we introduce a new approach that uses the Celer State Guardian Network as a rollup block producer to solve the incentive design problems around tx aggregation, ordering and state storage. We have open-sourced a first stab at our Proof-of-Concept, based on Optimism’s work, with additional functionality added for this new SGN-as-block-producer architecture. With a track record of delivering production-grade software and pushing adoption via user-facing applications, we are committed to contributing to and being a part of the fast-evolving Optimistic Rollup open source community without reservation, alongside other pioneering teams.
In this blog post, we review the complex tradeoffs space of existing layer-2 techniques, discuss the need for a coherent layer-2 solution and finally describe Celer’s unified framework and SGN-as-rollup-block-producer approach in a bit more detail.
Existing Techniques and Tradeoffs
At Celer we always believe that layer-2 scaling is the only way to enable Internet-scale adoption by real-world applications. We have also envisioned that individual layer-2 technology pieces need to work together as a coherent framework. Towards this goal, we have been building and delivering some of the most advanced layer-2 systems including Celer’s generalized state channel network and State Guardian Network. Now we are extending our scope beyond state channels and will complement the Celer layer-2 ecosystem with a new hybrid PoS-Rollup component. The first step of building a coherent and unified layer-2 scaling platform is to understand existing techniques and their tradeoffs.
State Channel
A state channel allows involved parties to execute a smart contract off-chain and quickly settle on the latest agreed states, with instant finality guaranteed by on-chain bond contracts. Parties involved in the off-chain transactions cooperatively maintain a multi-signature fraud-proof replicated state machine, and only resort to on-chain consensus when absolutely necessary (e.g., when channel peers disagree on a state). State Channel provides real-time interaction latency, the lowest costs and horizontal scalability (the whole system does not slow down as the number of users increases). However, it trades off the capability to support a large set of users with asynchronous interactions (e.g. an NFT marketplace).
DPoS Sidechain
The core idea of a DPoS sidechain is to run another distributed ledger system, with consensus independent from the main chain, that can bridge assets and states to and from the main chain. It provides much higher throughput than the mainchain, similar latency and lower costs. A PoS sidechain trades off main-chain-level security and easy composability with other mainchain states. For example, if the sidechain uses a BFT-PoS consensus, then the user must accept the security assumption that no more than ⅔ of the sidechain voting power belongs to misbehaving validators.
Optimistic Rollup
Optimistic rollup uses an entity (which can be a single party or a decentralized system; in Celer, the State Guardian Network) to order and batch layer-2 transactions, compute state transitions off-chain, and then put all calldata and disputable state hashes on-chain. Optimistic Rollup provides higher throughput (with a limited multiplier), lower costs, full mainchain security and a main-chain-like execution environment (e.g. “EVM-compatible”) with on-chain tx evaluators. Optimistic Rollup trades off transaction latency even in the optimistic case and has significantly longer latency for finality.
ZK-Rollup
ZK-Rollup is similar to Optimistic Rollup but relies on on-chain verification of a ZK proof for block validity instead of on-chain disputes. ZK-Rollup additionally pays off-chain computation costs to generate the proof and currently has significant limitations for generalized applications. We keep a keen eye on the evolution of ZK-Rollup but do not include it in our scope.
Plasma
Plasma is similar to Optimistic Rollup but has no easy solution for the data availability challenge and is therefore fundamentally hard to generalize and to protect from the mass-exit issue. As there are many fundamental limitations, we do not consider Plasma in Celer’s unified layer-2 platform.
Celer’s Coherent Layer-2 Platform
With such a tradeoff space, it is clear that
No single layer-2 scaling technique can handle all the use cases.
Even a single complex real-world application may require multiple techniques to work together as a whole.
Just consider the example of an MMORPG blockchain game. When players are battling against each other, the real-time interaction and zero-cost transactions of a State Channel are a must to ensure smooth UX. When users are auctioning artifacts in NFT marketplaces, the high transaction throughput and low cost of a DPoS sidechain are ideal, since the value involved is not that high but frequent transactions are needed. When users are permanently transferring ownership of in-game real properties or merging multiple high-value weapons into a rare artifact, the layer-1 security of a Rollup chain is required. In addition, multiple technology pieces may work together to address a single user flow: a PvP state channel battle may happen in a channel opened on the DPoS sidechain to ensure fast dispute resolution if a player drops offline.
Therefore, a unified platform for dApp developers to easily leverage and deploy different underlying layer-2 scaling techniques to address complex application requirements is crucial to achieving blockchain mass adoption. Celer is extending into such a platform, supporting generic EVM transactions with tunable options across different trust-level, feature, and performance tradeoffs. We call the combined construct Celer Sidechain.
Next we discuss two key designs in Celer Sidechain: 1. The novel use of Celer State Guardian Network as block producer in Celer Rollup, a sidechain with EVM-compatible optimistic rollup semantic; 2. The integration between Celer DPoS, a DPoS-based EVM sidechain, and Celer Rollup.
Optimistic Rollup with Celer State Guardian Network as Block Producer
From an architecture point of view, an Optimistic Rollup sidechain has two main components: the VM execution environment and the block producer infrastructure. Different VM execution environments are on-chain anchoring points that ensure eventual security via different challenge frameworks and support generalized transactions via different “hypervisor” contracts on-chain. There are different flavors of VM execution environments today and we plan to contribute our engineering resources and innovation power to the most open and fast-evolving EVM-compatible execution environment.
Block producer infrastructures determine how transaction calldata and state hashes are received, stored, ordered, batched and finally posted to on-chain rollup contracts. Block producer infrastructures also provide incentive structures regarding who pays, how, and how much users pay to use the rollup chain to execute transaction computation and to store rollup chain states. The block producer is a critical component, as it largely determines the usability of the rollup and the barrier to mounting malicious attacks in the optimistic case.
Celer proposes a novel block producer infrastructure that leverages the State Guardian Network (a DPoS sidechain currently functioning as the “watchtower” for the Celer State Channel Network) as a “logical” block producer. SGN provides the following functionalities as a rollup block producer and solves several crypto-economics challenges:
Transaction ordering . The rollup transactions are ordered by the DPoS consensus on the sidechain, therefore the rollup transactions have a consistent total order. As a result, no single validator has the power to consistently censor certain transactions. In addition, there is no need to introduce additional crypto-economics to determine the transaction ordering like in some other proposals.
. The rollup transactions are ordered by the DPoS consensus on the sidechain, therefore the rollup transactions have a consistent total order. As a result, no single validator has the power to consistently censor certain transactions. In addition, there is no need to introduce additional crypto-economics to determine the transaction ordering like in some other proposals. Rollup transaction fee and storage fee. Users will directly pay transaction fees in layer-2 to reward the sidechain as a whole and sidechain will internally split the transaction fee based on staking. There is no need for clients to pay the layer-1 gas fee for including their transactions in layer-2 rollup like in some other proposals.
Users will directly pay transaction fees in layer-2 to reward the sidechain as a whole and sidechain will internally split the transaction fee based on staking. There is no need for clients to pay the layer-1 gas fee for including their transactions in layer-2 rollup like in some other proposals. Producer scheduling . SGN validators are scheduled to act as rollup block producers to post transactions on-chain in accordance with the PoS consensus schedule. For each rollup block generated in SGN, there will be an order assigned to each validator to post data on-chain and if a validator misses his own posting window due to system fault or attempted censorship, the validator following him in the order can post and slash his DPoS stake directly on-chain. This posting order is very similar to the guardian assignment of the state proof dispute response process in the state channel network. As long as one validator stays non-censoring, the block will be posted on-chain with censoring party’s stake slashed and the right to propose block lost.
. SGN validators are scheduled to act as rollup block producers to post transactions on-chain in accordance with the PoS consensus schedule. For each rollup block generated in SGN, there will be an order assigned to each validator to post data on-chain and if a validator misses his own posting window due to system fault or attempted censorship, the validator following him in the order can post and slash his DPoS stake directly on-chain. This posting order is very similar to the guardian assignment of the state proof dispute response process in the state channel network. As long as one validator stays non-censoring, the block will be posted on-chain with censoring party’s stake slashed and the right to propose block lost. Block validity challenge. SGN validators also act as monitors for all on-chain state posts. If the chosen block producer posts invalid blocks, as long as there is still one non-malicious SGN node, it can challenge the chosen one’s block proposal and slash DPoS stake directly on-chain.
SGN validators also act as monitors for all on-chain state posts. If the chosen block producer posts invalid blocks, as long as there is still one non-malicious SGN node, it can challenge the chosen one’s block proposal and slash DPoS stake directly on-chain. Light client state querying. Light clients will be able to query the full storage state of rollup sidechain from SGN directly without needing to compute or incur any other third-party “state storage host” to get the state and immediately after DPoS consensus is formed. This solves significant rollup UI/UX challenges where users need to wait for a very long time to get a transaction even posted (let alone confirmed) on the main chain. With Celer Rollup, users can almost immediately get “good enough” confirmation with the security guarantee of DPoS consensus. This property also significantly benefits UX when the layer-1 blockchain is congested and optimistic rollup blocks cannot be posted to layer-1 blockchain promptly.
We would like to stress that even though we use SGN as a DPoS sidechain for the above functionalities, the Celer Rollup still has full main chain security and censorship resistance even under complete DPoS consensus failure, because the SGN’s CELR staking slashing conditions go beyond BFT consensus rules. As long as there is a single SGN node that is truthful, even when some validators post invalid rollup blocks on layer-1, that single truthful validator can challenge and slash the stake of the malicious SGN validators, beyond the normal security assumption of BFT consensus.
Even if all validators collude maliciously, it is still not “the end of the world”. After the challenge period designated for the SGN nodes expires, an additional window will be opened for public dispute. With the very high slashing rewards, the eventual security of this construct is ensured. Finally, as we construct this as a DPoS system, the CELR holders who delegated their stake have strong incentives to ensure their selected validators will not behave maliciously, by additionally monitoring the operation of SGN nodes.
When it comes to censorship resistance, SGN as a decentralized construct is naturally more censorship-resistant than single-validator constructs. Even if all validators collude to censor certain transactions, the user will have the option to initiate an on-chain censorship challenge by posting the calldata on-chain, which would require SGN block producers to either include this transaction within the challenge period or have their CELR stake slashed.
In sum, we propose the use of SGN as a decentralized block producer for optimistic rollup with clear incentive structure, main-chain security, better censorship resistance and better UX.
DPoS VM alongside Optimistic Rollup VM
As we envision a lot of lower-value and relatively transient transactions and application sessions that do not require main-chain-level security, we further extend the functionality of the State Guardian Network: it not only provides the watchtower service for the state channel network and the block producer service for the optimistic rollup sidechain, but also hosts a VM execution environment directly on top of SGN to form the Celer DPoS sidechain.
This comes almost for free with Celer’s coherent layer-2 architecture: we already have an EVM-compatible execution environment in SGN to maintain state storage and produce main-chain blocks for Celer Rollup. We can easily add a much faster and lower cost DPoS sidechain with EVM support by simply removing the data availability relay to the main chain.
With this additional capability, users can deposit their assets on the main chain and choose to let it appear on the Celer Sidechain either with Optimistic Rollup security guarantee or DPoS security guarantee. Users can also allow assets to migrate between Celer DPoS and Rollup.
Continuing with the MMORPG game as an example: a user can deposit an in-game asset of limited value on the Celer DPoS sidechain to participate in a marketplace running multiple rounds of open-bid auctions. In the end, he would exit the proceeds from this auction session directly to Celer Rollup. With the coherent scaling platform, this asset migration and semantic change can be done with only one transaction on the main chain, initiated by the DPoS block validators. The reverse path works similarly.
In sum, the above describes how Celer Rollup works alongside Celer DPoS sidechain both riding on the staking infrastructure of State Guardian Network. Augmenting State Channel Network on either of these two semantic is as straightforward as augmenting it on the main chain so we will not discuss it in more detail.
When mainnet?
With this clear vision and significant benefits of a coherent layer-2 framework, we have started to work on an initial PoC and tentatively expect to have a Testnet version towards the end of Q1 2020. With the current scope, we expect to launch the initial version of mainnet for Celer Sidechain (including Rollup and DPoS) with EVM compatibility within Q2 2020. We are committed more than ever to bring mass adoption to blockchain technology with Celer as a strong and coherent layer-2 scaling platform!
Expect we deliver. | https://medium.com/celer-network/adding-hybrid-pos-rollup-sidechain-to-celers-coherent-layer-2-platform-d1d3067fe593 | ['Celer Network'] | 2020-02-19 21:41:58.610000+00:00 | ['Blockchain', 'Ethereum', 'Scalability', 'Celertech', 'Blockchain Layer 2'] |
Where You Cannot Generalize from Knowledge of Parts (continuation of the Minority Rule) | As we saw, complex systems are characterized by the interactions between their components, and the resulting properties of the ensemble not (easily) seen from the parts.
There is a rich apparatus to study interactions originating from what is called the Ising problem, after the physicist Ernst Ising, originally in the ferromagnetic domain, but that has been adapted to many other areas. The model consists of discrete variables that represent atoms that can be in one of two states called “spins” but are in fact representing whether the state is what is nicknamed “up” or “down” (or can be dealt with using +1 or −1). The atoms are arranged in a lattice, allowing each unit to interact with its neighbors. In low dimensions, that is, when for every atom you look at an interaction on a line (one dimensional) between two neighbors, one to its left and one to its right, or on a grid (two dimensional), the Ising model is simple and lends itself to simple solutions.
One method in such situations, called “mean field”, is to generalize from the “mean”, that is, the average interaction, and apply it to the ensemble. This is possible if and only if there is no dependence between one interaction and another –the procedure appears to be the opposite of renormalization from the last chapter. And, of course, this type of averaging is not possible if there are nonlinearities in the effect of the interactions.
More generally, the Übererror is to apply the “mean field” technique, by looking at the average and applying a function to it, instead of averaging the functions –a violation of Jensen’s inequality [Jensen’s Inequality, definition: a function of an average is not an average of a function, and the difference increases with disorder]. Distortions from mean field techniques will necessarily occur in the presence of nonlinearities.
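To see it with numbers (an illustration of ours), take the convex function f(x) = x²: the function of the average stays put while the average of the function grows as the data spread out, so the gap widens with disorder.

def f(x):  # a convex function
    return x * x

def compare(xs):
    f_of_mean = f(sum(xs) / len(xs))             # function of the average
    mean_of_f = sum(f(x) for x in xs) / len(xs)  # average of the function
    return f_of_mean, mean_of_f

print(compare([9, 10, 11]))  # low disorder:  (100.0, ~100.7), nearly equal
print(compare([0, 10, 20]))  # high disorder: (100.0, ~166.7), a wide gap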
What I am saying may appear to be complicated here –but it was not so with the story of the average water consumption. So let us produce equivalent simplifications across things that do not average.
From the last chapter [Minority Rule],
The average dietary preferences of the population will not allow us to understand the dietary preferences of the whole.
Some scientist observing the absence of peanuts in U.S. schools would infer that the average student is allergic to peanuts when only a very small percentage are so.
Or, more bothersome
The average behavior of the market participant will not allow us to understand the general behavior of the market.
These points appear clear thanks to our discussion about renormalization. They may cancel some stuff you know. But to show how under complexity the entire field of social science may fall apart, take one step further,
The psychological experiments on individuals showing “biases” do not allow us to understand aggregates or collective behavior, nor do they enlighten us about the behavior of groups.
Human nature is not defined outside of transactions involving other humans. Remember that we do not live alone, but in packs and almost nothing of relevance concerns a person in isolation –which is what is typically done in laboratory-style work.
Some “biases” deemed departures from “rationality” by psycholophasters interested in pathologizing humans are not necessarily so if you look at their effect on the collective.
What I just said explains the failure of the so-called field of behavioral economics to give us any more information than orthodox economics (itself rather poor) on how to play the market or understand the economy, or generate policy.
But, going further, there is this thing called, or as Fat Tony would say, this ting called game theory that hasn’t done much for us other than produce loads of verbiage. Why?
The average interaction as studied in game theory insofar as it reveals individual behavior does not allow us to generalize across preferences and behavior of groups.
Groups are units on their own. There are qualitative differences between a group of ten and a group of, say 395,435. Each is a different animal, in the literal sense, as different as a book is from an office building. When we focus on commonalities, we get confused, but, at a certain scale, things become different. Mathematically different. The higher the dimension, in other words the number of possible interactions, the more difficult to understand the macro from the micro, the general from the units.
Or, in spite of the huge excitement about our ability to see into the brain using the so-called field of neuroscience:
Understanding how the subparts of the brain (say, neurons) work will never allow us to understand how the brain works.
So far we have no f***g idea how the brain of the worm C elegans works, which has around three hundred neurons. C elegans was the first living unit to have its gene sequenced. Now consider that the human brain has about one hundred billion neurons, and that going from 300 to 301 neurons may double the complexity. [I have actually found situations where a single additional dimension may more than double some aspect of the complexity, say going from a 1000 to 1001 may cause complexity to be multiplied by a billion times.] So use of never here is appropriate. And if you also want to understand why, in spite of the trumpeted “advances” in sequencing the DNA, we are largely unable to get information except in small isolated pockets of some diseases, consider the following:
Understanding the genetic make-up of a unit will never allow us to understand the behavior of the unit itself.
A reminder that what I am writing here isn’t an opinion. It is a straightforward mathematical property.
I cannot resist this:
Much of the local research in experimental biology, in spite of its seemingly “scientific” and evidentiary attributes fail a simple test of mathematical rigor.
This means we need to be careful of what conclusions we can and cannot make about what we see, no matter how locally robust it seems. It is impossible, because of the curse of dimensionality, to produce information about a complex system from the reduction of conventional experimental methods in science. Impossible.
My colleague Bar Yam has applied the failure of mean-field to evolutionary theory of the selfish-gene narrative trumpeted by such aggressive journalists as Richard Dawkins and Steven Pinker and other naive celebrities with more mastery of English than probability theory. He shows that local properties fail, for simple geographical reasons, hence if there is such a thing as a selfish gene, it may not be the one they are talking about. We have addressed the flaws of “selfishness” of a gene as shown mathematically by Nowak and his colleagues.
Hayek, who had a deep understanding of the properties of complex systems, promoted the idea of “scientism” to debunk statements that are nonsense dressed up as science, used by its practitioners to get power, money, friends, decorations, invitations to dinner with the Norwegian minister of culture, use of the VIP transit lounge at Kazan Airport, and similar perks. It is easier to take a faker seriously, since science doesn’t look neat and cosmetically appealing. So with the growth of science, we will see a rise of scientism, and my general heuristics are as follows: 1) look for the presence of simple nonlinearity, hence Jensen’s Inequality. If there is such nonlinearity, then call Yaneer Bar Yam at the New England Complex Systems Institute for a friendly conversation about the solidity of the results ; 2) If the paper writers use anything that remotely looks like a “regression” and “p-values”, ignore the quantitative results. | https://medium.com/incerto/where-you-cannot-generalize-from-knowledge-of-parts-continuation-to-the-minority-rule-ce96ca3c5739 | ['Nassim Nicholas Taleb'] | 2016-09-18 10:50:50.999000+00:00 | ['Economics', 'Markets', 'Complexity', 'Fraud', 'Psychology'] |
Twitch Case Study — “Twitchlets”. As the premiere live-stream platform… | Lo-Fi Concept
Going into the study, and into building the prototype, I had a general idea of what I wanted to accomplish. Twitch needed a service native to their site which would allow a higher level of interaction between the stream and the users participating. I emulated Mixer’s “MixPlay” feature for the project because Mixer was one of the higher-standing competitors against Twitch until its collapse in July 2020. Though Mixer is no longer around, a lot of its features and services have since been implemented by Twitch, Youtube, Facebook, and more, including its technology to achieve a near-zero delay between the live-stream and users. This was imperative for MixPlay to work, as MixPlay was developed as a tool for users to interact with a stream with a simple press of a button. I have taken that idea to Twitch with “Twitchlets.” Twitchlets are a Twitch-applet system where users can pay Channel Points or Bits (monetary value) to affect the gameplay of streamers. Such mechanics would include disabling crouching in a shooter, inverting steering controls in racing, dropping loot in an RPG, etc.
MixPlay logo for Mixer
The most basic form would have a variety of options shown for users to browse during a streamer’s broadcast. In the example, AnneMunition is playing Escape From Tarkov, a survival first-person shooter. The options shown are “Invert Aim,” “Disable Crouch,” and “Play Sound.” There are more options hidden that can be shown by expanding the Twitchlets area, but the highlighted options are placed at the top and are never hidden. After making a selection, a confirmation window would appear for the user to double-check if this is the selection they wanted. This window would include the action selected and the amount of Bits or Channel Points a user would spend to perform the action. After confirming the selection, the action would be performed on the stream for a fixed amount of time. “Invert Aim,” for instance, could be set for two minutes (at the streamer’s discretion). This would also be an optional feature: no user or streamer would be required to participate, and either could disable it through their respective settings.
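One way to picture the streamer-side setup is as a small catalog of configurable actions; every class name, price, and duration below is invented for this case study, not an existing Twitch API:

from dataclasses import dataclass

@dataclass
class Twitchlet:
    name: str          # e.g. "Invert Aim"
    cost: int          # price in Bits or Channel Points
    duration_s: int    # how long the effect lasts, at the streamer's discretion
    highlighted: bool  # pinned to the top of the panel, never hidden

catalog = [
    Twitchlet("Invert Aim",     cost=500, duration_s=120, highlighted=True),
    Twitchlet("Disable Crouch", cost=300, duration_s=60,  highlighted=True),
    Twitchlet("Play Sound",     cost=100, duration_s=5,   highlighted=True),
]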
Version .5
Users always want to be involved in some capacity with the community and the stream they are watching. Currently Twitch has crowd-sourcing events, “cheering” with bits, gifting subscriptions, viewers clipping the best moments from a stream, and much more. However, this has a slight disconnect, as users are not directly affecting the streamer or the stream they are watching. With “Twitchlets,” this system will shrink the gap between users and a stream to almost nothing. Users will be able to help or hinder games on a real-time basis, native to Twitch, without the need to use third-party systems and setups. | https://medium.com/@lrothfork/twitch-case-study-twitchlets-3d4e5ff6263c | ['Lynden Rothfork'] | 2020-12-23 01:02:46.437000+00:00 | ['Twitch', 'Projects', 'UX Design', 'Gaming Culture']
How To Write A Killer Application For Your First Remote Work Hunt? | How to write a killer application for your first remote work hunt?
To land yourself that dream remote job, you have to jump through a few hoops. It’s a slightly different process than working in a normal office job, with different skills required and a new approach. The first step for most people is crafting a great application for a job. However, when you’re coming from a normal office job, it can be a bit difficult to begin thinking remotely. You may think that the application is probably the same as with any other job. The difference is though that you need to emphasize a lot of new things to make sure you have some success. Have no fear though, we’ve teamed up with Remote-how to share with you some great tips to make sure that your application is top of the pile!
Emphasize your soft skills
One of the best things you can do when writing your killer application is to emphasize the qualities that make you great for a remote job. As you’re not in an office, there’s a lot of different things that you need to have in order to work successfully. This includes skills like problem solving, self-determination and good communication. These kinds of qualities are known as soft skills. You’ll probably have a list of all your technical skills on your CV or application, but make sure to include these soft skills as well to show that you have thought a bit about how the remote work process goes. When you have to work with people who aren’t located in the same office as you, sometimes your soft skills will have to be what you rely on! It’s also great to demonstrate these skills during your interview process. Being prompt in your communications and proactive in any tasks set will really help you in the application process.
Set yourself up for success in the interview
Most interviews for a remote job will be done… remotely! You more than likely won’t have to go to a physical location to go through the application process. This can of course work to your advantage, as there are some benefits here. You won’t need to commute, so there is probably less of a chance of being late (however, be mindful of possible timezone differences!). You also can choose your location as to where you conduct the interview, so pick a nice quiet place that you feel relaxed in. Pick a place with a plain background and a natural source of light. Set up your monitors at eye level and make sure that you have tested your webcam and audio to avoid any technical difficulties. This is probably the most important tip, as your video call cutting out can really interrupt the flow of your interview. Also important is making sure that you have a good internet connection. Test everything first and you’ll feel a lot more confident going into the interview.
If you’ve got some remote work experience, make sure to mention it!
While you may not have had a fully remote job before, a lot of people have by now done some home office days or experienced some work flexibility. Have you had clients in different cities? Have you worked with contractors who were located in different offices? Have you worked in a regional, distributed team? Have you occasionally worked remotely before, like from your home? Make sure to put these in your application and emphasize what you learned from the experience.
If you haven’t had the chance to work from a different location, then read up on best practices for working outside the office. You can include any training or relevant education on remote work also in your application. Going for a job without mentioning anything about your previous remote experience, or how you have trained for the remote environment, probably won’t give you much success.
Talk about your remote work environment
To make your application really stand out, boast about your remote work setup! Employers like to hear that you are bucking the stereotype of working remotely whilst in bed. Having a professional home office or dedicated space is a great tool to show that you are making the most of being remote and you are taking it seriously. Some people are of course living more of a digital nomad lifestyle. This is also important to put in your application, as this can affect the decision-making process for an employer. Some companies really value this kind of culture, whereas others will be more interested in an applicant who will consistently be in the same time zone.
Don’t make it all about remote, you need to be a great company fit
We get it, you want the flexibility, but that is not what you should be betting on. In fact, you should explain to your potential future employer why you want to work for them and what benefits you can bring to their business. Also, make sure to present yourself as a great fit for the company culture. Without an office space, you really need to feel part of the team by having a common goal and ideas. To make your application easier for a potential employer to evaluate, talk about your ideal company culture. This helps in two ways. First of all, it stops you from getting a job where you find out that you don’t actually like or fit into the culture. Second of all, your application will be much more personable and stand out in comparison to another application that just lists qualifications and skills.
Want some more tips on how to land the remote job of your dreams?
These are just some of the things the team of Remote-how recommends for people wanting to create that killer application and dive headfirst into the remote work world! If you want more tips, tricks, and lessons on how to prepare your killer application check Remote-how Advisory. | https://medium.com/the-making-of-whereby/how-to-write-a-killer-application-for-your-first-remote-work-hunt-8fe27e2d7541 | [] | 2020-04-20 07:31:57.770000+00:00 | ['Remote Working', 'Resume', 'Application', 'Job Hunting'] |
Why Stochastic Gradient Descent Works? | Crazy paths often lead to the right destination!
Optimizing a cost function is one of the most important concepts in Machine Learning. Gradient Descent is the most common optimization algorithm and the foundation of how we train an ML model. But it can be really slow for large datasets. That’s why we use a variant of this algorithm known as Stochastic Gradient Descent to make our model learn a lot faster. But what makes it faster? Does it come at a cost?
Well…Before diving into SGD, here’s a quick reminder of Vanilla Gradient Descent…
We first randomly initialize the weights of our model. Using these weights we calculate the cost over all the data points in the training set. Then we compute the gradient of cost w.r.t the weights and finally, we update weights. And this process continues until we reach the minimum.
The update step is something like this (w denotes the weights and α the learning rate):
w := w − α · ∂J/∂w
Here J is the cost over all the training data points.
Now, what happens if the number of data points in our training set becomes large? Say m = 10,000,000. In this case, we have to sum the cost of all m examples just to perform one update step!
Here comes the SGD to rescue us…
Instead of calculating the cost of all data points, we calculate the cost of one single data point and the corresponding gradient. Then we update the weights.
The update step is as follows:
w := w − α · ∂J_i/∂w
Here J_i is the cost of the ith training example.
We can easily see that in this case update steps are performed very quickly and that is why we can reach the minimum in a very small amount of time.
But…Why SGD works?
The key concept is we don’t need to check all the training examples to get an idea about the direction of decreasing slope. By analyzing only one example at a time and following its slope we can reach a point that is very close to the actual minimum. Here’s an intuition…
Suppose you have made an app and want to improve it by taking feedback from 100 customers. You can do it in two ways. In the first way, you can give the app to the first customer and take his feedback then to the second one, then the third, and so on. After collecting feedback from all of them you can improve your app. But in the second way, you can improve the app as soon as you get feedback from the first customer. Then you give it to the second one and you improve again before giving it to the third one. Notice that in this way you are improving your app at a much faster rate and can reach an optimal point much earlier.
Hopefully, you can tell that the first process is the Vanilla Gradient Descent and the second one is SGD.
But SGD has some cons too…
SGD is much faster but the convergence path of SGD is noisier than that of original gradient descent. This is because in each step it is not calculating the actual gradient but an approximation. So we see a lot of fluctuations in the cost. But still, it is a much better choice.
Convergence paths are shown on a contour plot
We can see the noise of SGD in the above contour plot. Note that vanilla GD takes fewer updates, but each update is actually done only after one whole epoch. SGD takes many more update steps, but it needs a smaller number of epochs, i.e., the number of times we iterate through all the examples is lower, and thus it is a much faster process overall.
As you can see in the plot there is a third variant of gradient descent known as Mini-batch gradient descent. This is a process that uses the flexibility of SGD and the accuracy of GD. In this case, we take a fixed number(known as batch size) of training examples at a time and compute the cost and corresponding gradient. Then we update the weights and continue the same process for the next batch. If batch size = 1 then it becomes SGD and if batch size = m then it becomes normal GD.
w := w − α · ∂J_b/∂w
Here J_b is the cost of the bth batch.
Implementation from scratch
Here’s a python implementation of mini-batch gradient descent from scratch. You can easily make batch_size = 1 to implement SGD. In this code, I’ve used SGD to optimize the cost function of logistic regression for a simple binary classification problem.
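Below is a minimal sketch of what such an implementation can look like (function names and hyperparameters here are illustrative, not the exact code from the gist; batch_size=1 recovers SGD and batch_size=len(X) recovers vanilla GD):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def minibatch_gd(X, y, lr=0.1, epochs=50, batch_size=32, seed=0):
    # Logistic regression trained with mini-batch gradient descent.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        idx = rng.permutation(n)  # shuffle the examples each epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            p = sigmoid(Xb @ w + b)              # predictions for this batch
            err = p - yb                         # gradient of the cross-entropy cost w.r.t. the logits
            w -= lr * (Xb.T @ err) / len(batch)  # update using only this batch
            b -= lr * err.mean()
    return w, b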
Find the full code here.
Still curious? Watch a video that I made recently…
I hope you enjoyed the reading. Until next time…Happy learning! | https://towardsdatascience.com/https-towardsdatascience-com-why-stochastic-gradient-descent-works-9af5b9de09b8 | ['Sujan Dutta'] | 2020-10-29 06:24:00.311000+00:00 | ['Gradient Descent', 'Machine Learning', 'Data Science', 'Optimization', 'Artificial Intelligence'] |
DreamTeam Digest #28 | Here’s a quick DreamTeam update. Since the release of our last digest, we’ve added a new game to the platform, DreamTeam has been named the “Best Blockchain Startup”, and has been mentioned in several crypto publications.
Check out the details below.
DreamTeam adds Call of Duty: Modern Warfare to the platform
We know the gaming world has been waiting for Call of Duty: Modern Warfare to drop for a long time. And it’s finally here! With that, DreamTeam is thrilled to announce that CoD has been added to the DreamTeam family and is now live on the platform! Read exactly what players get with DreamTeam CoD here.
DreamTeam wins the EASAwards regional round
DreamTeam has been named “The Best Blockchain Startup” in the regional round of the EASAwards. In the regional round, DreamTeam competed against startups from Eurasian countries: Armenia, Azerbaijan, Belarus, Georgia, Moldova, the Russian Federation, and Turkey. See the winners here.
Esports and Cryptocurrency Payments
As gamers are typically tech-savvy, crypto and esports go hand in hand. Check out CoinPoint’s view on esports, crypto, how they are made for each other, and how DreamTeam fits into the equation here.
Crypto and Esports
Can tech-savvy gamers help push crypto into mass adoption? Hedgetrade.com seems to think so. Find out what was written about DreamTeam and which other projects are helping make the push here.
Sincerely,
The DreamTeam Crew
About DreamTeam:
DreamTeam — infrastructure platform and payment gateway for esports and gaming.
Stay in touch: Token Website | Facebook | Twitter | LinkedIn | BitcoinTalk.org
If you have any questions, feel free to contact our support team any time at [email protected]. Or you can always get in touch with us via our official Telegram chat. | https://medium.com/dreamteam-gg/dreamteam-digest-28-e0f2d08cc976 | [] | 2019-12-12 14:05:30.319000+00:00 | ['Apex Legends', 'Gaming', 'Esports', 'Startup', 'Game Development'] |
JavaScript: Inserting a Node at the Head of a Linked List | JavaScript: Inserting a Node at the Head of a Linked List
Explained Solution to a HackerRank Problem
For today’s algorithm, we will insert a node at the head of a singly linked list. Here is the challenge that I picked from HackerRank:
Given a pointer to the head of a linked list, insert a new node before the head. The next value in the new node should point to head and the data value should be replaced with a given value. Return a reference to the new head of the list. The head pointer given may be null meaning that the initial list is empty.
The problem is straightforward, so let’s move on to the solution:
1. Initialize a node class.
2. Create a new node with the given value.
3. If the list is empty, set the new node as the head and return it.
4. If the list is not empty, set the next of the new node as the head and then change the head pointer to point to the new node.
5. Return the new head of the updated linked list.
The above steps are illustrated as follows:
Here is my code in JavaScript:
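(The embedded code block may not render in this version of the post; below is a minimal sketch consistent with the steps above. The class name follows HackerRank’s usual boilerplate, which is an assumption on my part.)

class SinglyLinkedListNode {
  constructor(data) {
    this.data = data;
    this.next = null;
  }
}

// Returns a reference to the new head of the list
function insertNodeAtHead(head, data) {
  const newNode = new SinglyLinkedListNode(data);
  if (head === null) {
    return newNode; // empty list: the new node becomes the head
  }
  newNode.next = head; // point the new node at the previous head
  return newNode;      // the new node is the new head
}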
Given a reference to the head of a list, we are asked to add a new node to the front of the list. The method takes two arguments: the head of the linked list and a value to insert. We first create a node from the input value, and the newly added node becomes the new head of the linked list after setting its link to the previous head of the given linked list.
As you see in this example, it is very easy to add a node at the head of the linked list and it can be done in constant or O(1) time complexity. I hope you find this article helpful, thank you for reading! | https://medium.com/swlh/javascript-inserting-a-node-at-the-head-of-a-linked-list-160b68e7abcb | ['Bahay Gulle Bilgi'] | 2020-10-25 21:38:34.158000+00:00 | ['Technology', 'JavaScript', 'Algorithms', 'Programming', 'Linked Lists'] |
The Insurance Trap of Prenatal Care | I’d like to tell you a story. It’s a story about how yet again, the US healthcare system is messed up.
This is about prenatal care. Because I am pregnant. And with each appointment, I learn something new about how the system is built against patients. What do I mean? Let me explain.
Firstly, this isn’t about the care that I receive. My doctors have been great. I’ve been comfortable at appointments. I’ve had my questions answered. I haven’t felt judged by asking any questions or with any decisions that I have made. So this isn’t about doctors.
As so many stories on healthcare in the US are, this is about insurance. And I am someone who is extremely fortunate. I am employed full-time by a company that offers pretty good benefits. My coverage is mostly good. I say mostly good because I have learned, through being pregnant, that insurance companies really, truly, honestly want to do all they can to not pay for services.
So what am I talking about?
Before becoming pregnant, I looked up within my plan information all of the coverage I would receive being pregnant. Admittedly, I didn’t dig deep into what I found, which wasn’t a lot. There was language about routine prenatal care being covered at 100%. Ok, great. That seems good. Anything routine — anything my doctor deems as routine, regular care — would be covered 100%. Anything that wasn’t routine would be covered differently; for me, in-network services would be covered 80%-20% until the deductible was met.
Ok, sounds simple enough.
However, what I learned after receiving my first bill, which was for the appointment that confirmed my pregnancy, was that my insurance company thinks of routine very differently than I do. Different than my doctor does. Different than patient advocacy groups do. WHICH IS A PROBLEM.
I get it, to make the already complicated world of insurance and billing less complicated, contracts are in place between providers and insurance companies where coverage is determined for certain care. I work in healthcare — I probably know more than the average person does about the system. Which is why I understand how it works and what to look for when I get bills.
So I get a bill, and it looks…wrong. Like I’m being charged for an ultrasound (necessary to confirm the pregnancy and confirm location of the pregnancy, because if ectopic and it ruptures, I could die.). I’m being charged for urine and bloodwork? What the hell! All of this is needed. All of this is routine care, especially at the beginning of the pregnancy.
Anyway, I start to ask questions. And I learn that my coverage is actually NOT GREAT. I compare the standard of care recommended by multiple organizations (example — American College of Obstetricians and Gynecologists, a very reputable organization), and I see that much of my care won’t be covered. And so far, I have a routine, non-complicated pregnancy.
What’s hilarious is that when I started asking questions, anyone I would talk to on the phone would point me to the member area of the website. I’d say I checked that area: no information. They’d look and find…also no information. They’d look at their materials, and find nothing more. As far as I know, no one seemed to know anything about coverage, or why coverage for routine care wasn’t considered routine.
So I wrote a long complaint about how the company was contributing to the US maternal mortality crisis by not covering routine care. That their system is hurting patients. I am fortunate to be able to afford these bills, but it’s not about me. This is about me knowing that so many others cannot afford care, and that I feel I must speak up and demand answers. Demand that there be explanations for why the standard of care isn’t covered. So far, no response, but I will keep asking questions and keep digging.
Would it be different if I had a different plan from my employer? Potentially, but that’s not the point. (I could write another piece, and might, on how employer-sponsored coverage, the overarching way our US healthcare system works, is also NOT GREAT, but I won’t get into that now).
This story isn’t over. One day I’ll post the letter I wrote, but since this is ongoing, I won’t — yet.
Learn how to read your bills and ask questions. Because our healthcare system isn’t built for patients. It’s not built for you to receive affordable care. It’s not about ensuring you get the care you need. It’s not about you.
It’s about them and making more money. Making more money despite record profits in a year of a pandemic. Making money despite so many health crises going on. It’s about them hiding information and not knowing answers to questions so you give up instead of fighting to get what you need to get.
Insurance can be a scam. I learned this through getting pregnant, and learning about the trap of insurance in prenatal care. Protect yourself, because you’re worth protecting! | https://medium.com/@cq1990/the-insurance-trap-of-prenatal-care-86311fbc1061 | [] | 2021-01-23 22:35:36.980000+00:00 | ['Health Care Reform', 'Insurance', 'Womens Health', 'Healthcare', 'Pregnancy'] |
Create a Persistent Tableau Server Docker Container in 30 minutes | This is a quick post with the step-by-step I used with a client to quickly create a persistent Tableau Server container image for testing purposes. The whole process takes roughly 30 minutes.
This post is not meant to teach you how to create this at scale and in a “production-capacity”, but rather how to jump-start your first attempt. It was tested on both Amazon Linux 2 and Ubuntu (including Ubuntu on WSL!). Do check Tableau’s official documentation to understand all options and best practices, as well as troubleshooting tips.
If you are new to Tableau Server running on Containers, check out this blog post by Bernhard Damberger.
Pre-requisites:
1. A linux machine you can use to build the container image. I have tested on both Amazon Linux 2 (RHEL/CENTOS based) and Ubuntu (incl. Ubuntu on Windows Subsystem for Linux — thanks Tim Payne!).
2. Docker is installed and running on that machine.
3. A valid Tableau Server license.
4. SSH there and download the following:
Tableau Server (.rpm) installer, e.g. https://downloads.tableau.com/esdalt/2021.3.0/tableau-server-2021-3-0.x86_64.rpm. Note: .deb installer not supported, because the base image inside the container is Centos7. But you can still create and run images on Ubuntu for example :D
Tableau Server Container Builder tool, e.g. https://downloads.tableau.com/esdalt/2021.3.0/tableau-server-container-setup-tool-2021.3.0.tar.gz
Whatever database drivers you need, you can find them all here: https://www.tableau.com/support/drivers (Linux/64-bit)
MySQL ODBC driver: https://dev.mysql.com/get/Downloads/Connector-ODBC/8.0/mysql-connector-odbc-8.0.26-1.el7.x86_64.rpm . Very important: make sure these drivers are compatible with RHEL7, not RHEL8!!!
PostgreSQL JDBC driver: https://downloads.tableau.com/drivers/linux/postgresql/postgresql-42.2.14.jar
Build the container:
1. Uncompress the Container builder tool:
tar -xzf tableau-server-container-setup-tool-<VERSION>.tar.gz
2. Navigate to the new builder tool root folder and copy the drivers (.jar and .rpm) to the customer-files sub-folder
3. Copy Tableau Server installer .rpm to the Builder tool root folder
4. Edit the registration file (e.g. vi reg-file.json), which something like this (you must accept the eula):
{
"first_name" : "John",
"last_name" : "Smith",
"email" : "[email protected]",
"company" : "Example, Inc",
"title" : "Head Cat Herder",
"department" : "Engineering",
"industry" : "Finance",
"phone" : "123-555-1212",
"city" : "Kirkland",
"state" : "WA",
"zip" : "98034",
"country" : "United States",
"eula" : "accept"
}
5. Let’s customize the Builder tool to install the db drivers, by editing the setup script (e.g. vi customer-files/setup-script)
#!/bin/bash
# Driver installation and other artifact installation script
mkdir -p /opt/tableau/tableau_driver/jdbc

cp /docker/customer-files/postgresql-42.2.14.jar /opt/tableau/tableau_driver/jdbc/postgresql-42.2.14.jar

yum install -y /docker/customer-files/mysql-connector-odbc-8.0.26-1.el7.x86_64.rpm
Note that everything you copied to the local customer-files folder, will be copied to the container image's equivalent folder /docker/customer-files by the builder tool
6. Optionally, set up some variables. I’m going to set the tableau admin username and pw, as well as a remote tsm user, so I can access the TSM GUI from my normal laptop (e.g. vi env.txt):
TABLEAU_USERNAME=admin
TABLEAU_PASSWORD=admin
TSM_REMOTE_UID=1010
TSM_REMOTE_USERNAME=tsmadmin
Note, you can use a password file and a proper pw, much more secure than the above, but for this quick test I don’t mind :)
7. Finally, create the Container Image!
./build-image --accepteula -i tableau-server-<VERSION>.rpm -e env.txt
8. Check that Docker really got a new image:
Note: if you don’t see the Repository Name or Tag, something went wrong in the build process, and I would not even try to use this image. For me, this happened when I tried to install a MySQL ODBC driver version built for RHEL8.
Running the container
1. Run this command to prepare the docker env that will be running the container:
sudo ./configure-container-host -u 999
2. Create a Docker volume to persist Tableau Server data (there are many ways to create persistency with docker, you can choose a different one if you like):
docker volume create ts_container_data
3. Set the password for the TSM Remote User by creating a secrets file (e.g. vi remote-user-secret). It must be called exactly like this. The contents of the file will be just the strong password:
my-craz1-sTrong-k€y-noOne-Can-gUes$
Important: It must be a strong one, otherwise initialization will fail!
4. Run the container image.
As this is the first run, we’ll specify the initial Tableau Server admin user and pw, as well as the TSM Remote user password via the secrets file created in the previous step.
You will also need your Tableau Server License Key and newly created container Image-id.
Additionally, the Tableau Server GUI runs on port 8080 (inside the container) by default, so I’m going to map this to 8080 outside. I’ll do the same with the TSM GUI port 8850.
docker run \
-e LICENSE_KEY=<license-key> \
-e TABLEAU_USERNAME=admin \
-e TABLEAU_PASSWORD=admin \
-v ts_container_data:/var/opt/tableau \
-v <absolute-path>/remote-user-secret:/docker/config/remote-user-secret \
--hostname=localhost \
-p 8080:8080 \
-p 8850:8850 \
-d <container-image-id>
Note 1: you can get the container-image-id with the command “docker images”.
Note 2: replace the <absolute-path> and <license-key> with your own values.
BE SUPER PATIENT, THIS WILL TAKE ~20 mins.
Note: If for any reason this first run fails and you need to kill the container and try again, remember to clear your persistent volume, as you probably got a corrupted installation there.
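A quick way to do that (assuming the container and the volume name from step 2) is:

docker rm -f <container-id>
docker volume rm ts_container_data
docker volume create ts_container_data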
5. Monitor the status of this initialization/installation in different ways:
Sign in to the TSM Gui via browser: https://<container-ip-address>:8850
docker ps → show all running containers
docker ps -a → show all containers
docker logs <container-id> → did anything go wrong?
docker stats → see all the cpu/mem/etc. from the various running containers. If Tableau Server is initializing, you will see a lot of activity here.
docker exec -it <container-id> tsm status -v → check if Tableau is initializing or has initialized properly
docker exec -it <container-id> bash → now you are logged into the console of the container, so you can run any linux command, e.g. tsm status, or even install db drivers.
docker exec -it <container-name> bash -c 'cat $DATA_DIR/supervisord/run-tableau-server.log' → this helps you check for errors during a fresh startup
Other files and troubleshooting ideas here on Tableau’s official manual.
6. Confirm your Tableau Server is hopefully up-and-running:
7. Connect to Tableau Server from your laptop:
http://<ip-address>:8080
https://<ip-address>:8850
8. (optional) Check your persistent volume (this is what you mapped to your Tableau Server’s $DATA_DIR env value):
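For example (assuming the volume name created earlier), you can list it and see where Docker stores its data:

docker volume ls
docker volume inspect ts_container_data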
9. Re-starting the container image if you have stopped it (i.e. no need to re-initialize it from scratch):
Whenever you start this container again, remember to mount the same persistent data volume and pass the same hostname as the previous run. Then, simply run it with:
docker run \
-v ts_container_data:/var/opt/tableau \
--hostname=localhost \
-p 8080:8080 \
-p 8850:8850 \
-d <container-image-id>
10. (Optional) You may also reset the TSM Remote User password on every subsequent start of the image. Simply change the secrets file locally (e.g. vi remote-user-secret) and pass this file to the docker run command. Note: Remember to expose the TSM internal port 8850 as well, so you can connect to it from your laptop’s browser.
docker run \
-v ts_container_data:/var/opt/tableau \
-v {absolute-path}/remote-user-secret:/docker/config/remote-user-secret \
--hostname=localhost \
-p 8080:8080 \
-p 8850:8850 \
-d <container-image-id>
Enjoy 🙂 | https://medium.com/@alexeskinasy/persistent-tableau-server-docker-container-images-34625c45208 | ['Alex Eskinasy'] | 2021-09-10 11:31:15.789000+00:00 | ['Linux', 'Kubernetes', 'Tableau', 'Docker', 'Power Bi'] |
Magnetic Video Mastery Review | Magnetic Video Mastery
High Quality, Evergreen Video Marketing Training You Can Sell As Your Own and Keep All Profits!
If you’re looking for a top-level view of the web video marketplace and the way you may locate your footing in it, this direction is for you. The direction may be finished in only more than one hour and is quality proper for novices who want a little assists breaking into the arena of video mastery.
Why The High Demand On This Training is Bound To Make You Money…..
The Magnet Video Mastery & Lead Magnet niche is ONE of the most Evergreen niches, which is expected to be worth $12,581.9 MILLON by 2021.
Thousands of marketers are looking for updated training on The Magnet Video Mastery & Lead Magnet.
Hundreds and even thousands of dollars per day in profit are Very possible!
One of the top niches to be in, that consistently sees sales month after month is the Magnet Video Mastery & Lead Magnet niche.
Continued growth, including online sales, is expected to be strong over the coming years.
The Magnet Video Mastery & Lead Magnet System will give you the kick-start you need to launch into the billion-dollar$$$$.
Click here to Grab Full Master Resell Rights Before Anyone Else! Start Making Profits Now By Getting
Affiliate Disclosure: This content may contain affiliate links. Where I will earn commission when someone buys the product through the link. | https://medium.com/@jadetechreview/magnetic-video-mastery-review-78791f2e3cb7 | ['Jade Thomas'] | 2021-12-21 08:35:02.733000+00:00 | ['Magnetic Video Marketing', 'Video Marketing', 'Magnetic Video', 'Magnetic Video Mastery'] |
Thoughts About LIFE. | Well I Don’t think that I can describe about life in any way possible. I am just sharing my thoughts about life (from my point of view of course :P ). I am just a College Kid from the country of Himalayas (Nepal) writing / sharing my thoughts inspired by Some Foreign Writers.
Life according to me is a journey. We take every step in a unique way. This journey might be very hard, bumpy, and even take you down with great storms, but it really doesn’t matter. What matters the most is how many times you stand up to Continue. Life is a Game and the only rule is that there are no mistakes but only lessons to be learned. Life is a Beautiful struggle. Life is a journey with so many options; a world to Discover, Dreams to change, Goals to Achieve and many more. There may be many restrictions in your life that make you want to Quit, but the best option is to stand still and keep Fighting. Live your life to the Fullest.
234 days of interning at Unilever in a pandemic | It has been 234 days and counting since I started interning with Unilever, and it has been an experience filled with growth and creative challenges. As a globally renowned organisation, it is far from a traditional multinational corporation. It is agile, interconnected, digital savvy and offers every individual high levels of autonomy to pursue their quest for purpose.
Standing by the commitment to sustainability on reducing and reusing plastics in the Unilever Singapore Office
As a business undergraduate from Singapore Management University (SMU) majoring in Operations Management and Analytics, I embarked on my journey in Unilever with the Unilever Leadership Internship Programme (ULIP), a three months summer internship programme, followed by a Gen-Next Internship, a flexible six months internship programme. Currently, I am working in the Rigids & Plastics Team under the Global Procurement (Packaging) function. I am also excited to be a part of U Faces, a group of passionate Unilever campus ambassadors, so to anyone who is interested to learn more, do reach out or connect with me at LinkedIn.
Click here to find out more about the exciting internship and career opportunities with Unilever
My Virtual Recruitment Experience
Truth be told, getting into the highly coveted internship programme in Unilever was unexpected for me. Having no prior internship experience during my time in University, I took a leap of faith and sent out countless applications but was met with numerous rejections.
Last year, I applied for ULIP 2020 (Procurement), and I was thrilled to have received an email in January this year, notifying me that I was shortlisted for the next stage of the process. As I was in the United States on an exchange programme, I was disappointed as I thought that I would be unable to proceed with my application and would have to give this great opportunity a miss.
A few months later, the COVID-19 pandemic struck globally. As unfortunate as a pandemic is, it became one of the very reasons I was able to witness the agility and digital adaptability that Unilever has, as it transitioned and developed a virtual Discovery Centre experience. I embraced this unexpected twist of events as it eventually allowed me to secure a placement in ULIP 2020 even while I was overseas.
Throughout my virtual Discovery Centre experience, the HR Talent team was extremely supportive and receptive to feedback. It also provided me with additional opportunities to interact with the team outside of the interview. The team was understanding of my situation and provided great advice, guidance, and flexibility.
Virtual Graduation for ULIP batch of 2020
A New Normal: My Virtual Internship Experience
Despite the fact that the entire course of my internship was conducted virtually, I never felt disconnected in any way. There were three key factors of the internship that contributed to my positive working experience here at Unilever.
Building relationships empowered by technology
It was easy and convenient to connect with one another, with the help of technology. As a new hire, I was initially worried about the limitations of delivering and connecting with people in a virtual setting, while adapting to a new work environment.
These worries were quickly dispelled. In fact, it made it easy to reach out directly to senior members of the team for their advice on projects. I greatly enjoyed the virtual meetings as they were always two-way conversations — connecting with stakeholders at every level became way more accessible and convenient.
Beyond the functional and learning aspects of the role, regular townhall sessions were conducted to ensure that everyone is kept in the loop on the latest news and updates. Additionally, the packaging team also had weekly huddles to allow the team to connect and engage in casual conversations, such as learning about each other’s strengths, hobbies, and coping strategies during circuit breaker. I really enjoyed this inclusive environment and felt that I could connect well with my colleagues, which elevated my internship experience.
I was able to organise one such session, where we simply shared our comfort food recipes for cooking during the challenging circuit breaker times. Through this session, I got to learn more about my teammates in a fun and lighthearted manner!
Connecting with the team over Microsoft Teams while working from home
2. Growth and learning opportunities with impactful projects
I was entrusted with a major role in driving the digital agenda forward in the Rigids & Plastics team. This translated into a ton of learning opportunities to create meaningful change and impact at various levels of the project. I was involved in strategic planning and learned how to adapt to the mechanism, tools, and high-level plans at an extremely fast pace.
As an Operations and Analytics major, I was keen to explore the field of analytics and tools that leverage data to deliver high levels of efficiency and effectiveness. Within the first month, I was granted the opportunity to implement two of my proposed process improvements.
Firstly, I created a regulatory validation to harmonise the quality of data being uploaded right at the root stage of the process. This grew to be a key foundation for future standardisations that were rolled out over the course of my internship, and it improved the quality of data and analysis significantly by reducing the time required for data preparation and cleaning by up to 80%.
Secondly, despite being new to analytics, my manager gave me the opportunity to put my theoretical knowledge to practice. I had the chance to hone my skills in translating business requirements into technical objectives and co-create the solutions as a module to complement the existing digital structures. It was a daunting endeavour to work on various aspects of coding and understanding subtle nuances of business needs, and it was also one of the most fulfilling aspects of the internship where it provided me the chance to land long-term tangible developments in the project pipeline. I felt a great sense of achievement to have been part of the team that built the module that is being used today.
3. Understanding and supporting the sustainability agenda
As part of the digital agenda, one of the key focus was on identifying opportunities in terms of savings and improving sustainability efforts. With the data captured and analysed, the process to identify opportunities to reduce the usage of virgin plastics were made far more visible. Over the course of my term, our multiple efforts in #RethinkingPlastics brought about a reduction in the use of plastics and simultaneously increased the push towards incorporating post-consumer resins!
Overcoming Challenges
As with every journey, there are no doubt challenges that we will face along the way.
Having to work entirely from home, the social and personal aspects were being challenged. We are unable to hold personal conversations in the office over a cup of coffee or Ben & Jerry’s ice cream. Although it was easy to communicate and connect with people virtually, a lack of physical interaction does affect camaraderie to a certain extent. This could be draining and impact our mental well-being. Another concern I had was being unable to meet my colleagues in person. I was worried that this meant a lack of trust in my work capabilities and the delegation of challenging tasks may not be entrusted to a newcomer on the team.
However, multiple efforts and fronts championed by the teams in Unilever helped to alleviate such stress. One such instance was the appreciation month held in October, where numerous activities were planned — ranging from discovering your purpose workshop, reigniting and rediscovering motivations and goals in life, to mental well-being advocacy and tips. Furthermore, I am thankful for all the little acts of welfare such as virtual yoga and fitness sessions and welfare care packs to ensure that my mental and physical health are in check while working from home!
Countless care packs and welfare to aid the work-from-home fever were a great bonus of the internship! Photo credits to my fellow intern, Nathaniel Lim!
#WFH Tips
To sum up my virtual internship experience, I believe it is nothing lacking compared to an in-office internship. Working from home has its own set of perks and downsides — knowing how to leverage and work around them will help to make the most out of your experience anywhere. It is important to adopt a growth mindset, and never be afraid to reach out and ask questions for clarifications! Manage your time and set up a process that works best for you. Be it a routine or a gear-starting cycle that springs you off to a great start of the day!
To follow my lead and embark on a quest for purpose with Unilever, click here to find out more about the exciting internship and career opportunities with Unilever | https://medium.com/@unileversgcareers/234-days-of-interning-at-unilever-in-a-pandemic-eaed9e781c3f | [] | 2020-12-14 04:56:27.394000+00:00 | ['Work From Home', 'Unilever', 'Internships', 'Procurement', 'Singapore'] |
14 Day Rapid Soup Diet | Lose Weight Fast — How to Lose Weight Fast With the Cabbage Soup Diet
Eating as much cabbage soup as you like is one type of diet that many people have used to lose weight fast these days. There is a diet plan called the “7 days cabbage soup diet”, supposed to help you lose 10–15 pounds in 7 days if you really stick to it. It is called the “7 days cabbage soup diet” because it is intended to be used for a 7-day period, following the instructions exactly.
The cabbage soup diet is based on a fat-burning soup that contains an insignificant amount of calories. The diet itself is combined with eating strict combinations of food that almost force you to starve. During the 7-day period, you actually depend on cabbage soup to fill the need to eat. The idea of eating as much cabbage soup as you want is to satisfy hunger without worrying about taking in more calories, while your body still burns as many calories as normal. This means more fat is burnt each day, which causes rapid weight loss.
Here is how to prepare a cabbage soup:
The ingredients for the soup are 1 head cabbage, 2 red peppers, 3 medium fresh tomatoes, 6 carrots, 6 onions, 6 spring onions, 5 stalks of celery, ground black pepper and salt to taste.
The preparation is simple: cut the vegetables into small pieces and bring them to a boil in a large pot filled with enough water. When the soup boils, lower the heat and leave it to simmer until the vegetables are soft. No more than 15 minutes of simmer time is recommended, to retain the vitamins.
The 7 days cabbage soup diet plan:
Many people believe following the cabbage soup diet does help you to lose weight fast. And if you want to see if it works for you too, here is what you are supposed to eat during the 7-day diet.
Day 1- Eat more soup. You can eat as many fruits as you like except bananas. Drink plenty of water and no sweetened drinks.

Day 2- Eat more soup. You can include more vegetables in your meal today and may have a baked potato for dinner.

Day 3- Eat more soup. On this day you can eat as many vegetables and fruits as you like.

Day 4- Eat more soup. This day you can have bananas and some glasses of skim milk.

Day 5- Eat your soup at least once. You can choose either beefsteak (500g), baked skinless chicken, or grilled fish. Drink plenty of water.

Day 6- Eat your soup at least once. You can have up to 3 beefsteaks with salad but no baked potato. You can have vegetables too.

Day 7- Eat your soup at least once. You are allowed to have brown rice or wholemeal bread today. You can also eat more vegetables and drink fresh fruit juice.
A well-hydrated body is important if you want to burn more calories and lose weight fast. Thus it is important to drink plenty of water, not only while you are on the diet but every day; this allows the work of eliminating waste and extra fat from the body to run efficiently.
Although the 7 days cabbage soup diet can help you lose up to 15 pounds, it does have some downsides. This diet is not the best choice for permanent weight loss, and it does not provide enough nutrition for a healthy way of eating.
In order to lose weight fast and keep it off for the rest of your life, I highly recommend the best fast way to lose weight that has proved to work for many people all over the world.
Check below… | https://medium.com/@fb2afp/14-day-rapid-soup-diet-748c14572c46 | [] | 2020-12-28 04:08:42.659000+00:00 | ['How To Lose Weight Fast', '14 Day Rapid Soup Diet', 'Weight Loss', 'Lose Weight'] |
Siblings During Covid-19 | Photo by Kelly Sikkema on Unsplash
“Mumma! He is watching YouTube!” “No Mumma, he is the one who is playing games rather than attending class!” “No Mumma….”
And this is how we siblings start our day during the pandemic. Interesting right!? But you know what, this is not all! So today, we will talk about all the problems we siblings face while staying at home together, oh wait maybe I should say, stuck at home together..!!
Responsibilities
Okay, so someone please tell me this: why are elder siblings expected to be more responsible and perfect than the younger ones? Do you realize how frustrating it is to see your younger sibling laugh at you while you are being scolded…!? I mean, we are kids too, so why this discrimination?
Homework
Doing homework simultaneously is nearly impossible because we are too busy fighting with each other! We want pin drop silence while doing work and if one of us makes even the tiniest of sound, we would start to fight like cats and dogs.
While Eating Food
Alright, so you might be wondering that why we fight while eating? The reason is not big, it’s just we don’t want a single moment of peace at home. We want constant entertainment! And for that, we keep on provoking each other.
Playing
So, what can we play during a lockdown? Of course, mobile games! This is the only time in the entire day when there is peace, as we both are busy with our mobile phones.
Online Class
Online class is too troublesome for us! First, we need to attend our own class and then look after the younger sibling too, while he/she attends class. I mean, it’s not fair, we need to do a lot of different things too!
This is all that I can think of right now, but there are still a lot of small things that are troublesome. Let’s catch up in the next article! | https://medium.com/@vartikablogs/siblings-during-covid-19-9d7362786e7d | ['Vartika Rawat'] | 2021-07-06 13:11:55.235000+00:00 | ['Lockdown', 'Siblings', 'Covid 19', 'Lifestyle', 'Routine'] |
The Spirituality of Retail? When Sales Sounds Like Faith — and Vice Versa | You’re no longer just competing with your neighbor down the road — now you’re going up against rivals from around the world.
What do car dealers and churches have in common? After “owning” their regional markets, both now face splintered numbers and new national and global competitors. And both grow their numbers through their own kind of brand evangelists.
“‘Evangelism’ became a business buzzword during the internet boom of the late 1990s,’’ Guy Kawasaki told Harvard Business Review. “The idea is simple: Derived from a Greek word that means, roughly, ‘to proclaim good news,’ evangelism is explaining to the world how your product or service can improve people’s lives.’’
He added: “My job at Apple was to proclaim the good news that Macintosh would make everyone more creative and productive. I wasn’t just marketing a computer; I believed in it so much that I wanted others to experience it too.”
Both face the same strengths, challenges, and opportunities:
For decades, both car people and church people have relied on brand evangelists, who believe in their product or service fervently, aggressively sharing the love with others.
believe in their product or service fervently, aggressively sharing the love with others. 54 percent of younger Americans, Millennials and Gen Z, said in a recent study for Vice Magazine and the Virtue agency, they seek brands that “enhance their spirit and soul.” The same study found 77 percent of young buyers saying they “want to buy from brands that align with their values.”
Their customers have long debated whether to “shop local’’ or go online. Online car services like Carvana and even Costco offer to bring cars right to their door. Local churches similarly deal with global online ministries.
The pandemic accelerated every trend, including the desire for home delivery with as few physical contacts as possible. The desire to “order online’’ without leaving home became more desirable than ever.
Both marketing and religion answer a fundamental question: why?
Both neighborhood car dealers and churches have emphasized service, sales, and service staff offering personalized service, who “know you by name,’’ addressing every concern of “their people.’’
“We should be more concerned with reaching the lost than pampering the saved.”― David McGee.
Gallup: 85 percent of employees not engaged in the workplace
A Gallup study shows 85 percent of employees aren’t “engaged” in the workplace. When your local service providers don’t seem engaged, buyers are more likely to opt for the ease and affordability of shopping online.
Tempe, Arizona-based Carvana, known for its multi-story “car vending machines” and ease of ordering online, was the fastest-growing used car operation in the United States in 2018, selling nearly $4 billion worth of vehicles in 2019. Sales have shot up from just 2,105 cars sold in 2014 to 177,549 cars last year.
Ford Motor Co. recently responded to Carvana by launching a new digital platform, Ford Blue Advantage, allowing Ford’s 3,100 dealerships to sell their used-vehicle inventory in a single digital marketplace.
Consider the turnaround of Audi
Audi entered the U.S. market 50 years ago, selling just 7,691 vehicles in 1970. In just four years, sales leaped seven-fold to more than 50,000.
But numbers were essentially flat until 1984 when they topped 71,000. Numbers began to slide again over the next decade, crashing as low as 27,349 by 1996. The company said that if things didn’t turn around, Audi would exit the U.S. market.
Today, Audi sells nearly 250,000 vehicles per year, close to 10 times the number it sold in 1996. What changed?
The core of a spiritual relationship: Americans have learned to love — and desire — the Audi brand. | https://medium.com/leadership-culture/the-spirituality-of-retail-when-sales-sounds-like-faith-and-vice-versa-b55731127ef3 | ['Joseph Serwach'] | 2020-12-15 15:07:23.487000+00:00 | ['Marketing', 'Retail', 'Spirituality', 'Cars', 'Business'] |
Why the #KHive Won’t Switch to Warren, and You Shouldn’t, Either. | Warren’s Problematic History/Relationship with Indigenous People
Elizabeth Warren pretended to be a woman of color for years. https://web.archive.org/web/20140428203447/http://www.bostonglobe.com/metro/2012/04/29/elizabeth-warren-was-listed-minority-professor-law-directories-and/yBZTdrH3Qt8xRu6KZkLDlO/story.html?camp=pm She listed herself as “American Indian” with the AALS (American Association of Law Schools — the organization that a person applies with in order to try to get a position as a law professor), but she later unchecked the box, because she said she didn’t get “invited to events & groups” she’d hoped for.
3. In 2012, her own staff begged her to stop calling herself Native American because genealogists had proven her only tie was through an in-law, but she refused, and she KNEW all this years ago. https://nypost.com/2012/05/21/a-recipe-for-trouble/
4. She took a DNA test because of a dare made by Donald Trump, without asking indigenous leaders of the tribes involved, and she didn’t publicly apologize.
5. Trump said he’d give $1M to the charity of her choice if she took the test & proved she was Native American. She used that test, which is not how tribal citizenship is determined, to “prove” it, & she asked for the money. https://www.snopes.com/fact-check/trump-warren-million-offer-dna/
6. BUT — her being an Ivy League professor and pretending to not be aware that tribal citizenship is not determined by a DNA test taxes credulity.
7. AND — She asked for the $1M but didn’t really pursue it much — did it not matter that much to get that amount for a needed cause/charity?
8. Her “myths debunked” website only talks about why it’s offensive for Trump to call her Pocahontas, not anything about her role.
9. She said she thought she was indigenous because her grandfather “had cheekbones like an Indian.” (This is a horrible stereotype — imagine if she had claimed to be a member of another racial/ethnic minority because of a physical feature — it would be patently offensive to claim to be a member of a particular racial or ethnic group because a relative had “lips like an x,” “eyes like a y,” etc. Also, the term “Indian” is historically inaccurate, and it’s hard to believe a highly-educated person was so naive for 3+ decades as to use this term and claim “cheekbones” as a basis for tribal identity.) https://www.theatlantic.com/politics/archive/2012/05/is-elizabeth-warren-native-american-or-what/257415/
10. She contributed to a cookbook called Pow Wow Chow (a supposed collection of Native American Recipes). https://www.dailymail.co.uk/news/article-2146628/Elizabeth-Warrens-Pow-Wow-Chow-Cherokee-recipes-word-word-COPIES-famous-FRENCH-chefs-techniques.html
11. Warren’s “contributions” to Pow Wow Chow were recipes she stole from a French chef. https://www.dailymail.co.uk/news/article-2146628/Elizabeth-Warrens-Pow-Wow-Chow-Cherokee-recipes-word-word-COPIES-famous-FRENCH-chefs-techniques.html
12. She signed “her” recipes as “Elizabeth Warren, Cherokee.” https://nypost.com/2012/05/21/a-recipe-for-trouble/
13. Warren’s husband, Bruce Mann, also contributed to the “Pow Wow Chow” cookbook. His recipe, Oriental Beef Stir Fry, was plagiarized, word for word from the Oswego (NY) Palladium Times on January 24, 1983. He listed himself as a “Cherokee Indian” in the cookbook. http://www.fultonhistory.com/Process%20small/Newspapers/Oswego%20Palladium/Oswego%20Palladium%20Jan-Feb%201983%20pdf/Newspaper%20%20Oswego%20Palladium%20Jan-Feb%201983%20-%200282.pdf
14. “Pow Wow Chow” was the work of her cousin — a work that was offensive (and the cousin should have/did know better, because she worked for an indigenous museum,) and that was sold for a profit. https://www.dailymail.co.uk/news/article-2146628/Elizabeth-Warrens-Pow-Wow-Chow-Cherokee-recipes-word-word-COPIES-famous-FRENCH-chefs-techniques.html
15. Warren contributed a recipe of Crab with Tomato Mayonnaise Dressing to this cookbook. WTF? We should believe a traditional Native recipe used Mayonnaise??! https://www.dailymail.co.uk/news/article-2146628/Elizabeth-Warrens-Pow-Wow-Chow-Cherokee-recipes-word-word-COPIES-famous-FRENCH-chefs-techniques.html
16. The book is still being sold, all the recipes are offensive, & they are not indigenous recipes (one is for “Mexican soup.”). She has not recanted it, and she used the (non-indigenous) cousin to support her claim of supposedly being Cherokee. https://web.archive.org/web/20120521091038/http://allthingscherokee.com/cart/catalog/cooking-books/pow-wow-chow-cookbook-60.html
17. Warren made things worse by taking a DNA test, which has NOTHING to do with determining who is Cherokee, because of a DARE with Trump, which was a form of violence to indigenous people. https://www.hcn.org/articles/indian-country-news-elizabeth-warrens-half-mea-culpa-on-native-ancestry
18. She’s a hypocrite — she checked and unchecked the “Native American” box (that identifies a person by race/ethnicity) when it was convenient. https://www.google.com/amp/s/www.wsj.com/amp/articles/elizabeth-warrens-american-indian-claim-11549576347
19. She compared the accusations against her native heritage to the birther accusations against Obama.
20. Harvard touted her as its first woman of color in 1996, and it was mentioned in a 1997 Fordham Law Review article. She never corrected this.
21. She identified only as Native American for her bar registration and to get a job as a law professor.
22. Her non-apology in a bowling alley to Native Americans is gross.
23. Warrren claimed at one time that part of the basis for believing she/her family were indigenous is because she had “family photos” that confirmed this belief. This is beyond insulting and strongly suggests that her idea of “confirmation” of tribal identity (which anyone, especially someone with a doctorate degree, should know is not determined by “photos”) was to have pictures of people who supposedly “looked indigenous.” In other words, either she or someone in her family most likely took pictures of themselves appropriating stereotypical and/or rarely worn indigenous attire, such as headdresses, and/or that these people were in “red face.” Not only are such photographs/behavior nothing to brag about, they are beyond degrading and are worsened by the fact that she used them as a partial basis for her claims to be Native American.
24. The Delaware tribe, which is one of the tribal nations she claimed to be a member of, wants nothing to do with her. Why should a white woman let herself off the hook instead of doing something to truly make this up to them (if it’s even possible)? https://www.msn.com/en-us/news/politics/delaware-tribal-council-to-skip-meeting-with-elizabeth-warren/ar-BBYeuXm?ocid=st | https://medium.com/@Kattitude/why-the-khive-wont-switch-to-warren-and-you-shouldn-t-either-774839447a87 | ['Smart Ass Dem-Inist'] | 2020-02-21 16:53:36.291000+00:00 | ['Elizabeth Warren', 'Democrats', '2020 Democratic Primary', 'Progressive', 'Kamala Harris'] |
He Won Because … | by Donald J. Trump
He won because Election Booths were Rigged
NO VOTE WATCHERS, OBSERVERS WERE ALLOWED
My loyal Red State fans are super ticked
On this alone, Joe is DISQUALIFIED!!
The Lamestream Media wants me to croak
They won’t report on hordes of evil dead
Unearthed just long enough to cast their vote
Recall, without them I was WAY AHEAD!!
That shady firm Dominion is to blame
Bad reputation, tallies gone awry —
In the pockets of the Left — NO SHAME
Without them, I won Texas by a mile!!
Don’t hold your breath, I’ve only just begun
Concessions are for pussies — I still WON!! | https://medium.com/resistance-poetry/he-won-because-11e344cb4cfa | ['Joe Váradi'] | 2020-11-16 17:48:02.977000+00:00 | ['Poetry', 'Satire', 'Humor', 'Sonnet', 'Iambic Pentameter'] |
Using GitHub Actions to speed up CI/CD in data science projects | Inserting Data Science in the CI/CD process
The Data Science field is full of different frameworks, dependencies, and different languages that can be used according to the need and abilities of the data scientist, but a common truth amongst them is that they all have the possibility of being encapsulated by a containerization process, helping to ensure the reproducibility of the project.
With that in mind, the example I use to demonstrate GitHub Actions involves the development of a web application using R’s Shiny library. Nevertheless, the same workflow implementation could be used to deploy APIs developed with Python’s FastAPI, for example, or any other framework that can be encapsulated in a Docker container.
The project can be found here: paeselhz/ghActionsDockerShiny. I won’t go into the details of the development of the application, since the example I use is relatively simple and involves no elaborate development. The focus of this article is the containerization of the project and the workflow automation to build the image and store it in Docker Hub, making it available for further downloads.
Creating the Dockerfile
For those familiarized with the Dockerfile and its syntax, the execution is the same as expected in a project that will be developed, built, and ran locally with Docker. In it, we declare the base image that will be used for further installation of libraries and dependencies, as well as the configuration of the project, file copy, and other steps that usually can be added to a Dockerfile.
FROM rocker/shiny:4.0.0

RUN apt-get update \
  && apt-get install -y \
    libxml2-dev \
    libglpk-dev \
  && install2.r \
    --error \
    dplyr \
    shiny \
    purrr \
    highcharter \
    shinyWidgets \
    shinycssloaders \
    devtools \
    xml2 \
    igraph \
    readr

RUN R -e "devtools::install_github('wilsonfreitas/rbcb')"

COPY . /srv/shiny-server

RUN chmod -R 777 /srv/shiny-server
This script, located at the project root directory, is responsible for pulling an image that already has Shiny and its dependencies installed, and for installing the libraries that will be used by the app developed in R.
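If you want to sanity-check this Dockerfile locally before wiring it into CI, you could build and run it directly (the image tag below is just an example of mine, not one used by the project):

docker build -t ghactions-docker-shiny .
docker run --rm -p 3838:3838 ghactions-docker-shiny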
Creating the Workflow file
To let GitHub Actions know which steps to take for the workflow automation, it is necessary to create a file within the project, located at .github/workflows/main.yml. The file syntax is plain YAML, which is easy to write. In case the user does not want to do this process locally and commit the changes, GitHub itself has an online code editor for creating the workflow.
This file declares the name of the workflow, the triggers that will start the workflow execution, and the jobs it will be responsible for executing. The name and trigger parts of the file are highly customizable, and the user can change them in many ways. In the job part, there are a few steps needed for the job to log in to Docker Hub, configure Buildx (the tool that will be used to build the image), configure QEMU (a tool that allows multi-platform builds), push the built image to Docker Hub, log out, and clean the machine to ensure that no processes are still running.
# Setting up a Workflow to work with Github Actions
name: ci

# Controls to when trigger the GH Action
# Below are configurations to the following triggers:
# - commits at master branch
# - tag commits at the project
# - scheduled to run at 01:00GMT
# The user can also configure triggers at pull requests
# as well as remove branches from triggering GH Actions
on:
  push:
    branches: [ master ]
    tags: [ '*.*.*' ]
  schedule:
    - cron: '0 1 * * *'

# Below there is the job configuration to build the image
# and push it to a DockerHub repository
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v2
      -
        name: Prepare
        id: prep
        run: |
          DOCKER_IMAGE=<USER_NAME>/<REPOSITORY_NAME>
          VERSION=noop
          if [ "${{ github.event_name }}" = "schedule" ]; then
            VERSION=nightly
          elif [[ $GITHUB_REF == refs/tags/* ]]; then
            VERSION=${GITHUB_REF#refs/tags/}
          elif [[ $GITHUB_REF == refs/heads/* ]]; then
            VERSION=$(echo ${GITHUB_REF#refs/heads/} | sed -r 's#/+#-#g')
            if [ "${{ github.event.repository.default_branch }}" = "$VERSION" ]; then
              VERSION=edge
            fi
          fi
          TAGS="${DOCKER_IMAGE}:${VERSION}"
          if [[ $VERSION =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
            MINOR=${VERSION%.*}
            MAJOR=${MINOR%.*}
            TAGS="$TAGS,${DOCKER_IMAGE}:${MINOR},${DOCKER_IMAGE}:${MAJOR},${DOCKER_IMAGE}:latest"
          elif [ "${{ github.event_name }}" = "push" ]; then
            TAGS="$TAGS,${DOCKER_IMAGE}:sha-${GITHUB_SHA::8}"
          fi
          echo ::set-output name=version::${VERSION}
          echo ::set-output name=tags::${TAGS}
          echo ::set-output name=created::$(date -u +'%Y-%m-%dT%H:%M:%SZ')
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      -
        name: Login to DockerHub
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./Dockerfile
          platforms: linux/amd64
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.prep.outputs.tags }}
          labels: |
            org.opencontainers.image.title=${{ github.event.repository.name }}
            org.opencontainers.image.description=${{ github.event.repository.description }}
            org.opencontainers.image.url=${{ github.event.repository.html_url }}
            org.opencontainers.image.source=${{ github.event.repository.clone_url }}
            org.opencontainers.image.version=${{ steps.prep.outputs.version }}
            org.opencontainers.image.created=${{ steps.prep.outputs.created }}
            org.opencontainers.image.revision=${{ github.sha }}
            org.opencontainers.image.licenses=${{ github.event.repository.license.spdx_id }}
The workflow code has almost no external dependencies, given that the creation of the Docker image name and its tags happens within this code. However, it needs a pair of Secrets to log in to Docker Hub: in this case, the username used by Docker and a token to log in to Docker Hub (which can be generated here). With the username and token, the user just needs to go to their repository, open the Settings tab, and add the token in the Secrets subpage, as seen in the image below:
With these steps, the project should be able to be executed using GitHub Actions to allow the automation of the build, test, and deploy processes.
In the example used by this article, the final image can be found at Docker Hub here, and tested locally by running the command:
`docker run -p 3838:3838 lhzpaese/ghactions_docker_shiny:latest` | https://towardsdatascience.com/using-github-actions-to-speed-up-ci-cd-in-data-science-projects-77d9b1c88228 | ['Luis Henrique Zanandréa Paese'] | 2020-09-28 13:12:41.466000+00:00 | ['Docker', 'Data Science', 'Github', 'Github Actions'] |
When My Son Asked “Why’d You Even Have Kids Mom?” Here’s What I Wanted To Tell Him. | Photo by Taneli Lahtinen on Unsplash
Why’d you even have kids Mom? My son asked.
I gave him a cliche, but still honest reply. Because I’ve always loved kids…and I love the idea of life growing inside of me, then watching it continue... Plus parenting has been good for my writing, I laughed. (My kids tease me for writing poetry daily, on Twitter.)
He grimaced. But we fight so much… Did you want that?
He referred to the fact that his siblings had just stormed to their respective bedrooms after battling over a ball. They both wanted to bounce the ball in the house, and so my husband had distributed consequences (phones taken) along with a few unfriendly words, and removed the ball.
Washing dishes in the kitchen, I‘d been able to stay out of the racket, and so I only sighed, not meaning to be heard. But my son, my sensitive son, had heard me. He had read my sigh as a defeated sound, and his question made me feel a little guilty. I’ll admit, I’m worn out. Almost 50 years worn. I’m worn from this parenting thing, this trying to be a decent wife thing, this attempting to be a writer and artist and refugee helper and cook and house manager thing.
But I’m worn like a baseball glove that wants, needs, dreams of catching the ball, holding it, again and again. I’m worn under an enormous space of sky, among millions of other worn moms, and actually the game I’m in is pretty spectacular.
Have you ever watched your family, from the outside? Have you observed how many exchanges happen in a single scene in your home? Steven Taylor who wrote Out Of The Darkness calls this act of observing life “transcending the taken for granted syndrome.” I guess I’ve sort of been transcending lately. But maybe not exactly how Steven meant it. I’m not higher up. I’m lower down, actually. I’m no just looking at the blessings, but I’m feeling the grit. My family has been through tough stuff dealing with special needs challenges, health challenges, online bullying, schoolyard bullying, depression, changing schools, changing continents. It hasn’t been pretty.
Photo by Brooke Lark on Unsplash
So when we’re eating carrot soup, bread and oven roasted potatoes and the kids are shooting their sarcasm, their little comments sideways over crumbs about friendship and learning and hurting and loving and surviving; I see something cool in the mess. I see each member of my family living. I see us exerting needs, trying to unbend, to uncover the truth about self, about each other. This is why I’m a mother. It is my only time to actually mother. It is my only time to live.
Maybe I do this transcending thing because I actually am writing a book about a fictional family. My novel is about a fictional mother and child finding and recognizing the truth, about figuring out stuff beyond the generalizations, the misperceptions of one another. And so when my kids say things like Mom has gone to her world again. They don’t know that I’m not going to another world, exactly. I’m trying to enter the world, to be in the world we’re actually in.
The Child Mind Institute explains that “mindful parenting” insists we must let perfectionism go. Sure, I can recognize accomplishments. But real life is flawed. Real life is about living within what’s true, right in front of me. I don’t want to miss the arguments that reveal my children’s soft spots, the mannerisms that shift, the changes in voice, the evolving taste in clothing, in food preferences. I don’t want to forget how we all teach one another about style, soul, about pain, about faith, about getting over crap, so that we can become people who argue and struggle and make this world better.
I don’t want to overlook the time that my son taught us at the dinner table why the bizarre YouTuber PewDiPie makes a lot of money, the time my other son taught his brother a new soccer skill, under the table. I don’t want to forget the expression of gratitude on my daughter’s face because my husband simply walked in the door after a long trip; the look on my husband’s face when our son asked him detailed questions about his job, his football career.
Perhaps we have our children to fulfill a dream. The Atlantic’s Olga Khazan wrote about reasons to have or not to have children after polling parents online. There are many answers, of course. But maybe we have our children to actively join the question of existing, to struggle, and then identify the beauty of small triumphs, beyond ourselves.
We’ll sweat and worry in our parenting. We’ll collapse sometimes. And then we’ll get up again. If we’re aware, we’ll see that tiny accomplishments are monumental. We’ll notice the smile that wasn’t anticipated, the boy coming home happy when we didn’t think he ever could again. We’ll hear our children express their own gratitude some day, in the small things, and it‘ll seem as if the moon can rise right inside of a home.
I wanted to tell my son this. But he wouldn’t understand, yet. | https://amyaveschallenger.medium.com/when-my-son-asked-whyd-you-have-kids-mom-here-s-what-i-wanted-to-tell-him-c2eace2905d5 | ['Amy Aves Challenger'] | 2020-02-06 17:08:31.461000+00:00 | ['Special Needs', 'Mommy Bloggers', 'Motherhood', 'Parenting', 'Moms'] |
Hold a Safe Open House: | Holding an open house is a selling tool that real estate agents use to reach more potential buyers. Some buyers have not chosen a real estate agent and frequently start their house search by visiting open houses and looking at homes on real estate websites on the internet. A good open house should be supported by streets signs, internet ads, newspaper and marketing to local real estate organizations. There are no chances for Amarprakash chennai complaints because of their product quality.
Many individuals cannot qualify for mortgage financing due to their profits and they may be self-employed, or have had a recent insolvency, foreclosure, or excessive derogatory credit. Many neighbors are loud; mainly, if they have never seen the inside of your home and how it is decorated. If you have the same model home, they may merely want ideas on how they can decorate their house.
Open houses are the ideal chance for someone to check out your home and what valuables you have. If the real estate agent is not prudent in watching every person that shows up to look at your assets, it will give potential thieves the opportunity to steal proceeds. If the real estate agent is not practical in watching every individual that shows up to look at your property, it will give probable thieves the opportunity to steal valuables. | https://medium.com/@amarprakash/hold-a-safe-open-house-6e21349b7614 | ['Kartherine John'] | 2015-10-14 11:11:18.942000+00:00 | ['Real Estate', 'Real Estate Investments'] |
Building the Tree of Knowledge — Exploring the Use Case for Educational Chatbots | Building the Tree of Knowledge — Exploring the Use Case for Educational Chatbots aXcelerate Follow Apr 11, 2018 · 3 min read
With an increasing number of tech giants getting into the business of the smart home assistant, modern life is edging closer and closer to a sci-fi movie’s depiction of future home life.
“Hey Siri, add bananas to my shopping list.”
“Alexa, buy more deodorant.”
“Ok Google, set my alarm for 7 AM.”
Beneath the dazzling home control functionalities and fancy voice interactions lies the essence of a chatbot program, which maps human utterances to intents, and from intents reaches programs to carry out the intended tasks:
Utterance → intent → task procedures → output
This is the exact model of Amazon’s Alexa, and its outputs carry out hundreds of services: ticket booking, food ordering, hotel room reservation, and more.
Smart Assistants in Education
While the use case for an education industry chatbot may be slightly different, its primary purpose is to retrieve the knowledge required by the student — be it disambiguation of concepts, clarification of contexts, or fetching resources. Like home assistants, it too is only bound by one intent: information retrieval.
The assistant may solely depend on a single or a combination of several NLP (natural language processing) algorithms to handle the query, but without the proper structuring of data, NLP algorithms alone will only generate a fixed percentage of wrong outputs.
Sensible curation of data is essential for effective information retrieval. Data is organised as an index list for a search engine, but for a domain-specific knowledge base, information is adapted into a tree structure. There is a reason we use the phrase knowledge tree — the tree branches out as knowledge goes from general to specific, from low-resolution to high-definition.
The Tree of Knowledge
Knowledge points are put together as tree nodes, and the identification of the most relevant topic is simply a traversal of the tree, each step being guided by a particular filter function.
The transformation of Wikipedia’s knowledge base for “Anthropology” into a tree structure. Answering a query takes log(n) time in complexity.
Organising the data in a way that’s optimal for the intended task is essential to information retrieval. A tree structure will enable a fast and accurate query, which makes it an ideal way to structure a chatbot’s knowledge base. | https://medium.com/vetexpress/building-the-tree-of-knowledge-exploring-the-use-case-for-educational-chatbots-fbb860d81dde | [] | 2018-04-11 23:48:09.801000+00:00 | ['Edtech', 'Artificial Intelligence', 'Chatbots', 'Technology', 'Education'] |
Introducing CryptoBall | CryptoBall: A Fully Decentralized Lottery on the Ethereum Blockchain
What is CryptoBall?
CryptoBall is the first truly decentralized and trustless alternative of PowerBall. Without central intermediary, ticket, prize pool, winning numbers and payout are controlled entirely by Smart Contracts.
How To Play CryptoBall (illustrative, game parameters to be finalized)
Complete Decentralization and Trustless System
Inspired by CryptoKitties and EtherDelta, we envision a new breed of decentralized software that has no central ownership (think Satoshi Nakamoto) so the value is derived entirely from the trust of existing code base and network value from active network participants.
New token model allows us to solve chicken-and-egg problem traditionally faced by normal startups where the value of the network is accrued from the number of users on the platform. It is extremely difficult to get early adopters to join the nascent network and this is where normal startups fail to take off. By creating the right token economics, the network can incentivize potential users to join since day 1 and help grow the network. We will describe in later blog post on how our token economics work.
We believe in the power of decentralization to provide censorship-resistant solution with no particular owner yet has the power to bootstrap “the network effect” necessary in many businesses. A peer-to-peer game-of-chance will be one of the many applications that heavily rely on network effect and thus is most suitable to be built with the new decentralized business model.
Moving Forward
We will be releasing our whitepaper for community review soon. Please stay updated on our social media channels as we release more updates including whitepaper, development roadmap, beta product and ultimately the launch of our platform in 2018.
Please check out our website at cryptoball.tech and subscribe to our mailing list to receive regular updates. We will post updates through our mailing list, Reddit, Telegram (announcement, public channel), and Twitter.
Most importantly, check out our public Github for actual development update. We welcome every feedback. | https://medium.com/cryptoball/introducing-cryptoball-1d80527e65ab | [] | 2018-03-23 08:53:04.063000+00:00 | ['Blockchain', 'Token Sale', 'ICO', 'Ethereum', 'Bitcoin'] |
Re: Tonya Graham, Ashland City Council | Sigh. One more time. I tell myself, I can do this one more time.
In response to my 11/11/2020 newsletter: A Pocket Full of Solutions, I received the following email from a locally elected political official.
Kokayi,
I’m sorry you have to deal with this given all you are already managing given your justice work. Racial justice work will continue in Ashland with the new council and I’d like to talk with you at some point before the new year about what that might look like.
Take care,
Tonya Graham
Ashland City Councilor
Context is everything. So, please, let me put the previous words into proper perspective.
Here in Southern Oregon, I developed a political relationship with Ashland City Councilperson, Tonya Graham, spanning three years. We have one issue between us: police misconduct.
I first became imitated alarmed at the behavior of the Ashland Police Department when the national news picked up a story of a Black teenager being unlawfully arrested in November 2018. Performative measures were quickly executed in order to minimize public outrage. Police Chief Tighe O’Meara apologized and made a personal visit to the young man and his family. The public was denied access to the personnel files of the three attending officers and closure to the incident has occurred.
Meanwhile, in the Spring of 2019, O’Meara wrote a controversial ordinance, using language crafted up North, in Beaverton. He submitted it and Tonya Graham supported it from the beginning.
By summer of 2019, the City Council was in a fierce battle with its residence over a Stop and ID ordinance , as the local activist community named it. I, myself, was the second of over 50 persons to make public testimony, insinuating the racist nature of the proposed ordinance. I have a vested interest in the issue of police misconduct.
This is the work before me. I am exhausted, though. It is called racial battle fatigue. So, I must tell myself to breathe and choose, one more time, push through and find the space within me to be gracious and kind with my words and ideas.
From:[email protected]
Re: Tonya Graham, Ashland City Council
Peace. Thank you for sending this email. I will the following words find you in space where your personality can accept them.
I find that I suffer emotionally and psychologically whenever I deviate from my cultural standards. Proper education always corrects errors. I say that to myself nowadays to remember the purpose this life is eternal growth. Otherwise, I may find myself wasting time, searching for that which does not exist.
Thank you for your step towards growth when you responded so graciously to my celebration of Police Chief Tighe O’Meara’s growth after the SOBLACC/BASE Police Forum on October 21, 2020. The personal testimonies of two Black men, who live in the Rogue Valley, expressing a rational fear of Ashland police officers finally pierced the intellectual shell the invited law enforcement officers maintain in order to perform their jobs.
One Black man publicly stated he was stopped nine times in six months by Ashland Police officers while driving his vehicle within the city limits.
There is no legal evidence or police record of those stops being linked to racism, other than the psychological and emotional damage experienced by Black bodies from them.
That is what is takes. For those in white bodies, whether aligned philosophically to the Left or to the Right politically or spiritually, to accept bringing race into the room, it takes a personal of color sincerely bleeding out emotionally for race to be something the white body can feel. Race is thought of in the white mind; it is not felt in the white body. Race is felt through the Black or Brown bodies the white body comes in contact with. The white body itself, refuses to experience itself in racial terms, as a white body, and enters the white mind to manage the discomfort of BIPoC viewing them in white bodies.
To be most effective, when talking about race, the Black, Indigenous, Persons of Color (BIPoC) has to soothe the white bodied persons, continuously, by being deferential. They must act as if race “affects” the white mind in a manner the BIPoC can easily dismiss.
“You are safe,” the BIPoC must radiate. “We are going to have the following discussion, using the frames of white supremacy, about the American racial caste system. We will not come to any ultimate conclusions which make a demand anything really change. The goal is not going to make you feel bad about being white. You are safe. Breathe.”
The race talk has to feel like a conversation. The BIPoC has to entertain the lie of white racial innocence. The BIPoC must imagine how it is a mystery and wonder through white bodied eyes that racism is happening. The white person speaking before the BIPoC know no white person in their lives who ever acts racist.
A story about racism has to be presented as off screen violence, like during an Oregon Shakespeare play, yet carry an emotional punch. The individual white persons enjoys acknowledging racial boundaries being rigid, but does not enjoy the benefits of white supremacy, or the negative social position that it puts the white bodied persons in at all times.
In your email, Ms. Graham, you are being professional. You causally ask me to once again emotionally bleed out, for the sake of a conversation on how the Council can support anti-racism efforts.
That’s a big ask from you right now. This is an amazingly audacious ask right now. It is an ask that is beyond the normal ask of the white bodied who request a racial discussion.
You are asking me, specifically, to not view you as an open oppressor, and have a conversation with you no differently than I would inside a university classroom.
You are asking for this interview as if you have not actively upheld white supremacy. I wish for me to speak to you as a human being.
At this time, I choose to reduce the amount of racial battle fatigue I am experiencing and I must decline, for my own mental health. You have no idea how much emotional labor it took to find the appropriate words to compose this email and not overtly try to harm your ego.
Thank you reading these few words. Thank you for choosing to accept my no. | https://medium.com/@royalstar907/re-tonya-graham-ashland-city-council-645e80bb3bed | ['Kokayi Nosakhere'] | 2020-11-18 23:36:49.903000+00:00 | ['Southern Oregon', 'Local Politics', 'White Women', 'Racism', 'White Supremacy'] |
Hey are you Looking for Hotel near dehradun. | End Your Search here With Pench o the resort The resort is strategically designed and built by the JDA Infra Limited. It is a decade old name in the market. The JDA Infra Limited is the trendsetter and pioneer, who introduced the concept of private colonies and townships to Jaipur (Rajasthan ) & Dehradun Highway. Continuing its dream journey, now it has come up with another new trend in the market i.e. youth focussed townships with 24*7 parties and an option to rent your plot to earn extra money Nested in scenic beauty area the Pencho Resort spreads over in 25 acres of area and strives for a wonderful experience. The resort is located on the Delhi Dehradun Highway, Hotel near Dehradun, Hotels in Dehradun, suiting any type of groups, be it family, corporate, friends, wedding or for just some honeymoon vibes. An ideal location to spend some time with your beloved ones. It is also another luxury resort that serves as a retreat for corporate. Away from the honking of vehicles this resort is a perfect pick for overwhelming tourists. The ideal and serene location of this resort park, ample open space available, the lush gardens, modern youth vibes, tree house, open MP3 screen and a expanded vibrant pool makes it an exciting destination for all age people.
https://www.penchotheresort.com/ | https://medium.com/@isolssaurabh/hey-are-you-looking-for-hotel-near-dehradun-aa88eb4eed17 | [] | 2021-12-22 11:32:28.310000+00:00 | ['Hotel', 'Hotel Booking', 'Events'] |
WTF is Two-Way Binding in VueJS? | Vuejs: the Progressive Javascript Framework
In Vuejs, you can use the v-model directive to create two-way data bindings on form input, text-area and select elements. Two-way binding allows you to update the element based on the input type automatically. v-model is essentially syntactical sugar for updating data on user input events.
<input v-model="message" placeholder="edit me">
<p>Message is: {{ message }}</p>
The property is set between the quotation marks. The property is the name we want to bind in both directions.
How is this different from v-bind ?
Remember v-model is syntactical sugar.
<input v-model="something">
is syntactic sugar for:
<input
v-bind:value="something"
v-on:input="something = $event.target.value"
>
or shorthand syntax:
<input
:value="something"
@input="something = $event.target.value"
>
In summary: v-model is a two-way binding for form inputs. It combines v-bind which brings a js value into the markup, and v-on:input to update the js value. | https://medium.com/zero-to-code/wtf-is-two-way-binding-in-vuejs-3f53780acb93 | ['Christopher Agnus'] | 2018-10-29 01:32:14.017000+00:00 | ['Vuejs', 'JavaScript', 'Software Development', 'Software Engineering', 'Reactjs'] |
The Audience Takes Charge | “Ad blocking is not something we control; it’s something the consumer controls.” Mike Donahue, ad agency veteran and former executive vice president of the American Association of Advertising Agencies, is talking to a roomful of leading marketers at the Wharton School of Business.
“If we don’t start to change this business,” Donahue continues. Then he pauses for a moment and takes a different tack. “If you don’t like change, you’ll like irrelevance a lot less,” he concludes.
Ad blocking is just one sign of the recent popular rebellion against advertising. Such signs suggest irrelevance is where much of the ad business has been headed for the past 20 years.
Donahue was one of many industry leaders expressing deep concern at the recent annual meeting of Wharton’s Future of Advertising Program, whose global advisory board includes academics, agency executives, clients, experts from the major digital platforms (like Google and Facebook) and others. The program is one of the country’s most important forums for marketing thinking.
“Ad blocking is just one sign of the recent
popular rebellion against advertising.”
Blocking ads is the most visible and (to the industry) most terrifying symptom of the powerful phenomenon at the heart of the Internet: audience control. The Internet has exploded across the globe primarily because it gives audiences unprecedented and irreversible control to choose the media they will consume — how, when, from whom, and in whatever form they wish.
The Internet has thoroughly revolutionized the media business. Now it’s doing the same to everything else, giving people more control over their cars, homes, offices, refrigerators, thermostats, and so on. Such control is the addictive gift the Internet gives.
An embarrassment of audience antagonism
Audience control has created a uniquely embarrassing moment for adland. The audience (formerly known as “consumers” or “users”) has a stunning set of digital ad-avoidance tools that includes DVRs, streaming audio and video, news-aggregation widgets, ad blockers, browser extensions that disable the ad industry’s privacy-invading, data-gathering trackers and lots more.
This puts advertising in the same boat as “real” media companies — entertainment and news outfits like NBC Universal, Disney, Netflix, The New York Times, Def Jam, Random House, and so on. If you don’t create stuff that really matters to people — stuff they actually want to see and hear — you will be ignored, avoided, and blocked.
It was not until late last summer, with the steady rise of ad-blocking software, that the ad business was finally forced to admit it had a problem.
Digital advertising’s trade group — the Interactive Advertising Bureau — first blamed everyone but the ad business, declaring ad-blocking “highway robbery.” In adland’s self-deluding narrative, “consumers” signed an unwritten, perpetual contract in the 1950s requiring everyone to tolerate annoying, interruptive ads in exchange for free content. The audience, however, can’t recall having made such a stupid deal. The IAB soon turned tail, declaring the ad industry had “messed up” by ignoring the audience’s needs and desires. The confession sounded hollow, frankly. (If you’re curious, judge it for yourself.)
IAB chief Randall Rothenberg later doubled down on IAB’s hubristic message, accusing ad blockers of trying to “constrict … freedom of speech.”
Waking up decades after the alarm goes off
There is, of course, no excuse for this mess. A hint to the audience’s insurrection actually arrived some 17 years ago with the Cluetrain Manifesto, a declaration of the sweeping social and commercial revolution the web was spawning. Cluetrain’s authors thought they were stating the obvious, but their manifesto and subsequent book created a sensation.
The manifesto set forth 95 theses — new rules of digital media and the new audiences being collected by the Internet.
Thesis 74: “We are immune to advertising. Just forget it.”
Thesis 75: “If you want us to talk to you, tell us something. Make it something interesting for a change.”
This was one of the first of an uncountable number of warnings issued over time to the media industry, including the ad business.
It was 2001 when Yoram Wind, a globally known marketing expert, first wrote about the rise of “empowered and skeptical” audiences online. Wind, known to everyone as Jerry, is the senior Wharton professor and consultant to industry who founded and leads the Wharton Future of Advertising Program.
“The thing they want to avoid doing is trying to block the ad blockers. It’s the dumbest thing they can do.”
— Professor Yoram Wind, Wharton School of Business
Wind sees ad blocking as the audience’s reasonable response to “dumb, destructive ads that are meaningless.” He believes the industry must welcome ad blockers and try to make them smarter so audiences can still choose to see marketing messages that meet their personal interests. He has a low opinion of one industry response, which has been to encourage technology that defeats ad blocking so people can be forced to see ads. “The thing they want to avoid doing is trying to block the ad blockers,” Wind says. “It’s the dumbest thing they can do.”
The rest of the media business has been struggling longer to cope with the consequences of advancing audience control. Half the newspaper business has disappeared because the audience learned to curate its own news online. The music business failed to sell music in the form the audience wanted; digital streaming took over by allowing people to compile personalized playlists, one song at a time.
The wake-up calls keep arriving. But the backers of traditional ad-supported TV, the lifeblood of the old ad industry, seem to remain holdouts, firmly believing TV spots are largely immune to the consequences of audience control. They remind me of climate-change deniers on a hot winter day.
During a keynoter at CES2016, NBCUniversal CEO Steve Burke called advertising without TV spots “unthinkable,” Advertising Age reports. Burke added, “People are going to want to watch great television on a great television set.” Yes, Steve, but that doesn’t mean they’ll much longer tolerate having the great experience continually interrupted by Viagra, GEICO, and even stupider advertisers.
The latest news is that ad-supported TV and arbitrary bundles of paid programming on cable are under heavy assault from the web. To make up for falling ratings and rates, both cable and broadcast increased ad time per hour. Now the audience is forcing a retreat to fewer ads. The revolution is being led by Netflix, Amazon, and the like, all of which give people what they want: Complete control. No interruptions. No stupid TV spots. No ads at all, in fact.
Hey, kids, what time is it?
The news media business got theirs. Then the music business; the book business. Now it’s advertising’s turn.
This is not a positioning, messaging, or PR problem. This is a fundamental product problem. Translated into the language of advertising, “The consumers are rejecting our products.”
“If you want to serve your clients, you must be a ferocious advocate for their audiences.”
As everyone with any sense is saying, the time is past due to put the audience first. That may sound easy; it isn’t. It means that it’s far more important to find out what really matters to the audience than it is to ask a client what message it wants to deliver. Ad blockers exist because too many clients and agencies want to deliver too many messages that don’t matter to a single real person.
If you want to serve your clients, you must be a ferocious advocate for their audiences.
The Internet uncorked the genie of audience control. It is never going back in the bottle. It’s time to deliver really valuable experiences to “empowered and skeptical” audiences. It’s time for compelling stories, honest information, standing for something more than the next sale or the next election, and being something more than a series of talking points, “messages” or product claims.
Welcome, as I always say these days, to the Post-Advertising Age. | https://medium.com/a-more-perfect-story/the-audience-takes-charge-c57d78b65e74 | ['Kirk Cheyfitz'] | 2018-02-23 14:41:58.046000+00:00 | ['Media', 'Ad Blocking', 'Innovation', 'Advertising', 'Digital Marketing'] |
Why We Should Know About The Best Baby Food | Baby is the best Achievement in life. Evey Parents Careful about their baby. A baby can’t talk and doesn’t decide their food. Parents are paly a main role here to choose baby food. Good and healthy food make a baby well. If we want our baby’s health always good we Should Know About The Best baby food.
First, we know about A day food menu for baby.whene we serve food our baby. And its healthy and testy for a baby. Now we discuss something new about baby food.
A day food time for baby age 2–4 year
We served our baby food at first something healthy before breakfast. Some milk with honey, Froite juice, etc lite food. Then the time of breakfast we served some delicious and tasty food that our baby prefers to eat. Lince time we keep our menu some rice fool boil and soft that’s great for our baby. Afternoon we served our baby some nutritious food such as like half boil egg, fruits, etc, at night we served our baby healthy food milk is perfect but before eating soft heavy food because they passed a night and babies relax to sleep.
Babys food chert
Breakfast food
Breakfast is the first eating time in a day. Because we must carefully breakfast food for our baby. some of the food list given below:
Milk with honey.
Pen cake with egg.
Rice with some vegetables.
Some Fruit juice.
Such as:
o Apple juice
o Banana
o Mango
o Orange
o Grapes
o Pomegranate
o Watermelon
o And some seasonal fruits
Lunchtime food
Lunch is very special for a baby because lunch food must have protine, Airon, vitamin, nutrition. Such as the food we served:
Some rice with vegetables.
meet.
Soft rice with pulses.
Fish.
Chicken.
Soup.
This type of food served a different way to change every day. And take care of our baby food that which type of food the baby mostly likes. We served that food most of the time. We do not pressure our baby to eat food. It’s not good for a baby and their health.
Dinner food menu
Dinner is also important for the baby. At night baby’s growth and brain mostly increase at night. That’s why we keep our baby’s dinner menu some nutritious food.
Such as :
Pulses, rice, and carrot khichuri
Rice Pages
Vegetable with pulses
Rice with one slice fish
Potato with rice and some egg.
Tomatoes carry with some lite rice
After dinner complete and before sleep we have eaten our baby some milk, readymade sheaks. That helps our baby Sharper, healthy, Stronger, and intelligent.
Summery
Last thing it’s we should very carefully our baby food because food has many effective roles in a babby’s life. Good food helps the baby healthy and well. If we want our baby is lead a healthy life we must maintain this type of food and chert. | https://medium.com/@nayonazizul/why-we-should-know-about-the-best-baby-food-daa7e552e8d7 | ['Azizul Islam'] | 2020-09-28 10:35:54.668000+00:00 | ['Baby', 'Food Timing', 'Baby Food Meals', 'Baby Care', 'Baby Food'] |
How I Deal With Dissociation as an Abuse Survivor | How I Deal With Dissociation as an Abuse Survivor
I can’t function when I’m not present
Photo by RF._.studio from Pexels
I have a hard time feeling my feelings. I often don’t know they are there, until several of them gang up and perform an intervention on me — and then suddenly it’s all too much.
Abuse survivors are well known to dissociate. It’s a fairly common maladaptive response to trauma.
For me, dissociating from early on in life was what saved me. I could leave my body, and exist in a dialogue-only part of my mind, where I could comprehend what was going on, but it didn’t break me.
I could stay quiet and small, and that kept me safe.
While it was an essential life skill for me as a child in an unsafe home, today as an adult trying to make it in the ‘normal’ world, I find myself lost from time to time.
I miss important cues that most people would get from their feelings, because mine are locked away.
I can go for days, weeks, and sometimes months without connecting with my feelings, especially the negative ones.
I’ve been hearing the term ‘toxic positivity’ a lot lately, in relation to other people forcing their need for positivity onto others. But personally, that’s something I’ve been doing to myself for as long as I can remember.
When something goes wrong, my immediate response is “It will be fine.”
Then I tell myself to suck it up, and I press on.
At the root of it all, I’m scared that if I acknowledge an emotion it will carry me away, and I will stop being an independent and functional adult.
In my early 20s I slipped into a period of low functioning depression where I didn’t get out of bed for many months. I don’t remember how I got out of it, but I just know that I can’t go back.
For abuse and trauma survivors, our fear of feeling our feelings can be related to our fear of losing control and efficacy over our own lives. We know that our grief will engulf everything if we let it, and so we don’t.
We march on, chanting our mantra of “It will be fine,” because that is the safest way for us to be. But this is only effective in the short term.
So what happens when we get so emotionally constipated that we can’t breathe anymore? What if life has become black, white and grey because all of the joy has slipped away?
And most worryingly — what happens when we fail to read the warning signs that our life is going in the wrong direction.
Take 2020 for example. I think most people accepted that something was very off about this year long before I did. While our economy was shutting down, I ploughed on with my self employed freelance lifestyle, while friends of mine who are more rooted in reality were applying for any jobs they could get.
Looking back, I wish I’d started that process sooner. Now my freelance work has dried up, and I still don’t have a job.
I also haven’t been processing my anxiety about my lack of income, at least I wasn’t until it started coming out as anger.
How to feel feelings again
So now that I am finally awake to the fact that I have been floating about in a dissociated state for most of this year, I intend to do the work to get unblocked emotionally, process my feelings, and join the rest of the population on planet Earth — and also to sort out my lack of income.
I need to be present, aware and feeling everything to make better decisions.
Step 1 — getting back into my body
The thing about dissociating is that it takes me out of my body, which is full of aches and pains, bad memories and horrible feelings. I don’t really want to be in there.
But I know that my body can be a good place too. It’s also where I feel joy, love and light. But I have to be prepared to take the rough with the smooth.
We can’t cherry pick the emotions that we want to feel.
So in order to begin feeling — all of it — I first have to get back into my body.
I have to remind (or convince) myself that today I am physically safe, and so my body is a safe place to be too.
In a previous post, The Law of Attraction for Abuse Survivors, I said this:
Getting into an abused and painful body is like stepping into a freezing cold and rapid river. The current is strong and it will take your breath away!
I want to raise this point again just as a reminder not to force yourself to get back in to your body, or do it too quickly. Take your time and stay calm.
As for the how part, I find anything tactile can help me with this, from petting my cats to having a hot bath. Touch is a great way to reconnect with ourselves.
But it has to be done mindfully — and this really is the key. If you are attempting this process too, then just focus on being in your physical form, and notice sensations like hot and cold, soft and rough.
You might find that a mantra helps too — try a few things and see what works for you.
Step 2 — Let a little out first
Whether at this point you are still reaching for any scrap of emotion within you, or you find yourself now suddenly holding back the flood gates, it’s again important to go slowly.
Processing your feelings isn’t another task or chore for you to power your way through. This is deep work and should be done carefully.
Try just letting a little out at a time. Try to focus on just one feeling, or just one situation in your life that you had been avoiding, and not the whole bigger picture.
Have a trusted friend on standby if you need somebody to support you. This is always a smart move.
If you are trying to feel but still nothing comes, then maybe music will help you. This usually works for me. I go for a walk or a run with a well chosen playlist of songs that make me feel, and I usually manage to cry a little.
It feels great to finally vent some of the pressure.
Final step — have a way forward
So, if you have reconnected with your body, felt your feelings and vented them a little, you will now need to know what comes next.
This is important for everyone, but perhaps feels even more important for abuse survivors. We can’t stand not knowing what to expect — for obvious reasons.
So making yourself a plan to deal with whatever you have been bottling up or avoiding is the smartest thing you can do now.
Don’t just think about it, writing it down and making it your official plan will give you a sense of security that you will likely need to avoid dissociating and floating off again.
Try and stay down here on Earth for as long as you can. You are more effective at solving problems and keeping your life on track while you are here, in your actual life.
Finally, remember that any habits we form in our early years, such as dissociation, are incredibly hard to break. So keep working at it, but don’t beat yourself up when you drift off again — because you likely will.
We all need reminders to stay on track. For me, I remind myself best when I write or talk about my dissociation. Maybe that would help you too?
Be gentle with yourself. You have come a long way and this year has been especially challenging for us all. But if you read to the end, then you are clearly thinking about processing your own emotions and that’s a great first step.
Just take one step at a time and keep being brave. | https://medium.com/the-virago/how-i-deal-with-dissociation-as-an-abuse-survivor-8dfff09f2919 | ['Sarah K Brandis'] | 2020-11-25 14:56:15.302000+00:00 | ['Mental Health', 'Abuse Survivors', 'Psychology', 'Life Lessons', 'Women'] |
Mulan Musings | Why the 2020 character is a bummer
Photo by 张 学欢 on Unsplash
Imagine an Asian girl in the 1990’s who grew up in a patriarchal society, observing that gender roles are strongly embedded. It is so deeply embedded that when a girl prepares a meal for the first time, it is not uncommon to hear a remark “you can now be wedded to a husband” (in Filipino language, “pwede ka na mag-asawa.”)
Then imagine that same girl watching Mulan, the animated film. She sees that the princess joins the army, fails several times at training, yet strategises and perseveres. She thinks outside the box. In the end, she succeeds at her goal. The girl’s heart races, rallying for the princess, as she breaks the status quo. She thinks Mulan is awesome.
Needless to say, that girl was me and I’m probably not alone with this experience. Now we fast forward to 2020, when we have even better production technology and see Mulan portrayed by real people. Among the several critical changes they made to the animated story, there are two points that saddened me most. | https://medium.com/the-innovation/mulan-musings-bb83f621cf7f | ['Valerie Dela Cruz'] | 2020-09-21 21:31:19.170000+00:00 | ['Mulan 2020', 'Movies', 'Mulan'] |
Credit Kudos named in Credit Connect’s Top 20 Company Power List | Credit Connect has named Credit Kudos in their newly announced Credit & Collections Technology company power list. The power list is the first edition of what will be an annual round-up of the most prominent innovating companies within credit and collections technology. Within the list, Credit Connect has identified the top 20 ‘Premier’ innovators highlighting the achievements and successes of the top-performing companies.
Colin White Founder of the Awards and the new Power List said “The annual guide will provide a snapshot of the technology innovators within credit and collections — it will showcase who is leading the way with innovations and have backed up their status by entering the awards. Finalists and winners have provided measurable data within this process. The companies that have made the top 20 ‘premier’ list are now highlighted by their dedication to innovation. All the companies listed have provided solutions that have helped to enhance the best customer outcome through lending or collections processes.”
The annual company power list will act as an index of technological innovation achievement recognising companies for the progression of industry standards and excellence. The list has been compiled from the Credit & Collections Technology Awards results from the past three years and is a culmination of research undertaken by Credit Connect to analyse the performance of finalists and winners of the Awards.
You can see the full list here. | https://blog.creditkudos.com/credit-kudos-named-in-credit-connects-top-20-company-power-list-ac0c7c1fcf69 | ['Phoebe Allen'] | 2019-11-27 12:23:04.876000+00:00 | ['Credit', 'Fintech', 'Open Banking', 'Startup', 'Finance'] |
The Complete Guide to Bonus Bets | What Are Bonus Bets?
Today, foreign operators and even startups have flooded the bookmarking industry by storm. The bonus bethas become part of the punting landscape. You get the bonus bet for signing up, signing in, having a bet, not having a bet, and also contacting the customer service.
You also get a bonus bet for when your horse runs second or when your team loses by a kick. You also get one when your multi bombs out. For the bookies, this is a simple business because the chances of you to stake with them is higher than with other dozen options.
Whether they have to give you the bonus, you will still be back to use nonetheless, whether they flog you with more offers or not. Even when your bonus beta actually wins, they have turnover requirements on the winnings that restrict you getting the cash and being done with.
In some cases, you even have to stake the winnings more times before you can withdraw the money. They are also always ahead of you when it comes to the betting slots by putting minimum prices for the turnover bets.
Bonus bets seem to look like free money. However, that’s not always the case because the odds of your average mug turning into real cash in your pocket is very low. However, the beauty of bonus bets is the fact that you won’t risk any of your capital.
With bonus bets, instead of hoping that the bet comes in, you can use your edge to cover both a bet and still guarantee yourself of a positive result.
How Do Bonus Bets Work?
Compared to free but stakes bonus stakes are returned. When a free bet you placed wins, you may receive he winnings, but you won’t acquire the free bet stake. With a bonus bet, the bonus stake is returned along with your winnings when you place a bet on something, and it wins.
However good that sounds, there is a catch! Before you can withdraw the bonuses, they have to be rolled over a number of times. This is referred to as the wagering or rollover requirement. Only after you meet the wagering or rollover requirement or when you lose all your bookmarker funds into the exchange account will the rollover be complete.
Bookmakers use the bonus bet money to attract customers and always to offer promotions when you join or when conditions like losing money are met. The bonus bets are meant to encourage betting. However, the ultimate goal here is to turn the bonus money into real cash.
So the real question is, how you can make most of your bonus bet which is the bookmarker welcome bonus. When you sign in you, receive a free bet or a deposit bonus. On most betting sites, you can’t withdraw the bonus until you bet the entire amount of the bonus at a set minimum odds and a number of times.
How to Use the Bonus Bets Effectively
How Can You Convert Your Bonus into Real Money
Most people will tell you that creating a sure bet from the bookmaker bonus is one of the surest ways to win the bet. So how can you convert the bonus bet?
Have an Account with a Betting Exchange
Opening an account with a betting exchange (e.g. Betfair) will allow you to act in the role as that of your bookmarker. You will not only be able to place the bets yourself, but you will also lay bets. Choose a betting exchange you prefer.
Once you select your betting exchange consider the amount of commission you will have to be paying on the exchange bets. To be on a safer time, go for the lowest commission rates.
Place a Surebet with Your Bookmaker
In theory, the sport you choose doesn’t really matter, however, place a bet with your bookmarker on a major football league like the Premier League.
Make the Opposite Bet with Your Betting Exchange
Once you place the bet with your bookmaker, put the opposite of the bet using your betting exchange. For example, if you bet on Liverpool to win in an upcoming Premier League game with a bookmaker, bet against Liverpool with your betting exchange account.
Just as a bookmaker does, you will be able to lay a bet with another customer. The advantage of placing a bet with a major league or a major sporting event is because you can be assured the ability to lay your bet on the betting exchange of your preference.
You may not find someone willing to take your lay bet on your betting exchange if you choose an obscure tournament or league. Even worse, the particular tournament or league may not even any listed markets on the exchange.
You may also need to offer slightly higher odd on your betting exchange that the odds you get with your bookmarker for the bet. The closer your betting exchange odds are to the bookmarker’s betting odds, the better for your winning odds.
Bonus Bet Example: Arsenal vs. Liverpool
Let’s assume that you have opened an account your preferred sporting account and you have deposited a certain amount of money like $100. This means that you are expected to get a 100% bonus, an extra $100. You are paying $100 in, but now you have $200 to play with.
Bonus Small Print
As we said, you can’t cash out the bonus immediately. The number of times you must bet your bonus before withdrawing it as cash is different from bookmaker to a bookmaker. There are those that demand a single bet. This means that you need to bet the total of your bonus only once. There are also those that require you to bet your bonus a number of times.
The fewer times you bet your bonus, the better. If you bet your bonus upwards more than five times, it will have a massive impact on the total amount you will withdraw as cash after you bonus bet is complete. Whether or not you are betting conservatively, every round of betting you are risking your bonus, and you will also allow the bookmarker commission to eat into your bonus.
This is the reason why most bookmakers set 1.50 odds as the minimum.
So, if your betting exchange demands that you need to bet the total of your bonus three times, it means that you require a total bet of $600. Since you can bet below 1.50 odds, the risk of losing all the bonus will be much higher in the traditional way.
If you need help with staking be sure to check out our staking guide in our blogs section
Select Your Bet
Using our prior example, betting on Arsenal home win against Liverpool.
Say the odds of Arsenal are winning the match are at 1.75. Should Arsenal win, you may have a profit of $150 if you bet all your deposit plus bonus on those odds, the total return will be $350? Remember, bet the opposite on your betting exchange.
Betting the opposite would mean that you lay bet against Arsenal. Remember, you want your liability to be the same as the amount that you can profit from the bookmarker’s Arsenal bet. So in this case, $150($350 return — $200 bet stake).
The best way to enter the amount you want as your ‘Backer’s Stake’ is to enter the amount until the liability equals the same amount you stand to win or profit from your bookmaker bet. In this case, if you lay Arsenal at 1.776 odds, the maximum liability is $150. This means that it’s the amount you might lose with the bookmarker if Arsenal wins.
What if Arsenal Draws or Loses?
Bookmaker
Balance
Net Profit
Points
Account A
$100 + ($100 bonus)
$0
-$100
Account B
$150
$343.52
$193.52
Total
$250 + ($100 bonus)
$343.52
$93.52
In this case, you have no cash left in Account A. Instead, you now have more money in your Account B, so efficiently you have transferred most of your Account A bonus ($100) to your Account B ($93.52). And you can cash out the money anytime.
What if Arsenal Wins?
Bookmaker
Balance
Net Profit
Points
Account A
$100 + ($100 bonus)
$250 + ($100 bonus)
$150
Account B
$150
$0
-$150
Total
$250 + ($100 bonus)
$250 + ($100 bonus)
$0
Even if this is the least desirable outcome, it’s not a disaster. Even though you lost your $200 with Account B, you made a $200 profit on your Account A so essentially you broke even, and there was no harm done.
If you are up for it, you can repeat the 3 step process with another bonus bet until the bonus finally ends up in your Account B account, or you meet the wagering requirement for Account A and can withdraw your bonus from there.
Stick to The Same Pattern
You can always repeat the routine with every bonus bet as long as you keep in mind that the lay bet with your betting exchange needs to be pretty close to your bookmaker’s odds. Anything other than that is a waste of money.
You also need to make sure your maximum possible loss with your betting exchange needs to match the amount you may profit from the bet you place with the bookmaker.
Low Odds or High Odds?
There is never really a right or wrong answer in this case because the two options have their own pros and cons.
When the odds are low, the advantage is that it may not eat into your bonus bet as much as the high odds. However, the disadvantage that there is an increased chance of a complete rollover because of a winning streak.
When the odds are high, there is a high chance of losing your bonus straight into your exchange account. The downside is it may eat into your bonus bet more than the low odds would and if your bet wins you’ll have more funds tied up at your bookmaker.
This means that it’s only a matter of your personal preference. It’s incredible when you place a bonus bet at high odds, and it loses because your lay bets will win and you’ll have managed to transfer your bonus straight into your exchange account in a single bet.
It may be less unlikely to get a losing bet at very low odds. However, some cases could happen. The best preference would be to see a bet with tight odds that you think will not come in.
Standard Lay Method
There is a reason people are comfortable using the standard lay method, it’s a more natural method for beginners, and it’s also the same method most qualifying bets use. You can use the ‘Free Bet- Stake returned’ setting or a ‘Qualifying Bet’ setting. Even though the display the results differently, they have the same for both sets.
Use the qualifying bet when placing back and laying your bets to qualify for a free bet. If you are looking to extract profit from a bonus, use a free bet. The risk-free bet allows you to calculate the lay stake for a bonus bet.
Underlay Method
When it comes to laying the bonus rollover bets, the underlying method could be the best option. The charm of this technique is that if your bonus bet does win, you will retain the entire value of your bonus. So without sacrificing your $100 bonus, you will be $200 of the way through the rollover condition
If your bonus bet does lose, you will have lost slightly more of the bonus. However, the good news is that your profit will move straight into your exchange account and you don’t need to rollover any more funds.
In this case, if Arsenal wins, you’ll have retained your full bonus plus profit. However, you will need to place another $1000 worth of bets before you can withdraw your bonus. This means if Arsenal doesn’t win, you will have successfully transferred the remaining amount of your bonus straight into your exchange account.
If the match results in a draw 0–0, placing your initial bonus rollover bet on Arsenal would have worked out very successfully indeed. That’s all there is to it, as long as you always read the terms thoroughly and you’ll crack it.
Utilize the Free Bet Strategy
There is a reason people place free bets at very high odds. This is because when you receive a free bet, your primary goal should be to extract your profit. Most of the free bets mean that you only receive the winnings back but not the free bet stake
For example, if you have a $10 free bet that wins at 3.00 odds, then you will win $20 cash and the $10 free bet stake won’t be returned.
However, you can maximize your extracted profit by placing your free bets at high odds, which will account for the fact that the free bet stake isn’t returned.
What You Have To Keep In Mind
The good thing about this bonus bets trick is that you will be able to save a considerable portion of the bonus with minimal or almost no risk. It is wise to keep in mind that there could be instances where things could go wrong.
For instance, there is a chance of making an error when you are entering your bonus bets. This means that you should triple check to make sure the entries are all right. For example, you could accidentally bet on Arsenal win with both accounts. You could also place a bet of more money than you wanted even with a risk-free bonus transfer.
Some bookmakers have different rules for repayments in the instances where the games are canceled. Betting on football leagues is much better because cancellations are rare and therefore the rules tend to be very consistent.
However, in tennis, things are different. This is why you need to be very familiar with the specific rules of the bookmakers before placing your bonus bets to avoid any surprises.
Conclusion
It is always crucial to always note the exchange liability figures. The exchange liability is the amount you stand to lose from your betting account in the event that your lay bet loses. It is clear that the higher the odds, the more cash you need in your betting account to place the lay bet.
There may be occasions where you won’t quite have enough exchange funds to go for the highest and most profitable option. There is no mistake in placing the bets on outcomes that have smaller exchange liability; you will still make a more modest profit.
Laying your outcomes on your betting platforms is the key to successfully getting and cashing out your bonus bet. When you lay, you are betting on something that’s not to happen. For instance, if you were to lay on Manchester United in the 2017/18 Premier League, you are not betting on it not to win, you will only lose your stake if Manchester United did win.
In the bookmaking world, you can effectively lay across other bookies if you don’t want to be restricted to one account. For example, in a Win-Draw-Win game, you can use your bonus bet on a single option and then cover the other two on different platforms.
You can read the full article via BetterBets | https://medium.com/@support_16046/the-complete-guide-to-bonus-bets-e6e973658a4e | ['Virgil Townsend'] | 2017-12-23 01:56:57.626000+00:00 | ['Football', 'Betting', 'Betting Tips', 'Csgo', 'Sports Betting'] |
The Probability Mass Function | Probability Distributions
Probability distributions describe how probabilities are distributed over the possible values that a random variable can take. A random variable can be discrete- taking on only certain fixed values such as 0,1,2,… etc., or it can be continuous- taking any numerical values in an interval or collection of intervals.
In this article, I will focus on some frequently used discrete distributions, and attempt to explain the mathematical formulation of their Probability Mass Function, using simple examples.
The Probability Mass Function
The Probability Mass Function, or the PMF, provides the probability for each value of the random variable.
It is denoted by fₓ(x), where X is the random variable. Let us say that random variable X, takes an arbitrary value k. The PMF will give us the probability of that happening. Mathematically,
Properties of the PMF
(i) Positivity
A probability cannot be negative, and since fₓ(x) is a probability, it has to greater than or equal to zero.
(ii) Normalization
The sum of fₓ(x) over all possible outcomes k of the random variable is 1. This expression gives the probability of random variable X taking on any value, which is 1.
This concludes the not so interesting definition part! We are now equipped with the knowledge that we need to delve into the various distributions.
1. Bernoulli Distribution
The random variable X has a binary outcome. Let us denote these outcomes by 1 and 0. If the probability of X being 1 is p, then the probability of X being 0 is 1-p (Normalization property!).
A simple example of this is the tossing of a fair coin i.e there is a 50% chance of it landing heads up and a 50% chance of it landing tails up. Let 1 denote heads and 0 denote tails. Then the PMF can be described as-
2. Binomial Distribution
Suppose we toss a fair coin 3 times. Each toss is independent. What is the probability of getting exactly 1 head? There are 3 outcomes that give us exactly 1 head- (H, T, T), (T, H, T), and (T, T, H). Probability for this is
We can generalize this to tossing a coin n times. The probability of it landing heads up is p and of tails up is (1-p). What is the probability of getting exactly k heads? Where k=0,1,…,n
We will have k heads and (n-k) tails appearing in different sequences in the n tosses. Using combinatorics, we can work out how many such sequences exist. Then, we just need to multiply this to the probabilities for getting k heads and (n-k) tails. This will give us-
This expression is exactly the PMF of the Binomial Distribution!
This distribution gives the probability of obtaining exactly k success in n independent trials, where each trial has a binary outcome (Bernoulli Trial)- success with probability p, and failure with probability (1-p).
3. Geometric Distribution
Consider the following scenario- we toss a fair coin and it lands tails up. We toss is again and it lands tails up again. We keep tossing and it lands tails up 19 times in a row. Finally, on the 20th toss, it lands heads up. Assuming each toss is independent of the other, 19 tails in a row seems a bit unlikely, doesn’t it? But how unlikely (or likely) is it really?
We can answer these questions using the Geometric Distribution.
In Geometric Distribution, the random variable is the number of trials required to obtain success in independent Bernoulli trials (success or failure, 1 or 0, Heads or Tails, etc. ). The PMF is as follows-
Where k=1,2,3… is the number of trials before we see success and p is the probability of success.
For success to occur on the kᵗʰ trial with the probability p, there have to be (k-1) failures with probability (1-p). We simply multiply these probabilities and we have the PMF!
Let’s go back to the example. We have k=20 and p=0.5. So,
That is a very low probability! What if the coin was not fair? What if the probability of getting heads was only 30%? What is the probability of such a scenario then? Let’s calculate.
The probability is relatively high, but this scenario still seems very unlikely!
4. Negative Binomial Distribution
We are tossing a fair coin and suppose we have tossed it 9 times already. It has landed with heads up 4 times. What is the probability that we will get the 5th heads up on the 10th toss? This probability can be computed from the PMF of the Negative Binomial distribution.
In a series of independent Bernoulli trials, the random variable denotes the trial at which the nᵗʰ success occurs, i.e the nᵗʰ success occurs on the kᵗʰ trial.
If the nᵗʰ success occurs on the kᵗʰ trial, it means that in k-1 trials there have already been n-1 successes and k-n failures, which can occur in different sequences. We calculate the number of these sequences using combinatorics. We then simply multiply that number by the probability of getting n successes and that of getting k-n failures. We denote the probability of success as p. Et voila, we have the PMF!
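In LaTeX form:

f_X(k) = \binom{k - 1}{n - 1} p^n (1 - p)^{k - n}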
Let us calculate the probability for the example we discussed above. Here n=5 and k=10, and since we are using a fair coin, p=0.5 and (1-p)=0.5
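Plugging in:

f_X(10) = \binom{9}{4} (0.5)^5 (0.5)^5 = 126 / 1024 \approx 0.123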
5. Poisson Distribution
Poisson Distribution is one of the most frequently used discrete distributions. The random variable in this distribution counts the number of events in a fixed interval of time or space. For example, it can be the number of potholes in a 1-mile stretch of road, the number of cars arriving at a toll booth in a 15-minute interval, etc.
The PMF, which gives the probability of observing k events in an interval, is given by the following expression-
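In LaTeX form:

f_X(k) = \frac{\lambda^k e^{-\lambda}}{k!}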
Where k=0,1,2,… , e is the mathematical constant approximately equal to 2.718, and λ is the average number of events per interval (also called the event rate or parameter).
Let’s look at an example to understand this better.
Suppose we are interested in the number of arrivals at a drive-through during a 10-min period on a weekday morning. Historical data shows that the average number of cars arriving in a 10-min period is 6. We want to know the probability of there being 8 arrivals in a 10-min period.
Here, k=8 and λ=6, and we calculate the probability as-
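Plugging in:

f_X(8) = \frac{6^8 \, e^{-6}}{8!} \approx 0.103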
The derivation for the PMF is a little math-intensive and could easily be a separate article, which is why I will not be going through it here. There are quite a few excellent articles that explain this in detail. I particularly liked this one. | https://medium.com/swlh/the-probability-mass-function-af2280e3073e | ['Kanupriya Bhargava'] | 2020-11-07 08:02:52.419000+00:00 | ['Probability', 'Probability Theory', 'Mathematics', 'Data Science', 'Statistics'] |
“100 Books To Read Before You Die” … | 2020 has been one whole involuntary roller-coaster ride. I kind of saw it coming, when the Chinese speaking online community started hypothesizing about the occurrence of a “SARS 2.0” in late January. Back then, I already fantasized about how big of a deal it could become later down the line.
What I never expected however, was for it to turn into a pandemic.
Due to the pandemic, the majority of mankind spent its days at home this year. It’s quite probable that, due to a combination of hard lockdowns and curfews, in some countries more man-hours have been spent at home this year than in any previous year. This was done in order to prevent the transmission rate of the Coronavirus from growing exponentially. In other words, people were battling the virus by staying at home. A lot of things that were part of our ordinary lives, like meeting up with our group of friends, having dinner or even the daily commute, just faded away. Realistically speaking though, most people were battling boredom more than the virus. And I was not exempt from this. Overwhelmed with this new downtime and the simultaneous restrictions on how this downtime could be utilized, I had to find something to do. Quick.
During the first wave around April 2020, the first thing that came to my mind were the virtual realms of video games. Without much hesitation I went on eBay to order a complete PS2 set, a modchip and some old games, and went on to try to recapture the experience of gaming as a careless young kid during the mid-2000s. It quickly dawned on me however, that playing video games merely due to the fact that I had no other options for using my prime time was only going to lead to me being miserable. Instead I had to find something more productive, something that tingled my brain and wouldn’t turn me into a vegetable over what now turned out to be the rest of the year (and possibly the first quarter of the coming year).
And so, the searching continued. I was pacing up and down my living room one day, when the solution to my dilemma appeared (quite literally) right in front of my face. On top of my book drawer, there lay a pile of books I had bought over the last 3 months and placed there, ready to be read at my signal. That signal never came however, and the books ended up staying up there for months. I am sure you know exactly what it’s like, as this seems to be a rather widespread “problem”.
The Japanese in fact already had a word for what I unwittingly practiced: 積ん読 (Tsu-n-doku). 積ん読 is made up of 積んでおく(Tsu-ndeoku), which means to pile something up for later and 読書(Doku-sho) which simply means to read books.
It seems like it took something drastic, like a lockdown, for me to finally break the cycle of Tsundoku and finally get to read these books… there was no way I could put it off any further. I didn’t expect to go through those books as fast as I did, nor did I expect that I would develop a habit of reading whenever I had more than five minutes to spare. More. I had to find more books. But where to start?
Then it came to me: I could look up one of those “100 Books to read before you DIE” lists and go through those too… but simultaneously, I remembered the nearly 100 books in the “saved for later” list of my online bookstore account. The time was ripe, so I thought, for me to read through all the books I had held an interest in reading over the past years, but never got to for some reason. So I decided to compile my own list.
Without further ado, here’s the list I compiled:
As you might have noticed, this list is an eclectic mix of genres, time periods and languages and this reflects the way I have compiled this list. All of these books either
were on another “100 Books to Read before you Die” list I found online,
were on my To Read list already,
were recommended to me by my friends,
were books I read long ago and simply want to read again,
or were lifted out of the reading lists of some of my colleagues
I highly encourage you to read as much as you can this year, if you haven’t already done so (I’m only 25% through the list and I hope to be at 33% by next year). Who knows, it’s quite possible that your newly acquired knowledge will come in handy by the time the world goes back to normal… Feel free to use this list just as it is or make a couple of re-adjustments. The ordering, a random mapping from (1…100) to (1…100), isn’t my recommended way of enjoying these books or anything; it was actually generated by a computer. If you’re interested in generating your own order, here’s the code:
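A minimal version of it in Python (the numbers stand in for the 100 titles):

import random

order = list(range(1, 101))  # stand-ins for the 100 book titles
random.shuffle(order)        # random mapping from (1...100) to (1...100)
print(order)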
I’m not reading these books alone though, as I managed to cobble together a small, private book reading club where we discuss the books during a FaceTime session after our read-through. It’s not a necessity to read books at the same time as someone else for the purpose of exchanging your thoughts and ideas about them afterwards, although it does go a long way towards maintaining your social connections through all this.
I’m going to end this post here, I gotta get back to reading after all ;-)
I wish you all a Happy list-compiling and reading! | https://medium.com/@oliverpaoliang/100-books-to-read-before-you-die-7a5d57fb8697 | ['Oliver Paoliang'] | 2020-12-27 18:48:28.372000+00:00 | ['Reading', 'Readinglist', 'Reading Challenge', 'Book Recommendations', 'Books'] |
2D Mapping using Google Cartographer and RPLidar with Raspberry Pi | In this experiment I’m going to launch the open-source SLAM software — Google Cartographer — on a Raspberry Pi 3B+ with the 360-degree LDS RPLidar A1M8.
The entire SLAM process runs on the Raspberry Pi. Cartographer is configured to use only lidar data for map building and position estimation. No IMU is used.
I assume that you have Ubuntu 18.04 and ROS already installed on your Raspberry Pi.
Cartographer in action
This is a more detailed description, and it answers questions from the comments on the video published on the Robotics Weekends YouTube channel.
What is Google Cartographer
Cartographer is a system that provides real-time SLAM (simultaneous localization and mapping). In other words, using information from lidar and other sensors, Cartographer can build a map of the environment and show where the robot is located relative to the map. Cartographer can build maps in 2D and 3D, but a 3D map needs a point cloud as a source. We are going to use the RPLIDAR A1M8 as the main sensor. This sensor provides 360-degree distance measurements as a thin line — a LaserScan.
Cartographer supports multiple lidars, IMUs and other approaches to increase SLAM quality. But the minimum sensor set is a single 360-degree LDS sensor (RPLIDAR in our case), and I’m going to test how it works.
Cartographer has a ROS integration package — cartographer_ros. You can build it from source or get it from the ROS repository. For the first try I’d recommend the second option.
3D printed parts
To make the experiment more convenient I’ve printed several parts to hold the RPLIDAR, Raspberry Pi and powerbank together. You can download the STL files from Thingiverse.
Raspberry Pi and RPLIDAR mount
The whole device consists of the RPLIDAR as a sensor, the Raspberry Pi as a computer, and a powerbank as a power source. The assembled device is rather simple and solid:
Assembled test bench for mapping
Explaining software part
Below you can see a high-level description of the entire setup. The RPLIDAR is connected to the Raspberry Pi, Cartographer is installed and running on the Raspberry Pi, and a laptop is used for visualization.
High level ROS modules schema
As I’m using a headless system on the Raspberry Pi (without a GUI), visualization will be performed on the laptop. The laptop also has ROS installed.
To configure ROS to work over the network you have to set two environment variables, ROS_IP and ROS_MASTER_URI, on both computers. You can find detailed information in the ROS Wiki.
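For example, if the Raspberry Pi runs the ROS master (the IP addresses below are placeholders for your own network):

# On the Raspberry Pi (runs roscore):
export ROS_IP=192.168.1.10
export ROS_MASTER_URI=http://192.168.1.10:11311

# On the laptop:
export ROS_IP=192.168.1.20
export ROS_MASTER_URI=http://192.168.1.10:11311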
Raspberry Pi
Note — I assume that you have already installed Ubuntu on Raspberry Pi and installed ROS Melodic.
First you have to install the Cartographer and RPLIDAR ROS packages:
sudo apt install ros-melodic-cartographer-ros ros-melodic-rplidar-ros
After the packages are installed, create a catkin workspace and clone the gbot_core package from GitHub into the src directory:
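A typical sequence looks like this (assuming a fresh workspace in ~/catkin_ws; adjust the path if yours differs):

mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone https://github.com/Andrew-rw/gbot_core.git
cd ~/catkin_ws
catkin_make
source devel/setup.bash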
The package does not contain any code to compile, but it has configuration files and launch scripts important for the SLAM system. Let’s take a look at them in detail.
In the configuration_files directory you can find the configuration for Cartographer:
include "map_builder.lua"
include "trajectory_builder.lua" options = {
map_builder = MAP_BUILDER,
trajectory_builder = TRAJECTORY_BUILDER,
map_frame = "map",
tracking_frame = "base_link",
published_frame = "base_link",
odom_frame = "odom",
provide_odom_frame = true,
publish_frame_projected_to_2d = true,
use_odometry = false,
use_nav_sat = false,
use_landmarks = false,
num_laser_scans = 1,
num_multi_echo_laser_scans = 0,
num_subdivisions_per_laser_scan = 1,
num_point_clouds = 0,
lookup_transform_timeout_sec = 0.2,
submap_publish_period_sec = 0.3,
pose_publish_period_sec = 5e-3,
trajectory_publish_period_sec = 30e-3,
rangefinder_sampling_ratio = 1.,
odometry_sampling_ratio = 1.,
fixed_frame_pose_sampling_ratio = 1.,
imu_sampling_ratio = 1.,
landmarks_sampling_ratio = 1.,
}

MAP_BUILDER.use_trajectory_builder_2d = true

TRAJECTORY_BUILDER_2D.min_range = 0.5
TRAJECTORY_BUILDER_2D.max_range = 8.
TRAJECTORY_BUILDER_2D.missing_data_ray_length = 8.5
TRAJECTORY_BUILDER_2D.use_imu_data = false
TRAJECTORY_BUILDER_2D.use_online_correlative_scan_matching = true
TRAJECTORY_BUILDER_2D.real_time_correlative_scan_matcher.linear_search_window = 0.1
TRAJECTORY_BUILDER_2D.real_time_correlative_scan_matcher.translation_delta_cost_weight = 10.
TRAJECTORY_BUILDER_2D.real_time_correlative_scan_matcher.rotation_delta_cost_weight = 1e-1
TRAJECTORY_BUILDER_2D.motion_filter.max_angle_radians = math.rad(0.2)
-- for current lidar only 1 is good value
TRAJECTORY_BUILDER_2D.num_accumulated_range_data = 1

POSE_GRAPH.constraint_builder.min_score = 0.65
POSE_GRAPH.constraint_builder.global_localization_min_score = 0.65
POSE_GRAPH.optimization_problem.huber_scale = 1e2
POSE_GRAPH.optimize_every_n_nodes = 35

return options
Most of the parameters were obtained experimentally or calculated according to RPLIDAR’s specification, but some of them I’d like to highlight. As the only sensor we are going to use is RPLIDAR, localization will be calculated from spatial difference between scanned data and a calculated map, built from previous scans. So Cartographer will be a source of odometry.
Parameter provide_odom_frame = true means that Cartographer will publish transforms between published_frame and map_frame. In our case it will be TF transform between “base_link” and “map”.
Parameter publish_frame_projected_to_2d = true means that published transforms will be only in X and Y coordinates, no elevations exist.
Parameter use_odometry = false means that there are no other odometry sources. Cartographer will just stick “odom” frame to “map”. Again, for our simple case it is fine.
TRAJECTORY_BUILDER_2D.use_imu_data = false says that we won’t use IMU and count only on Laserscan points matching algorithm.
More information about specific parameters you can find in Cartographer documentation.
In the rviz directory there is an RViz view file with a set of already configured topic subscriptions, to make mapping visualization more convenient to use.
In the urdf directory there is a so-called robot description — a file which contains the robot’s physical description for ROS. It holds information like the sensors’ spatial positions and relations, joint types, lengths, masses, inertias, etc. More information is on the ROS Wiki page. For our test bench, creating a URDF is an overhead — there is only one link between the base and the lidar, and it could be added to the launch file as a static transform — but this file is a good starting point if you decide to add an IMU sensor, range finders or something else. The URDF description language is built on top of XML:
<robot name="head_2d">
<material name="orange">
<color rgba="1.0 0.5 0.2 1" />
</material>
<material name="gray">
<color rgba="0.2 0.2 0.2 1" />
</material>
<link name="laser">
<visual>
<origin xyz="0 0 0" />
<geometry>
<cylinder length="0.03" radius="0.03" />
</geometry>
<material name="gray" />
</visual>
</link>
<link name="base_link">
<visual>
<origin xyz="0.01 0 0.015" />
<geometry>
<box size="0.11 0.065 0.052" />
</geometry>
<material name="orange" />
</visual>
</link>
<joint name="laser_joint" type="fixed">
<parent link="base_link" />
<child link="laser" />
<origin rpy="0 0 3.1415926" xyz="0 0 0.05" />
</joint>
</robot>
And finally, in the launch directory there are two scripts which are used for starting the SLAM process (gbot.launch) and starting visualization (visualization.launch). The structure of gbot.launch is simple enough — first it starts the robot description node, then the lidar and, at the end, Cartographer. The last node, cartographer_occupancy_grid_node, converts Cartographer map data into an Occupancy Grid, the format more commonly used in the ROS navigation stack. An Occupancy Grid represents the environment as a grid of cells where each cell holds a probability value, in the range [0,100], that the cell is occupied. This launch file should be executed on the Raspberry Pi.
<launch>
<!-- Load robot description and start state publisher-->
<param name="robot_description" textfile="$(find gbot_core)/urdf/head_2d.urdf" />
<node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher" />
<!-- Start RPLIDAR sensor node which provides LaserScan data -->
<node name="rplidarNode" pkg="rplidar_ros" type="rplidarNode" output="screen">
<param name="serial_port" type="string" value="/dev/ttyUSB0"/>
<param name="serial_baudrate" type="int" value="115200"/>
<param name="frame_id" type="string" value="laser"/>
<param name="inverted" type="bool" value="false"/>
<param name="angle_compensate" type="bool" value="true"/>
</node>
<!-- Start Google Cartographer node with custom configuration file-->
<node name="cartographer_node" pkg="cartographer_ros" type="cartographer_node" args="
-configuration_directory
$(find gbot_core)/configuration_files
-configuration_basename gbot_lidar_2d.lua" output="screen">
</node>
<!-- Additional node which converts Cartographer map into ROS occupancy grid map. Not used and can be skipped in this case -->
<node name="cartographer_occupancy_grid_node" pkg="cartographer_ros" type="cartographer_occupancy_grid_node" args="-resolution 0.05" />
</launch>
Structure of visualization.launch is even simpler. The purpose of it is to start rviz with prepared view configuration.
<launch>
<!-- Start RViz with custom view -->
<node pkg="rviz" type="rviz" name="show_rviz" args="-d $(find gbot_core)/rviz/demo.rviz"/>
</launch>
Laptop
To visualize Cartographer data properly, the cartographer-rviz package should be installed on the laptop:
sudo apt install ros-melodic-cartographer-rviz
To be able to launch the visualization script, I recommend cloning gbot_core on the laptop as well.
And just to note — Cartographer and visualization can be launched from a single computer without any changes in configuration.
Cartographer testing
Well, assuming that the network connection between the computers is established, we are ready to start the Cartographer and visualization nodes.
On the Raspberry Pi, launch the mapping process:
roslaunch gbot_core gbot.launch
You should see a similar response in the terminal:
Launching Google Cartographer
Then on the laptop (or just in a separate terminal, if you are using the same PC) start the visualization:
roslaunch gbot_core visualization.launch
The result should be RViz starting up:
RViz rendering Cartographer’s Submaps data
As you move the lidar around the room, the map will become more detailed and gain higher and higher contrast. It means that Cartographer’s uncertainty about the environment decreases. You will also see the trajectory as a blue line, extracted from the calculated odometry:
Cartographer mapping process
Conclusion
Cartographer can provide 2D odometry of decent quality using only a low-cost 360-degree LDS with a pretty low data rate (5–7 Hz).
An IMU and an additional odometry source (for example wheeled platform odometry or visual odometry) can increase the resulting map quality over big environment areas. But for indoor mapping of an area of about 50–60 square meters they are not so important — Cartographer’s internal loop closure algorithm is capable of keeping such small maps consistent.
Thank you
I would like to thank Arbitrary Constant and Doina Moga for inspiring and supporting me to finish this story.
Links
3D files: https://www.thingiverse.com/thing:3970110
Project on GitHub: https://github.com/Andrew-rw/gbot_core
Cartographer Wiki: https://google-cartographer.readthedocs.io/en/latest/
YouTube video: https://youtu.be/qNdcXUEF7KU | https://medium.com/robotics-weekends/2d-mapping-using-google-cartographer-and-rplidar-with-raspberry-pi-a94ce11e44c5 | ['Robotics Weekends'] | 2020-10-12 11:44:51.304000+00:00 | ['Robotics', 'Slam', 'Mapping', 'Cartographer'] |
Reducing Commercial Aviation Fatalities | Kaggle Competition | From Google Image
I was working on a Kaggle competition organized by the Booz Allen Hamilton company. From the title, we can understand the competition is about aviation fatalities: we have to predict dangerous pilot events based on the data given in the competition. So let’s discuss the data provided for the competition and how I solved the problem.
Table Content
1. Overview
2. About Kaggle Problem
3. All about Data
4. Problem Type
5. EDA - Exploratory Data Analysis
6. Feature Engineering
7. Model Training
8. Conclusion
9. Improvement
1. Overview
For any medium of travel, one of the most important things is safety, and accidents may occur due to mechanical or human error. There are many reasons why a plane might crash, including bad weather, imprecise navigation and pilot error. In fact, pilot error is the single biggest factor leading to CFIT (Controlled Flight Into Terrain) incidents, according to aircraft manufacturer Boeing. Pilots may be distracted, sleepy or in other dangerous cognitive states.
2. About Kaggle Problem
In this dataset, we are given the physiological data of 18 pilots along with their cognitive states and events. Our goal is to build a model which can detect these events, so pilots could then be alerted when they enter a troubling state, preventing accidents and saving lives. The data was acquired from actual pilots in test situations, and our models should be able to run calculations in real time to monitor the cognitive states of pilots.
3. All about Data
The data was collected both during Line Oriented Flight Training (LOFT), which consists of a full flight (take off, flight, and landing) in a flight simulator, and in a non-flight environment outside of the simulator.
The pilots experienced the following distractions:
1. Channelized Attention (CA): the state of being focused on one task.
2. Diverted Attention (DA): the state of having one’s attention diverted by actions.
3. Startle/Surprise (SS): the state of being startled when a surprising event occurs.
Variables with the EEG prefix are electroencephalogram recordings.
id — (test.csv and sample_submission.csv only) A unique identifier for a crew + time combination. we must predict probabilities for each id.
crew — unique id for a pair of pilots. There are 9 crews in the data.
experiment — One of CA, DA, SS or LOFT. The first 3 comprise the training set. The latter the test set.
time — seconds into the experiment
seat — is the pilot in the left (0) or right (1) seat
EEG — eeg_fp1, eeg_f7, eeg_f8, eeg_t4, eeg_t6, eeg_t5, eeg_t3, eeg_fp2, eeg_o1, eeg_p3, eeg_pz, eeg_f3, eeg_fz, eeg_f4, eeg_c4, eeg_p4, eeg_poz, eeg_c3, eeg_cz, eeg_o2: electroencephalogram (EEG) recordings from the different electrodes.
ECG — 3-point Electrocardiogram signal. The sensor had a resolution/bit of .012215 µV and a range of -100mV to +100mV. The data are provided in microvolts.
R— Respiration, a measure of the rise and fall of the chest. The sensor had a resolution/bit of .2384186 µV and a range of -2.0V to +2.0V. The data are provided in microvolts.
GSR — Galvanic Skin Response, a measure of electrodermal activity. The sensor had a resolution/bit of .2384186 µV and a range of -2.0V to +2.0V. The data are provided in microvolts.
Event — The state of the pilot at the given time: one of A = baseline, B = SS, C = CA, D = DA
4. Problem Type
This is a classification problem in which we have to predict one event out of 4 events based on the given data.
A = baseline, B = SS, C = CA, D = DA
5. EDA -Exploratory Data Analysis
Let’s explore the data. For that we have to do exploratory data analysis, so I loaded both the train and test data.
Let’s see a sample of the train data:
training file sample data
5.1. The first step in exploring the train data is to check whether the data is balanced or imbalanced. So I plotted a bar plot based on the label values.
Bar plot of the Target value in training data
As we can see in the above plot, events A, B, C and D have completely different count percentages. We can understand from the plot that D and B are very rare: D is 4.8% and B is 2.7%, which is very little compared to A and C. If the label counts in a dataset are not equal or nearly equal, that dataset is called an imbalanced dataset. To make the data balanced we need to oversample or undersample it, and after that we can train the model.
5.2 Experiment count plot
Comparison between experiment and event
When the CA experiment happens: event A occurs 0.1%, B = 0%, C = 34%, D = 0%. When the DA experiment happens: A = 29%, B = 0%, C = 0%, D = 4.8%. When the SS experiment happens: A = 29%, B = 2.7%, C = 0%, D = 0%.
Let’s see the data in the test dataset:
Test Dataset experiment data
We can see that the test dataset contains a completely different experiment, LOFT — Line-Oriented Flight Training.
5.3 Time
Time for every record
The time taken for events A, C and D is about equal: the mean time for these 3 events is about 175, while the mean time for event B is 100. For event B, 50% of the data lies between 60 and 100 seconds, and the remaining 50% between 100 and 310 seconds. For the other 3 events — A, C and D — the time taken is about equal; the plot shows only very small differences.
Let’s compare time distribution in train and test data set
Train and Test time distribution
From the above plot, we can see that there is a huge difference between the train and test time distributions. So using time during training may push our predictions in the wrong direction: time in the flight simulator has nothing to do with time in the training experiments.
5.4 EEG Readings:
EEG box plots
I have plotted a box plot for each EEG feature. A box plot shows the quartiles of the data: the 0th, 25th, 50th, 75th and 100th percentiles.
1. In every EEG feature plot we can see that event B has a larger 0–100% range than the others, which can distinguish it from the other events.
2. Using the box plots we can see that every plot looks about the same, so we can’t get more details from them.
I have plotted the EEG readings for every event. You can visit the EDA IPython notebook to check this information; I am putting the final observations on the EEG here.
The observations are as below.
1. As we know, EEG readings are taken with 20 electrodes placed at different positions on the scalp.
2. Depending on the event, different electrodes fluctuate more. You can see this in the image below.
Event A = Fp1,Fp2,F3,C3,Cz
Event B = F8,T4,T3,F4,P4,C2
Event C = Fp1,F8,T4,O1,F2,F4,F3,C2,C3
Event D = Fp1,Fp2,F3,C4,C3,C2
Now let’s check the train and test data distributions:
Train and Test data distribution
We can see above that both the train and test EEG data follow an approximately normal distribution, but the test data has a bigger peak at zero (0) and more variance. In the end, we can say the train and test EEG data come from roughly the same distribution. When train and test come from the same distribution, the model performs better, because it will find train-like data in the test dataset, which is easier for it to predict.
5.6 ECG
Based on the labels, I have plotted violin plots of the ECG feature.
ECG plot based on labels
In the ECG violin plot we can see that all the event plots look about the same, and the 50% data range is also the same for all events. For every event, the data density between 700 mV and 1000 mV is high.
I have plotted the frequency for every event. From this plot we can say that every event follows a different frequency range in the ECG data, so ECG may play a big role in event classification. If we notice, event B has a clean frequency and event A has a noisy frequency; based on that, separating the two events is very easy.
5.7 Respiration:r
From the respiration violin plot we can understand that the plots for events A, C and D look about the same, with their 50% value at about 740 μV, while for event B the 50% value is about 760 μV. And if you notice, for event A the data lies between about 450 and 850 μV, but for the other events the data lies between about 540 and 850 μV.
5.8 Galvanic Skin Response( GSR )
GSR
The GSR violin plot is almost the same for all events, with event B showing a little difference. The data distribution of GSR is also about the same for every event, because the bottom side of every violin plot is the same except for B.
If we look at the dark line of the violin plots, it covers about the same range for every event, but for event B it is a little larger than the others.
Conclusion from EDA
The data is completely imbalanced, so I have to oversample or undersample the dataset before training. The time feature has completely different distributions in the train and test datasets, so using it in training may push our predictions in the wrong direction. The EEG violin plots look about the same for every event; but if we look at the EEG signal plots, we can see differences between events on different electrodes on the scalp (see the scalp image). If we look at the frequency plots of ECG, respiration and GSR, we can see all three features follow about the same frequency pattern for every event.
Now let’s come to the feature engineering part.
6. Feature Engineering
We know that our training dataset is imbalanced, so we have to balance it with oversampling. For that I used the SMOTE technique. I also found an article about physiological parameters in which the author mentioned how to put EEG data into a clinical reading form so that we can find something meaningful. Based on that, we did the feature engineering.
We can see in the image below how the electrodes are set on the head scalp. For a clinical reading we need to subtract one electrode from another, for example eeg_fp1 — eeg_f7. It doesn’t matter which way you do it, as long as it’s consistent. I did this from front to back.
electrode setup on head scalp
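A minimal sketch of that differencing in pandas (only one electrode chain is shown; which pairs you subtract is a design choice):

import pandas as pd

train = pd.read_csv("train.csv")

# Front-to-back differences along one chain of neighbouring electrodes
train["fp1_f7"] = train["eeg_fp1"] - train["eeg_f7"]
train["f7_t3"] = train["eeg_f7"] - train["eeg_t3"]
train["t3_t5"] = train["eeg_t3"] - train["eeg_t5"]
train["t5_o1"] = train["eeg_t5"] - train["eeg_o1"]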
Now our training data is ready. Let’s talk about the model training part.
7. Model Training
In the end, we need to train on the training dataset using different machine learning algorithms and find the one that gives the best result on the test data, because ultimately we need to get the best score on Kaggle on the basis of the test data predictions.
Approach 1
We have seen in the EDA that the time and experiment data in the train and test datasets are completely different. So the basic idea is to drop those features, because they would lead us in a different direction.
To check the worst log loss, I created a random model which predicts events randomly. I found that on this dataset the worst log loss is about 1.64, which means our machine learning models must score below this.
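As a sanity check, the floor for a know-nothing model is easy to compute with scikit-learn; a uniform guess scores ln(4) ≈ 1.386, and randomly skewed guesses score worse, which is consistent with the ~1.64 figure above (the labels here are placeholders):

import numpy as np
from sklearn.metrics import log_loss

y_true = np.random.randint(0, 4, size=1000)  # placeholder labels for events A-D
uniform = np.full((1000, 4), 0.25)           # predict 25% for every class
print(log_loss(y_true, uniform))             # ~1.386, i.e. ln(4)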
The first model I tried was logistic regression, whose log loss is about 1.37. This was not much of an improvement, so I understood that any linear model would perform badly.
So I tried Random Forest and LightGBM, and I found a big improvement in log loss: about 0.144 for Random Forest and 0.133 for LightGBM. Since the log loss looked very good, I submitted test predictions based on these models.
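A minimal sketch of the LightGBM setup (hyperparameters are omitted, and X_train/X_valid/y_train/y_valid are assumed to be prepared already):

from lightgbm import LGBMClassifier
from sklearn.metrics import log_loss

model = LGBMClassifier(objective="multiclass")  # 4 classes: A, B, C, D
model.fit(X_train, y_train)

proba = model.predict_proba(X_valid)  # class probabilities, needed for log loss
print(log_loss(y_valid, proba))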
I found the test result on Kaggle completely shocking.
Approach 2
I thought maybe I should also try with the time and experiment features, so I trained and submitted again. I found the result was about the same — not much change. I was still far from the top score, which was about 0.300.
Approach 3
If you look at the first count plot of the labels, you can see that only about 50% of the train data is event A, and the other three events share the remaining 50%. So if we do oversampling, we mostly train events B and D on synthetic data, which may lead our model’s predictions astray. So I tried model training without oversampling using LightGBM, and I found a huge improvement in my Kaggle score.
Now I am in 12th position on the basis of the leaderboard score.
Approach 4
We know that we have the time feature in the training dataset, and if you notice, time is given at intervals of 0.003906 seconds. So I treated it as time-series data, did not split the training data randomly, and trained the model.
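In scikit-learn terms that boils down to turning shuffling off (a sketch; df is the training frame sorted by time, and “event” is the label column):

from sklearn.model_selection import train_test_split

df = df.sort_values("time")  # keep the chronological order intact
X, y = df.drop(columns=["event"]), df["event"]
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, shuffle=False)  # last 20% of the timeline as validation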
This model’s score on the Kaggle test data improved hugely: it took me from about 12th position to about 3rd position.
8. Conclusion
From the model training part you can understand that machine learning is all about experimentation. Based on the EDA, I thought time and experiment might push predictions in the wrong direction, but they actually play a big role in prediction. So my 4th approach — training with time and experiment, and without shuffling in train_test_split — gave a very good result on the test data.
9. Improvement
In this dataset almost all features are biological readings, so if we use a biosignal library to extract meaningful data, we can improve the results. As we have seen, the time feature made a huge improvement in the Kaggle score, so we could improve the model further with time-series analysis using an LSTM. We can also try deep learning models on this dataset.
You can find my complete solution at the GitHub link. If you have any suggestions or you like this article, please comment your views on it.
Reference | https://medium.com/@vkmauryavk/reducing-commercial-aviation-fatalities-kaggle-competition-4d49b3966f5c | ['Vijay Maurya'] | 2020-10-03 06:08:42.024000+00:00 | ['Kaggle', 'Aviation Fatalities', 'Kaggle Competition', 'Lightgbm'] |
Demonization of Dissent: A Discourse | In the lead up to an election, candidates provide their constituents with a manifesto in an attempt to secure the mandate of their constituency. Holding manifestos equal, manifestos that are backed by a philosophy are more believable than those that seem not to be bonded with a philosophy. In the last elections, say what you want about how much you like or hate Donald Trump, only the President and Bernie Sanders had well bonded philosophies behind their manifestos.
Bernie wanted to and still wants to socialize capitalism — he means well, but attempting to socialize capitalism is akin to pretending that wearing a large shirt means your abs are flat. Socialism and Capitalism are like oil and water, you can put them together, but they never will become one new substance, always will be no more than a mixture.
Capitalism can be given a human face in the sense of ensuring dignity of life for each and every person. Mixing capitalism with socialism means neither is effective; it results in a confounding recipe that does not taste quite as good as intended.
With respect to Donald Trump, there was a manifesto, and the philosophy was capitalism rounded out with a commitment to bringing back lost jobs, or creation of new jobs, particularly blue collar jobs. Never mind that white collar jobs are not in quite as much supply as could be assumed.
The President has attempted to fulfill his manifesto, albeit using strategies I personally very much disagree with. I felt tax cuts were not the most effective strategy for bringing back jobs or creating new jobs because they were not targeted, with outcome firms could deploy the monetary largesse towards other objectives.
So far, I have been more right than wrong.
With respect to the North Korea thing, Trump was belligerent enough to get Kim to have new found respect for America, with outcome Kim was willing to come to the negotiation table. But then Trump would not let go of the process, would not let the process proceed down the path with highest likelihood of success.
Whenever negotiations are handled by career diplomats, diplomats on both sides of the aisle know they will be assessed on basis of capacity for finding a deal their bosses can find acceptable. In presence of this incentive, negotiations handled by career diplomats have greatest chances of success. On the contrary, with egos of Trump and Kim at the negotiation table, talks faltered, and the initiative Trump had begotten he himself ended up short circuiting.
In so far as the whole ‘wall’ thing was concerned, I never did believe in that project, but then neither did many Republican senators, and just about all Democratic senators. I had thought, however, that Trump would get his way through the Senate, but was I wrong.
In order for Senator Clinton to win the Democratic ticket, she had to convince voters not that her philosophy was best, but that she stood the best chance of besting Donald Trump at the polls. In context of party primaries, the probability of besting the other party at the polls always is more important than the philosophy at play. So then, the compromise always is that the candidate considered to have the best chance of winning borrows something from any other strong candidate.
If a country seeks to produce leaders with a philosophy, essence of philosophy of leadership must be demanded to be revealed in party primaries.
So then Senator Clinton adopted Bernie’s socialist philosophy of free college education.
It worked like a charm.
Democrats felt her mix of socialism — which would be restricted to mixing free college education into capitalism — had a better chance of beating Trump than Bernie’s all-out socialist capitalism philosophy. So then, Bernie’s supporters became the dissenters within the Democratic party, Clinton’s the voices of the Democratic party.
On the Republican side of the equation, it pretty much was Donald Trump that distinguished himself with a philosophy. So then Donald Trump and his supporters became the voices of the Republican party, supporters of his opponents became dissenters.
Suppose, however, that Bernie had not initially dissented with Clinton over free college education. We would have that, no matter how bad an idea I believe free college education to be for an innovation driven economy, free college education never would have become part of Senator Clinton’s manifesto.
On the Republican side of things, the fight against the wall, and willingness to dissent with a white supremacist ideology within the Republican party reveals voices of dissenters within the party.
In the lead up to elections, parties fray into dissenters and voices of the party.
Consider, however, that absent presence of dissenters within either the Republican party or the Democratic party, or within the larger body polity, Democracy would lack credibility.
It is presence of dissent, acceptance of right to dissent, and willingness to engage with dissent that distinguishes democracies from autocracies. We have then that for any elected official to demonize dissenters implies he or she does not understand that dissent and all that surrounds dissent are essence of well functioning democracies.
Absent dissent, acceptance of the right to dissent, and willingness to engage with dissent, democracies lack credibility.
In presence of normativeness of dissent for credibility of a democracy, dissenters never should be demonized, are essential to well functioning of any democracy.
Dissent that occurs in context of politics never must be personalized.
While dissent merely for the sake of dissent does not augur well for democracies, dissent that is rooted in well founded opinions always is good for well functioning of democracies. In the engagement that occurs around dissent, ideally a country arrives at the best solutions to it’s varied challenges.
Dissent is much like diversity, it is beneficial for ‘profitability’ of a country.
If the United States of America is to become great again, it must once again be a country within which within the Senate or the House, or in the larger body polity, dissent and willingness to negotiate dissent are seen for what they are, which is, evidence for credibility of a democracy. | https://oghenovoobrimah.medium.com/demonization-of-dissent-a-discourse-33a02b8dd5f0 | ['Oghenovo Obrimah'] | 2019-05-01 21:45:51.025000+00:00 | ['Politics', 'Manifesto', 'Elections', 'Bernie Sanders', 'Dissent'] |
Chatbots: Where Are They Now? | What They Claimed: Chatbots would replace human workers.
Early on, some painted a picture in which chatbots replaced human workers en masse. As recently as fall of 2018, some tech insiders continued to speculate about the potential of intelligent machines to eliminate jobs. For some, this was a point of concern; others saw it as a potential to save on labor costs.
The Reality
So have chatbots changed the face of the labor market?
Overall, it’s complicated. Chatbots may have reduced the need for humans to work shifts outside of normal business hours, such as after-hours and weekend support. And companies may overall need fewer front-line employees to serve their customers.
But humans are still essential to many customer-facing functions. Like many groundbreaking technologies, chatbots’ true value has come not from replacing humans but from augmenting them. This is especially true for teams where quickly executing repetitive tasks is a key concern, such as customer support.
“By using bots to deal with lower-level inquiries, support teams can spend more time answering complex questions that are more valuable to the business,” explains Mike Murchison, CEO of Ada Support.
What the Future Holds
Some of the most exciting chatbot use cases, in fact, are the ones that intentionally factor in human input.
At Tenable, a leading cybersecurity company, bots play a crucial role in directing helpdesk questions to the correct experts. If the bot can’t answer an employee’s question with existing knowledge base articles, they can use the bot to contact a Subject Matter Expert.
“There’s an ‘Ask An Expert’ button that posts to a different Slack channel, and all of the channel members are SMEs who could potentially help with the question,” explains Bill Olson, a Product Manager who helped design this intelligent helpdesk.
“If two people ask for expert help, their questions will appear in that Slack channel. Another employee can go into that Slack channel, see the two questions, and say, ‘I know the answer to this one.’ They can hit the ‘Claim’ button, which will open a thread inside of Slack so they can have a conversation [with the person who asked the question].”
Other companies like Nutanix use chatbots to facilitate approval workflows such as approving the provisioning of virtual machines. By pulling these processes into a chat app — where today’s employees spend the majority of their time — bots can make it easier for human workers to accomplish everything they need to do more efficiently.
What They Claimed: Chatbots would create better experiences.
Perhaps the most exciting claim about chatbots was that they would totally redefine common experiences like returning an item or requesting time off of work. They’d offer instant, intelligent help to both employees and customers — and at a lower cost to businesses.
The Reality
From the start, however, designing a useable chatbot interface proved challenging.
“There are technical and UX problems that limit the efficacy of a text-based, conversational UI,” says Dave Feldman, Vice President of Product Design at Heap.
Without sufficient AI to power things like Natural Language Processing — where users can talk to a chatbot the way they would talk to another human, instead of with rigid commands — it can be hard to feel like you’re having a high-quality experience with a chatbot.
Similarly, in order to actually improve experiences, chatbots have to alleviate some of the work that would otherwise fall to humans. It’s nifty if you can ask a chatbot to reschedule your flight, but if a human still has to input the request into a system or make the change manually, the chatbot is relatively useless.
Achieving this requires a fairly sophisticated degree of automation and integration, something that enterprises still struggle with. Only 16% of enterprises have deployed multiple automation use cases at scale, according to a Capgemini study.
This lack of automation may be to blame for the failure of high-profile chatbot projects like Facebook’s M.
“Facebook’s goal with M was to develop artificial-intelligence technology that could automate almost all of M’s tasks,” writes Alex Konrad of Forbes. “But despite Facebook’s vast engineering resources, M fell short: One source familiar with the program estimates M never surpassed 30% automation.”
What the Future Holds
Thankfully, this is one area where the technology is actually quite promising — especially as more automation platforms see bots as a fundamental part of their promise to help lines-of-business staff work more efficiently.
“To understand how bots and automation go hand-in-hand, you have to jump into the day-to-day lives of your target users,” says Ee Shan Sim, a product manager for automation platform Workato. “You have to ask, ‘Okay, if I were a sales manager, what functionality would I want?’ Or ‘As a project manager, which of my daily tasks could a bot make easier?’”
This process also involves understanding the way language impacts the user experience — especially for first-time chatbot users.
“[With chatbots], user inputs are required, but you want them to be intuitive. The challenge is finding a balance between how powerful a chatbot’s automations should be vs. how intimidating it is to the user to go through the workflow. [You have to] continually ask, ‘What makes sense for a first-time user? What’s going to look weird to them?’,” Sim continues.
Similarly, businesses should consider the power of an instant, if imperfect, answer. Despite not offering perfect knowledge all the time, chatbots can still play a valuable role in a world where customers and employees alike expect fast, seamless experiences.
“There’s a misconception that chatbots aren’t good enough to be customer-facing,” says Murchison. “In reality, customers are more likely to interface with a bot, because they know they’ll get an instant answer.”
What They Claimed: Everyone would love chatbots, and adoption would skyrocket.
Experts predicted that because of their potential to help cut costs and deliver top-notch experiences, they’d be the hottest new enterprise tech. In fact, a 2017 Deloitte report indicated that 67% of professionals expected chatbots to outperform mobile apps in the next five years.
The Reality
As businesses have realized that (like any new technology) chatbots come with their own challenges, adoption has slowed. Many companies have looked to tech industry leaders like Facebook, who have given up on their chatbot initiatives, and followed suit or at least scaled down their chatbot efforts.
But others aren’t ready to abandon ship. It really depends on what line of work you’re in — for example, 95% of content management professionals surveyed said they still planned to adopt chatbots by 2019. Similarly, the banking sector continues to debut high-profile chatbot projects like Bank of America’s Erica, who managed to attract 1 million users in just three months.
What the Future Holds
As we move into 2019, chatbot adoption will probably continue to boom in some sectors and slow in others. Some experts believe that adoption ultimately boils down to how well you communicate the purpose of the bot to your prospective users, whether they’re customers or employees.
“[Bank of America] had email campaigns for some time saying Erica is coming; here’s what it is [and what it can do],” says Emmett Higdon, director of digital banking at Javelin Strategy & Research. “They did a good job prepping the audience for its introduction.”
Experts also agree that successful projects like Erica have high user growth because of how well-integrated they are with other services, like content libraries, search tools, and AI services. So as the automation and cognitive technologies surrounding chatbots improve, adoption will, too, as long as companies can keep pace with user demands.
That’s one thing experts broadly agree on: whether a chatbot is customer-facing or internal, its long-term success depends on how well it can anticipate what users want — and then go above and beyond their expectations.
For example, Higdon imagines a scenario where a customer asks Erica how much they spent on Uber last month. To really improve adoption and retention, the bot can’t just name a dollar amount.
“[It should be able to say] ‘By the way, that’s twice as much as you’ve spent in the last three months, is there something wrong here?’ Something that gets the customer to go ‘Hmmm’ and think more about their financial health overall,” he says.
Moving Forward With Chatbots: Patience Is the Answer
Overall, chatbots may have evolved differently than we expected. They haven’t turned into a revolutionary tool as quickly as many thought they would, leaving businesses disenchanted and disappointed with marginal improvements.
But as Intercom CEO Eoghan McCabe points out, this lack of buzz around chatbots is par for the course when it comes to emerging technologies.
“I don’t think there’s ever been a new technology that hasn’t followed that cycle,” he comments. “We insiders who get so excited about the future will always jump on the hype and excitement ahead of its practical reality. Virtual reality, self-driving cars — [all of these] technologies will get less sexy before they get real.”
Curious about how bots and automation can change your business processes? Download our free ebook. | https://medium.com/@Workato/chatbots-where-are-they-now-550ac9b9f8 | [] | 2019-02-01 14:32:59.869000+00:00 | ['Innovation', 'Slack', 'Technology', 'Bots', 'Chatbots']
Keras Dataset Loading with helper functions | Keras has become a synonym for deep learning with computer vision nowadays. It has a nice built-in API for building deep learning models, such as sequential and parallel models. It provides high-level functions to build, train and test deep neural network models such as CNNs, RNNs and LSTMs.
However, in this post I will discuss another nice feature of Keras: the helper functions that load the built-in datasets which come with the Keras installation.
The following datasets can be loaded with the load_data() functions. | https://medium.com/@tejshahi1984/keras-dataset-loading-with-helper-functions-259c1502d01b | [] | 2020-10-13 01:44:49.312000+00:00 | ['Dataset', 'Scikit Learn', 'Keras']
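For example, MNIST — one of the bundled datasets — loads in two lines, and the same load_data() pattern applies to the other built-in datasets:

from tensorflow.keras.datasets import mnist

# Downloaded on the first call, then served from the local Keras cache
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)  # (60000, 28, 28)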
What is Instagram? How to Create an Instagram Account | What is Instagram? How to Create an Instagram Account
Instagram is a social media platform where people share photos and short videos. In this article, I will tell you what Instagram is and how to create an Instagram account, and also some other things about Instagram.
Over the past few years, Instagram’s popularity has grown very fast — in December 2016, the number of Instagram users was 428.1 million, and now, in 2020, the number of users is more than 854.5 million!
Taking selfie for Instagram
If you don’t have an Instagram account and don’t even know what Instagram is, then this article is for you. In this article, we cover all the basics so that you know what Instagram is and how you can create an account on Instagram.
The number of Instagram users has increased a lot over time. Millions of people and celebrities use this social media platform to engage with their audience. People usually share their life moments with their followers on Instagram.
What Is Instagram?
Instagram is a free photo and video sharing social media platform, available for Android, iOS and Windows Phone. You can upload your photos or videos and share them with your followers or friends.
Simply put, Instagram is a social media platform where you can engage with others by sharing your photos and videos. You can also build your own following on Instagram. Instagram is now the biggest online photo sharing platform.
Instagram allows users to edit and share photos and short videos through a mobile app. Users can add a caption to each of their posts and use hashtags to reach more people, and they can also use location-based geotags to index these posts. Every post by a user shows up on their followers’ Instagram feeds, and those posts can also reach new people when the right hashtags are used.
Users have the right to make their profile private. If they do, only their followers are able to see their posts.
Using Instagram application
As a social networking platform, Instagram also lets users like (react to), comment on and share others’ posts, and they can send personal messages to their friends or followers via the Instagram Direct feature.
On Instagram, you can share your photos or videos to many more social media sites — including Twitter, Facebook and Tumblr — with just one click!
Instagram isn’t just a tool for general use; it is also very effective for businesses. Many digital marketers and business owners use Instagram for their marketing or business, because you can easily engage with lots of people in a very short time on Instagram. Also, thousands of people are making millions of dollars just by using Instagram. But for this, you first need a good amount of followers.
Don’t worry — if you know the actual methods, then it is very easy for anyone to get millions of real Instagram followers. If you want to learn those methods step by step, from the very beginning to professional influencer, then check this out — click here
Who owns Instagram?
Instagram was launched in 2010 in San Francisco. It was created just for sharing photos, by Kevin Systrom and Mike Krieger.
But today you can also share videos on Instagram, use direct messaging and much more. In 2012, Facebook bought Instagram and made it more useful.
Now, you will find many features on Instagram that were not there before. Also, now you will find many types of filters in it to customize your pictures and videos.
Using Instagram
First of all you need to create your account on Instagram; if you have a Facebook account you can sign up directly through it.
After signing up, you have to choose your unique username. Then your profile should be set up well. You can keep the profile completely private if you wish. If your profile is private, people will not see your posts directly; they will need to send a request first, and then you can accept it. Only then will they be able to see your photos and videos.
Using Instagram application
You can follow your friends, family members, celebrities, favorite personalities, etc., so that when they share a picture, you can see it. People will also follow you if they like your content. If you are wishing to become a successful Instagram influencer, then this method can help you — click here
Creating an Instagram Account
1. To create an Instagram account, first you need to go to the Google Play Store and download the Instagram app.
2. After downloading the Instagram app, you need to sign up. You can sign up with one click with the help of your Facebook account, or you can open an Instagram account with your mobile number or email ID.
3. If you click on Facebook and sign up for an Instagram account, then your Instagram account has been created.
And if you create an Instagram account with a mobile number or email ID, you will be asked to enter your name, and the Instagram account will be created as soon as you enter the name. You can customize your account from “Edit Profile”.
Thanks for reading this. To learn more about Instagram and how to earn from it, I recommend you take this course. If you buy the course from this link, you will get a big discount! Check this out.
link — click here.
[Disclaimer: This post may contain affiliate links, which means I may receive a small commission, at no cost to you, if you make a purchase through a link.] | https://medium.com/@zihadh687/what-is-instagram-how-to-create-an-instagram-account-dd4a2d5c457a | ['Zihad Hossain'] | 2020-12-27 03:27:57.670000+00:00 | ['Instagram', 'How To Create', 'Instagram Account', 'What Is Instagram', 'Social Media']
Case Study: Using Tethered Drones with GoodVision’s Traffic Analytics for Traffic Surveys in Singapore | In this article, you will learn:
1. Understand the key points for preparing for a traffic study and using the Volarious V-Line Tethered System for unlimited flight time
2. Familiarise with using the GoodVision Video Insights traffic analytics platform to gather insights from the traffic data
Fig. 1
1. Planning for the shoot
Before going to the traffic site, we plotted out the distance and the area of the traffic site we wanted to capture and made sure it was within the drone’s covered field of view. (The field of view is mentioned in the video and seen in Fig. 1.)
Fig. 2
We checked the weather before going to the traffic site. (Fig. 2)
Fig. 3
2. Things to Bring
Mavic Drone with RC Smart Controller*
Portable Charger with Type-C cable
V-Line Tethered station with V-Line Power Module
x6 TB47S Batteries*
SD Card
GoodVision Video Insights platform account, which you can get at my.goodvisionlive.com
A fresh set of x6 TB47S batteries will last for about two hours. For longer operations, you may choose to bring more fully-charged sets of TB47S batteries. Alternatively, you can bring an AC generator.
3. Configuration
Ideally, keep the camera angle between -90 and -45 degrees if you plan to analyse pedestrian traffic too. If you are looking for a higher drone shot of vehicular traffic, you can keep the camera at a straight-down angle. We flew the drone at a variable distance of 15–30 meters from the traffic site. We recorded the video at 24 FPS and a resolution of 4K. By following this guide, you will get the most suitable traffic footage for processing and analytics.
Fig. 4
4. Traffic Analytics In GoodVision Video Insights
The recorded drone footage was uploaded and processed with GoodVision Video Insights. The platform provided us with a very clear view of the traffic flows at the junction. Our video was 2 hours long and divided into 20 separate video files. We were impressed how simple it was to upload the footage, as GoodVision’s platform manages it all automatically and stitches all your files without your intervention. A bonus was that GoodVision automatically stabilised all drone videos and removed all drone movements caused by the strong wind.
“GoodVision’s traffic analytics platform is so powerful and easy to use. It is just the right tool for anyone looking to collect and analyse traffic data quickly and easily.” (Douglas Wong, Volarious)
GoodVision Video Insights user dashboard
Clear distinction of traffic lanes allowed us to precisely select traffic movements our client needed to analyse. From the drone survey, we were looking to obtain the following parameters:
1. Multimodal traffic counts
2. Vehicle speed
3. Vehicle gaps on each entry to the junction
All of these parameters we were able to obtain very precisely in the matter of seconds in the platform, and it was also exported in the Excel report vehicle by vehicle. This way we have saved a significant amount of time on traffic data collection versus the manual traffic counters our clients were using before. The detail and granularity of traffic data provided by GoodVision is unreachable for any conventional method to deliver.
Both solutions are available to users globally. Please reach out to us at [email protected] if you want to know more.
Visit GoodVision’s website and the LinkedIn profile if you want to know more about what GoodVision does. If you want to try our traffic analytics platform for free, visit my.goodvisionlive.com. | https://medium.com/goodvision/case-study-using-tethered-drones-with-goodvisions-traffic-analytics-for-traffic-surveys-in-ba282b466638 | ['Daniel Stofan'] | 2021-01-15 15:06:55.948000+00:00 | ['Surveys', 'Analytics', 'Transportation', 'Traffic', 'Drones'] |
A Guide How to Find Guest Post Accepting Websites or Blogs | Guest posting/blogging is viewed as a way to promote and build link mass and it has been one of the most popular trends in the SEO industry. The existing trends definitely show that its popularity will just grow. This guide is about guest posting and how to get good links to your website.
Define guest posting
Shortly, guest posting can be best described via examples:
If you wrote an article and you are willing to pay to a website for publishing that article — then it is advertising or a sponsored article or post.
If you pay money to another person for writing a post with links to your website or brand in the text of the article — then we call it native advertising.
If you have a piece of writing and you are not willing to pay but offer it to popular websites who are publishing your article for free — then it is a guest post.
The most important difference between a guest post and other kinds of sponsored content is that you don’t pay to publish it. Also, guest posts are normally of better quality than sponsored content.
Why everybody is so excited about guest posts?
In the first place, why would you need to publish a guest post? With a guest post you 1) get your business or brand mentioned 2) establish a link from a respected resource, and 3) get a flow of visitors from their page to your website. These three benefits represent the key ranking factors used by search engines to determine the ranking of a website in the search results.
Now, why an authoritative website would want a guest post from you? There may be different answers to this: different guest post accepting websites have their own reasons:
allow industry specialists to share their ideas
write new content on the blog for its promotion
if they need editorial staff to constantly update their blog
receive free and high quality content
Based on the above, guest posting serves as a mutually beneficial strategy for both writers of guest posts and publishers of the new content.
Can publishing a guest post harm my website?
In most cases — no. But there may be exceptions:
- Paid content, where a website offers to publish a guest post for a fee. This can be potentially a problem since such links are often viewed by search engines as an attempt to manipulate search ranking.
- Irrelevant links. This one is simple: users and search engines would not be happy if a automobile how-to guide is published on a culinary blog. The desired referral traffic is hard to achieve from such a link; incoming conversions are inappropriate and will just worsen your website bounce rate; irrelevant links may be perceived as spam by search engines.
- Low quality website. It is the same as if you establish a link from a link farm. Always see the donor’s quality before suggesting a guest post. | https://medium.com/@msmallet07/a-guide-how-to-find-guest-post-accepting-websites-or-blogs-4c796cfb7c6e | ['Mike Smallet'] | 2020-12-16 12:07:10.283000+00:00 | ['Promotion', 'SEO', 'Content Writing'] |
Keep only what gives you joy; Get rid of everything else! | Keep only what gives you joy; Get rid of everything else!
Image by Victoria_Borodinova from Pixabay
I did not see this one coming!
The Netflix production with cleaning guru Marie Kondo is not necessarily my cup of tea. BUT she got one thing so damn right!
Her signature method is called the Marie Kondo method and goes something like this; Get rid of everything that doesn’t give you joy.
How simple and clever isn’t that? Hands up, how many of us live our lives by the ‘’get rid of everything that doesn’t give you joy’’ rule? No one I know anyway.
Reflection
Surely you can’t get rid of your mother-in-law if she doesn’t give you joy. It’s illegal. But the Jamie Oliver cookbooks from 2005 have probably served their purpose. The same applies to your old T-shirts that smell of B.O. even when they are straight from the wash. I know you kept a few of them out of sentiment.
You see where I’m heading, and now your wheelhouse is going wild, making long lists of all the precious junk you should send off to charity or bury in the bin.
And while you’re at it. Why not try to cleanse all the rubbish in your head? Say goodbye to the crazy mind-chatter that doesn’t bring anything but cold sweat and nightmares.
Clean out rubbish thinking and dump the whole shebang in a big imaginary container? Sort of how Danny Torrance learned from Dick Halloran how to trap the evil spirits from The Overlook in ‘’boxes’’. (by the way, The Shining sequel, Doctor Sleep, is a great read).
Image by Here and now from Pixabay
Thank you, Marie, for your wisdom
I hope she’s alright with my take on her method…
Keep what brings you joy.
Get rid of the old B.O. concert T-shirts.
Flush out the poopy thoughts running amok in your head. Shit is supposed to run in the sewer system, not in your brains.
A neat house also equals neat thinking.
Seriously, The Marie Kondo signature sentence will stick with me to the end of the day. And by next week, I’ll probably have forgotten where the ‘’get rid of everything that doesn’t bring you joy comes from’’. Two weeks from now, I’ll be convinced I came up with the concept myself. | https://medium.com/@Helene_Roberts/keep-only-what-gives-you-joy-get-rid-of-everything-else-7883adb39aca | ['Helene Roberts'] | 2021-09-09 07:30:42.954000+00:00 | ['Letting Go', 'Reflections', 'Personal Development', 'Mindfulness', 'Cleansing'] |
7 interior-design trends that will start to disappear by 2021, and 8 you’ll see everywhere | By Sophia Mitrokostas
Interior-design trends are always going in and out of style.
Insider spoke to professional interior designers to find out which decorating trends will be everywhere in 2021, and which styles will fade away in the new year.
Shiplap is falling out of fashion.
Shiplap was popular throughout the 2010s. ocwarfford/Shutterstock
Interior designer Rachel Street, host of DIY Network’s “Philly Revival,” told Insider that shiplap is one of the fastest fading trends.
Once used to waterproof boats, shiplap siding became a trendy way to decorate interior walls in the 2010s.
“Shiplap appears in nearly every TV home-makeover show, but there are so many other emerging ways of bringing texture into a space,” she said.
Street added that tile, plaster, rattan, or living walls of plants are set to become more popular in 2021, instead.
Gray kitchen interiors may become less popular.
The all-gray look is going out of style. onurdongel/Getty Images
Dennese Guadeloupe Rojas of Interiors by Design told Insider that the trend of having all-gray kitchen cabinets and walls will fade in 2021.
“Gray kitchen interiors can look cold and lack distinction,” she said. “Instead, I foresee bolder colors gaining popularity.”
Rojas mentioned that indigo blue may be a particularly trendy kitchen color in 2021.
All-white interiors may start to date themselves.
Like all-gray designs, all-white rooms seem to be falling out of favor. Royalty-Free / Getty Images
Street predicted that the age of minimalistic, all-white interiors may be coming to an end.
“For a few years now, we’ve been making everything from walls to countertops bright white,” she told Insider. “Next year, I predict people will return to creating visual interest through color.”
Those looking to give their all-white interiors more pizzazz without adding bright colors can try combining different light-colored patterns and textures.
The mid-century modern furniture trend may finally be over.
The 1950s and 1960s designs have been popular for the last few years. united photo studio/Shutterstock
Mid-century modern design borrows from style elements that were popular in the 1950s and 1960s, and it’s been everywhere in recent years.
Heather Goerzen, interior designer with Havenly, told Insider that this trend may finally be fading away.
“We’re shifting away from the ‘Mad Men’ look and spaces dominated by walnut wood, spindle legs, and geometric prints,” she said.
Barn doors could be replaced by other types of statement entryways.
Like shiplap, the barn-door trend was influenced by home-makeover shows. Breadmaker/Shutterstock
Barn doors exploded onto the interior design scene in the early 2010s, but they may not retain their popularity as we head further into the 2020s.
“The trend for barn doors, often painted in drab brown, will be replaced by pocket doors or classic French doors,” Rojas told Insider.
Pocket doors slide directly into the adjacent wall, and French doors normally open outward and feature large panes of glass.
Accent walls likely won’t be as trendy in 2021.
Accent walls can add a pop of color to a room, but they can also be distracting. PlusONE/Shutterstock
An accent wall is one that’s painted or wallpapered differently than the others in a room.
The trend started as a way to make a space more interesting, but Rojas said that the age of the accent wall is drawing to a close.
“Accent walls can look childish and be too distracting,” she told Insider. “We’ll hopefully be returning to monochromatic walls that blend seamlessly with the decor without the startling drama of an accent wall.”
Matching furniture sets may start to look dated.
Having everything match doesn’t allow people to let their personal style shine through. Shutterstock
Kobi Karp, principal designer at Kobi Karp Architecture and Interior Design, told Insider that matching furniture sets may soon be considered unfashionable.
“Identical furniture and matching sets don’t showcase personal style,” he said. “I believe matching sets will soon be seen as a design flaw.”
Instead of coordinating all your furniture or buying a complete set from a showroom, consider selecting nonmatching pieces with complementary colors or designs.
On the other hand, the “grandmillennial” or “granny chic” style may rise in popularity.
Floral wallpapers are considered “granny chic.” 3523studio/Shutterstock
Goerzen described the rising “grandmillennial” trend as a modern revival of homey design elements that you might find in your grandparents’ home.
She told Insider that the style is meant to evoke comfort, nostalgia, and tradition.
“Think floral wallpaper, antique paintings, delicate china, crocheted throws, and vintage touches with whimsical flair,” she said. “This trend will certainly be one to watch in 2021.”
Peel-and-stick wallpaper will likely be trendy in 2021.
Wallpaper is having a bit of a resurgence right now. PhotoMavenStock/Shutterstock
Self-adhesive, removable wallpaper is gaining popularity with people who rent or are looking for a low-commitment way to upgrade their living space.
“Peel-and-stick wallpaper is perfect for an easy and dramatic room change,” Rojas said. “You can get creative and simply peel it off when you get tired of it.”
If you’re wary of covering an entire room in wallpaper, try adding it above the chair rail in dining rooms, above the molding in bathrooms, or even in closets.
Rustic ceramics may edge out smooth tiling in the kitchen.
Interesting, handcrafted designs are becoming popular for kitchen tiles. Fotoluminate LLC/Shutterstock
The kitchen designs of 2021 may swap smooth, uniform backsplashes for colorful, handcrafted ceramic tiling.
“Gone are the days of smooth porcelain or glass subway tile,” Street told Insider. “I’m starting to see a lot of hand-thrown ceramic tiling that shows some natural variation, like Moroccan zellige tile.”
You can use ceramic tiling to create backsplashes or cover entire walls. But handmade tiles are often more expensive than manufactured ones, so individual pieces can also be added as accents to cut costs.
Green cabinets could be one of the biggest kitchen trends of 2021.
Designers think green will be a big color for 2021. Shutterstock
Anyone looking to add drama to their kitchens may want to consider painting their cabinets green. Street told Insider that both lighter and deeper shades will be popping up in kitchens everywhere in 2021.
“Because green is a mix of blue and yellow colors, it works with both cool color palettes and warm, cozier kitchen designs,” she said.
The designer added that pairing green cabinets with Carrara-marble counters can help highlight the gray veining in the stone.
Industrial styling may be the next big trend in 2021.
Mixing wood and metal in the home is becoming more trendy. Westend61/Getty Images
Industrial interior style often incorporates elements such as exposed stonework, high ceilings, wood and metal elements, and neutral colors.
Karp explained that this fuss-free style may be a rising trend in 2021 as people continue to spend more time at home.
“Industrial style has a mix of modern and traditional design and works for interiors that have to serve as places to live, work, and play,” he said.
Plaster walls may make a comeback.
Plaster is more difficult to install than drywall, but the effect is nice. Syda Productions/Shutterstock
Before the invention of drywall, interior walls were often created by layering plaster over wooden strips called laths.
“Drywall is quicker to install and provides a more uniform surface, but the depth and texture of plaster is making a comeback,” Street told Insider.
To explore this trend without ripping out your walls, she suggested coating your drywall with a thin layer of plaster.
Wicker and rattan furniture will likely be trending.
Natural-looking furniture is coming back in style. brizmaker/Shutterstock
Ross Thompson, interior designer at QE Home, told Insider that woven furniture styles will be popular in 2021.
“Rattan and wicker details are on their way in,” he said. “These natural materials add warmth and lightness to home decor.”
Rattan furniture is made from woven palm stems, and wicker pieces are typically made of woven willow twigs. Both styles are lightweight and can work indoors and outdoors.
Natural fabrics may gain popularity over synthetics.
There’s a trend shifting toward more sustainable home fabrics. Golubovy/iStock
Synthetic fabrics such as polyester, nylon, and rayon may lose ground to natural and recycled textiles in 2021.
“With the growing awareness of environmental issues, I foresee a trend for using more sustainable materials and natural fabrics in the home,” Thompson said.
He singled out textiles like organic cotton, recycled polyester, and low-impact linen as prime candidates for 2021 trends.
For more great stories, visit Insider’s homepage. | https://medium.com/insider/7-interior-design-trends-that-will-start-to-disappear-by-2021-and-8-youll-see-everywhere-66afba26bcff | [] | 2021-08-30 00:00:00 | ['Trends', 'Home', 'Interior Design', 'Lifestyle', 'Furnishings'] |
Colouring Your Christmas | The holidays are just around the corner. Chances are, you’ve walked into the kitchen to the smell of freshly baked cookies, Christmas décor, a thawing turkey and a Shepherd’s Pie recipe stuck to the fridge. Or perhaps, you were scrolling on e-commerce websites for sales, searching for the perfect Christmas gifts for loved ones. The excitement of Christmas festivities approaching is unlike any other feeling
Although Christmas in Malaysia is unlike that in other countries (it doesn’t snow in Malaysia), the feel & vibe of it is similar throughout. Christmas is a time where families spend time together, sing Christmas carols, exchange gifts, snap #OOTDs in ridiculous-looking Christmas sweaters (or in our case, funky outfits) & most importantly, enjoying a lovely home-cooked feast. This year, however, some of us would want to make our Christmas celebrations extra special
AkzoNobel, the leading global paints and coatings company & maker of Dulux, has an idea on how you can make that happen. Drawing inspiration out of the Expressive palette from AkzoNobel’s ColourFutures 2021 colour trend-forecast, let us take you through on how you can add Christmas ambience to your home year-round
First & foremost, let’s see how you can spruce up your dining area this Christmas. This room-set is inspired by an expressive shade of red and balanced by the soft neutral tone in AkzoNobel’s Colour of the Year 2021, Brave Ground. The colour split of the wall is placed higher to give the room a warmer atmosphere, creating a homely feel ~ a subtle but outstanding ambience you can adopt for the year. The colour combination also adds verve & vitality to your dining area, making it an ideal space for you to catch-up & enjoy the friendly family banter
If you are interested to replicate this, here are the colours which are used in this room set:
Tea Dance / 10YR 21/436 Brave Ground / (also known as Wright Stone 10YY 30/106)
Expressive colours are all about empowering people to be themselves & there is no other place more suitable for this than your living room. Sitting in the living room & spending time with your family and friends is an ideal setting to just be yourself. It’s the best place to reminisce or even watch Home Alone for the 100th time. The light shade of pink in this room-set creates a relaxing feel & when complimented with the warm tone of Brave Ground, provides a unified & balanced atmosphere. The pairing of expressive colours encourages us to stay grounded and balanced throughout the year. And with Christmas around the corner, add in a mini Christmas tree to amplify the holiday vibe & to stash your presents
This room-set used:
African Glow / 12YR 40/146 Brave Ground / (Wright Stone 10YY 30/106)
Last, but not least, as some of us are still working from home, this might be a good opportunity to spruce up your workspace with Christmas colours. The shades of pink, red & brown work brilliantly together, with this room-set being a picture-perfect example. The colours are positive, energising & could help boost your creativity & free-thinking while working. Also, this is a space where you can really express yourself & bolster a sense of self. The Christmas inspired colours also work to provide a positive & creative environment all year-round
If you are interested to replicate this, here are the colours which are used in this room set:
Tea Dance / 10YR 21/436 Brave Ground / (Wright Stone 10YY 30/106) Canterbury Lane / 70RR 17/372
So now that you have a general idea on what to do in preparation for Christmas, we can’t wait to see you add a fresh new colour to your home. Regardless of any combination you decide to go with, it will amplify the mood & create a perfect setting for the Christmas celebrations
If you would like to find out more about the Colour of the Year 2021 or explore the colours brought to you by Dulux, all you have to do is point your browser to https://www.akzonobel.com/en/generic-content/color-year-2021 or www.dulux.com.my | https://medium.com/@siennylovesdrawing/colouring-your-christmas-687833a10bb | [] | 2020-12-22 04:58:19.185000+00:00 | ['Christmas', 'Home Decor', 'Home', 'Lifestyle', 'Home Improvement'] |
A Lesson for Burgeoning Lawyers from Dr. Seuss | Law students, in my view, should be figuring out which area of practice interests them before law school. At the very latest, this should be done during law school and should be figured out by the time a person graduates and looks for meaningful work. This does not mean that a person cannot choose a particular area of law and then change focus after learning more about it, but to graduate from law school having not targeted an area of practice reveals, in my view, an unwillingness to make difficult decisions.
To those that would argue with me about this point, I would simply put another scenario that I contend is comparable: if you met a 27-year-old that told you that they did not know if they wanted to be a doctor or a lawyer, would it be impressive? For me, it would signal a lack of direction to that point in the person’s life and a lack of doing the kinds of things that are necessary in order to make that decision and that those things signify being an adult. I do not consider it any different if I meet a 27 year old who has finished law school and tells me that they are not sure if they want to be a criminal lawyer or a corporate lawyer. That is not a problem that I have any interest in assisting with.
There are many things that a person can and should do to avoid this problem. The advice I would give to law students and pre-law students is to meet with lawyers, go to court, go to tribunals, ask questions, send e-mails and really try to figure out what area of practice interests you and where your skills and abilities lie. In exactly the way that it is unimpressive to me when someone tells me they do not know what they want to do, it is very impressive to me if someone is able to articulate why they are interested in a particular practice area and what they have done to cultivate that interest and arrive at that conclusion. It is also such a vital task because there is little to no overlap between the different practice areas in law.
The law schools, in my view, are not very good at providing resources to focus students on a particular practice area. A lot still depends on the student and his or her own initiative. Nonetheless, we are all ultimately responsible for ourselves and it is often amazing to me when I meet with law students how little they have actually done to figure out which area of practice they are interested in. Whenever I encounter this situation, I always think of Dr. Seuss and his lesson for children: it may not be easy, but you have to make a decision or risk heading to a most useless place, the Waiting Place. | https://medium.com/law-school-life-and-beyond/a-lesson-for-burgeoning-lawyers-from-dr-seuss-c2a34157696d | ['Ryan Handlarski'] | 2020-12-14 01:17:42.533000+00:00 | ['Professional Development', 'Life In Law School'] |
One Step Closer to a Data-Driven Culture | What Does It Mean to Be Data-Driven?
Data is an important resource for growth and innovation.
A data-driven organization places data at the center of every decision-making process, treating it as a valuable asset. The 2 pillars on which a data-driven culture is built are data collection and free access to data.
Take the stock exchange as the ultimate example of a data-driven system. Data plays a big role in setting the prices of assets on the stock exchange; Every investor in the world can access relevant data collected as financial reports, news and alike and use it to make his decisions.
Why is Being Data-Driven so Important?
Data-driven discussions are more efficient; Having concrete evidence to support claims enables better focus and faster decision making. Organizations that allow data to propel them to success usually have a growth mindset. The data-driven culture empowers people to take risks, seek innovative solutions to problems and challenge the status-quo using properly collected and analyzed data. Successful organizations consistently turn data into actionable insights. Data is a driver of change, enabling organizations to explore their options and allowing them to focus on the most suitable path for them. Properly-used data will allow companies to make informed choices and strategic changes to support their growth. Data can even help to create (or discover) a sustainable competitive advantage.
How is Data-Driven Culture Being Managed?
Data-driven culture relies on relevant data that can be trusted to be utilized into actionable insights and determine next moves. In order to achieve value, data should be properly managed. Relying on “bad” data can result in very bad decisions. The decision-making table should have an analytical lead who can properly interpret and explain the meaning of the data in order to bolster or undermine existing decisions and lead management to make new and innovative informed decisions.
3 Simple Steps For Your Organization to Become Data-Driven
1. Single Source of Truth
A central database that everyone can pull from, thereby eliminating the risk of having to mitigate different versions.
This can often be done by simply deciding and enforcing the decision of what the SSoT is. Other times there can be plenty of work around enabling its mere existence. Either way, it’s the healthiest concept to aspire to and will ensure healthy data architecture as well.
2. Consensus Metrics
A set of well-defined metrics that create a common language and help align the organization on goals.
A quick win can be achieved by creating a wiki of all existing metrics in the organization, then iterating on merging proximate definitions and clarifying the remaining. In most cases, once centralizing these metrics, you’ll discover there is a limited number of metrics that are really interesting company wide. Now that they’re well defined and agreed upon, these are your consensus.
3. Democratization of Data
Making data accessible widely in the organization, supporting a culture where everyone should use data in their everyday work.
Start asking the “why” questions more often to drive data as a habit. An important part of democratization of data is providing the training and support needed for meaningful analysis.
Insights:
A data-driven culture enables companies to make optimal decisions and to maximize their potential. The reason it’s considered a “culture” is that using data is contagious; The only way one can argue a data-driven decision is by presenting counter metrics, which justify one’s position. So once decision-makers decide that this is the way they’d like to work, the entire organization “magically” aligns.
An analytical lead is a must in order to seriously utilize company data. You’d be surprised how much a single analytics expert can contribute when given the right access, even without a team.
Establishing a good data operation is not cheap in resources but the gain multiplier is on a much larger scale, going from big to huge. Once you become data-driven you quickly understand the positive impact it has on your company and its bottom line.
Lastly, it’s important to note that operating in this way is not binary but rather on a spectrum. This enables organizations to improve at their own pace while achieving value almost from day one, which is a major advantage. | https://medium.com/@gi.inbal/one-step-closer-to-a-data-driven-culture-f286396c7ab8 | ['Inbal Gilead'] | 2019-09-22 16:38:34.472000+00:00 | ['Metrics', 'Data Driven', 'Analytics'] |
Emergency Motel Vouchers Online San Diego | Stay at our modern San Diego hotel, located near downtown San Diego. It is located in a good place. Mini refrigerator Book online. Microwave oven. Highlights: Quick and easy booking, No reservation fee, Flexible dates.
Visit our site www bookhotelcompare [dot] com
What are motel vouchers?
Hotel / motel vouchers: Communities that do not have enough shelter beds can offer vouchers to pay for a hotel or motel. Transitional housing programs: local government and / or social services agencies or may provide transitional housing to qualified persons for 90 days to two years.
Where can I get emergency hotel vouchers?
Find your local Salvation Army and ask if they have any available. Most counties and / or regions have homeless assistance programs. You can usually find them by visiting the nearest human services offices. They can usually provide you with a motel vouchers or coupon or some type of emergency housing assistance.
How can I get section 8 immediately?
Steps to obtain Section 8 homes or Section 8 apartments
Find your local Public Housing Agency (PHA).
Determine if you are eligible.
Get an application for the Section 8 Housing Choice Voucher Program. …
Complete and submit the application for the Section 8 Housing Choice Voucher Program.
Find out the status of the waiting list.
Can you get monetary assistance if you have no home?
If you receive cash assistance from DSS and become homeless, you can get up to 60 days of emergency motel for homeless once a year.
What is the HOPE program?
Summary: HOPE I helps low-income people buy public housing units by providing funds that nonprofit organizations, resident groups and other eligible beneficiaries can use to develop and implement homeownership programs.
Hope that help to others people too. :) | https://medium.com/@sgupta.asl/emergency-motel-vouchers-online-san-diego-b992170cf0eb | ['Jenna Haron'] | 2020-01-22 03:20:20.232000+00:00 | ['Motel', 'Coupon', 'San Diego', 'Vouchers', 'Homeless'] |
SEO Guidelines — What is On Page (Or On Site) SEO? | When we talk about SEO or Search Engine Optimization, we generally divide it into two major categories. These two categories are:
On-Page SEO
Off-Page SEO
Just to be clear, SEO or Search Engine Optimization is a process in which you use different techniques to improve your website’s search engine rankings. These techniques are either on-page or off-page.
In this post, we discuss what is on-page SEO? Moreover, there are different basic concepts and techniques that most webmasters use in the on-page SEO methods. This article also explains all these on-page SEO methods.
So let’s start with the basic definition of on-page SEO, and then we move on to discuss some useful techniques and methods that you can use to improve your website’s on-page SEO structure.
What is on-Page SEO?
Search engine optimization techniques that directly involve the front-end of your website and webpages are known as on-page SEO methods. Remember that onsite SEO has a great impact on your search engine ranking, and to achieve higher rankings, on-page optimization is imperative.
It is important to understand that off-page SEO is equally important for long-term and sustainable success. However, off-page SEO cannot work independently, as it requires you on-page SEO, too, to complement it.
Most webmasters fail to create a balance between on-page and off-page SEO, which often leads to a failure. The tip is to first learn about on-page SEO and apply it to your webpages. In the meanwhile, you keep improving and optimizing the back-end of your website, i.e. the off-page SEO. Only then, you can really improve your website’s search engine rankings.
So, in short, on-page SEO is what you do on the front-end of your webpages. There are various techniques to optimize your webpages and following are some of the most important ones:
1. Page Titles
Page title is one of the most important key factors of on-page SEO. Every webpage has a unique title. Ideally, that title should contain the primary keyword that your webpage is all about. It is one of the most basic and effective on-page SEO techniques, i.e. to include your primary keyword in the title of your webpage.
Let’s suppose you are a writing a blog post about different tips and techniques for getting rid of lizards. Most users would search for the topic with keywords like “getting rid of lizards” and “how to get rid of lizards”. Therefore, it is wise to name your webpage as one of such keywords, e.g. “How to Get Rid of Lizards?”
In short, the idea is to include the keyword (that you are trying to optimize your webpage for) in your page’s main title.
2. Meta Description
The Meta description is one of the most neglected aspects. It does not only help the actual human readers to find what the post is all about, but it equally helps the search engines as well.
Most of time people don’t add Meta description. Do not make the same mistake. Moreover, a common on-page SEO practice is to include your keywords in that Meta description. Search engines love it!
3. URL Structure
Most webmasters believe that apart from the title of the page, the URL of a website/webpage is the second most important part to add keywords. The search engine crawlers find it easier to read keywords from the webpage’s URL structure. Therefore, most SEO specialists prefer using the main keywords in it.
Tip: If you are using WordPress, adding keywords to your webpage’s URL is fairly simple and easy. You can do it via the post editor. Moreover, you do not have to use include all the “articles” and “prepositions” in the URL structure. Just try adding all your primary keywords in it.
4. H1 and H2 Tags
Just like you add your keywords in the main title and URL of your page, it is equally important that you do the same with your H1 and H2 tags.
H1 and H2 tags are commonly referred to as “heading tags”. Most webmasters and SEO experts believe that search engine crawlers find it easier to evaluate a webpage if its heading tags contain important keywords.
The H1 refers to the main heading of your page. The H2 refers to the sub-headings. Try adding your important keywords to both the main-heading and the sub-headings for better results.
5. Keyword Density
Keyword density is another important factor of on-page search engine optimization. Many people confuse it and end up with unnecessary “keyword stuffing”. Do remember that Google penalizes the websites that unnecessarily use keywords in their webpages.
Although there is no ideal percentage of keyword inclusion, most SEO experts believe that 3%-5% inclusion is considered fair — as long as it seems natural to add those keywords. Moreover, some SEO experts that belong with the modern school of thought believe that “once is enough” and that you should consider writing the posts for the humans, rather than the search engines. Both the theories seem correct in different scenarios. The bottom line is just not to stuff keywords unnecessarily.
6. Image Optimization
Not many people enough attention to images, but you can optimize your website through this source as well. It is important to understand that search engines cannot read and comprehend images. So in order to use them for your website’s search engine optimization, use the Alt Text option. It is the text that goes along with your image, and shows up when the image can’t. Search engines can read that text to identify what the image is all about, and your website can then rank for the same.
In Conclusion:
If you want to properly optimize your website for higher search engine rankings, then you simply cannot neglect the on-page SEO. It is the most basic aspect of search engine optimization, and the one that does not require much time and efforts — especially when compared with the off-page SEO.
In the next post, we will be exploring the second segment, i.e. the off-page SEO techniques. Stay tuned. | https://medium.com/datadeck/seo-guidelines-what-is-on-page-or-on-site-seo-71a6f188ad88 | ['Data Dum Dum'] | 2016-11-11 07:06:27.563000+00:00 | ['Online Marketing', 'SEO', 'Marketing', 'Digital Marketing'] |
I’M AN ADDICT, AND I HAVE A CONFESSION TO MAKE… | I have had this bitter-sweet addiction for as long as I can remember.
My Mum is very much to blame for this, as she used to do this in front of me all the time, growing up…
It’s one of those addictions that is life-long, because everyone around me in my life have always encouraged me to keep it up…
You’re probably thinking right now…What is going on?
What is she talking about?
What could she possibly have gotten herself into???
It’s OK everyone, calm down…
It’s not what you think…
It’s just my…
BAKING ADDICTION! :-)
When I was growing up, I used to always love the fact that my Mum would bake (from scratch). Our home was always filled with amazing smells. Not only that, but baked cakes and biscuits would always taste better when they were made the traditional way.
Having a Polish background, my sister and I were blessed to be able to sample many yummy Polish dishes throughout our childhood.
***I will include a very yummy Sponge with Fruit Cake recipe at the end of my BLOG (this is one of my favourite Polish cake recipes, and super easy to put together, too)***
Sorry for being just a wee bit biased here, BUT, I reckon my Mum is one of the best cooks that I know. Unfortunately, I didn’t really get to learn any great tricks in the kitchen until I was a lot older (in my 20’s), which is partly my fault (I wasn’t overly interested in learning anything in my teens), and partly the kitchen’s fault — as it was tiny, and Mum always used to say there wasn’t enough room for more than one person to be in there while cooking, at any given time.
I remember when I was little, I used to love watching (and still do) cooking shows. So, my passion for cooking/baking had been there all along…
I just hadn’t unleashed my superpower yet!
Over the years, especially since becoming a Mum, I have become more and more passionate about my new found love and hobby. My family and friends are always delighted to try the things that I create.
My most recent experiences with baking are that I am experimenting with very healthy ingredients, such as almond and hemp flour, coconut oil, etc., as I am also passionate about a healthier way of living for our whole family (especially for our kids as they grow and develop in their early years), which has now become our norm.
My Healthy Baking Frenzy — Chocolate Beetroot Cake; Pear, Dried Fruit and Nut Muffins
The bonus with carrying on the BAKING FROM SCRATCH tradition, that has been inspired by and passed down from my Mum, is that you know what you’re putting into your body, my family reap the health benefits AND I can pass down this tradition to our kids and them to theirs (if they have the desire to do so).
Note: Now and again, I even get comments from my two boys (3 and 7yr olds) saying things like, “I LOVE your cooking, Mum”. This warms my heart :-)
Yummy Hemp Flour Pancakes with all the Bells & Whistles — decorated by our Sons
For all the Mamas out there, I want to encourage you to really have a go with baking and make it an enjoyable, relaxing experience for yourself, as it should be :-)
AND…MOST IMPORTANTLY… GET YOUR KIDS INVOLVED!!!
They get to learn how to Cook, they learn about Maths (measuring), Science (reactions) and much more.
But most of all, it creates an amazing opportunity where you can spend time and bond with your children…and everyone gets to enjoy their efforts…
WIN WIN | https://medium.com/@kco2120/im-an-addict-and-i-have-a-confession-to-make-6b64f16b846d | ['Karolina Cutler'] | 2019-08-13 12:06:21.364000+00:00 | ['Nutrition', 'Children', 'Baking', 'Motherhood', 'Food'] |
Productivity Habits I Learned From Chronic Fatigue Syndrome | In It for the Long Haul
My project was to fully heal from chronic disease. The first thing I did was get in a mindset of the “long game.” I am a big believer that we overestimate what we can do in a year and underestimate what we can do in ten years. In my case, I knew it was common for sufferers of CFS to go decades before fully healing (if they ever did).
I immediately took on the expectation that my journey might be just as long. I accepted and integrated the fact this may be a decade-long project or more. Once I came to terms with that, I could face each day with resolve.
Compare this to before my health crash, and I had eight different life projects that I wanted to be done that same year. Good luck with that, dude.
Step 1 in maintaining productivity when you’re exhausted is to have a commitment to the big picture. It is to aim at a desired end that is clearly defined, but far out.
Knowing you must toil for years gets you to focus more on consistency and less on intensity. When your fuel tanks are empty and your mental reserves are spent, intensity doesn’t work. Resolve and consistency, do.
For my part, I knew I needed to start a business that I could do from home. I also knew which treatments would ultimately restore my ability to function. It took nearly two years before I saw my first paycheck as a writer. It also took a whole year for me to spend the $350 on a detox program that restored 70% of my health. If I had been obsessed with the short term, this timeline would have been maddening. Instead, I regained functionality with minimal doctor appointments and no pharmaceuticals, from a condition that can last decades, in two years. | https://betterhumans.pub/productivity-habits-i-learned-from-chronic-fatigue-syndrome-6f8e9548badf | ['Keenan Eriksson'] | 2021-01-22 17:30:48.254000+00:00 | ['Focus', 'Efficiency', 'Productivity', 'Achievement', 'Chronic Fatigue Syndrome'] |
Texas Supreme Court holds lawyers can be liable for defamation when publicizing their lawsuits | Bottom line: Even with this ruling, Texas lawyers should continue publicizing their lawsuits, provided they understand how to avoid defaming adversaries when they do so.
We’re taking a trip to the Lone Star State. Austin to be exact, to check in with the Texas Supreme Court, which is located literally in the shadow of the Texas State Capitol.
But before we grab our Stetsons or Resistols and make our way there, some background is in order.
Since 2004, lawyers in my home state, Pennsylvania, have been on notice from the Pennsylvania Supreme Court through its decision in Bochetto v. Gibson, 860 A.2d 67 (Pa. 2004) that as a matter of law they can be liable for defamation when they send to the media copies of legal documents they’ve prepared and signed containing defamatory allegations, such as complaints and demand letters.
(In Bochetto, Kevin Gibson faxed to a local reporter a malpractice complaint he filed against another lawyer, George Bochetto, on behalf of a client.)
That’s why I always advise the Pennsylvania-based lawyers I work with not to send complaints and similar legal documents they’ve signed containing defamatory allegations to the media.
Instead, I recommend, when possible, they provide reporters with the docket number of a particular lawsuit and tell the reporters to grab a copy of the desired filing themselves.
When I speak with lawyers outside of Pennsylvania about ethically, strategically, and proactively engaging the Court of Public Opinion, I point to Bochetto v. Gibson and say something along the lines of:
“While it is unlikely your own state supreme court has chimed in on this issue, there’s a good chance a trial court in your state has, and they likely came out on the same side as the Pennsylvania Supreme Court. Even if no court in your state has ruled on the issue, do not tempt fate. Do not send copies of court filings you’ve signed containing defamatory allegations to the media.”
On a related point, I also tell lawyers that they, their clients, and their public relations people need to be careful when including in press releases allegations they’ve made in litigation regarding another party or other parties. If not framed properly, including those allegations in a press release could lead to a defamation claim.
Reported appellate decisions about either of these issues are rare. That’s why we’re heading to Austin to check in with the Texas Supreme Court.
A few weeks ago (in mid-May 2021), the court decided Landry’s, Inc., et al., v. Animal Legal Defense Fund, et al., No. 19–0036. (A PDF of the decision is here.)
In Landry’s, the Texas Supreme Court addressed both of these issues. It held that under Texas law lawyers can be liable for defamation as a matter of law when they (i) send to the media copies of legal documents they’ve drafted and signed that contain defamatory allegations, or (ii) include defamatory allegations in published press releases.
The ABA Journal provided a high-level description of the case and the decision here.
The case arose out of the Animal Legal Defense Fund publishing this press release on its website about serving on Landry’s a notice of intent to sue for violations of the Endangered Species Act. The alleged violations were based on Landry’s treatment of four white tigers exhibited at the company’s aquarium and restaurant in downtown Houston. The ALDF (apparently with help from one of its lawyers) then sent the press release and the notice of intent to various media outlets.
The Texas Supreme Court (like the Pennsylvania Supreme Court did and most other courts that have ruled on this issue have done) held that the allegedly defamatory allegations were not protected by the judicial proceedings privilege to defamation when they were sent to the media as part of the notice of intent to sue or when they appeared in a press release.
The court held the judicial proceedings privilege does not apply in situations where allegations are repeated for publicity purposes “outside the protected context within which the statements originally were made” — legal documents filed in court or created in connection with legal disputes.
Also, the court noted that while “press statements often serve an important function for the party issuing them and for the public, . . . they are in no way part of a judicial proceeding or preparatory to one in any formal sense,” nor would bringing them under the privilege serve the interests of the privilege which is to “facilitate open and vigorous litigation of matters inside the courtroom.”
Though you can read the 21-page decision via the link I included to it above, here are three points you should take away from the decision and this blog post:
1. While this decision could be problematic for lawyers who seek to proactively publicize the filing of their cases, the court’s decision was on a motion to dismiss. The ALDF lawyer — and other lawyers who find themselves in a similar position — will have a number of fact-based defenses available as the litigation proceeds and the facts are developed.
2. Ensure your press releases MAKE CLEAR through language — such as “as we allege,” “according to the complaint,” and “the complaint alleges” — that the allegations you are referencing in your press releases ARE ALLEGATIONS. If you do not, you may open yourself up to a defamation claim.
The Texas Supreme Court’s opinion does not walk through the ALDF’s statements Landry’s alleged were defamatory, but the Texas Court of Appeals opinion that was partly overruled by the Supreme Court’s decision did. You can read that decision here. The ALDF could have done a better job writing its press release in a way that stressed it was discussing allegations, which would have possibly helped it steer clear of a defamation claim.
3. Think long and hard before emailing a reporter a copy of a complaint, demand letter, etc., you signed and filed/served. If somehow it becomes known to opposing counsel that you’ve done so, in most jurisdictions the ensuing defamation lawsuit filed against you will likely survive the inevitable motion to dismiss you file arguing you are protected by the judicial proceedings privilege to defamation. | https://copocetic.com/texas-supreme-court-holds-lawyers-can-be-liable-for-defamation-when-publicizing-their-lawsuits-e0cc6f1efdbb | ['Wayne Pollock'] | 2021-06-17 17:02:56.351000+00:00 | ['Public Relations', 'Texas', 'Legal', 'Litigation', 'Defamation'] |
5 SECRETS TO LOSING BELLY FAT IN 1O DAYS — WHAT YOU MUST KNOW TO LOSE WEIGHT PERMANENTLY | 5 SECRETS TO LOSING BELLY FAT IN 1O DAYS — WHAT YOU MUST KNOW TO LOSE WEIGHT PERMANENTLY
WARNING: You are NOT going to find anything B-O-R-I-N-G! like doing 2000 minutes of cardio (blah) or eating more fruits and veggies (yawn!) here.
I don’t have plans to bore you (not today!). I’m only giving your practical tips you can start to use instantly to burn un-wanted body fat.
And I’m going to keep this report stupid simple, short, and sweet (If I’m talking too much please stop me :-)
Okay, without any further delay here are the 5 ways to cheat your way to fat-loss…
#1. Make This Simple Oil Change And You Can Burn up to 18 kg of Excess Fat Per Year.
Now, all oils used in cooking, frying and baking contains the same amount of calories.
BUT NOT COCONUT OIL
You see, because of the small size of the fatty acids that make up coconut oil, they actually yield fewer calories than other fats and oil (at least 2.25% fewer calories per gram of fat).
To prove this, researchers at McGill University in Canada conducted some experiments and found that…… If you replace all the oils in your diet (such as soybean oil, canola oil, safflower oil, and the likes) with coconut oil, you can lose up to 18 kg of excess fat per year.
And this is without changing your diet or reducing the number of calories you eat. All you simply have to do is get an oil change full-stop. Now, this small reduction in calories is only part of the picture, Coconut oil also helps you lose weight through other complicated processes (like thermogenesis) that I don’t want to bore you with.
All you should know is this:
If you want to lose unwanted body fat, the best and simple thing you can do is to start using coconut oil in place of other oils you already use (in cooking, baking and frying) now.
If you want to know more about coconut oil and it’s buying rules, so you don’t make the mistake of buying the wrong stuff go HERE
Okay it’s time for trick 2…
#2. Use This Quick Plate Hack That Tricks Your Brain Into Eating Less
Your mind will always play tricks on you when it comes to the relative size perception of your portion and your plate all thanks to the Delboeuf Illusion
Which plate has more food, A or B?…
…If you said A, then you’re wrong.
But if you said B, then……. you’re also wrong.
The right answer is this:
The two plates have exactly the same amount of food. However, plate B looks fuller because of a “visual perception bias” called “Delboeuf illusion”
Some crazy researchers decided to put the power of “Delboeuf illusion” to the test by secretly observing all the attendees of a party…
…And the overall conclusion was that people over served themselves when using larger plates and under served themselves when using smaller ones.
Imagine what a difference like that can make on your waistlines!
So how do we use this to your advantage?
If you’re thinking of breaking all the big plates in your house, well, that’s a hardcore option.
However, I’ll suggest switching to smaller plates to trick your brain to think it’s eating more food (and leaving the big plates for your hungry neighbors and friends. Doing this will help you consume fewer calories, therefore you’ll lose weight without even trying.
If that doesn’t work well, then you’ll definitely want to look at the next trick…
#3. Deploy this ancient Asian trick for weight control
Photo by Nataliya Vaitkevich from Pexels
You see, when you’re eating and you becomes full, your stomach sends a message to your brain. This message alerts your brain that you’re full and eating should stop.
Now, the problem is this:
It takes your brain several minute to receive, recognize and act on the “Stop eating” message (from your stomach).
So by the time your brain acts on it (the message), you would have consumed excess calories the body doesn’t need or want.
Now, is there a way to speed up the signal from your stomach to your brain that allows you to stop eating sooner?… Glad you asked.
Fortunately, there’s a quick fix for that. It is:
Drink a full glass of water 10 to 15 minutes before eating your meals.
How does that help? The answer is simple.
You see, when you drink water before your meal, it fills your stomach up and kick starts the process that sends the “stop eating” signal to the brain. So when you start eating, you feel full sooner, therefore you don’t overeat.
That’s why in many Asian countries, liquids are consumed before meals to fill the stomach in order to reduce hunger and prevent overeating. And also to stimulate digestive juices to improve digestion.
If you find the taste of ordinary water boring then try adding fresh lemon or cucumber (or any other fruit) to your water to make it more delicious and fun.
A bonus trick is to add a lot of fresh pepper while cooking your foods. This does two things;
1. Makes the food peppery, so you drink more water while eating, therefore you’re likely to eat less.
2. It boosts your metabolism because pepper is a good metabolism booster.
Once you’ve added water and more pepper to your food, move onto cheat #4
#4. A Simple Overlooked Weapon to Win The Weight
Of all the weight loss tricks I have for you today, this one is the easiest to implement… and also… easiest to forget.
It is one of the most important and most overlooked weight loss weapons in your arsenal.
What’s this secret weapon I’m talking about?
SLEEP
Now, you might be thinking how can sleep help me lose weight eh?
Just to mention a few….
❖ Enough Sleep can prevent hunger and cravings of junk food.
❖ Sleep prevents disruption of your hunger and fullness hormones ( ghrelin and leptin)- thereby avoiding “by-chemical” tendency for weight gain
❖ Sleeping early cuts down snacking time during the day
Think about this for a second
If you sleep After eating dinner, then that prevents you from trekking back and forth to the fridge Looking for what to eat again after dinner.
Well, as a matter of fact, If you sleep early, you’ll be able to wake up early enough to prepare a nutritious and healthy breakfast.
If you have trouble sleeping then use this simple formula (it works all the time)
What formula?
It called the “10–3–2–1–0” sleeping formula:
10 hours before bed: No more caffeine
3 hours before bed: No more food or alcohol
2 hours before bed: No more work
1 hour before bed: No more screen time
0: The number of times you hit the snooze button in the morning
Once you’re done sleeping, then you’re ready for the next trick.
#5.Hack NEPA to Lose Weight Effortlessly
What’s one major difference between slim people and fat people?
Faster metabolism?… Nope (most times)
Good genes?…. Nope (most times)
Answer is slim peeps engage in more NEPA activities
Wait! NEPA what?!
NEPA stands for Non-Exercise Physical Activity — This is the energy we expend each day for everything that is NOT sleeping, eating, or sports-like activities/exercise.
It’s also called NEAT( Non-Exercise Activity Thermogenesis).
Here is a “jaw-dropping” fact about NEPA/NEAT — It can help you burn 10 times more calories (about 1,500 to 2,400 calories ) than a regular exercise in a day.
You see where I’m going with this, don’t you?
Long story short, What I’ve been trying to say since morning is just “move more” .Look for every excuse to move more.
For example:
● Use the stairs instead of the escalator or elevator
● Walk around your house while having phone calls
● Intentionally park you car a bit far so you can walk
● Get the remote and Wash your plates yourself (I bet your kids and waistline would love that)
● Take quick 10 minute walks every 2 hours to boost your metabolism and refresh your mind.
● Have More Sex (married of course). Ladies especially, don’t just lay down like a log of wood, join the party, you can get in more NEPA/NEAT while having more fun.
Honestly, I couldn’t even finish this report without walking up and down my house — I feel more pumped!
Anyhow:
Use as many of these tactics to up your NEPA time daily and cure the “ sitting disease ” forever!
BONUS CONTENT
Use This Food Re-arrangement Technique To Keep you Full For Days
One of the best ways to be some “kg lighter” without counting calories or portion control is cutting down on carbs and replacing them with protein.
Hollop! Hollop! If you’re starting to think I’m talking about low carb diet — I’m NOT
I’m saying cutting down carbs to increase your protein intake not because carbs are particularly fattening BUT because the types of carbs we mostly eat (like bread, rice and so on) Can’t keep us full for long — And that’s where protein come to the rescue, to keep you full longer.
Which ultimately helps you eat fewer calories at the end of the day. So what Am I asking you to do is a simple “food quantity” re-arrangement
What does that even mean? Simple
Instead of the normal plenty carbs, tiny protein you normally eat, you should re-arrange it to plenty protein and little carbs.
To make it crystal clear as day, here are few example:
➢ Instead of a big hip of rice and one small meat, you should have a big meat/chicken and a small rice
➢ Instead of 10 slices of bread and two eggs you normally eat, you should have 3 slices of bread and 4 eggs… or better still… 3 slices of bread 2 eggs mixed with sardines
That’s all folks.
Remember that it’s the little things that make the biggest difference. And nobody can find time to make this changes, we have to make time.
I have blessed you with some weight loss superpowers; please use them for good
Actually USE the tricks in this report. Without action, these tricks are useless
Finally, clap for yourself for reading through this article
Follow me on medium for more health & fitness tips | https://medium.com/@ashantiglee/5-ridiculously-easy-tricks-to-cheat-your-way-to-fat-loss-1a46ad4cb077 | [] | 2022-01-06 15:32:28.357000+00:00 | ['Weight Loss Tips', 'Weight Loss', 'Weightloss Recipe', 'Fitness'] |
XRouter’s monetisation system: a design | XRouter, the first “blockchain router,” enables parties on one blockchain (or on no blockchain) to interact with virtually any blockchain in existence, without hosting a single chain locally, and without having to trust its peers or any intermediary. It does so via SPV proofs whose security assumptions are significantly different from typical SPV proofs.
This nifty interchain tool works rather well in its current state, but it would be advantageous if it could also support similarly decentralized (micro-) payments for various interactions. For example, perhaps a dapp would need to request computationally intensive chain data, or perhaps it needs to make a high volume of simple or cheap queries, the volume of which would be burdensome to its peers. Perhaps Bitcoin Core is effectively disabling support for SPV proofs and “light wallets,” and wallet developers are wondering how they might avoid trusting a central server. Or perhaps Bitcoin (or other networks) lack an incentive to run full nodes, the node count is shrinking, and a method of rewarding nodes would reverse the trend. XRouter permits a unique, inherently scalable, and monetisable approach to these problems, and by enabling off-chain proofs of chain state, it can help blockchain ecosystems to scale.
The interchain future, where full nodes thrive on SPV proofs.
Available mechanisms
Several mechanisms are available that appear amenable to use in XRouter, including zk-STARKs, atomic swaps (specifically where one party swaps a digital good, not a coin), and payment channels. Payment channels appear particularly advantageous in this context for their speed, their relative simplicity, their support for micropayments and subscription-style services, and for the broad understanding of their nature on the part of developers and the crypto-leaning public (largely due to the mindshare of the lightning network).
Objective
Due to the above, this post is largely a design of payment channels to monetise XRouter requests, although in the next section I give consideration to areas where other mechanisms are required in order for XRouter to handle known use cases. This blog culminates in a protocol with scripts (unvalidated!) for Bitcoin and related coins.
To define the design objective, XRouter requires a mechanism to enable payments from consumers of a given interchain service to providers of that service; moreover, since the interchain is a p2p network of untrusted peers, this mechanism must be “trustless,” in the sense that it will not be required to trust one’s counterparty to act honestly in order to do business with them. This is especially desirable because a suitable mechanism would not only support doing business over the interchain, but in virtue of its trustlessness, I believe that it opens up radical new possibilities for doing business in general.
Sequence
Several behaviours and potential service types are possible over XRouter. What follows is a high-level sequence that maps the service types in which payment would feature — many of which are amenable to the use of payment channels.
A flow diagram is provided for clarity.
Steps:
1. XRouter client (henceforth, the “consumer” of a service) makes a request (e.g. xrSendTransaction ETH [tx hex]).
2. The service node(s) that execute the request (henceforth the “provider” of a service) send their response to the consumer.
a. Monetising a call is at the discretion of each service node. If it requires a payment, move to 2b.
b. Some services cost nontrivial amounts and cannot be paid for in several micropayment steps. Other services involve nontrivial risk for other reasons (e.g. reveal of personal information or other data that cannot be re-concealed after revealing it). Such cases are better suited to an atomic swap transaction, since it removes counterparty risk.
c. Services that do not carry the requirements in (2b) stand to benefit from the speed and low cost of payment channels, and may move to step 3.
d. In the case of decentralized services (e.g. services deliverable by any node on a p2p network), before requesting a service, the consumer will typically need to ascertain “trustlessly” whether the provider is honest or whether the provider truly offers the digital good it purports to offer. Interchain service providers may support this by enabling consumers to prove in zero knowledge the legitimacy of the service offered. This can be supplied to the consumer out of band.
3. Consumer and provider set up a payment channel.
The channel is required to be funded by some number of coins. This amount can be expected to vary from service to service (and from service node to service node). XRouter should aim to remain agnostic as to specific requirements. I suggest implementing a channelFundAmount request, made during channel setup, and a subsequent check on-chain (a sketch of this check follows this sequence).
4. Consumer and provider create an update transaction (see the Protocol section below) committing funds to the provider.
a. Depending on the context, this may be payment in full for the service, or an initial, smaller, “deposit” commitment.
b. Depending on the context, this may be before or after the service is delivered.
c. Where the service is delivered after payment (e.g. where the provider is a traditional company using XCloud to deliver its service, and is vulnerable to tarnishing its reputation if it fails to deliver), then consumers can reasonably be expected to trust it to behave honestly and can thus make payment in advance.
d. Where the provider first delivers the service, and then requests payment afterwards (e.g. if its business model is a subscription service that involves repeated payments), its business is not put at risk by offering to deliver before receiving payment, because the consumer only derives value from the service by using it repeatedly. As such, providing a service before payment amounts to a “try before you buy” sales strategy. The provider may thus safely request payment in many stages, delivering before payment at each stage.
e. Where the service is delivered before payment and its delivery is not locally provable by the consumer (e.g. committing a transaction to some chain not hosted locally), the consumer may first check, before making payment, the provider’s on-chain execution of the service by querying other service nodes and constructing an SPV proof from their responses.
e-i. In some cases (e.g. for computationally expensive requests), service nodes may require payment for this, in which case a further payment channel may be required.
5. Consumer and provider close the channel as per the Protocol.
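To make the channelFundAmount idea from step 3 concrete, here is a minimal consumer-side sketch in Python. The rpc object and its get_transaction helper are stand-ins for whatever chain-query interface the consumer uses (for instance, queries routed through other service nodes); none of these names are real XRouter APIs.

# Hypothetical consumer-side funding check; `rpc` is a stand-in for the
# consumer's chain-query interface, not a real XRouter API.
def verify_channel_funding(rpc, funding_txid, vout_index, quoted_amount):
    """Return True if the on-chain funding output covers the quoted amount."""
    tx = rpc.get_transaction(funding_txid)  # assumed helper
    output = tx["vout"][vout_index]
    return output["value"] >= quoted_amount

# Usage: the provider answers a channelFundAmount request with, say, 0.01
# coins; once the consumer has funded the channel, either side can confirm
# the commitment on-chain:
#   verify_channel_funding(rpc, funding_txid, 0, 0.01)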
Rationale for the protocol in the next section
Design criteria
Payment channels retain the traditional (i.e. trust-laden) property that if one party receives their counterparty’s goods (or money) before releasing its own, it may costlessly grief the counterparty. However, with payment channels, the potential for griefing can be limited, by design, to a trivial number of coins (depending on use case, this could vary considerably, e.g. $0.0001 < x < ~$3), because the party who parts with goods or coins first need only risk the smallest value increment, after which both parties may proceed with the exchange in similarly tiny increments, essentially partitioning risk into harmlessly small segments. (A toy illustration of this risk partitioning follows these criteria.)
I have no direct concern for developing a scalability solution, because this is not a blockchain engineering context. (We just want trustless micropayments.)
No cross-channel settlement (e.g. Lightning) is required, since in interchain contexts, there is an unknown number of chains, so it is not possible to discover all the IOUs (and potential liquidity bottlenecks). As such, if one were to support cross-channel settlement, users would be exposed to an unquantifiable and unmitigated risk of locking up funds.
Desirable properties are: speed, minimal collateral requirements, low cost, predictable settlement behaviour, very broad interoperability (e.g. no novel opcodes).
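As promised above, here is a toy Python simulation of risk partitioning: a 0.03-coin service exchanged in 0.0001-coin increments, so that neither side is ever exposed for more than one increment. The prices are arbitrary example values.

INCREMENT = 0.0001  # coins risked per exchange step (example value)
PRICE = 0.03        # total price of the service (example value)

committed_to_provider = 0.0
chunks_delivered = 0

while committed_to_provider + 1e-12 < PRICE:
    chunks_delivered += 1               # provider delivers one chunk first
    committed_to_provider += INCREMENT  # consumer signs a new channel state
    # worst case at any instant: one undelivered chunk or one unpaid increment

print(chunks_delivered, round(committed_to_provider, 6))  # 300 steps, 0.03 coins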
Significant properties of various channel designs
CLTV channels: parties trustlessly agree to modify an unbroadcast CLTV transaction prior to nlocktime maturation.
Poon-Dryja channels: parties trustlessly agree to modify an unbroadcast segwit 2-of-2 multisig transaction.
Decker-Wattenhofer channels: parties agree to create new, unbroadcast “invalidation” transactions which modify the amount in an output funding either of a pair of channels, and which invalidate a prior invalidation transaction by having a lower nSequence value. Trust is eliminated through each party possessing a signed, unbroadcast “kickoff” transaction that commits to waiting until nSequence maturation.
Eltoo channels: parties agree both to new, unbroadcast “update” transactions and new “settlement” transactions (which are spent by the new update transaction). Any new update transaction may spend any earlier update transaction, making it pointless to broadcast earlier ones.
Ethereum state channels: R&D on Ethereum, led by Magmo and Counterfactual, has converged on a generalised state channel design with semantics akin to CLTV channels. This effort is called the State Channels Project. The code is available at https://github.com/statechannels. It seems to be good work, with sound engineering of the setup, messaging, and UX of state channels.
Known limitations
CLTV channels have
a finite lifetime, requiring both extra messaging for each setup sequence,
and that suitable checks be built to close channels prior to nlocktime.
XRouter services that would continue past nlocktime require the additional complexity of closing the old channel and opening a new channel.
Finally, in the event of dispute, funds can only be settled after nlocktime maturation, a “punishment” not necessary in some other designs.
Poon-Dryja channels are:
limited to chains that support segwit.
like CLTV channels, they require a punishment branch in the commitment transaction.
Decker-Wattenhofer channels (see also this related paper):
impose a significant “punishment” in the event of dispute: a long wait time if a channel is closed by one party rather than both parties.
Additionally, they are complex, requiring the creation — and potentially the broadcasting — of many transactions.
Eltoo channels:
require the SIGHASH_ANYPREVOUT and/or SIGHASH_ANYPREVOUTANYSCRIPT signature-hash flags, which have yet to be introduced to Bitcoin (and elsewhere)
This is expected to ship soon, along with the taproot update.
No other coins will support this, at least initially (though adoption can be expected, as with SegWit). In practice, general support for new opcodes across the blockchain ecosystem takes several years.
(Partial) overview of known failure modes of the various channel designs, to ensure that XRouter isn’t limited in ways significant to any of its anticipated use-cases.
CLTV channels:
Depending on implementation, either the consumer (snode client) is able to evade paying for the first tranche of data provided by the provider (snode), or the snode is able to receive payment without providing data.
In both the above cases, the exploit comes at the cost of locking up the consumer’s funds until nlocktime.
The reason for still considering the feasibility of a channel-based approach is that there is only the potential for a very small loss of coins and/or unpaid service, while, in contrast, the provider loses a repeat customer (on the assumption that opening a channel is essentially like signing up to a subscription service).
If a provider goes offline before and during nlocktime maturation, the consumer can claim back their funds.
Poon-Dryja channels:
Accidentally using old state results in loss of funds
Decker-Wattenhofer channels:
Accidentally using old state results in loss of funds
Wait times in the event of dispute are long (potentially blocking certain use cases)
Eltoo channels
No relevant failure modes were found, provided one uses SIGHASH_ANYPREVOUT rather than the SIGHASH_NOINPUT flag originally specified.
Outcome
A CLTV-type payment channel is recommended, for its broad interoperability, its relative simplicity, and the absence of known unworkable shortcomings.
Protocol
For Bitcoin and related coins
In the following protocol sequence, a Blocknet service node is the service provider and an XRouter client (e.g. Blocknet wallet or standalone XRouter library) is the service consumer.
Script suggestions are provided. Note: scripts have not been validated, so do not count on their security or correctness.
1. Consumer creates a public key (K1) from private key P1, and requests a public key from the Provider (K2), derived from private key P2.
2. Provider sends K2 to Consumer.
3. Consumer creates and signs but does not broadcast a bail-in transaction (T1) that sets up a payment that, in order to be spent, requires signatures from both P1 and P2. Alternatively, T1 may be spent after a timelock expires by signing using P1.
##output should be a P2SH transaction to a hash of a serialised version of the following:
OP_IF
[timelock block height] CHECKLOCKTIMEVERIFY OP_DROP OP_DUP
OP_HASH160 [Hash160(pubkey K1)]
OP_EQUALVERIFY OP_CHECKSIG
OP_ELSE
OP_2 [Hash160(pubkey K1)] [Hash160(pubkey K2)] OP_2
OP_CHECKMULTISIG
OP_ENDIF
4. Consumer creates a refund transaction (T2) that spends T1 back to itself. Due to T1’s timelock, it is unspendable for some time (e.g. 3 hours), but the Consumer may broadcast this at any time after the timelock expires and have the network accept it as valid.
## scriptsig
OP_1 [signature from P1] [pubkey K1] OP_1 {serialized script of T1}
5. The Consumer broadcasts T1. This commits the Consumer’s coins to the channel.
6. The Consumer then creates an update transaction (T3) which spends T1 upon signature by both Consumer and Provider. It has two outputs, K1 and K2. The Consumer signs T3 and provides the transaction and signature to the Provider.
##prevoutscript
OP_HASH160 [HASH160 of serialised T1] OP_EQUAL
##scriptsig
[signature from P1] [signature from P2] OP_2 [pubkey K1] [pubkey K2] OP_2 OP_CHECKMULTISIG {serialised script of T1}
7. The Provider verifies the output to K2 is of the expected size, verifies the client’s provided signature is correct, signs T3 and sends it to the Consumer.
8. This process iterates, so that T3 may go through several unbroadcast versions. Its starting version allocates all coins to K1 (that is, it is like T2 except that it is not time locked), and progressively allocates more coins to K2.
9. Version updates to T3 continue to be made until either service delivery is complete, or until the timelock is near. In either case, either party may broadcast its most recent validly-signed version of T3, closing the channel.
10. In the event that the Provider disappears or ceases providing its service early, the Consumer may sign and broadcast T2 after the timelock expires and receive all its coins back.
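For orientation, here is a deliberately simplified Python sketch of this channel lifecycle. It tracks only the balances that successive versions of T3 would commit; a real implementation must construct, sign, and exchange the actual T1, T2, and T3 transactions as described above.

class CltvChannel:
    def __init__(self, funding, timelock_height):
        self.funding = funding                  # coins locked up by T1
        self.timelock_height = timelock_height  # height at which T2 becomes valid
        self.to_provider = 0.0                  # K2 output of the latest T3

    def update(self, new_to_provider):
        # Each new version of T3 may only allocate more coins to the provider.
        assert 0 <= self.to_provider <= new_to_provider <= self.funding
        self.to_provider = new_to_provider      # both parties re-sign T3 here

    def close(self):
        # Either party broadcasts the latest validly signed T3;
        # round to 8 decimals for satoshi-like precision.
        return {"K1": round(self.funding - self.to_provider, 8),
                "K2": round(self.to_provider, 8)}

channel = CltvChannel(funding=0.05, timelock_height=700000)
for payment in (0.001, 0.002, 0.003):
    channel.update(channel.to_provider + payment)
print(channel.close())  # {'K1': 0.044, 'K2': 0.006}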
For Ethereum
No (low-level) work appears to be required. One may simply use https://github.com/statechannels/monorepo
Documentation is available at https://protocol.statechannels.org/docs/state-channels/overview | https://medium.com/flatus-vocis/xrouters-monetisation-system-a-design-89ec7697aa8e | ['Arlyn Culwick'] | 2020-08-24 10:17:49.184000+00:00 | ['Blockchain Scaling', 'Dapps', 'Payment Channel', 'Ethereum', 'Bitcoin'] |
Watch & Learn Why Blockchain & Cryptocurrencies Will Change Your Life? 💜 | Listen to what Jack Ma, Bill Gates, Elon Musk, GokuMarket and other celebrities have to say about the blockchain & crypto impact on markets! 🔥🚀
Click to play video: https://youtu.be/pamH4yYoGEE
Jack Ma: “strong believer of Blockchain” 🌍
December 12th, 2020 — GokuMarket is a wallet, marketplace & exchange, known as the world’s no.1 crypto utilities provider, where everyone, all around the world, can participate in the blockchain economy and perform real-world activities using cryptocurrencies.
Bill Gates: “Bitcoin is better than currency” 💰
Blockchain technology is so powerful that it will spread into every aspect of our lives. The overall global market for blockchain-related products & services is expected to reach $39.7 Billion by 2025, and $1.76 Trillion by the year 2030.
Elon Musk: “Crypto is replacement for cash” 💵
The GokuMarket Exchange and Marketplace are taking aim at being a major contributor towards global blockchain economic growth.
Shop on GokuMarket 🛍️ Get Cashback in Bitcoin & cryptocurrencies.💰
👉 E-commerce App: https://gokumarket.app.link/qPbTXpO2Ibb
👉 Buy Bitcoin with VISA/MasterCard: https://gokumarket.com/buyCrypto
👉 Exchange: https://gokumarket.com/exchange/BTC_USDT
👉 Staking: https://gokumarket.com/staking
👉 Starter Kits: https://gokumarket.com/starter-kits
We Love To Hear From You! 👋💜
GokuMarket’s exchange and marketplace are all about keeping things easy and simple. The GokuMarket e-commerce features for product shopping and freelancer services are now available on the GokuMarket App — Download it on Android & iOS Now!
As always, we love to hear from you with feedback on what other ways we can make your experience even better with GokuMarket. Also, keep a watchful eye for new trends and opportunities & do not forget to keep yourself updated on the latest news, by following our community channels, like Telegram, and social media networks like Facebook, Twitter, Instagram, and YouTube.
Welcome to GokuMarket! 🛍️💰
Sincerely yours,
The GokuMarket Team 💜 | https://medium.com/gokumarket/watch-learn-why-blockchain-cryptocurrencies-will-change-your-life-bf84d59220e1 | [] | 2020-12-11 20:34:29.687000+00:00 | ['Blockchain', 'Bill Gates', 'Cryptocurrency News', 'Jack Ma', 'Elon Musk'] |
Demystifying Neural Networks | All you need is a piece of paper and a pencil
“If you want to change the world, pick up a pen and write.” — Martin Luther
I will explain what neural networks do with a real simple example.
I will use this basic neural network architecture:
I will initialize weights, biases, training inputs and output with the following values: w1 = .30, w2 = .40, w3 = .50, w4 = .60, w5 = .70, w6 = .80, b1 = .15, b2 = .25, x1 = .10, x2 = .20 and target output o = 1.
What this is all about is adjusting the values of the weights w1, w2, w3, w4, w5 and w6 to obtain the output we want o = 1 for the given inputs x1 = .10 and x2 = .20.
Hidden Neurons h1 and h2, and Output Neuron o are divided into two different functions: Net (linear regression function) and Act (activation function). I will be using the logistic function as the activation function for all neurons.
These are the formulas for both functions:
Net = w * x + b –linear regression function
Act = Sigmoid (Net) –logistic function 1 / (1 + exp (-x))
Phase I: Forward, or Computing the Output
Let’s compute our first output using initialization values:
Neth1 = w1 * x1 + w2 * x2 + b1 –linear regression function
Acth1 = Sigmoid(Neth1) –logistic function
Neth2 = w3 * x1 + w4 * x2 + b1
Acth2 = Sigmoid(Neth2)
Neto = w5 * Acth1 + w6 * Acth2 + b2
Acto = Sigmoid(Neto)
Sigmoid is the logistic function to squash the output –> 1 / (1 + exp (-x))
Neth1 = .30 * .10 + .40 * .20 + .15
Acth1 = Sigmoid(.26)
Acth1 = .5646363
Neth2 = .50 * .10 + .60 * .20 + .15
Acth2 = Sigmoid(.32)
Acth2 = .5793242
Neto = .70 * .5646363 + .80 * .5793242 + .25
Acto = Sigmoid(1.1087048)
Acto = .7518875 (vs 1.00 target)
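If you’d rather let the computer double-check the pencil work, here is a minimal Python sketch of this forward pass, using the same initial values as above:

from math import exp

def sigmoid(x):
    return 1 / (1 + exp(-x))

# initial weights, biases and the training example from this article
x1, x2 = 0.10, 0.20
w1, w2, w3, w4, w5, w6 = 0.30, 0.40, 0.50, 0.60, 0.70, 0.80
b1, b2 = 0.15, 0.25

act_h1 = sigmoid(w1 * x1 + w2 * x2 + b1)         # Neth1 = 0.26 -> ~.5646363
act_h2 = sigmoid(w3 * x1 + w4 * x2 + b1)         # Neth2 = 0.32 -> ~.5793242
act_o = sigmoid(w5 * act_h1 + w6 * act_h2 + b2)  # Neto ~ 1.1087048 -> ~.7518875
print(act_o)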
Phase II: Computing the Cost
I will be using the popular squared error function for calculating the Total Cost.
Etotal = .5 * (target - output)**2
Etotal = .5 * (1 - .7518875)**2
Etotal = .0307798
Phase III: Backpropagation for Output Layer
We use partial derivatives of Total Error with respect to each weight in order to know how much a change in each weight affects the Total Error.
∂Etotal/∂w5 = ∂Etotal/∂Acto * ∂Acto/∂Neto * ∂Neto/∂w5 –chain rule
∂Etotal/∂Acto = .5 * 2 * (target - output) * -1
∂Etotal/∂Acto = .5 * 2 * (1 - .7518875) * -1
∂Etotal/∂Acto = -.2481124
∂Acto/∂Neto = output * (1 - output)
∂Acto/∂Neto = .7518875 * (1 - .7518875)
∂Acto/∂Neto = .1865526
∂Neto/∂w5 = Acth1
∂Neto/∂w5 = .5646363
∂Etotal/∂w5 = -.2481124 * .1865526 * .5646363
∂Etotal/∂w5 = -.0261347
In order to decrease the error we need to subtract this value from the current weight multiplied by some learning rate which I will set to 0.5.
w5+ = w5 - α * ∂Etotal/∂w5
w5+ = .70 - .5 * -.0261347
w5+ = .7130673
Repeating this process we get the new weight w6:
w6+ = .8134073
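A quick sanity check for these partial derivatives is a numerical gradient: nudge w5 by a tiny amount, recompute the total error, and compare the slope with the analytic value. A minimal sketch, using the same values as before:

from math import exp

def sigmoid(x):
    return 1 / (1 + exp(-x))

def total_error(w5):
    # forward pass with everything fixed except w5
    x1, x2, target = 0.10, 0.20, 1.0
    act_h1 = sigmoid(0.30 * x1 + 0.40 * x2 + 0.15)
    act_h2 = sigmoid(0.50 * x1 + 0.60 * x2 + 0.15)
    act_o = sigmoid(w5 * act_h1 + 0.80 * act_h2 + 0.25)
    return 0.5 * (target - act_o) ** 2

eps = 1e-6
slope = (total_error(0.70 + eps) - total_error(0.70 - eps)) / (2 * eps)
print(slope)  # ~ -.0261347, matching the analytic ∂Etotal/∂w5 above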
Phase IV: Backpropagation for Hidden Layer
∂Etotal/∂w1 = ∂Etotal/∂Acth1 * ∂Acth1/∂Neth1 * ∂Neth1/∂w1 –chain rule
∂Etotal/∂Acth1 = ∂Etotal/∂Neto * ∂Neto/∂Acth1
∂Etotal/∂Neto = ∂Etotal/∂Acto * ∂Acto/∂Neto
∂Etotal/∂Neto = -.2481124 * .1865526
∂Etotal/∂Neto = -.0462869
∂Etotal/∂Acth1 = -.0462869 * ∂Neto/∂Acth1
∂Etotal/∂Acth1 = -.0462869 * w5
∂Etotal/∂Acth1 = -.0462869 * .70
∂Etotal/∂Acth1 = -.0324002
∂Acth1/∂Neth1 = Acth1 * (1 - Acth1)
∂Acth1/∂Neth1 = .5646363 * (1 - .5646363)
∂Acth1/∂Neth1 = .2458221
∂Neth1/∂w1 = x1
∂Neth1/∂w1 = .10
∂Etotal/∂w1 = -.0324002 * .2458221 * .10
∂Etotal/∂w1 = -.0007964
w1+ = w1 - α * ∂Etotal/∂w1
w1+ = .30 - .5 * -.0007964
w1+ = .3003982
Repeating this process we get the new weights w2, w3 and w4:
w2+ = .4007964
w3+ = .5004512
w4+ = .6009024
Now it's time to go back to Phase I and compute our new output and new total error using the new weights:
Acto = .7547169 (vs 1.00 target)
Etotal = .0300818
You might think this is not a big change, but by repeating this 5,000 times you will get the following results:
Acto = .9830802 (vs 1.00 target)
Etotal = .0001431
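Here is a small sketch of the full loop (forward pass, backpropagation, weight update) repeated 5,000 times; it should land very close to the figures above. Note that, exactly as in the worked example, only the six weights are updated and the biases stay fixed.

from math import exp

def sigmoid(x):
    return 1 / (1 + exp(-x))

x1, x2, target, lr = 0.10, 0.20, 1.0, 0.5
w1, w2, w3, w4, w5, w6 = 0.30, 0.40, 0.50, 0.60, 0.70, 0.80
b1, b2 = 0.15, 0.25

for _ in range(5000):
    # forward pass
    act_h1 = sigmoid(w1 * x1 + w2 * x2 + b1)
    act_h2 = sigmoid(w3 * x1 + w4 * x2 + b1)
    act_o = sigmoid(w5 * act_h1 + w6 * act_h2 + b2)

    # backpropagation (same chain-rule factors as in Phases III and IV)
    delta_o = -(target - act_o) * act_o * (1 - act_o)  # ∂Etotal/∂Neto
    delta_h1 = delta_o * w5 * act_h1 * (1 - act_h1)    # ∂Etotal/∂Neth1
    delta_h2 = delta_o * w6 * act_h2 * (1 - act_h2)    # ∂Etotal/∂Neth2

    # weight updates (biases stay fixed, as in this article)
    w5, w6 = w5 - lr * delta_o * act_h1, w6 - lr * delta_o * act_h2
    w1, w2 = w1 - lr * delta_h1 * x1, w2 - lr * delta_h1 * x2
    w3, w4 = w3 - lr * delta_h2 * x1, w4 - lr * delta_h2 * x2

# final forward pass with the trained weights
act_h1 = sigmoid(w1 * x1 + w2 * x2 + b1)
act_h2 = sigmoid(w3 * x1 + w4 * x2 + b1)
act_o = sigmoid(w5 * act_h1 + w6 * act_h2 + b2)
print(act_o, 0.5 * (target - act_o) ** 2)  # ~ .9830802 and ~ .0001431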
Neural networks are no longer a mystery! | https://medium.com/@arielnovelli/demystifying-neural-networks-9d3483f3d09d | ['Ariel Novelli'] | 2020-11-03 13:42:45.820000+00:00 | ['Deep Learning', 'Backpropagation', 'Neural Networks', 'Gradient Descent'] |
Keto recipes you must try! | What Is The Keto Diet?
There’s no denying that the keto diet still reigns as one of the most popular — and highly researched — diets out there right now. But there are two clear sides to the keto debate: There are folks who are all for the high-fat lifestyle and those who, well, absolutely aren’t.
Short for “ketogenic diet,” this eating plan is all about minimizing your carbs and upping your fats to get your body to use fat as a form of energy, says Scott Keatley.
While everyone’s body and needs are slightly different, that typically translates to:
60 to 75 percent of your calories from fat
15 to 30 percent of your calories from protein
5 to 10 percent of your calories from carbs
That usually means eating no more than 50 grams of carbs a day (some strict keto dieters even opt for just 20 grams a day).
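To see what those percentages mean in grams, here is a quick back-of-the-envelope calculation in Python. The 2,000-calorie budget and the exact split are illustrative assumptions, not prescriptions:

calories = 2000  # assumed daily budget

fat_g = 0.70 * calories / 9      # fat has roughly 9 kcal per gram
protein_g = 0.22 * calories / 4  # protein has roughly 4 kcal per gram
carb_g = 0.08 * calories / 4     # carbs have roughly 4 kcal per gram

print(f"fat ~{fat_g:.0f} g, protein ~{protein_g:.0f} g, carbs ~{carb_g:.0f} g")
# fat ~156 g, protein ~110 g, carbs ~40 g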
The first keto recipe for today is GARLIC BUTTER STEAK AND SCALLOPS
SURF AND TURF made in less than 30 min! The steak + scallops are so perfectly cooked with the best garlic butter sauce!
yield: 2 SERVINGS
prep time: 15 MINUTES
cook time: 15 MINUTES
total time: 30 MINUTES
INGREDIENTS:
2 (1 1/2 inch thick) beef tenderloin fillets (about 6 to 8 ounces)
Kosher salt and freshly ground black pepper, to taste
3 tablespoons unsalted butter, divided
8–10 large sea scallops
FOR THE GARLIC BUTTER SAUCE!
3 cloves garlic, minced
6 tablespoons unsalted butter, cubed
2 tablespoons chopped fresh parsley leaves
2 tablespoons chopped fresh chives
1 tablespoon freshly squeezed lemon juice
2 teaspoons lemon zest
Kosher salt and freshly ground black pepper, to taste
DIRECTIONS:
1. Heat a 12-inch cast-iron skillet over medium-high heat for 8–10 minutes. Using paper towels, pat both sides of the steak dry; season with salt and pepper, to taste.
2. Melt 2 tablespoons butter. Place the steaks in the middle of the skillet and cook until a dark crust has formed, about 4–6 minutes. Using tongs, flip, and cook for an additional 3–4 minutes, or until the desired doneness; set aside, loosely covered.
3. While the steak rests, wipe the skillet clean and melt the remaining 1 tablespoon butter.
4. Remove the small side muscle from the scallops, rinse with cold water, and thoroughly pat dry. Season scallops with salt and pepper, to taste.
5. Working in batches, add scallops to the skillet in a single layer and cook, flipping once, until golden brown and translucent in the center, about 2–3 minutes per side. Set aside and keep warm.
6. To make the garlic butter sauce, reduce heat to low; add garlic and cook, stirring frequently, until fragrant, about 1 minute. Stir in butter, parsley, chives, lemon juice, and lemon zest; season with salt and pepper, to taste.
7. Serve steak and scallops immediately with garlic butter sauce.
Enjoy your keto steak!
More at desire to cook! | https://medium.com/@bokiblanusa/keto-recipes-you-must-try-9bbfdc55f5c5 | ['Gatka Only'] | 2020-12-24 21:34:43.872000+00:00 | ['Keto', 'Keto Diet', 'Cooking', 'Recipes For Cooking', 'Ketogenic Diet'] |
12 Ways the World Will Change When Everyone Works Remotely | Workplace studies in 2019 have reached a common conclusion — remote work is here to stay. Once people try working remotely, up to 99% want to continue, while 95% would recommend the practice to others.
But that’s not all.
A Zapier survey revealed that 74% of workers would quit their jobs for the ability to work from anywhere. Two in three believe that the traditional workplace will be obsolete within the next decade.
Source: Buffer State of Remote Work Report
They’re right.
According to the U.S. Census Bureau, the number of people working remotely has been rising for the past ten years. Meanwhile, Upwork projects that the majority of the workforce will be freelancing as soon as 2027. Globally, one billion people could be working in a remote capacity by 2035.
Whether people become remote employees, online entrepreneurs, freelancers, or other gig workers — one thing’s for sure — life will be nothing like the current 9–5. The world will change and reflect this new reality. | https://medium.com/swlh/12-ways-the-world-will-change-when-everyone-works-remotely-cb8927ef1853 | ['Kristin Wilson'] | 2020-04-16 18:46:39.615000+00:00 | ['Work', 'Freelancing', 'Business', 'Startup', 'Future'] |
Wyze Thermostat review: First-class feature set, bargain-basement price tag | Wyze Labs’ motto seems to be “anything you can do, we can do cheaper.” The company that impressed us earlier with a range of incredibly inexpensive home security cameras, noise-cancelling headphones, a smart bulb, and a smart deadbolt has done it again with the Wyze Thermostat, which sells for just $50—less than half the price of the new entry-level Nest Thermostat.
Not only is it less expensive, it comes with everything you might need, including an optional trim plate to cover up holes left over from a previous installation, and a wiring-conversion harness should you not have a C-wire in the wall to deliver power to the thermostat. Google says its Nest thermostats don’t require a C-wire, but when you go through steps in either company’s compatibility checker, you might find that it actually does to work with your HVAC system.
This review is part of TechHive’s coverage of the best smart thermostats, where you’ll find reviews of competing products, plus a buyer’s guide to the features you should consider when shopping for this type of product.
The Wyze Thermostat isn’t the prettiest device you’ll install in your home, and its dial controller isn’t the most sophisticated control option in its class. It is simple, though: you just turn the dial to adjust the target temperature shown on its small display, push the dial in to access its menu, spin to choose menu options, and push the dial in to select them.
Jason D’Aprile The Wyze Thermostat is compatible with more types of HVAC systems than most budget-priced smart thermostats.
Installing the thermostat is simple enough once you’ve downloaded the Wyze app and created an account. The app helps to make your initial configuration and programming easy: you can create a unique schedule with home, sleep, and away target temperatures for each day of the week and weekend, or for entire weeks. You can also access most of the thermostat’s options via its onboard display if you don’t always want to be tied to your smartphone.
Jason D’Aprile / IDG A series of simple questions helps the Wyze app determine optimal settings for its thermostat.
You’ll mount the wiring backplate to the wall, using the onboard bubble level to level it, and then push the wires coming from your HVAC system into the labeled clamping sockets. Having 10 wiring sockets (see photo) enables the Wyze Thermostat to accommodate a broader array of HVAC systems than the budget-priced Nest Thermostat.
There are connections for Rc (power, if your cooling system has its own transformer), Y1 (for air conditioning, stage 1), Y2 (air conditioning, stage 2), O/B (for a heat pump), G (for fan blower operation), Rh (power for your heating system transformer), W1 (heat, stage 1), W2 (heat, stage 2), asterisk (for one accessory, such as a ventilator, humidifier, dehumidifier, or emergency heat), and C (aka “common,” for power to the thermostat itself).
Budget thermostat, high-end features
In addition to programming schedules for when your HVAC system should operate, the Wyze Thermostat supports geofencing, so that temperature settings adjust according to your location (provided you take your smartphone with you, of course). It will set your system to its away setting when you leave the perimeter you’ve established, and return to its home setting when you re-enter that perimeter. You can also establish safety limits that will automatically turn on your furnace if temperatures drop too low (protecting your pipes from freezing), or your air conditioning if they rise too high (protecting your pets).
An onboard passive infrared motion sensor wakes the Wyze Thermostat’s display when you approach it, and if no one passes by for a long period, the thermostat will automatically switch to its energy-conserving “away” mode. A humidity sensor will inform you of the relative humidity in your home, and it will also activate a connected humidifier or dehumidifier if you have one installed.
Jason D’Aprile / IDG You might not fall in love with the Wyze Thermostat’s industrial design, but its price tag can’t be beat for the features delivered.
Reminders to change your air filter are becoming increasingly common on smart thermostats, but Wyze takes a more detailed approach than most. It takes the size and longevity rating of the filter into account to provide a percentage of usage. It also allows you to save different types of filters for tracking purposes. If you switch to a higher-performance filter during the year, you can just select that entry and the app tracks it. Unlike the Nest Thermostat, however, Wyze doesn’t provide any means of tracking your HVAC system’s overall performance and won’t provide maintenance reminders or alerts.
The Wyze thermostat supports Amazon Alexa, so you can ask about the current temperature and make thermostat adjustments with voice commands, but Google Assistant support is described as “coming soon.” It has both a Bluetooth radio and a Wi-Fi adapter onboard, but the latter is 2.4GHz only.
New features in the works
Jason D’Aprile / IDG The separate settings for Home, Away, and Sleep are easy to configure.
Wyze says that in the future, the thermostat’s humidity sensor will also be used to perform more precise comfort calculations. It will take into account the amount of moisture in the air (higher humidity makes your environment feel warmer, while lower humidity makes it feel cooler) to set a “feels like” temperature target. Wyze says it’s also working on a learning algorithm that will automatically create heating and cooling schedules based on activity in the home and how users interact with the thermostat, similar to how the high-end Nest Learning Thermostat behaves.
The company also promises to offer a remote 3-in-1 sensor in early 2021. Similar to the higher-end offerings from Ecobee and Nest (Nest’s remote sensors won’t work with its budget-priced model), these sensors will monitor temperature, humidity, and motion in other rooms in your home. Wyze says these sensors will help balance the temperature in your home, eliminating hot and cold spots. These features will make the Wyze Thermostat an even stronger value if they’re delivered, but no one should buy a product today based on features that are promised to come later.
Bottom line
I’m not crazy about the Wyze Thermostat’s industrial design, but a smart thermostat with this level of sophistication for a price this low is very easy to recommend.
Note: When you purchase something after clicking links in our articles, we may earn a small commission. Read our affiliate link policy for more details. | https://medium.com/@darren47777783/wyze-thermostat-review-first-class-feature-set-bargain-basement-price-tag-e51b5a8ea9a8 | [] | 2020-12-24 00:33:19.781000+00:00 | ['Streaming', 'Mobile', 'Tvs', 'Home Tech'] |
A Letter to my Mother | A Letter to my Mother
Photo credit: Author
I recognize that this global pandemic has affected every human in a different way. So many have known so much loss. So many have lost their jobs and are struggling to put food on the table. For me, I just really miss my mommy.
I understand that so many have seen so much dire hardship. It doesn’t take away my sadness that the last time I saw my parents was over a year ago. I live 2,370 miles from my parents and, due to health concerns, we have been apart since September 2019 (we were supposed to see them in March of 2020.)
My parents are not in the best health and my husband and I are both considered “high risk,” so we have been waiting patiently for a reunion. For a long time. I have a two-year-old son who has seen more of his grandparents on the computer than in person. And most of all, the more time marches on, I realize that our finite resource is running out, whether we have decades, years, or months.
I am the only child of the most loving parents one could imagine. I have tried for my many years on the planet to mess up the solid foundation that they have instilled in me. To no avail. I still always say thank you, I believe that I can do anything, and I would literally give my shirt to someone who needs it. This is none of my doing. I could only be described as a good human because they created me as such.
My mommy is sick. And she’s tired. And she has responsibly been inside her house for over nine months because she is ultra-high risk. I want to hug her. I want to drive/fly/run/crawl to her, but I know that I can’t take that risk. I don’t know what else to do other than . . . write. Because this is what I know how to do.
Dear Moogie,
I know that I haven’t been the best daughter always. I know I have said things that have hurt you, I have been selfish, and six years ago, I moved 2,370 miles away from you (though, I moved to get away from other things — not you). These things don’t change the fact that you are and have always been the best mommy and grandmommy in the world.
I know we don’t agree on politics. And I know that I have chosen a faith that is different than the one you raised me to follow. Neither of these things diminishes the bond that we have and will always have. The rest of the world has nothing on us. Distance, though it is hard, has nothing on us. We are as inseparable as we were when I was in your tummy. You raised me to be smart and independent and loving. And you did an impeccable job.
The summer will bring fresh flowers, sunshine, and hopefully our first hug in what will be over 18 months. I don’t know what the near future brings. I don’t know how long it will be until I see you and until you get to kiss your grandson again. I wish that it could be tomorrow, but I know it won’t be. That makes me sad, but it makes me oh so grateful that I have a person whose physical absence in my life makes my heart ache.
I don’t send you as many care packages as I wish I could and I don’t call as much as you wish I would. We can’t hear each other on Zoom and your reception is terrible (please let me get you an iPhone?). No matter what, this one thing will always remain true: I love you. And I will be forever grateful to you for making and cultivating everything that I know to be good about myself.
Love,
Michelle | https://medium.com/@night006bird/a-letter-to-my-mother-dd04a4aba6df | [] | 2020-12-13 13:33:07.373000+00:00 | ['Relationships', 'Family', 'Covid 19', 'Love', 'Letters'] |
Technical Debt and Product Success | Technical Debt and Your Product
To understand if and to what extent your product is affected by technical debt, talk to the development team, for example, in the next sprint retrospective. I find that development team members usually have a good understanding of where the issues in the architecture and code are.
Additionally, consider asking the team to collect data that shows how much technical debt there is, where it is located, and how bad it is, for example, by using code complexity, dependencies, duplication, and test coverage as indicators. There are a number of code analysis tools available that collect the appropriate data and show how adaptable the architecture and how clean the code is. [2]
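If you want a rough first signal before reaching for a dedicated tool, even a few lines of script can surface one of these indicators. The sketch below is a deliberately crude duplication check, not a substitute for proper analysis tools: it counts identical non-trivial lines across a codebase.

from collections import Counter
from pathlib import Path

def duplicate_line_ratio(root, suffix=".py"):
    """Fraction of non-trivial code lines that appear more than once."""
    counts = Counter()
    for path in Path(root).rglob(f"*{suffix}"):
        for line in path.read_text(errors="ignore").splitlines():
            stripped = line.strip()
            if len(stripped) > 20:  # skip short/boilerplate lines
                counts[stripped] += 1
    total = sum(counts.values())
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / total if total else 0.0

print(duplicate_line_ratio("src"))  # e.g. 0.18 means 18% of lines are repeats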
Once you understand the amount and severity of tech debt in your product, analyse its impact on meeting the product goals and achieving product success together with the development team. Take into account the cost of delay, the cost of not addressing the technical debt now but delaying it to a future point in time. Should you, for example, continue adding new features to the product for the next six months and plan in bigger technical improvement work afterwards? Or would it be better to address the worst debt now?
Furthermore, consider the life cycle stage of your product. Technical debt is particularly bad for new and young products: If your product has a closely-coupled architecture with lots of dependencies, if it lacks (automated) tests and documentation, or if it is full of spaghetti code, then experimenting with new ideas and adapting the product to user feedback and new trends will be difficult and time-consuming. Similarly, if you want to extend its product life cycle, you may have to first remove (some of) the technical debt before you can make the necessary changes and add new features or create a variant.
Having said that, it is a valid strategy to launch a minimum viable product (MVP) whose architecture, technology, and code has been intentionally compromised in order to reduce time to market — as long as the quality is good enough to adapt the product to feedback from the early market. But apply this strategy with caution: You will have to spend time addressing the technical debt incurred and putting your product on solid technical foundations. This should be done before reaching product-market fit, as you will otherwise struggle to scale up and keep your product growing.
If, however, your product is in maturity — or even decline — and you do not intend to extend its life cycle but focus on maximising the business benefits it generates, you probably want to do as little debt removal work as possible. | https://romanpichler.medium.com/technical-debt-and-product-success-42ec1c5718a7 | ['Roman Pichler'] | 2018-12-04 09:10:13.244000+00:00 | ['Product Management', 'Technical Debt', 'Product Lifecycle', 'Innovation', 'Agile'] |
Blockchain Only Solves Part of the Fraud Problem
As the world becomes smaller and technologies more advanced, cases of fraud have skyrocketed. In the UK, fraud costs the economy a staggering £193 billion a year — equating to more than £6,000 lost per second every day. Meanwhile, in the US, the Federal Trade Commission received more than 2.1 million fraud reports from consumers in 2020.
Over the last half-decade, blockchain has proven that distributed ledger technology goes way beyond digital currencies. The value of blockchain in reducing financial transaction risk, fraud, and cost among multiple parties with a trusted, decentralized digital ledger has not gone unnoticed or unused.
Blockchain technology offers integrity, traceability, transparency and, of course, security. But is this enough? The rise of cryptocurrency and the development of blockchain-fuelled exchanges has meant growth in money-laundering activity.
In 2019, criminal entities laundered approximately $2.8 billion through crypto asset exchanges. Whilst banks have a heavily regulated and distinct global system of legal protections and obligations, the crypto asset market isn't as universally regulated or protected. However, this is changing.
Importance of the KYC Process
Yotam Namir, the CEO of Tech View, a compliance company that operates Cointandem, the digital and blockchain service, spoke of the importance of the KYC process in the prevention of fraud in the blockchain environment.
“Blockchain goes a long way to help solve the growing problem of fraud, but that can only be truly effected in the crypto asset space with a KYC (Know Your Customer) process in place within a universally regulated environment.”
“At Cointandem we are regulated and operate a strict KYC process which means that everything is transparent and safe for the user. As it should be.”
Technology and Adoption Will Advance
Looking beyond the crypto asset market, once technology and adoption improve, the KYC process will be fully fuelled by blockchain technology.
Institutions and governments are embracing blockchain technology. El Salvador recently became the first country to make Bitcoin legal tender, and as each day passes blockchain technology creeps further into our lives.
Preventing the growing menace of fraud in the blockchain environment, however, at present, requires a rigid and transparent KYC process, for the benefit of everyone. | https://medium.com/@social-30673/blockchain-only-solves-part-of-the-fraud-problem-8e0c207a4abd | ['Jacob T'] | 2021-06-17 13:09:01.073000+00:00 | ['Tech View', 'Fintech', 'Compliance', 'Yotam Namir', 'Cointandem'] |
Attribution analysis: How to measure impact? (Part 1 of 2) | By Lisa Cohen, Saptarshi Chaudhuri, Daniel Yehdego, Ryan Bouchard, Shijing Fang, and Siddharth Kumar
A common question we hear from business stakeholders is “What is the impact of x?”, where x may be a marketing campaign, a customer support initiative, a new service launch, and so on. It makes sense — as a business, we continuously make investments to nurture Azure customers. In each instance, we want to understand the impact or effectiveness to help guide our future direction.
Controlled experiment
The most effective approach to quantify the impact of a change (or “treatment”) is to run a randomized controlled trial (RCT). We identify the set of customers for the experiment and randomly divide them into treatment and control groups, making sure confounding factors like customer size, monthly usage, growth rate, license type, and geography are equally represented in each group. Then we compare the outcomes of each group with each other. This approach allows us to evaluate causality and determine the “lift” from the program.
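As an illustration, here is a minimal pandas sketch of stratified random assignment. The size_segment column and the fifty-fifty split are assumptions; a real experiment would stratify on all the confounding factors listed above.

# Sketch: stratified random assignment into treatment and control,
# balancing a hypothetical customer-size segment across both arms.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
customers = pd.DataFrame({
    "customer_id": range(8),
    "size_segment": ["S", "S", "S", "S", "L", "L", "L", "L"],
})

def assign(group):
    # Alternate the two labels, then shuffle within the stratum
    labels = (["treatment", "control"] * len(group))[: len(group)]
    return group.assign(arm=rng.permutation(labels))

customers = customers.groupby("size_segment", group_keys=False).apply(assign)
print(customers.groupby(["size_segment", "arm"]).size())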
Defining “success” is a key step in this process and is a good way to clarify goals. In Azure, our overall goal is for customers to be successful in their adoption of the cloud. Below is an example comparing Azure usage between two populations as an indicator of engagement. Usage is an indirect measure of a customer’s success, because it demonstrates that they’re able to leverage and find value in the service versus experiencing blockers. Therefore, in this post, we refer to the customer’s usage as a proxy for their success. (Note: For features like Azure Cost Management, focused on helping drive efficiencies, we can actually measure success as a decrease in overall usage.) Additional success metrics that we generally consider include retention, net promoter score, and satisfaction. There are also scenarios where we consider success metrics that are more specific to the particular focus of a program, for example, to help facilitate activations or deployments.
Next, we choose the appropriate statistical test to evaluate the treatment’s performance. In this case, we use Student’s t-test to check that the p-value is below a predetermined alpha level. We most often use an alpha level of 0.05 and assess the range of potential impact with a 95 percent confidence interval. However, we also keep in mind the potential shortcomings of frequentist statistics that could warrant the use of over-comparison corrections or Bayesian techniques. (Note: In cases of non-Gaussian distributions, we also leverage nonparametric tests such as the Mann-Whitney U test.)
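A sketch of those tests with scipy follows; the usage arrays are placeholder figures rather than real data.

# Sketch: test whether post-treatment usage differs between the two groups.
import numpy as np
from scipy import stats

treatment = np.array([120.0, 135.0, 150.0, 142.0, 160.0])  # placeholder usage
control = np.array([110.0, 118.0, 125.0, 121.0, 130.0])

t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t={t_stat:.2f}, p={p_value:.3f}, significant={p_value < 0.05}")

# Nonparametric alternative for non-Gaussian distributions
u_stat, p_mw = stats.mannwhitneyu(treatment, control, alternative="two-sided")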
To determine the amount of impact the treatment had, we measure the area between these two curves after the treatment was applied. Finally, if we want to understand the return on investment (ROI) of the treatment, we can divide the incremental consumption that it produced by the cost:
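In symbols: ROI = incremental consumption attributable to the treatment / cost of the treatment. A small sketch with placeholder monthly figures:

# Sketch: lift is the gap between the treatment and control usage curves
# after the treatment, summed here over discrete monthly totals.
import numpy as np

treated_usage = np.array([100, 112, 125, 140])  # monthly usage, post-treatment
control_usage = np.array([100, 104, 108, 112])  # placeholder figures

incremental = (treated_usage - control_usage).sum()
program_cost = 50.0  # hypothetical cost of running the treatment
roi = incremental / program_cost
print(f"Incremental consumption: {incremental}, ROI: {roi:.2f}")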
Retrospective cohort study
In some cases, however, an experiment might not be possible. For example, we may be working with similar customers, and we don’t want to unfairly exclude any of them from a core customer benefit. But when we have rich observational data, we could measure impact through various observational studies. For example, we use a retrospective cohort study to measure the correlation of a customer nurture initiative (i.e., the “treatment”) on the output variable by comparing trends before and after the treatment using a single population analysis. Since the treatment might have been applied to different customers at different calendar months (depending on each customer’s lifecycle), first we start by normalizing the customers’ usage by the date of the treatment. We construct the chart below to confirm our intuition that a correlation actually exists between the treatment and Azure usage. In addition to the mean, we also check the median, as well as the twenty-fifth and seventy-fifth percentile (“box plot”) versions of this chart, to understand how the treatment affects different parts of the customer population. While in the end this is still a correlation, normalizing by the treatment date rather than a calendar date — when the treatment occurred at different points in time — helps exclude other time-based confounding variables. A curve like the one below (where the trajectory shift starts at the time of the treatment) helps us be more confident that the change is related to the treatment instead of to other confounders.
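Here is a pandas sketch of the normalization step that produces such a chart; the column names and dates are hypothetical.

# Sketch: re-index each customer's usage by months since their own
# treatment date, so differently timed treatments can be averaged together.
import pandas as pd

usage = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 2],
    "month": pd.to_datetime(["2020-01-01", "2020-02-01", "2020-03-01",
                             "2020-03-01", "2020-04-01", "2020-05-01"]),
    "usage": [10, 12, 18, 20, 21, 30],
})
treated_on = pd.Series(pd.to_datetime(["2020-02-01", "2020-04-01"]),
                       index=[1, 2], name="treated_on")

usage = usage.join(treated_on, on="customer_id")
usage["months_since_treatment"] = (
    (usage["month"].dt.year - usage["treated_on"].dt.year) * 12
    + (usage["month"].dt.month - usage["treated_on"].dt.month)
)
print(usage.groupby("months_since_treatment")["usage"].mean())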
The other value of this chart is that it helps establish whether we’ve considered the correct “event” from the initiative as part of the treatment’s impact. For example, when evaluating the impact of support, do we check the support plan entitlement date, the ticket open date, or ticket close date? Similarly, when a technical professional engages with a customer to consult on a project, do we consider the engagement start date or the project deployment date? Finally, is there a typical “delay” or “ramp up time” from the time of the treatment event to the point of measurable impact? Using the perspective from these charts, we can conclude that the program correlates with helping the customer use Azure, when we see an increased growth rate at that point in time.
Now (just as in the experiment case), beyond determining whether the treatment correlates with a statistically significant difference or not, we also want to know how much of a difference exists. To measure the amount of impact the treatment had overall, we construct a view as follows. First, we forecast how much the population would have continued to grow on its own. We test multiple forecast techniques for this and choose the one that works best for this data set. The test includes an SMAPE calculation and divides the pre-treatment timeline as 70 percent training and 30 percent test to evaluate forecast performance. Once we have the forecast “baseline” defined, we compare the actual growth of the population and compare. In the end, we attribute the shaded area between the two curves to the treatment.
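The sketch below shows the shape of that evaluation; the naive last-value forecast stands in for whichever model wins the SMAPE comparison, and all figures are placeholders.

# Sketch: score a candidate baseline with SMAPE on the 30% holdout,
# then attribute the post-treatment gap between actuals and the baseline.
import numpy as np

def smape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.mean(2 * np.abs(forecast - actual)
                         / (np.abs(actual) + np.abs(forecast)))

history = np.array([100, 103, 107, 110, 114, 118, 121, 125, 129, 133])
split = int(len(history) * 0.7)                # 70% train / 30% test
train, test = history[:split], history[split:]
candidate = np.full(len(test), train[-1])      # naive stand-in forecast
print(f"SMAPE: {smape(test, candidate):.1f}%")

# With the chosen model, impact = actuals minus the forecast baseline
baseline = np.array([137, 141, 145])  # forecasted "no treatment" path
actuals = np.array([142, 151, 163])   # observed post-treatment usage
print("Attributed usage:", (actuals - baseline).sum())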
In employing these approaches, we avoid a couple of common “gotchas”:
Comparing the growth rate of a non-randomly selected group with the treatment group. Sometimes when there is no planned control, there’s a temptation to compare the growth rates between populations who received the treatment with those who did not. However, often there are specific program criteria for the treatment. Therefore, there is an inherent sampling bias in this approach.
Attributing all of a customer’s growth to the treatment (without considering that the customers might still have grown without the treatment). We use the forecast baseline method, as outlined above, to avoid this over-attribution.
In the retrospective cohort approach above, we caution the reader that this is still a correlation (versus a causal impact). Causal inference is an additional technique to determine causation, which we will explore in our next post.
Multi-attribution analysis
An additional complexity that we encounter in these analyses is when multiple treatments are applied to the same customers at the same time. In “real world” enterprise scenarios, this is often the case, since there are multiple teams, programs, and initiatives all working to help customers in different ways. For example, here is an illustration of a customer who engages with multiple programs over time:
To determine the impact of each individual program, we need to apply multi-attribution analysis.
One way to check whether this is required is by analyzing the overlap among programs. For example, what percentage of customers who experienced treatment A also had other treatments?
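A pandas sketch of that overlap check, using hypothetical program touch records:

# Sketch: of the customers who received program A, what share also
# received at least one other program?
import pandas as pd

touches = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3, 4],
    "program": ["A", "B", "A", "A", "B", "C", "A"],
})
flags = pd.crosstab(touches["customer_id"], touches["program"]).astype(bool)
in_a = flags[flags["A"]]
overlap = in_a.drop(columns="A").any(axis=1).mean()
print(f"{overlap:.0%} of program A customers also had another program")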
Note that this scenario reinforces the importance for data science organizations to bring together broad data sets in order to represent the end-to-end customer experience, as well as the need for a customer model that allows them to be connected with common identifiers, in order to produce these kinds of insights.
If the overlap among programs is small enough, and the action we plan to take doesn’t require extreme precision, we may choose to proceed with the single attribution analysis, knowing that the results are still directionally relevant. If the overlap is material, however, as in the example above, we apply multi-attribution approaches as follows.
First, we forecast the dynamic baseline, starting at the point of the first treatment. Then, we use proprietary machine learning models (based on Markov chain and Shapley value approaches) in order to divide the area between the baseline and actuals during the period where multiple treatments are present:
Fig. Usage attributed to two treatments, beyond the dynamic baseline. (Visual by Elizabeth Kronoff.)
Investment programs can be considered as a stochastic process where their sequence of events are treated as a Markov chain in which the probability of each event depends only on the state attained in the previous event (Paul A. Gagnic, 2017, and Markov). The Shapley value method is a general credit allocation approach in cooperative game theory. It is based on evaluating the marginal contribution of each investment in a customer’s journey. The credit assigned to each investment, i.e., Shapley value, is the expected value of this marginal contribution over all possible permutations of the investments.
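The production models are proprietary, but a brute-force Shapley sketch shows the mechanics: each program is credited with its average marginal contribution over every ordering. The coalition values below are made up; in practice they would come from the fitted model.

# Sketch: brute-force Shapley credit over all orderings of two programs.
from itertools import permutations

lift = {frozenset(): 0.0, frozenset("A"): 10.0,
        frozenset("B"): 6.0, frozenset("AB"): 20.0}  # hypothetical values

programs = ["A", "B"]
orders = list(permutations(programs))
credit = {p: 0.0 for p in programs}
for order in orders:
    seen = set()
    for p in order:
        before = lift[frozenset(seen)]
        seen.add(p)
        credit[p] += (lift[frozenset(seen)] - before) / len(orders)

print(credit)  # {'A': 12.0, 'B': 8.0}; the credits sum to the joint lift of 20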
Using this approach, we can determine the correlated usage for each respective program.
Additional attribution scenarios
Web page attribution: In addition to the customer nurture activities described above, another scenario involving attribution analysis is web analytics. Tools like Google Analytics and Adobe Analytics use heuristic (rule-based) multi-channel funnel (MCF) attribution models, which include the following methods (a small sketch implementing these rules follows the list):
First-touch attribution: Attributes all credit to the first touch point of the customer’s journey.
Last-touch attribution: Attributes all credit to the last touch point of the customer’s journey.
Linear attribution: Attributes the credit linearly across all touch points of the customer’s journey.
U-Shaped attribution: Attributes a fifty-fifty split of the credit to the first and last touch points of the customer’s journey.
Simple decay attribution: Attributes a weighted percentage of the credit to the most recent touch point in the customer’s journey.
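Here is the small sketch promised above; the touch-point names are hypothetical and the decay rate of 0.5 is an assumption.

# Sketch: heuristic multi-channel attribution rules.
# Journeys are ordered oldest-first; u_shaped assumes two or more touches.
def first_touch(journey):
    return {journey[0]: 1.0}

def last_touch(journey):
    return {journey[-1]: 1.0}

def linear(journey):
    credit = {}
    for touch in journey:
        credit[touch] = credit.get(touch, 0.0) + 1.0 / len(journey)
    return credit

def u_shaped(journey):
    return {journey[0]: 0.5, journey[-1]: 0.5}

def simple_decay(journey, rate=0.5):
    weights = [rate ** (len(journey) - 1 - i) for i in range(len(journey))]
    credit = {}
    for touch, w in zip(journey, weights):
        credit[touch] = credit.get(touch, 0.0) + w / sum(weights)
    return credit

journey = ["ad", "blog", "email"]  # hypothetical touch points
print(simple_decay(journey))       # the most recent touch gets the most credit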
However, the challenge with rule-based models is that you must know the correct rules to choose. Therefore, we’ve researched data-driven models for these scenarios as well. In our scenario, we want to understand the impact of our websites and documentation in helping users adopt and engage with our products. Using a Markov chain approach we’re able to observe the difference in conversion rates between those users who do and don’t visit our web pages, as well as determine which pages correlate with the strongest outcomes.
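The simplest version of that comparison is a plain conversion-rate difference, which is the intuition behind the removal effect; the data below is made up.

# Sketch: conversion rate with vs. without visiting a given page,
# the intuition behind the Markov removal-effect approach.
import pandas as pd

users = pd.DataFrame({
    "visited_docs_page": [True, True, False, False, True, False],
    "converted":         [True, False, False, False, True, True],
})
rates = users.groupby("visited_docs_page")["converted"].mean()
print(rates)
print(f"Lift associated with the page: {rates[True] - rates[False]:.0%}")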
Customer satisfaction attribution: Another application of attribution analysis comes up in the case of customer satisfaction (CSAT). We typically learn about our customers’ satisfaction through survey data. By asking customers about their levels of engagement with our product and communities, we can then correlate those experiences with their overall satisfaction. Here is some sample data to illustrate this scenario:
Data visualization by Nancy Organ
While both this and the previous web example are correlation analyses, they do tell us about the prominence of particular web pages with users who do versus don’t convert, as well as the product and community engagement, for users who have high versus low CSAT. Given this, even if we don’t prove that Document A caused a customer’s conversion, the fact that it is frequently visited by users who convert means that we should likely invest in it.
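To make the survey example concrete, here is a sketch correlating reported engagement with CSAT on made-up responses; real data would come from the survey tool.

# Sketch: rank correlation between community engagement and satisfaction.
import pandas as pd

survey = pd.DataFrame({
    "community_visits_per_month": [0, 1, 2, 4, 6, 8],  # made-up responses
    "csat":                       [2, 3, 3, 4, 4, 5],  # 1-5 scale
})
print(survey.corr(method="spearman"))  # Spearman suits ordinal CSAT scores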
Conclusion
In this article, we walked through methods of attribution analysis for both single- and multi-attribution scenarios. We explored an example in the context of a customer engagement program, and also shared references to web page attribution and customer satisfaction surveys as additional use cases where they are applicable. Beyond the initial RCT example, this article primarily focused on correlation. In the next article in this series, we’ll dive into causal inference approaches to determine causality.
We’d like to thank the Marketing, Finance, and Customer Program teams for being great partners in the adoption of this work. | https://medium.com/data-science-at-microsoft/attribution-analysis-how-to-measure-impact-part-1-of-2-324d43fbbba0 | ['Lisa Cohen'] | 2020-09-01 10:19:19.316000+00:00 | ['Data Science', 'Attribution', 'Experiment', 'Causation', 'Correlation'] |
How to get 1,000 subscribers on YouTube in one day (fast) for free
Our Special Tips for 1,000 YouTube Subscribers
Would you like to gain your first 1,000 subscribers for your YouTube channel? We'll teach you some of the best ways to do so by putting your audience first, rather than your desire to make money. Your YouTube subscriber count is not just another vanity metric. On the second-largest platform on the web, having more subscribers is the best way to increase your organic reach.
If your goal is to actually make money on YouTube, remember that you must meet subscriber thresholds to access many monetization features.
In this post, I'm going to guide you on the journey to 1,000 YouTube subscribers. But above all, let's answer this question first: why do you want 1,000 subscribers?
If your answer is that you want to monetize your YouTube channel, remember that you need 4,000 hours of watch time as well. Keep in mind that you also need to apply for the YouTube Partner Program and be accepted into it, because YouTube will check whether you're attempting to buy subscribers or manipulate the metrics.
Why you should not buy YouTube subscribers
Maybe you found this article because you were searching for a simple hack. A hack so easy that it could make a person YouTube-famous with zero effort?
Look, we understand. We aren't here to shame anyone; we get that busy people need to be efficient.
However, the creators behind the world's best YouTube channels aren't spending their time or money on bot subscribers. They're too busy making wonderful videos.
They don't buy subscribers, and neither should you.
The whole set-up is a lot like when we tried Instagram engagement pods. The service wins either way: it gets either your time or your money. What are you getting?
Bot subscribers who don’t engage
A bad look to your actual audience, who may care deeply about authenticity
The danger of falling afoul of the bogus engagement policy of YouTube (tl;dr you could get banned)
Potential stink-eye from any brand that might want to work with you
It is just not worth it at the end of the day.
Meanwhile, there are plenty of scam videos out there that claim to hold the secret to thousands of subscribers. While we admire the creativity, they won't get you there.
Now let's see how to get free YouTube subscribers quickly. Here are some special tips: just follow them and watch your subscriber count grow.
Read more.. | https://medium.com/@parejiyatushar/how-to-get-1000-subscribers-on-youtube-in-a-one-day-fast-for-free-fed91289b29f | ['Tushar Parejiya'] | 2020-12-26 09:00:36.140000+00:00 | ['YouTuber', 'SEO', 'Technical', 'YouTube', 'Youtube Subscribers'] |
What Exactly Does the Serializer Do? | Cereal Guy
After dabbling in Ruby on Rails and venturing into JavaScript, I found it very useful to have Rails as the backend API. It is set up in such a way that it can easily render JSON data to be used on the JavaScript side and, therefore, in the browser.
Building my app was going well until it came time to render attributes of a model in relation to another model.
In Ruby, the relationships are set up like so and through naming conventions it knows where to look for whatever it’s looking for. Here, I have 3 models in a many-to-many relationship.
class Review < ApplicationRecord
  belongs_to :user
  belongs_to :wine
end

class User < ApplicationRecord
  has_many :reviews
  has_many :wines, through: :reviews
end

class Wine < ApplicationRecord
  has_many :reviews, dependent: :destroy
  has_many :users, through: :reviews
end
In a Rails app, to create the frontend I would just need to create html.erb files within a folder (inside the app/views folder) that correlates with the above models by sharing the same name (naming convention). Then, in the controller actions, I would just need to specify what to render. Through the naming convention of the files, Rails automatically (almost magically) knows what is linked to what and therefore what to render.
But, I’m venturing into using JavaScript/HTML as my frontend, remember?
So, in the controller actions I need to tell the app what to render (via ‘render json:’ followed by whatever I want rendered).
For example, the show action in app/controllers/users_controller.rb:
class UsersController < ApplicationController
  def show
    render json: User.find(params[:id])
  end
end
After a couple of attempts I realized that the relationships I had established in Rails were not coming through on the frontend. For a while I thought my JavaScript was wrong. No, I need a serializer. | https://levelup.gitconnected.com/what-exactly-does-the-serializer-do-9eee3c2e61b7 | ['Jonelle Noelani Yacapin'] | 2020-11-29 14:14:49.983000+00:00 | ['Active Model Serializer', 'Javascript Tips', 'JavaScript', 'Rails', 'Serializers']
Why I Joined Forces With Mass Ndow-Njie to Build Bridging the Bar
Why?
It was late 2015. My Mum, Dad and I were all huddled in my Mum’s front room. Mum and I were sat on the sofa, whilst a couple of metres away my Dad was stood up, permanent marker in hand, next to flipchart paper which was blue-tacked to the wall.
The flipchart paper was entitled “PUPILLAGE”. There were four subheadings:
FINANCES
BUILD CV
NETWORK
ADDITIONAL QUALIFICATIONS (BPTC/MASTERS?)
Under each heading there were various scribblings from each of us which resembled our brainstormed ideas about how I could fill the gap between where I was, and where I wanted to be.
Photo by Nick Fewings on Unsplash
Despite having graduated with a First Class degree in Law, there was a distinct feeling amongst us that even if I went on to complete the BPTC and picked up relevant work experience along the way, I would fall short of what was needed to secure a pupillage. There were other forces at play.
All three of us had spent weeks looking through the profiles of recent pupils at dozens of chambers, from magic circle sets to the lesser known regional sets. There was a stark and prevailing theme: none of these people bore any resemblance to me. It was apparent to all of us that the overwhelming majority of the pupils and barristers we came across were of a completely different background to me in a whole range of aspects, from class and race, to their networks and educational opportunities. What was even more obvious was that the scope of variance for these different “background characteristics” was alarmingly narrow.
The problems we faced (I use the term “we” deliberately because attaining pupillage was a team effort and my parents’ unconditional support was critical) primarily fell into 5 categories:
1. How was I going to afford this? The BPTC (as it was named at the time) was just shy of £20,000. Neither of my parents had incomes or inheritance which could spawn that amount of money (and it certainly wasn’t hidden away in a cupboard in one of the social housing flats I grew up in).
2. Despite graduating with a first, I still had to shake off the stigma of not going to Oxbridge or even a Russell Group university. Did I have to go and undertake a Masters degree at one of these universities just to have their name on my CV? How much would that cost?
3. We had no lawyers in the family. I didn’t personally know any barristers. I had heard about barristers such as Courtenay Griffiths QC and Leslie Thomas QC who had risen to the top of their profession despite their non-traditional backgrounds. Nevertheless, based on our research, their experiences seemed to be the exception rather than the rule. How was I going to develop a network of colleagues and mentors which could provide guidance and opportunities? How could I breach the seemingly impermeable wall of prestige?
4. How could I set myself apart from other applicants by demonstrating something uniquely different about me? What was my X factor? Maybe I didn’t have one yet. Where does one find an X factor?
5. Why do some chambers have little to no people of colour, with the exception of a few people who appeared to be of Asian heritage? Of the very few who were black, why is it that so many went to private school and/or Oxbridge? Of course, at the time this laid the foundations for imposter syndrome. Even if I managed to surmount all of the aforementioned hurdles, would the journey be worth it? If I am to get there, will I feel like I belong?
(I might write a short follow up piece about how I eventually overcame these difficult questions in the coming weeks. Follow/Connect with me to be notified when this is published)
Photo by Emily Morter on Unsplash
“Are you sure you don’t want to be a solicitor?”
This was a question my Dad had asked me repeatedly over a number of months. It was a valid question. At the time, the solicitor profession seemed a lot more accessible from my perspective. There were far more of them, and therefore more opportunities. It also seemed like a “safer” route for me at the time, given that it was common to have a firm pay for the LPC, and it’s the route the majority of my university friends took. It also seemed like it could be easier to build my network there.
Perhaps I could start at a high street firm, and move around until I found somewhere comfortable? Perhaps for a while I could endure doing work I didn’t really like, adopting working habits that didn’t suit me (I’ve always preferred being self employed) and instructing barristers with a hint of resentment whilst “what could have been” lingered at the back of my mind…
“No.” I replied. “This is what I want. I appreciate the journey will be challenging, but I’m not going to compromise my career intentions.”
My conviction was clear; it needed to be if I were to surmount the challenges I was up against at the time. I was acutely aware, even back then, that not everyone faced these barriers. I was also aware that many people faced far more barriers than I did, owing to their different characteristics. This seemingly unending system of inequality needed to change, and I wanted to be a part of the solution one day.
….
Returning to the present day, as a pupil barrister at a chambers which has thus far been unwavering in its support of my career, I often look back on my pre-pupillage days (which were not long ago). I reflect on them partly with gratitude for how the challenging journey left me ‘hardened by battle’, but partly with the perspective of knowing how other people must feel when they see our profession from the outside. I still remember that feeling, and I can see how it affects others with different characteristics to me — some visible, some hidden, some psychological, some ideological. When they peer over the edge and look just past the surface to see what largely resembles a homogeny of appearance, outdated views, slow progress and privilege — what do they think?
Of course, this can deter them from diving in altogether. Many chambers, and indeed entire areas of law, are being deprived of diverse talent as a result of this perception problem.
That feeling of being afflicted by a recruitment culture that is not made with you in mind, and does not account for inequity of starting positions, is what I want to change. An injection of diverse characteristics is precisely what the Bar needs to secure its own prosperous future.
After all, a Bar that represents society, benefits society. | https://medium.com/@aaronmayers/why-i-joined-forces-with-mass-ndow-njie-to-build-bridging-the-bar-588652119565 | ['Aaron Mayers'] | 2021-03-02 17:10:44.732000+00:00 | ['Barrister', 'Pupillage', 'Charity', 'Reflections', 'Self Development'] |
The power of social media management tools | Photo by Carlos Muza on Unsplash
A social media manager's role has expanded since the role was created and brands began hiring people for the position. From scheduling posts and tweets to creating content and monitoring comments from customers, the role continues to grow. In fact, some brands have their social media managers form teams to focus solely on social media.
And it looks like, as social media platforms continue to add more features and users find new ways to utilize them, more expectations will fall onto the social media manager. This includes monitoring and listening to what users are saying about the brand and other subjects that pertain to the brand.
With so many responsibilities bestowed on the social media manager, social media management tools have moved to the forefront of every social media manager's needs. They've also made their jobs easier.
What is a social media management tool?
A social media management tool basically helps you manage the bulk of your social media tasks. It’s a tool that can streamline your workflow.
This can be a dashboard that allows for a multitude of options, including posting, monitoring, listening, analytics, interacting with followers, and anything else your social media tasks require. In the end, it should simplify the process of building and executing social media campaigns.
Why is a social media management tool important?
Here are some reasons, drawn from my research and experience, why a social media management tool can be useful for any social media manager or user in general:
Ability to manage multiple platforms on the same page
Saves time and cuts costs
Can organize content
Enables content to be delivered to your audience when you want or should
Allows social media listening (which will get into in more depth below)
Provides a rich, holistic view of data
Helps prove ROI
Brings your CRM into the modern era
Helps you get a peek of where your campaign is heading
Of course, there are other useful things a social media management tool can do. Let's take a look at two social media management tools that illustrate the need for them.
Hootsuite is a social media management tool that basically covers the role of the social media manager. The platform allows the user to curate content, schedule posts, measure ROI, and even manage a team.
The platform also allows social media monitoring. If a brand needs to know every mention, follow, and like, Hootsuite allows it. This also covers multiple accounts at the same time, making it easy for any social media manager to monitor what customers are saying about the brand.
Users can also see detailed analytics using the platform. This includes performance over time and the ability to compare data with other social media accounts.
Sample Hootsuite dashboard (Hootsuite)
Detailed analytics from Hootsuite (Hootsuite)
What is also great about Hootsuite is the customization. However you want to view your dashboard and analytics, the platform allows it. It makes it simple for the social media manager to track, monitor, and understand social media campaigns and the brand's social media performance overall.
As a former user of the tool, I highly recommend it for beginners who want to get their hands dirty learning how to manage multiple accounts at the same time. It also makes reports easy to digest.
Sprout Social logo (Sprout Social)
Sprout Social is a social media management tool, similar to Hootsuite, that allows social media managers to manage multiple social media accounts at the same time.
Some of the features the tool offers include creating, drafting, and scheduling posts and tweets; a social content calendar; profile, keyword, and location monitoring; CRM tools; and analytic reports. What makes this one different from other social media management tools I've used before is the paid promotion tools for Facebook posts and the trend analysis for Twitter keywords and hashtags.
Sprout Social features (Sprout Social)
Sample analytics report from Sprout Social (Sprout Social)
If you want to pay more, the tool allows more monitoring and reports for other social media platforms such as Instagram and LinkedIn, and other features such as custom URL tracking and helpdesk integration. There are plans to meet a brand’s needs depending on the size of the social media team and/or brand.
Sprout Social has been in the social media game for quite some time. So, the company understands the changing landscape of the business and social media managers’ needs in order to optimize their engagement, content, and campaigns.
Putting it into play with community engagement & social listening
Social media communities are vital for a brand in terms of building relationships with its customers. It allows the brand and customers to engage in a two-way conversation, form relationships, offer solutions, and customers can meet other customers with similar passions and bond.
Utilizing a social media management tool can strengthen the engagement that occurs within that community. Using tools such as Hootsuite and Sprout Social allows any brand to find creative ways to engage with followers across multiple platforms with features such as contests, polls, questions, sharing user-generated content (UGC), hosting a Q&A, and more.
There are also content creation tools that help produce engaging, vivid content so customers actually read or hear what the call to action is. Canva, which is easy to use, is one I rely on frequently to create engaging Twitter and Instagram posts. I like to call it an easier version of Adobe Photoshop.
Most importantly, most social media management tools will allow a brand to look at what kinds of posts are most effective when trying to engage within their communities, thanks to analytic reports. Brands will be able to learn whether they should focus on asking followers for UGC rather than creating polls that have little to no engagement.
At the same time, it will let the brand know if they should shake up their community engagement strategy. Should they invite key influencers into the community or be more upfront about what they want from the customer? The social media management tool will provide the answer.
And this is where social listening comes into play.
Social listening basically tells you what your customer wants and why they want it. This is often confused with social monitoring, which is waiting for an issue to come up, responding to it, and moving on from it.
Listening requires that you pay attention to the industry hashtags, keywords, mentions, and competitor accounts. It also entails reading through tweets, posts, and other social media elements in order to learn what the customer wants and why.
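To illustrate the mechanic, here is a toy, standard-library-only sketch that scans posts for tracked keywords and tallies a naive sentiment score. Real tools pull posts from platform APIs and use trained sentiment models; the handles and word lists below are purely illustrative.

# Toy social-listening sketch: flag posts mentioning tracked terms and
# tally naive sentiment from small word lists (illustrative only).
TRACKED = {"#mybrand", "mybrand", "@mybrand"}
POSITIVE = {"love", "great", "fast"}
NEGATIVE = {"broken", "slow", "hate"}

posts = [
    "I love how fast #mybrand support replied",
    "mybrand checkout is broken again, so slow",
]

for post in posts:
    words = set(post.lower().split())
    if words & TRACKED:
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
        print(f"{label}: {post}")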
Social media management tools can help any brand, and anyone, with social listening because of the capabilities mentioned above. This includes monitoring hashtags and mentions, setting up keywords you want to pay attention to, and keeping up with the trends that everyone is talking about. These same tools can also recommend which hashtags, keywords, and other social media elements you need to pay attention to in order to stay in the game. But there's more!
Some of the other tools’ capabilities include:
Learning key demographics
Finding influencers within your brand’s industry
Sentiment analysis to tweak and/or adjust your content strategy
Market research
Finding leads and start social selling
The appropriate listening strategy depends on your brand's goals. This will define the effort the social media team puts into its content and engagement strategies. | https://medium.com/@eduardo.m.gonzalez/the-power-of-social-media-management-tools-87693da399a6 | ['Eduardo M. Gonzalez'] | 2021-06-17 21:33:58.680000+00:00 | ['Social Listening', 'Social Media Management', 'Community', 'Community Engagement', 'Social Media Tool']
Carbon capture and storage works – and these projects prove it | The UK continues to be a global leader on climate action, pledging to reduce greenhouse gas emissions by at least 68% on 1990 levels by the end of this decade. And carbon capture technology has been recognised as one of the key elements of the UK’s Ten Point Plan for a Green Industrial Revolution to get us there, launched by the Prime Minister last month.
Drax Power Station is already trialling Europe’s first bioenergy carbon capture and storage (BECCS) project. Combining sustainable biomass with carbon capture technology could remove and capture millions of tonnes of carbon dioxide (CO2) a year and put the power station at the centre of wider decarbonisation efforts across the region as part of Zero Carbon Humber, a partnership to develop a world first net zero industrial cluster in the North of England.
But who else is making carbon capture a reality?
Snøhvit & Sleipner Vest
Who: Sleipner — Equinor Energy, Var Energi, LOTOS, KUFPEC; Snøhvit — Equinor Energy, Petoro, Total, Neptune Energy, Wintershall Dea
Where: Norway
Sleipner Vest offshore carbon capture and storage (CCS) plant, Norway
Sleipner Vest was the world’s first offshore carbon capture and storage (CCS) plant. Active since 1996, it separates CO2 from natural gas extracted from beneath the sea.
Snøhvit, in Norway’s northern Barents Sea, operates similarly but here natural gas is pumped to an onshore facility for carbon removal. The separated and compressed CO2 from both facilities is then stored in empty reservoirs under the sea.
As of 2019, Sleipner had captured and stored over 23 million tonnes of CO2 while Snøhvit stores 700,000 tonnes of CO2 per year.
Petra Nova
Who: NRG, Mitsubishi Heavy Industries America, Inc. (MHIA) and JX Nippon, a joint venture with Hilcorp Energy
Where: Texas, USA
In 2016, the largest carbon capture facility in the world began operation at the Petra Nova coal-fired power plant. However, the 240-megawatt project was interrupted in July 2020 when falling oil prices meant that Petra Nova was unable to find an economically sustainable way to deploy coal-based CCUS at scale. | https://medium.com/drax/carbon-capture-and-storage-works-and-these-projects-prove-it-d98c0e11ec8f | [] | 2020-12-10 16:08:28.939000+00:00 | ['Carbon Emissions', 'Carbon Capture', 'Climate Action', 'Innovation', 'Technology'] |